Eisenberg, Sarita; Guo, Ling-Yu
2016-05-01
This article reviews the existing literature on the diagnostic accuracy of two grammatical accuracy measures for differentiating children with and without language impairment (LI) at preschool and early school age based on language samples. The first measure, the finite verb morphology composite (FVMC), is a narrow grammatical measure that computes children's overall accuracy of four verb tense morphemes. The second measure, percent grammatical utterances (PGU), is a broader grammatical measure that computes children's accuracy in producing grammatical utterances. The extant studies show that FVMC demonstrates acceptable (i.e., 80 to 89% accurate) to good (i.e., 90% accurate or higher) diagnostic accuracy for children between 4;0 (years;months) and 6;11 in conversational or narrative samples. In contrast, PGU yields acceptable to good diagnostic accuracy for children between 3;0 and 8;11 regardless of sample types. Given the diagnostic accuracy shown in the literature, we suggest that FVMC and PGU can be used as one piece of evidence for identifying children with LI in assessment when appropriate. However, FVMC or PGU should not be used as therapy goals directly. Instead, when children are low in FVMC or PGU, we suggest that follow-up analyses should be conducted to determine the verb tense morphemes or grammatical structures that children have difficulty with.
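As described above, PGU is a simple proportion over coded utterances. A minimal Python sketch, assuming the sample has already been coded utterance-by-utterance as grammatical, ungrammatical, or unscorable (the coding itself, not shown here, is the substantive work):

```python
def percent_grammatical_utterances(codes):
    """PGU: grammatical utterances as a percentage of scorable utterances.

    codes: list with True (grammatical), False (ungrammatical),
    or None (unscorable, excluded from the denominator).
    """
    scorable = [c for c in codes if c is not None]
    if not scorable:
        raise ValueError("no scorable utterances in sample")
    return 100.0 * sum(scorable) / len(scorable)

# 4 grammatical utterances out of 5 scorable ones -> 80.0
sample = [True, True, False, True, None, True]
print(percent_grammatical_utterances(sample))  # 80.0
```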
Optical coherence tomography for glucose monitoring in blood
NASA Astrophysics Data System (ADS)
Ullah, Hafeez; Hussain, Fayyaz; Ikram, Masroor
2015-08-01
In this review, we discuss the potential application of an emerging imaging modality, optical coherence tomography (OCT), to glucose monitoring in biological tissues. OCT allows monitoring of glucose diffusion in fibrous tissues such as the sclera by determining the permeability rate with acceptable accuracy in both type 1 and type 2 diabetes. For example, OCT measurement of glucose in Intralipid suspensions achieves an accuracy of up to 4.4 mM in 10% Intralipid and 2.2 mM in 3% Intralipid.
NASA Astrophysics Data System (ADS)
Bardant, Teuku Beuna; Dahnum, Deliana; Amaliyah, Nur
2017-11-01
Simultaneous saccharification and fermentation (SSF) of palm oil (Elaeis guineensis) empty fruit bunch (EFB) pulp was investigated as part of an ethanol production process. SSF was studied by observing the effects of substrate loading (10-20% w/w), cellulase loading (5-30 FPU/g substrate), and yeast addition (1-2% v/v) on ethanol yield. A mathematical model describing the effects of these three variables on ethanol yield was developed using Response Surface Methodology-Cheminformatics (RSM-CI). The model gave acceptable accuracy in predicting ethanol yield for SSF, with a coefficient of determination (R2) of 0.8899. Model validation against data from a previous study gave R2 = 0.7942, which was acceptable for using the model in trend-prediction analysis. Trend prediction based on the model showed that SSF yields more ethanol when the process is operated at high enzyme concentration and low substrate concentration. By contrast, although the SHF model showed that better yield would be obtained at lower substrate concentration, SHF can still be operated at higher substrate concentration with only slightly lower yield. The opportunity SHF provides to operate at high substrate loading makes it the preferable option for application at commercial scale.
Iftikhar, Imran H; Alghothani, Lana; Sardi, Alejandro; Berkowitz, David; Musani, Ali I
2017-07-01
Transbronchial lung cryobiopsy is increasingly being used for the assessment of diffuse parenchymal lung diseases. Several studies have shown larger biopsy samples and higher yields compared with conventional transbronchial biopsies. However, the higher risk of bleeding and other complications has raised concerns for widespread use of this modality. To study the diagnostic accuracy and safety profile of transbronchial lung cryobiopsy and compare with video-assisted thoracoscopic surgery (VATS) by reviewing available evidence from the literature. Medline and PubMed were searched from inception until December 2016. Data on diagnostic performance were abstracted by constructing two-by-two contingency tables for each study. Data on a priori selected safety outcomes were collected. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool. Random effects meta-analyses were performed to obtain summary estimates of the diagnostic accuracy. The pooled diagnostic yield, pooled sensitivity, and pooled specificity of transbronchial lung cryobiopsy were 83.7% (76.9-88.8%), 87% (85-89%), and 57% (40-73%), respectively. The pooled diagnostic yield, pooled sensitivity, and pooled specificity of VATS were 92.7% (87.6-95.8%), 91.0% (89-92%), and 58% (31-81%), respectively. The incidence of grade 2 (moderate to severe) endobronchial bleeding after transbronchial lung cryobiopsy and of post-procedural pneumothorax was 4.9% (2.2-10.7%) and 9.5% (5.9-14.9%), respectively. Although the diagnostic test accuracy measures of transbronchial lung cryobiopsy lag behind those of VATS, with an acceptable safety profile and potential cost savings, the former could be considered as an alternative in the evaluation of patients with diffuse parenchymal lung diseases.
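Meta-analytic summary estimates like those above start from per-study two-by-two tables. A deliberately naive sketch that just sums cells across studies — the review itself used random-effects models, which weight studies and account for between-study variance:

```python
def pooled_sens_spec(tables):
    """Naive pooled sensitivity/specificity from per-study 2x2 tables.

    tables: list of (tp, fp, fn, tn) tuples. This simply sums cells
    across studies; a random-effects meta-analysis would instead weight
    studies and model between-study heterogeneity.
    """
    tp = sum(t[0] for t in tables)
    fp = sum(t[1] for t in tables)
    fn = sum(t[2] for t in tables)
    tn = sum(t[3] for t in tables)
    return tp / (tp + fn), tn / (tn + fp)

# two hypothetical studies
studies = [(45, 6, 5, 14), (30, 4, 10, 16)]
sens, spec = pooled_sens_spec(studies)
print(round(sens, 2), round(spec, 2))  # 0.83 0.75
```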
Wang, Z X; Chen, S L; Wang, Q Q; Liu, B; Zhu, J; Shen, J
2015-06-01
The aim of this study was to evaluate the accuracy of magnetic resonance imaging in the detection of triangular fibrocartilage complex injury through a meta-analysis. A comprehensive literature search was conducted before 1 April 2014. All studies comparing magnetic resonance imaging results with arthroscopy or open surgery findings were reviewed, and 25 studies that satisfied the eligibility criteria were included. Data were pooled to yield a pooled sensitivity and specificity of 0.83 and 0.82, respectively. In the detection of central and peripheral tears, magnetic resonance imaging had a pooled sensitivity of 0.90 and 0.88 and a pooled specificity of 0.97 and 0.97, respectively. Six high-quality studies using Ringler's recommended magnetic resonance imaging parameters were selected for analysis to determine whether optimal imaging protocols yielded better results. The pooled sensitivity and specificity of these six studies were 0.92 and 0.82, respectively. The overall accuracy of magnetic resonance imaging was acceptable. For peripheral tears, the pooled data showed a relatively high accuracy. Magnetic resonance imaging with appropriate parameters is an ideal method for diagnosing different types of triangular fibrocartilage complex tears. © The Author(s) 2015.
Armon-Lotem, Sharon; Meir, Natalia
2016-11-01
Previous research demonstrates that repetition tasks are valuable tools for diagnosing specific language impairment (SLI) in monolingual children in English and a variety of other languages, with non-word repetition (NWR) and sentence repetition (SRep) yielding high levels of sensitivity and specificity. Yet, only a few studies have addressed the diagnostic accuracy of repetition tasks in bilingual children, and most available research focuses on English-Spanish sequential bilinguals. To evaluate the efficacy of three repetition tasks (forward digit span (FWD), NWR and SRep) in order to distinguish mono- and bilingual children with and without SLI in Russian and Hebrew. A total of 230 mono- and bilingual children aged 5;5-6;8 participated in the study: 144 bilingual Russian-Hebrew-speaking children (27 with SLI); and 52 monolingual Hebrew-speaking children (14 with SLI) and 34 monolingual Russian-speaking children (14 with SLI). Parallel repetition tasks were designed in both Russian and Hebrew. Bilingual children were tested in both languages. The findings confirmed that NWR and SRep are valuable tools in distinguishing monolingual children with and without SLI in Russian and Hebrew, while the results for FWD were mixed. Yet, testing of bilingual children with the same tools using monolingual cut-off points resulted in inadequate diagnostic accuracy. We demonstrate, however, that the use of bilingual cut-off points yielded acceptable levels of diagnostic accuracy. The combination of SRep tasks in L1/Russian and L2/Hebrew yielded the highest overall accuracy (i.e., 94%), but even SRep alone in L2/Hebrew showed excellent levels of sensitivity (i.e., 100%) and specificity (i.e., 89%), reaching 91% of total diagnostic accuracy. The results are very promising for identifying SLI in bilingual children and for showing that testing in the majority language with bilingual cut-off points can provide an accurate classification. 
© 2016 Royal College of Speech and Language Therapists.
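Sensitivity, specificity, and total accuracy figures like those reported above fall out of a score distribution and a cut-off point. A small sketch with hypothetical repetition-task scores (scores, cut-off, and group sizes are invented for illustration):

```python
def diagnostic_accuracy(affected, unaffected, cutoff):
    """Sensitivity, specificity, and overall accuracy for a score cutoff.

    Scores below the cutoff are flagged as impaired (repetition tasks
    yield low scores for children with SLI).
    """
    tp = sum(s < cutoff for s in affected)
    tn = sum(s >= cutoff for s in unaffected)
    sens = tp / len(affected)
    spec = tn / len(unaffected)
    overall = (tp + tn) / (len(affected) + len(unaffected))
    return sens, spec, overall

# hypothetical sentence-repetition scores
sli_scores = [12, 15, 9, 24, 11]
td_scores = [28, 31, 18, 35, 27, 30]
sens, spec, overall = diagnostic_accuracy(sli_scores, td_scores, cutoff=21)
print(sens, round(spec, 2), round(overall, 2))  # 0.8 0.83 0.82
```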
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Analysis of low levels of rare earths by radiochemical neutron activation analysis
Wandless, G.A.; Morgan, J.W.
1985-01-01
A procedure for radiochemical neutron-activation analysis of the rare earth elements (REE) involves separating the REE as a group by rapid ion-exchange methods and determining yields by reactivation or by energy-dispersive X-ray fluorescence (EDXRF) spectrometry. The U.S. Geological Survey (USGS) standard rocks BCR-1 and AGV-1 were analyzed to determine the precision and accuracy of the method. We found that the precision was ±5-10% on the basis of replicate analysis and that, in general, the accuracy was within ±5% of accepted values for most REE. Data for USGS standard rocks BIR-1 (Icelandic basalt) and DNC-1 (North Carolina diabase) are also presented. © 1985 Akadémiai Kiadó.
Frequency domain laser velocimeter signal processor: A new signal processing scheme
NASA Technical Reports Server (NTRS)
Meyers, James F.; Clemmons, James I., Jr.
1987-01-01
A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a smart instrument that is able to configure itself, based on the characteristics of the input signals, for optimum measurement accuracy. The signal processor is composed of a high-speed 2-bit transient recorder for signal capture and a combination of adaptive digital filters with energy and/or zero crossing detection signal processing. The system is designed to accept signals with frequencies up to 100 MHz with standard deviations up to 20 percent of the average signal frequency. Results from comparative simulation studies indicate measurement accuracies 2.5 times better than with a high-speed burst counter, from signals with as few as 150 photons per burst.
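The zero-crossing branch of such a processor estimates frequency by counting sign changes in the digitized burst. A simplified sketch (pure-tone input, no noise handling or pedestal removal; signal parameters are invented):

```python
import math

def zero_crossing_frequency(samples, sample_rate_hz):
    """Estimate signal frequency by counting sign changes (two per cycle)."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration_s = (len(samples) - 1) / sample_rate_hz
    return crossings / (2.0 * duration_s)

# a 5 Hz test tone sampled at 1 kHz for 1 s (phase offset avoids exact zeros)
tone = [math.sin(2 * math.pi * 5 * (t / 1000) + 0.1) for t in range(1000)]
print(round(zero_crossing_frequency(tone, 1000), 2))  # 5.01
```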
NASA Technical Reports Server (NTRS)
Cibula, William G.; Nyquist, Maurice O.
1987-01-01
An unsupervised computer classification of vegetation/landcover of Olympic National Park and surrounding environs was initially carried out using four bands of Landsat MSS data. The primary objective of the project was to derive a level of landcover classification useful for park management applications while maintaining an acceptably high level of classification accuracy. Initially, nine generalized vegetation/landcover classes were derived. Overall classification accuracy was 91.7 percent. In an attempt to refine the level of classification, a geographic information system (GIS) approach was employed. Topographic data and watershed boundary (inferred precipitation/temperature) data were registered with the Landsat MSS data. The resultant Boolean operations yielded 21 vegetation/landcover classes while maintaining the same level of classification accuracy. The final classification provided much better identification and location of the major forest types within the park at the same high level of accuracy, and this met the project objective. The classification could now serve as input to a GIS where, coupled with other ancillary data, it could support park management programs such as fire management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biplab, S; Soumya, R; Paul, S
2014-06-01
Purpose: For the first time in the world, BrainLAB has integrated its iPlan treatment planning system for clinical use with an Elekta linear accelerator (Axesse with a Beam Modulator). The purpose of this study was to compare calculated and measured doses with different chambers to establish the calculation accuracy of the iPlan system. Methods: iPlan has both pencil beam (PB) and Monte Carlo (MC) calculation algorithms. Beam data included depth doses, profiles, and output measurements for different field sizes. The collected data were verified by the vendor and beam modelling was done. Further QA tests were carried out in our clinic. Dose calculation accuracy was verified by point and volumetric dose measurements using ion chambers of different volumes (0.01 cc and 0.125 cc). Planar dose verification was done using a diode array. Plans were generated in iPlan and irradiated on the Elekta Axesse linear accelerator. Results: Dose calculation accuracies verified using an ion chamber for the 6 and 10 MV beams were 3.5%±0.33 (PB), 1.7%±0.7 (MC) and 3.9%±0.6 (PB), 3.4%±0.6 (MC), respectively. Using a pinpoint chamber, dose calculation accuracy for 6 MV and 10 MV was 3.8%±0.06 (PB), 1.21%±0.2 (MC) and 4.2%±0.6 (PB), 3.1%±0.7 (MC), respectively. The calculated planar dose distribution for a 10.4×10.4 cm2 field was verified using a diode array; gamma analysis with 2%-2mm criteria yielded pass rates of 88% (PB) and 98.8% (MC), while 3%-3mm criteria yielded 100% pass rates for both the MC and PB algorithms. Conclusion: Dose calculation accuracy was found to be within acceptable limits for MC for the 6 MV beam. PB for both beams and MC for the 10 MV beam were found to be outside acceptable limits. The output measurements were done twice for confirmation. The lower gamma matching was attributed to the meager number of measured profiles (only two profiles for PB) and the coarse measurement resolution for diagonal profile measurement (5 mm). Based on these measurements, we concluded that the 6 MV MC algorithm is suitable for patient treatment.
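Gamma analysis like the 2%-2mm evaluation above combines a dose-difference and a distance-to-agreement criterion. A 1-D global-gamma sketch (profiles, grid spacing, and criteria are illustrative; clinical tools operate on 2-D/3-D dose grids with interpolation):

```python
import math

def gamma_1d(ref, ev, dx_mm, dose_pct, dta_mm):
    """1-D global gamma index: for each reference point, the minimum
    combined dose-difference / distance-to-agreement metric over the
    evaluated profile. Pass criterion is gamma <= 1."""
    d_norm = dose_pct / 100.0 * max(ref)  # global dose normalization
    out = []
    for i, dr in enumerate(ref):
        out.append(min(
            math.hypot((i - j) * dx_mm / dta_mm, (de - dr) / d_norm)
            for j, de in enumerate(ev)
        ))
    return out

def pass_rate(gammas):
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

# hypothetical calculated vs measured profile, 1 mm grid, 2%/2 mm criteria
calc = [10, 40, 95, 100, 96, 42, 11]
meas = [11, 42, 94, 99, 95, 44, 12]
print(pass_rate(gamma_1d(calc, meas, dx_mm=1.0, dose_pct=2.0, dta_mm=2.0)))  # 100.0
```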
Abbott ARCHITECT iPhenytoin assay versus similar assays for measuring free phenytoin concentrations.
Tacker, Danyel Hermes; Robinson, Randy; Perrotta, Peter L
2014-01-01
To measure free phenytoin (FP) concentrations in filtered specimens using the Abbott ARCHITECT iPhenytoin assay and to compare results from this method with results from the Abbott TDx/FLx assays. We verified accuracy, analytic measurement range, and precision for FP measurements. For correlation and therapeutic interval studies, we used filtered calibrators, controls, proficiency-testing materials, and surplus clinical samples. After implementation, we determined proficiency testing results. The analytic measurement range was 2.0 to 25.0 micromol/L. Quality control materials (6.1, 12.6, and 20.1 micromol/L) provided mean (SD) recoveries of 96.1 (5.0%), 99.2 (5.0%), and 99.3 (5.7%), respectively, and coefficients of variation of 5.2%, 5.0%, and 5.8%, respectively. Clinical specimens produced mean (SD) FP recovery levels of 103.7 (10.6%) (bias, 0.1 [0.3] micromol/L). Altering the FP therapeutic range (4.0-8.0 micromol/L) was unnecessary. Proficiency testing yielded consistently acceptable results. Our accuracy, precision, and correlation results were similar for the TDx/FLx and ARCHITECT assays, which demonstrates that the ARCHITECT iPhenytoin assay is acceptable for clinical FP measurements.
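Recovery and coefficient-of-variation figures such as those quoted above follow from replicate QC measurements against a target concentration. A small sketch (replicate values and target are invented):

```python
from statistics import mean, stdev

def recovery_and_cv(measured, target):
    """Mean recovery (%) against a target concentration, and the
    coefficient of variation (%) of the replicate measurements."""
    m = mean(measured)
    recovery = 100.0 * m / target
    cv = 100.0 * stdev(measured) / m
    return recovery, cv

# hypothetical replicate QC measurements of a 12.6 micromol/L control
reps = [12.1, 12.7, 12.4, 13.0, 12.3]
rec, cv = recovery_and_cv(reps, target=12.6)
print(round(rec, 1), round(cv, 1))  # 99.2 2.8
```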
Influence of droplet spacing on drag coefficient in nonevaporating, monodisperse streams
NASA Astrophysics Data System (ADS)
Mulholland, J. A.; Srivastava, R. K.; Wendt, J. O. L.
1988-10-01
Trajectory measurements on single, monodisperse, nonevaporating droplet streams whose droplet size, velocity, and spacing were varied to yield initial Re numbers in the 90-290 range are presently used to ascertain the influence of droplet spacing on the drag coefficient of individual drops injected into a quiescent environment. A trajectory model containing the local drag coefficient was fitted to the experimental data by a nonlinear regression; over 40 additional trajectories were predicted with acceptable accuracy. This formulation will aid the computation of waste-droplet drag in flames for improved combustion-generated pollutant predictions.
O'Bryant, Sid E; Xiao, Guanghua; Barber, Robert; Huebinger, Ryan; Wilhelmsen, Kirk; Edwards, Melissa; Graff-Radford, Neill; Doody, Rachelle; Diaz-Arrastia, Ramon
2011-01-01
There is no rapid and cost-effective tool that can be implemented as a front-line screening tool for Alzheimer's disease (AD) at the population level. To generate and cross-validate a blood-based screener for AD that yields acceptable accuracy across both serum and plasma. Analysis of serum biomarker proteins was conducted on 197 Alzheimer's disease (AD) participants and 199 control participants from the Texas Alzheimer's Research Consortium (TARC), with further analysis conducted on plasma proteins from 112 AD and 52 control participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The full algorithm was derived from a biomarker risk score, clinical lab (glucose, triglycerides, total cholesterol, homocysteine), and demographic (age, gender, education, APOE*E4 status) data. 11 proteins met our criteria and were utilized for the biomarker risk score. The random forest (RF) biomarker risk score from the TARC serum samples (training set) yielded adequate accuracy in the ADNI plasma sample (validation set) (AUC = 0.70, sensitivity (SN) = 0.54 and specificity (SP) = 0.78), which was below that obtained from ADNI cerebrospinal fluid (CSF) analyses (t-tau/Aβ ratio AUC = 0.92). However, the full algorithm yielded excellent accuracy (AUC = 0.88, SN = 0.75, and SP = 0.91). The likelihood ratio of having AD based on a positive test finding (LR+) = 7.03 (SE = 1.17; 95% CI = 4.49-14.47), the likelihood ratio of not having AD based on the algorithm (LR-) = 3.55 (SE = 1.15; 95% CI = 2.22-5.71), and the odds ratio of AD in the ADNI cohort (OR) = 28.70 (SE = 1.55; 95% CI = 11.86-69.47). It is possible to create a blood-based screening algorithm that works across both serum and plasma and provides screening accuracy comparable to that obtained from CSF analyses.
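Likelihood ratios follow directly from sensitivity and specificity via the textbook formulas LR+ = SN/(1 − SP) and LR− = (1 − SN)/SP; note that the study's cohort-derived values differ from what these formulas give. A quick sketch using the full algorithm's reported SN/SP as inputs:

```python
def likelihood_ratios(sensitivity, specificity):
    """Textbook likelihood ratios for a binary diagnostic test.

    LR+ = sens / (1 - spec): how much a positive result raises the odds.
    LR- = (1 - sens) / spec: how much a negative result lowers the odds.
    """
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.75, 0.91)
print(round(lr_pos, 2), round(lr_neg, 2))  # 8.33 0.27
```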
Belgiu, Mariana; Drăguţ, Lucian
2014-10-01
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. 
The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea that classification is dependent on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing 'optimal segmentation'. Our results rather suggest that as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be tolerated, so that a high level of classification accuracy can still be achieved.
Test battery for measuring the perception and recognition of facial expressions of emotion
Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner
2014-01-01
Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528
Donders, Jacobus; Janke, Kelly
2008-07-01
The performance of 40 children with complicated mild to severe traumatic brain injury on the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; Wechsler, 2003) was compared with that of 40 demographically matched healthy controls. Of the four WISC-IV factor index scores, only Processing Speed yielded a statistically significant group difference (p < .001) as well as a statistically significant negative correlation with length of coma (p < .01). Logistic regression, using Processing Speed to classify individual children, yielded a sensitivity of 72.50% and a specificity of 62.50%, with false positive and false negative rates both exceeding 30%. We conclude that Processing Speed has acceptable criterion validity in the evaluation of children with complicated mild to severe traumatic brain injury but that the WISC-IV should be supplemented with other measures to assure sufficient accuracy in the diagnostic process.
Dynamic sample size detection in learning command line sequence for continuous authentication.
Traore, Issa; Woungang, Isaac; Nakkabi, Youssef; Obaidat, Mohammad S; Ahmed, Ahmed Awad E; Khalilian, Bijan
2012-10-01
Continuous authentication (CA) consists of authenticating the user repetitively throughout a session with the goal of detecting and protecting against session hijacking attacks. While the accuracy of the detector is central to the success of CA, the detection delay or length of an individual authentication period is important as well since it is a measure of the window of vulnerability of the system. However, high accuracy and small detection delay are conflicting requirements that need to be balanced for optimum detection. In this paper, we propose the use of sequential sampling technique to achieve optimum detection by trading off adequately between detection delay and accuracy in the CA process. We illustrate our approach through CA based on user command line sequence and naïve Bayes classification scheme. Experimental evaluation using the Greenberg data set yields encouraging results consisting of a false acceptance rate (FAR) of 11.78% and a false rejection rate (FRR) of 1.33%, with an average command sequence length (i.e., detection delay) of 37 commands. When using the Schonlau (SEA) data set, we obtain FAR = 4.28% and FRR = 12%.
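The sequential-sampling idea above — keep consuming commands until the evidence is decisive either way, trading detection delay against accuracy — can be illustrated with Wald's sequential probability ratio test. This is a simplified stand-in, not the paper's naïve Bayes detector: each command is assumed to independently "match" the profiled user's behavior with a known probability under each hypothesis.

```python
import math

def sprt(observations, p_match_user, p_match_imposter, alpha=0.01, beta=0.01):
    """Wald's SPRT sketch for continuous authentication.

    observations: iterable of booleans, True if a command matches the
    profiled user's behavior. Accumulates the log-likelihood ratio of
    H1 (imposter) vs H0 (genuine user) until a threshold is crossed;
    returns the decision and the number of commands consumed (delay).
    """
    upper = math.log((1 - beta) / alpha)   # decide imposter
    lower = math.log(beta / (1 - alpha))   # decide genuine user
    llr = 0.0
    for n, match in enumerate(observations, start=1):
        if match:
            llr += math.log(p_match_imposter / p_match_user)
        else:
            llr += math.log((1 - p_match_imposter) / (1 - p_match_user))
        if llr >= upper:
            return "imposter", n
        if llr <= lower:
            return "user", n
    return "undecided", n

# a stream of consistently non-matching commands triggers quickly
decision, delay = sprt([False] * 50, p_match_user=0.8, p_match_imposter=0.3)
print(decision, delay)  # imposter 4
```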
Genomic selection across multiple breeding cycles in applied bread wheat breeding.
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2016-06-01
We evaluated genomic selection across five breeding cycles of bread wheat breeding. Bias of within-cycle cross-validation and methods for improving the prediction accuracy were assessed. The prospect of genomic selection has been frequently shown by cross-validation studies using the same genetic material across multiple environments, but studies investigating genomic selection across multiple breeding cycles in applied bread wheat breeding are lacking. We estimated the prediction accuracy of grain yield, protein content and protein yield of 659 inbred lines across five independent breeding cycles and assessed the bias of within-cycle cross-validation. We investigated the influence of outliers on the prediction accuracy and predicted protein yield by its component traits. A high average heritability was estimated for protein content, followed by grain yield and protein yield. The bias of the prediction accuracy using populations from individual cycles using fivefold cross-validation was accordingly substantial for protein yield (17-712%) and less pronounced for protein content (8-86%). Cross-validation using the cycles as folds aimed to avoid this bias and reached a maximum prediction accuracy of r = 0.51 for protein content, r = 0.38 for grain yield and r = 0.16 for protein yield. Dropping outlier cycles increased the prediction accuracy of grain yield to r = 0.41 as estimated by cross-validation, while dropping outlier environments did not have a significant effect on the prediction accuracy. Independent validation suggests, on the other hand, that careful consideration is necessary before an outlier correction is undertaken, which removes lines from the training population. Predicting protein yield by multiplying genomic estimated breeding values of grain yield and protein content raised the prediction accuracy to r = 0.19 for this derived trait.
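The cycle-wise cross-validation used above amounts to leave-one-group-out splitting with breeding cycles as groups: training and test lines then never share a cycle, which is what removes the within-cycle bias. A minimal splitter (no genomic prediction model attached; the record layout is invented):

```python
def leave_one_cycle_out(samples):
    """Yield (held_out_cycle, train, test) splits, one per breeding cycle.

    samples: list of (cycle_id, features, phenotype) records. The model
    is trained on all other cycles and validated on the held-out cycle,
    unlike k-fold CV within a single cycle, which inflates accuracy.
    """
    for held in sorted({c for c, _, _ in samples}):
        train = [s for s in samples if s[0] != held]
        test = [s for s in samples if s[0] == held]
        yield held, train, test

data = [(1, "x1", 5.2), (1, "x2", 4.9), (2, "x3", 6.1), (3, "x4", 5.5)]
for cycle, train, test in leave_one_cycle_out(data):
    print(cycle, len(train), len(test))
# 1 2 2
# 2 3 1
# 3 3 1
```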
NASA Astrophysics Data System (ADS)
House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor
2017-03-01
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras can capture 3D pose information at several magnitudes lower cost, size, and weight. We propose to use Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy is acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align CT image with phantom using landmark registration and display the CT image overlaid on the optical image. RESULTS: Accuracy of the camera yielded a median position error of 3.3mm (95th percentile 6.7mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20x16x10cm workspace, constantly maintaining proper marker orientation. The model and surface correctly aligned demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
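The median and 95th-percentile position errors reported above come from paired pose samples. A sketch of that bookkeeping (positions are invented; the percentile uses the nearest-rank convention, one of several in use):

```python
import math
from statistics import median

def position_errors(tracked, reference):
    """Euclidean error (same units as input) per paired 3-D position."""
    return [math.dist(a, b) for a, b in zip(tracked, reference)]

def nearest_rank_percentile(values, pct):
    """Nearest-rank percentile of a list of values."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

# hypothetical paired camera vs reference-tracker positions (mm)
cam = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 20.0, 0.0)]
ref = [(1.0, 0.0, 0.0), (10.0, 3.0, 0.0), (0.0, 20.0, 2.0)]
errs = position_errors(cam, ref)
print(median(errs), nearest_rank_percentile(errs, 95))  # 2.0 3.0
```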
Bradac, Ondrej; Steklacova, Anna; Nebrenska, Katerina; Vrana, Jiri; de Lacy, Patricia; Benes, Vladimir
2017-08-01
Frameless stereotactic brain biopsy systems are widely used today. VarioGuide (VG) is a relatively novel frameless system. Its accuracy was studied in a laboratory setting but has not yet been studied in the clinical setting. The purpose of this study was to determine its accuracy and diagnostic yield and to compare this with frame-based (FB) stereotaxy. Overall, 53 patients (33 males and 20 females, 60 ± 15 years old) were enrolled into this prospective, randomized, single-center study. Twenty-six patients were randomized into the FB group and 27 patients into the VG group. The actual trajectory was identified on intraoperative magnetic resonance imaging. The distance of the targets and the angle deviation between the planned and actual trajectories were computed. The overall discomfort of the patient was subjectively assessed by the visual analog scale score. The median lesion volume was 5 mL (interquartile range [IQR]: 2-16 mL) (FB) and 16 mL (IQR: 2-27 mL) (VG), P = 0.133. The mean distance of the targets was 2.7 ± 1.1 mm (FB) and 2.9 ± 1.3 mm (VG), P = 0.456. Mean angle deviation was 2.6 ± 1.3 deg (FB) and 3.5 ± 2.1 deg (VG), P = 0.074. Diagnostic yield was 93% (25/27) in VG and 96% (25/26) in FB, P = 1.000. Mean operating time was 47 ± 26 minutes (FB) and 59 ± 31 minutes (VG), P = 0.140. One minor bleeding was encountered in the VG group. Overall patient discomfort was significantly higher in the FB group (visual analog scale score 2.5 ± 2.1 vs. 1.2 ± 0.6, P = 0.004). The VG system proved to be comparable in terms of trajectory accuracy, rate of complications and diagnostic yield with the "gold standard" represented by traditional FB stereotaxy for patients undergoing brain biopsy. VG is also better accepted by patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Real-time robot deliberation by compilation and monitoring of anytime algorithms
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo
1994-01-01
Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off decision quality against the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and produces real-time robotic systems that automatically adjust their resource allocation to yield optimal performance.
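The anytime idea can be illustrated with a minimal sketch: an interruptible computation whose current answer is always available, paired with a run-time monitor that decides when further deliberation stops being worthwhile. This is a hypothetical toy (a Monte Carlo estimate with a made-up utility cutoff), not Zilberstein's compilation model.

```python
import random

def anytime_pi(max_steps):
    """Anytime algorithm: the Monte Carlo estimate of pi improves in
    expectation with more samples, and any prefix yields a usable answer."""
    inside = 0
    for i in range(1, max_steps + 1):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
        yield 4.0 * inside / i          # current best estimate at step i

def monitored_run(estimates, time_is_worth_it):
    """Run-time monitor: keep deliberating while the (hypothetical) utility
    model says another computation step is still worth its cost."""
    best = None
    for step, est in enumerate(estimates, start=1):
        best = est
        if not time_is_worth_it(step):  # marginal value of time exhausted
            break
    return best

random.seed(0)
# Deliberate for at most 50,000 samples, then act on the current estimate.
estimate = monitored_run(anytime_pi(10**6), lambda t: t < 50_000)
```

The monitor here uses a fixed deadline for simplicity; a compiled quality profile would instead predict the expected improvement per extra time slice.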
Kolbl, Sabina; Paloczi, Attila; Panjan, Jože; Stres, Blaž
2014-02-01
The primary aim of the study was to develop and validate an in-house upscale of the Automatic Methane Potential Test System II for studying real-time inocula and real-scale substrates in batch, codigestion, and enzyme-enhanced hydrolysis experiments, in addition to semi-continuous operation of the developed equipment and experiments testing inoculum functional quality. The successful upscale to 5 L enabled comparison of different process configurations with shorter preparation times, acceptable accuracy, and the high throughput needed for industrial decision making. Adopting the same scales, equipment, and methodologies in batch and semi-continuous tests mirroring those at full-scale biogas plants resulted in matching methane yields between the two laboratory tests and full scale, confirming the increased decision-making value of the approach for industrial operations. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Daniels, Brian; Volpe, Robert J.; Fabiano, Gregory A.; Briesch, Amy M.
2017-01-01
This study examines the classification accuracy and teacher acceptability of a problem-focused screener for academic and disruptive behavior problems, which is directly linked to evidence-based intervention. Participants included 39 classroom teachers from 2 public school districts in the Northeastern United States. Teacher ratings were obtained…
Schoemans, H; Goris, K; Durm, R V; Vanhoof, J; Wolff, D; Greinix, H; Pavletic, S; Lee, S J; Maertens, J; Geest, S D; Dobbels, F; Duarte, R F
2016-08-01
The EBMT Complications and Quality of Life Working Party has developed a computer-based algorithm, the 'eGVHD App', using a user-centered design process. Accuracy was tested using a quasi-experimental crossover design with four expert-reviewed case vignettes in a convenience sample of 28 clinical professionals. Perceived usefulness was evaluated by the technology acceptance model (TAM) and user satisfaction by the Post-Study System Usability Questionnaire (PSSUQ). User experience was positive, with a median of 6 TAM points (interquartile range: 1) and favourable median total and subscale PSSUQ scores. The initial standard-practice assessment of the vignettes yielded 65% correct results for diagnosis and 45% for scoring. The 'eGVHD App' significantly increased diagnostic and scoring accuracy to 93% (+28%) and 88% (+43%), respectively (both P<0.05). The same trend was observed in the repeated analysis of case 2: accuracy improved by using the App (+31% for diagnosis and +39% for scoring), whereas performance tended to decrease once the App was taken away. The 'eGVHD App' could dramatically improve the quality of care and research, as it increased the performance of the whole user group by about 30% at the first assessment and showed a trend toward improvement of individual performance on repeated case evaluation.
Chow, Benjamin J W; Freeman, Michael R; Bowen, James M; Levin, Leslie; Hopkins, Robert B; Provost, Yves; Tarride, Jean-Eric; Dennie, Carole; Cohen, Eric A; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan R; Paul, Narinder; Parker, John D; O'Reilly, Daria J; Xie, Feng; Goeree, Ron
2011-06-13
Computed tomographic coronary angiography (CTCA) has gained clinical acceptance for the detection of obstructive coronary artery disease. Although single-center studies have demonstrated excellent accuracy, multicenter studies have yielded variable results. The true diagnostic accuracy of CTCA in the "real world" remains uncertain. We conducted a field evaluation comparing multidetector CTCA with invasive CA (ICA) to understand CTCA's diagnostic accuracy in a real-world setting. A multicenter cohort study of patients awaiting ICA was conducted between September 2006 and June 2009. All patients had either a low or an intermediate pretest probability for coronary artery disease and underwent CTCA and ICA within 10 days. The results of CTCA and ICA were interpreted visually by local expert observers who were blinded to all clinical data and imaging results. Using a patient-based analysis (diameter stenosis ≥50%) of 169 patients, the sensitivity, specificity, positive predictive value, and negative predictive value were 81.3% (95% confidence interval [CI], 71.0%-89.1%), 93.3% (95% CI, 85.9%-97.5%), 91.6% (95% CI, 82.5%-96.8%), and 84.7% (95% CI, 76.0%-91.2%), respectively; the area under receiver operating characteristic curve was 0.873. The diagnostic accuracy varied across centers (P < .001), with a sensitivity, specificity, positive predictive value, and negative predictive value ranging from 50.0% to 93.2%, 92.0% to 100%, 84.6% to 100%, and 42.9% to 94.7%, respectively. Compared with ICA, CTCA appears to have good accuracy; however, there was variability in diagnostic accuracy across centers. Factors affecting institutional variability need to be better understood before CTCA is universally adopted. Additional real-world evaluations are needed to fully understand the impact of CTCA on clinical care. clinicaltrials.gov Identifier: NCT00371891.
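The patient-based metrics reported above all follow from a single 2x2 table of index test (CTCA) versus reference standard (ICA). A minimal sketch, using invented counts chosen only to sum to the study's 169 patients (not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Patient-based accuracy metrics from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts only (sum to 169 patients for scale):
m = diagnostic_metrics(tp=65, fp=6, fn=15, tn=83)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, which is one reason multicenter estimates can diverge from single-center ones.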
Improving the baking quality of bread wheat by genomic selection in early generations.
Michel, Sebastian; Kummer, Christian; Gallee, Martin; Hellinger, Jakob; Ametz, Christian; Akgöl, Batuhan; Epure, Doru; Güngör, Huseyin; Löschenberger, Franziska; Buerstmayr, Hermann
2018-02-01
Genomic selection shows great promise for pre-selecting lines with superior bread baking quality in early generations, 3 years ahead of labour-intensive, time-consuming, and costly quality analysis. The genetic improvement of baking quality is one of the grand challenges in wheat breeding, as assessment of the associated traits often involves time-consuming, labour-intensive, and costly testing, forcing breeders to postpone sophisticated quality tests to the very last phases of variety development. The prospect of genomic selection for complex traits like grain yield has been shown in numerous studies, and it might thus also be an interesting method for selecting on baking quality traits. Hence, we focused in this study on the accuracy of genomic selection for laborious and expensive-to-phenotype quality traits, as well as its selection response in comparison with phenotypic selection. More than 400 genotyped wheat lines were therefore phenotyped for protein content and for dough viscoelastic and mixing properties related to baking quality in multi-environment trials in 2009-2016. The average prediction accuracy across three independent validation populations was r = 0.39 and could be increased to r = 0.47 by modelling major QTL as fixed effects and employing multi-trait prediction models, which resulted in an acceptable prediction accuracy for all dough rheological traits (r = 0.38-0.63). Genomic selection can furthermore be applied 2-3 years earlier than direct phenotypic selection, and the estimated selection response was nearly twice that of indirect selection by protein content for baking-quality-related traits. This considerable advantage of genomic selection could accordingly support breeders in their selection decisions and aid in efficiently combining superior baking quality with grain yield in newly developed wheat varieties.
Dynamic Algorithms for Transition Matrix Generation
NASA Astrophysics Data System (ADS)
Yevick, David; Lee, Yong Hwan
The methods of [D. Yevick, Int. J. Mod. Phys. C, 1650041] for constructing transition matrices are applied to the two dimensional Ising model. Decreasing the system temperature during the acquisition of the matrix elements yields a reasonably precise specific heat curve for a 32x32 spin system for a limited number (50-100M) of realizations. If the system is instead evolved to first higher and then lower energies within a restricted interval that is steadily displaced in energy as the computation proceeds, a modification which permits backward displacements up to a certain lower bound for each forward step ensures acceptable accuracy. Additional constraints on the transition rule are also investigated. The Natural Sciences and Engineering Research Council of Canada (NSERC) and CIENA are acknowledged for financial support.
Testing the accuracy of growth and yield models for southern hardwood forests
H. Michael Rauscher; Michael J. Young; Charles D. Webb; Daniel J. Robison
2000-01-01
The accuracy of ten growth and yield models for Southern Appalachian upland hardwood forests and southern bottomland forests was evaluated. In technical applications, accuracy is the composite of both bias (average error) and precision. Results indicate that GHAT, NATPIS, and a locally calibrated version of NETWIGS may be regarded as being operationally valid...
7 CFR 400.53 - Yield certification and acceptability.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 6 2011-01-01 2011-01-01 false Yield certification and acceptability. 400.53 Section 400.53 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS Actual Production History § 400.53...
7 CFR 400.53 - Yield certification and acceptability.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 6 2012-01-01 2012-01-01 false Yield certification and acceptability. 400.53 Section 400.53 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS Actual Production History § 400.53...
7 CFR 400.53 - Yield certification and acceptability.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 6 2013-01-01 2013-01-01 false Yield certification and acceptability. 400.53 Section 400.53 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS Actual Production History § 400.53...
7 CFR 400.53 - Yield certification and acceptability.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 6 2014-01-01 2014-01-01 false Yield certification and acceptability. 400.53 Section 400.53 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS Actual Production History § 400.53...
7 CFR 400.53 - Yield certification and acceptability.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Yield certification and acceptability. 400.53 Section 400.53 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS Actual Production History § 400.53...
Resolving Conflicts Between Syntax and Plausibility in Sentence Comprehension
Andrews, Glenda; Ogden, Jessica E.; Halford, Graeme S.
2017-01-01
Comprehension of plausible and implausible object- and subject-relative clause sentences with and without prepositional phrases was examined. Undergraduates read each sentence then evaluated a statement as consistent or inconsistent with the sentence. Higher acceptance of consistent than inconsistent statements indicated reliance on syntactic analysis. Higher acceptance of plausible than implausible statements reflected reliance on semantic plausibility. There was greater reliance on semantic plausibility and lesser reliance on syntactic analysis for more complex object-relatives and sentences with prepositional phrases than for less complex subject-relatives and sentences without prepositional phrases. Comprehension accuracy and confidence were lower when syntactic analysis and semantic plausibility yielded conflicting interpretations. The conflict effect on comprehension was significant for complex sentences but not for less complex sentences. Working memory capacity predicted resolution of the syntax-plausibility conflict in more and less complex items only when sentences and statements were presented sequentially. Fluid intelligence predicted resolution of the conflict in more and less complex items under sequential and simultaneous presentation. Domain-general processes appear to be involved in resolving syntax-plausibility conflicts in sentence comprehension. PMID:28458748
Lakes, Kimberley D.; Bryars, Tracy; Sirisinahal, Swetha; Salim, Nimrah; Arastoo, Sara; Emmerson, Natasha; Kang, Daniel; Shim, Lois; Wong, Doug; Kang, Chang Jin
2013-01-01
There is growing consensus that exercise improves cognitive functioning, but research is needed to identify exercise interventions that optimize effects on cognition. The objective of this pilot study was to evaluate Taekwondo implemented in public middle school physical education (PE). Two classes were randomly assigned to either five sessions per week of PE, or three sessions of PE plus two sessions of Taekwondo. In PE sessions, an evidence-based curriculum addressing the Presidential Core Fitness Guidelines and California Physical Fitness Tests was implemented. Taekwondo sessions included traditional techniques and forms taught in an environment emphasizing respect and self-control. Sixty students were evaluated at baseline and during the last week of the intervention (nine months later). Differences in mean residualized change scores for parent-rated inhibitory behavioral control yielded a significant, large effect size (d = .95, p = .00), reflecting greater improvement among Taekwondo students. Results from a computer-administered executive function task revealed greater accuracy on the congruent trial (d = 2.00, p = .02) for Taekwondo students. Differences in mean residualized change scores for BMI z scores yielded a moderate, non-significant effect size (d = -.51, p = .16). The majority of Taekwondo students reported positive perceptions of Taekwondo and perceived self-improvement in self-control and physical fitness. Results suggest that Taekwondo is an exercise program that improves cognitive functioning and is both feasible and acceptable to implement in a public school setting. PMID:24563664
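The effect sizes above are Cohen's d values comparing group change scores. A minimal sketch of the pooled-SD form of Cohen's d with toy data (not the study's measurements):

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d with pooled standard deviation: standardized difference
    between two group means."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

# Toy change scores for two groups:
d = cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
```

By the usual rule of thumb, |d| around 0.2 is small, 0.5 moderate, and 0.8 large, which is how the abstract's "large" (d = .95) and "moderate" (d = -.51) labels are read.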
Evaluation of a Performance-Based Expert Elicitation: WHO Global Attribution of Foodborne Diseases.
Aspinall, W P; Cooke, R M; Havelaar, A H; Hoffmann, S; Hald, T
2016-01-01
For many societally important science-based decisions, data are inadequate, unreliable or non-existent, and expert advice is sought. In such cases, procedures for eliciting structured expert judgments (SEJ) are increasingly used. This raises questions regarding validity and reproducibility. This paper presents new findings from a large-scale international SEJ study intended to estimate the global burden of foodborne disease on behalf of WHO. The study involved 72 experts distributed over 134 expert panels, with panels comprising thirteen experts on average. Elicitations were conducted in five languages. Performance-based weighted solutions for target questions of interest were formed for each panel. These weights were based on individual expert's statistical accuracy and informativeness, determined using between ten and fifteen calibration variables from the experts' field with known values. Equal weights combinations were also calculated. The main conclusions on expert performance are: (1) SEJ does provide a science-based method for attribution of the global burden of foodborne diseases; (2) equal weighting of experts per panel increased statistical accuracy to acceptable levels, but at the cost of informativeness; (3) performance-based weighting increased informativeness, while retaining accuracy; (4) due to study constraints individual experts' accuracies were generally lower than in other SEJ studies, and (5) there was a negative correlation between experts' informativeness and statistical accuracy which attenuated as accuracy improved, revealing that the least accurate experts drive the negative correlation. It is shown, however, that performance-based weighting has the ability to yield statistically accurate and informative combinations of experts' judgments, thereby offsetting this contrary influence. 
The present findings suggest that application of SEJ on a large scale is feasible, and motivate the development of enhanced training and tools for remote elicitation of multiple, internationally-dispersed panels.
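Performance-based weighting of the kind described (Cooke's classical model) can be sketched as a pooled estimate in which each expert's weight is proportional to calibration times informativeness. The scores and estimates below are invented for illustration; the real model derives calibration from a statistical test over the calibration variables rather than from assigned numbers.

```python
# Invented per-expert scores: calibration (statistical accuracy) and
# information (informativeness), plus each expert's point estimate.
experts = {
    "A": {"calibration": 0.60, "information": 1.2, "estimate": 4.0e6},
    "B": {"calibration": 0.05, "information": 2.0, "estimate": 9.0e6},
    "C": {"calibration": 0.30, "information": 0.8, "estimate": 5.0e6},
}

# Weight ∝ calibration x information, normalized to sum to 1.
raw = {name: e["calibration"] * e["information"] for name, e in experts.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

performance_weighted = sum(weights[n] * experts[n]["estimate"] for n in experts)
equal_weighted = sum(e["estimate"] for e in experts.values()) / len(experts)
```

The multiplicative weight is what lets performance weighting retain accuracy while gaining informativeness: a highly informative but poorly calibrated expert (B) is down-weighted rather than dominating the pool.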
NASA Astrophysics Data System (ADS)
Matongera, Trylee Nyasha; Mutanga, Onisimo; Dube, Timothy; Sibanda, Mbulisi
2017-05-01
Bracken fern is an invasive plant that presents serious environmental, ecological and economic problems around the world. An understanding of the spatial distribution of bracken fern weeds is therefore essential for providing appropriate management strategies at both local and regional scales. The aim of this study was to assess the utility of the freely available medium resolution Landsat 8 OLI sensor in the detection and mapping of bracken fern at the Cathedral Peak, South Africa. To achieve this objective, the results obtained from Landsat 8 OLI were compared with those derived using the costly, high spatial resolution WorldView-2 imagery. Since previous studies have already successfully mapped bracken fern using high spatial resolution WorldView-2 image, the comparison was done to investigate the magnitude of difference in accuracy between the two sensors in relation to their acquisition costs. To evaluate the performance of Landsat 8 OLI in discriminating bracken fern compared to that of Worldview-2, we tested the utility of (i) spectral bands; (ii) derived vegetation indices as well as (iii) the combination of spectral bands and vegetation indices based on discriminant analysis classification algorithm. After resampling the training and testing data and reclassifying several times (n = 100) based on the combined data sets, the overall accuracies for both Landsat 8 and WorldView-2 were tested for significant differences based on Mann-Whitney U test. The results showed that the integration of the spectral bands and derived vegetation indices yielded the best overall classification accuracy (80.08% and 87.80% for Landsat 8 OLI and WorldView-2 respectively). Additionally, the use of derived vegetation indices as a standalone data set produced the weakest overall accuracy results of 62.14% and 82.11% for both the Landsat 8 OLI and WorldView-2 images. 
There were significant differences (U(100) = 569.5, z = -10.8242, p < 0.01) between the classification accuracies derived from Landsat 8 OLI and those derived from the WorldView-2 sensor. Although the differences were significant, the magnitude of variation (9%) between the two sensors was within an acceptable range. Therefore, the findings of this study demonstrate that the recently launched Landsat 8 OLI multispectral sensor provides valuable information that could aid the long-term continuous monitoring and formulation of effective bracken fern management, with acceptable accuracies comparable to those obtained from the high-resolution WorldView-2 commercial sensor.
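The Mann-Whitney U comparison of the two sensors' resampled accuracies can be sketched from its definition: U counts the pairs in which a value from one sample exceeds a value from the other (ties count half). The accuracy values below are invented, merely centred near the two reported means.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` versus sample `b`:
    number of (x, y) pairs with x > y, counting ties as 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical resampled overall accuracies (%) for the two sensors:
landsat = [79.1, 80.4, 78.8, 81.0, 80.2, 79.7]
worldview = [87.2, 88.1, 86.9, 88.4, 87.7, 88.0]
u = mann_whitney_u(landsat, worldview)
# Here every Landsat value ranks below every WorldView-2 value, so u == 0,
# the most extreme separation possible for these sample sizes.
```

In practice one would use a library routine (e.g. SciPy's `mannwhitneyu`) to obtain the p-value; the sketch only shows what the statistic measures.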
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
Portnoy, Galina A; Haskell, Sally G; King, Matthew W; Maskin, Rachel; Gerber, Megan R; Iverson, Katherine M
2018-06-06
Veterans are at heightened risk for perpetrating intimate partner violence (IPV), yet there is limited evidence to inform practice and policy for the detection of IPV perpetration. The present study evaluated the accuracy and acceptability of a potential IPV perpetration screening tool for use with women veterans. A national sample of women veterans completed a 2016 web-based survey that included a modified 5-item Extended-Hurt/Insult/Threaten/Scream (Modified E-HITS) and the Revised Conflict Tactics Scales (CTS-2). Items also assessed women's perceptions of the acceptability and appropriateness of the Modified E-HITS questions for use in healthcare settings. Accuracy statistics, including sensitivity and specificity, were calculated using the CTS-2 as the reference standard. Primary measures included the Modified E-HITS (index test), the CTS-2 (reference standard), and items assessing acceptability. The study included 187 women, of whom 31 (16.6%) reported past-6-month IPV perpetration on the CTS-2. The Modified E-HITS demonstrated good overall accuracy (area under the curve, 0.86; 95% confidence interval, 0.78-0.94). In addition, the majority of women perceived the questions to be acceptable and appropriate. Findings demonstrate that the Modified E-HITS is promising as a low-burden tool for detecting IPV perpetration among women veterans. This tool may help the Veterans Health Administration and other health care providers detect IPV perpetration and offer appropriate referrals for comprehensive assessment and services. Published by Elsevier Inc.
The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.
Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E
2009-11-01
Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, the accuracy of the prediction of genome-wide breeding values (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first-lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by cross-validation. The accuracies of GW-EBV prediction were found to vary widely, between 0.12 and 0.62. G-BLUP gave the highest accuracy overall. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait: GW-EBV prediction for production traits with high heritability achieved higher accuracy and lower bias than for health traits with low heritability. To achieve a similar accuracy for the health traits, more records will probably be needed.
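The cross-validated accuracy assessment can be sketched with ridge regression of phenotypes on markers (SNP-BLUP, which is equivalent to G-BLUP under standard assumptions). Everything below is simulated with toy sizes and heritability, not the study's 18,991-SNP data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bulls, n_snps, h2 = 500, 400, 0.6      # toy dimensions and heritability
X = rng.binomial(2, 0.3, size=(n_bulls, n_snps)).astype(float)  # genotypes 0/1/2
true_effects = rng.normal(0.0, 1.0, n_snps)
g = X @ true_effects
g = (g - g.mean()) / g.std()             # standardized true breeding values
y = np.sqrt(h2) * g + np.sqrt(1 - h2) * rng.normal(0.0, 1.0, n_bulls)

def snp_blup(X_tr, y_tr, X_te, lam):
    """Ridge regression of phenotypes on centered SNP genotypes."""
    mu = X_tr.mean(axis=0)
    A = (X_tr - mu).T @ (X_tr - mu) + lam * np.eye(X_tr.shape[1])
    b = np.linalg.solve(A, (X_tr - mu).T @ (y_tr - y_tr.mean()))
    return (X_te - mu) @ b               # predicted GW-EBV for test animals

# 5-fold cross-validation; "accuracy" = correlation of GW-EBV with phenotype.
folds = np.array_split(rng.permutation(n_bulls), 5)
accs = [np.corrcoef(snp_blup(np.delete(X, f, axis=0), np.delete(y, f),
                             X[f], lam=400.0), y[f])[0, 1] for f in folds]
acc = float(np.mean(accs))
```

Raising `h2` in the simulation raises `acc`, mirroring the abstract's observation that high-heritability production traits are predicted more accurately than low-heritability health traits.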
Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa
2013-03-01
To evaluate the occurrence of out-of-acceptable-range results and the accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and of inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) document M100-S21. In the analysis, more than two out-of-acceptable-range results in the 20 tests were considered not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more out-of-acceptable-range results. A binomial test was then applied to each participating laboratory. The results indicated that the occurrence of out-of-acceptable-range results in 11 laboratories was significantly higher than the CLSI recommendation (allowable rate ≤ 0.05). Standard deviation indices (SDI) were calculated using the reported results and the mean and standard deviation values for the respective antimicrobial agents tested. In the evaluation of accuracy, the mean SDI from each laboratory was statistically compared with zero using a Student's t-test. The results revealed that 5 of the 11 above laboratories reported erroneous test results that systematically drifted toward the resistant side. In conclusion, our statistical approach enabled us to detect significantly higher occurrences and the sources of interpretive errors in antimicrobial susceptibility tests; therefore, this approach can provide additional information that can improve the accuracy of test results in clinical microbiology laboratories.
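The two statistics used above are simple to state. A minimal sketch of the standard deviation index and of a one-sided binomial tail probability for out-of-range counts (the example numbers are illustrative, not a laboratory's actual data):

```python
from math import comb

def sdi(value, mean, sd):
    """Standard deviation index: how far a laboratory's result sits from
    the peer-group mean, in units of the peer-group SD."""
    return (value - mean) / sd

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): probability of observing k or more
    out-of-acceptable-range results by chance at allowable rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# E.g. 3 out-of-range results in 20 tests at an allowable rate of 0.05:
p_value = binom_tail(20, 3, 0.05)
```

This is why "more than two out-of-range results in 20 tests" is a natural cutoff: at p = 0.05 the chance of three or more such results is only about 7.5%.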
Martín-González, Sofía; Navarro-Mesa, Juan L; Juliá-Serdá, Gabriel; Ramírez-Ávila, G Marcelo; Ravelo-García, Antonio G
2018-01-01
Our contribution focuses on the characterization of sleep apnea from a cardiac rate point of view, using Recurrence Quantification Analysis (RQA), based on a Heart Rate Variability (HRV) feature selection process. Three parameters are crucial in RQA: those related to the embedding process (dimension and delay) and the threshold distance. There are no overall accepted parameters for the study of HRV using RQA in sleep apnea. We focus on finding an overall acceptable combination, sweeping a range of values for each of them simultaneously. Together with the commonly used RQA measures, we include features related to recurrence times, and features originating in the complex network theory. To the best of our knowledge, no author has used them all for sleep apnea previously. The best performing feature subset is entered into a Linear Discriminant classifier. The best results in the "Apnea-ECG Physionet database" and the "HuGCDN2014 database" are, according to the area under the receiver operating characteristic curve, 0.93 (Accuracy: 86.33%) and 0.86 (Accuracy: 84.18%), respectively. Our system outperforms, using a relatively small set of features, previously existing studies in the context of sleep apnea. We conclude that working with dimensions around 7-8 and delays about 4-5, and using for the threshold distance the Fixed Amount of Nearest Neighbours (FAN) method with 5% of neighbours, yield the best results. Therefore, we would recommend these reference values for future work when applying RQA to the analysis of HRV in sleep apnea. We also conclude that, together with the commonly used vertical and diagonal RQA measures, there are newly used features that contribute valuable information for apnea minutes discrimination. Therefore, they are especially interesting for characterization purposes. Using two different databases supports that the conclusions reached are potentially generalizable, and are not limited by database variability.
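The core construction, a recurrence matrix thresholded by the Fixed Amount of Nearest Neighbours (FAN) rule with the recommended parameters (dimension about 7, delay about 4, 5% of neighbours), can be sketched as follows. The signal here is a synthetic sine standing in for an HRV series, and the brute-force distance computation is for clarity, not efficiency.

```python
import numpy as np

def recurrence_matrix_fan(x, dim=7, delay=4, rate=0.05):
    """Recurrence matrix using the FAN rule: each time-delay-embedded point
    is marked recurrent with its closest `rate` fraction of other points,
    so the recurrence rate is fixed per row."""
    n = len(x) - (dim - 1) * delay
    # Time-delay embedding: rows are (x[i], x[i+delay], ..., x[i+(dim-1)*delay]).
    emb = np.array([x[i : i + (dim - 1) * delay + 1 : delay] for i in range(n)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    k = max(1, int(rate * n))
    R = np.zeros((n, n), dtype=int)
    for i in range(n):
        R[i, np.argsort(dist[i])[1 : k + 1]] = 1   # skip the point itself
    return R

# Synthetic oscillatory signal standing in for a heart-rate series:
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))
R = recurrence_matrix_fan(x)
```

Because every row has the same number of recurrences, FAN makes recurrence rates comparable across records, which is one reason it is attractive for between-subject HRV comparisons. RQA measures (determinism, laminarity, recurrence times) would then be computed from the diagonal and vertical line structures of `R`.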
Koole, Olivier; Thai, Sopheak; Khun, Kim Eam; Pe, Reaksmey; van Griensven, Johan; Apers, Ludwig; Van den Ende, Jef; Mao, Tan Eang; Lynen, Lutgarde
2011-01-01
Background In 2007 WHO issued a guideline to improve the diagnosis of smear-negative and extrapulmonary tuberculosis (EPTB) in HIV-positive patients. This guideline relies heavily on the acceptance of HIV-testing and availability of chest X-rays. Methods and Findings Cohort study of TB suspects in four tuberculosis (TB) clinics in Phnom Penh, Cambodia. We assessed the operational performance of the guideline, the incremental yield of investigations, and the diagnostic accuracy for smear-negative tuberculosis in HIV-positive patients using culture positivity as reference standard. 1,147 (68.9%) of 1,665 TB suspects presented with unknown HIV status, 1,124 (98.0%) agreed to be tested, 79 (7.0%) were HIV-positive. Compliance with the guideline for chest X-rays and sputum culture requests was 97.1% and 98.3% respectively. Only 35 of 79 HIV-positive patients (44.3%) with a chest X-ray suggestive of TB started TB treatment within 10 days. 105 of 442 HIV-positive TB suspects started TB treatment (56.2% smear-negative pulmonary TB (PTB), 28.6% smear-positive PTB, 15.2% EPTB). The median time to TB treatment initiation was 5 days (IQR: 2–13 days), ranging from 2 days (IQR: 1–11.5 days) for EPTB, over 2.5 days (IQR: 1–4 days) for smear-positive PTB to 9 days (IQR: 3–17 days) for smear-negative PTB. Among the 34 smear-negative TB patients with a confirmed diagnosis, the incremental yield of chest X-ray, clinical suspicion or abdominal ultrasound, and culture was 41.2%, 17.6% and 41.2% respectively. The sensitivity and specificity of the algorithm to diagnose smear-negative TB in HIV-positive TB suspects was 58.8% (95%CI: 42.2%–73.6%) and 79.4% (95%CI: 74.8%–82.4%) respectively. Conclusions Pending point-of-care rapid diagnostic tests for TB disease, diagnostic algorithms are needed. The diagnostic accuracy of the 2007 WHO guideline to diagnose smear-negative TB is acceptable. 
There is, however, reluctance to comply with the guideline in terms of immediate treatment initiation. PMID:21494694
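The sensitivity and specificity reported above are proportions with normal-approximation confidence intervals; the calculation can be sketched as below. The 2x2 counts are hypothetical: 20/34 merely reproduces the reported 58.8% sensitivity, and the specificity counts are invented for illustration.

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% CIs,
    returned as (point estimate, lower, upper)."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp)}

# Hypothetical counts: 20 of 34 confirmed cases detected (58.8% sensitivity);
# the true-negative/false-positive counts below are purely illustrative.
res = sens_spec_ci(tp=20, fn=14, tn=300, fp=78)
```

With small denominators such as 34, an exact (Clopper-Pearson) or Wilson interval would be preferable in practice; the normal approximation is shown only because it matches the style of CI quoted in the abstract.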
Digital Versus Conventional Impressions in Fixed Prosthodontics: A Review.
Ahlholm, Pekka; Sipilä, Kirsi; Vallittu, Pekka; Jakonen, Minna; Kotiranta, Ulla
2018-01-01
To conduct a systematic review to evaluate the evidence of possible benefits and accuracy of digital impression techniques vs. conventional impression techniques. Reports of digital impression techniques versus conventional impression techniques were systematically searched for in the following databases: Cochrane Central Register of Controlled Trials, PubMed, and Web of Science. A combination of controlled vocabulary, free-text words, and well-defined inclusion and exclusion criteria guided the search. Digital impression accuracy is at the same level as conventional impression methods in fabrication of crowns and short fixed dental prostheses (FDPs). For fabrication of implant-supported crowns and FDPs, digital impression accuracy is clinically acceptable. In full-arch impressions, conventional impression methods resulted in better accuracy compared to digital impressions. Digital impression techniques are a clinically acceptable alternative to conventional impression methods in fabrication of crowns and short FDPs. For fabrication of implant-supported crowns and FDPs, digital impression systems also result in clinically acceptable fit. Digital impression techniques are faster and can shorten the operation time. Based on this study, the conventional impression technique is still recommended for full-arch impressions. © 2016 by the American College of Prosthodontists.
NASA Astrophysics Data System (ADS)
Hibino, Daisuke; Hsu, Mingyi; Shindo, Hiroyuki; Izawa, Masayuki; Enomoto, Yuji; Lin, J. F.; Hu, J. R.
2013-04-01
The impact on yield loss of systematic defects that remain after Optical Proximity Correction (OPC) modeling has increased, and achieving an acceptable yield has become more difficult in leading-edge production beyond the 20 nm node. Furthermore, the process window has narrowed because of the complexity of IC design and reduced process margin. In the past, systematic defects were inspected by eye. However, judgment by eye is sometimes unstable and inaccurate. Moreover, an enormous amount of time and labor must be expended on one-by-one judgment of several thousand hot-spot defects. To overcome these difficulties and improve yield and manufacturability, an automated system that can quantify shape differences with high accuracy and speed is needed. Inspection points could be increased to achieve higher yield if the automated system meets this goal. A Defect Window Analysis (DWA) system developed by Hitachi High-Technologies, which uses high-precision contour extraction from SEM images of real silicon and a quantifying method that automatically calculates the difference between defect and non-defect patterns, has been applied to defect judgment in place of judgment by eye. The DWA results, which describe process behavior, can be fed back to design, OPC, or mask. This new methodology and evaluation results are presented in detail in this paper.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
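The accuracy index described above, a linear combination of the marker's sensitivity and specificity, reduces to a rescaled Youden's index when the weights are equal. A minimal sketch of the index and the acceptance comparison follows; the weight `w` and the threshold value are illustrative, not the paper's design constants.

```python
def accuracy_index(sens, spec, w=0.5):
    """Linear combination of sensitivity and specificity; with w = 0.5
    this equals (Youden's J + 1) / 2."""
    return w * sens + (1 - w) * spec

def youden_j(sens, spec):
    """Youden's index: J = sensitivity + specificity - 1."""
    return sens + spec - 1.0

def meets_acceptance(sens, spec, minimal_level, w=0.5):
    """The sequential test's null hypothesis is that accuracy falls below
    the minimal level of acceptance; this checks the observed index."""
    return accuracy_index(sens, spec, w) >= minimal_level
```

The sequential design itself (two-stage stopping rules minimizing the maximum expected sample size) requires the exact binomial calculations developed in the paper and is not reproduced here.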
NASA Technical Reports Server (NTRS)
Grossman, Bernard
1999-01-01
The technical details are summarized below: Compressible and incompressible versions of a three-dimensional unstructured-mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated, and the resulting derivatives have been verified by comparisons with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve this consistency. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. None of the approximations yielded derivatives of acceptable accuracy, and the derivatives were often of incorrect sign. However, numerical experiments indicate that incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes.
An efficient surface parameterization based on a free-form deformation technique has been utilized and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations have been shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing.
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
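The resampling-simulation method described above, censusing impacts at fine resolution and then thinning to coarser systematic intervals, can be sketched as follows. The 2% occurrence rate and the 7,000 m trail are invented stand-ins for the Great Smoky Mountains census data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy census: impact presence recorded every 1 m along a 7,000 m trail
# (an invented stand-in for the 70-trail park census).
census = rng.random(7000) < 0.02

def point_sample(census, interval_m):
    """Systematic point sampling of the census at a fixed interval."""
    return census[::interval_m]

# Frequency-of-occurrence estimates at increasingly coarse intervals;
# accuracy loss grows as the interval widens, as the study reports.
estimates = {i: point_sample(census, i).mean() for i in (1, 20, 100, 500)}
```

In the study proper, each simulated point data set is compared against the census values of frequency of occurrence and lineal extent per impact type; the dictionary above corresponds to the frequency comparison only.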
NASA Astrophysics Data System (ADS)
Rahayu, A. P.; Hartatik, T.; Purnomoadi, A.; Kurnianto, E.
2018-02-01
The aims of this study were to estimate the 305-day first-lactation milk yield of Indonesian Holstein cattle from cumulative monthly and bimonthly test-day records and to analyze its accuracy. The first-lactation records of 258 dairy cows from 2006 to 2014, consisting of 2,571 monthly (MTDY) and 1,281 bimonthly test-day yield (BTDY) records, were used. Milk yields were estimated by the regression method. Correlation coefficients between actual and estimated milk yield by cumulative MTDY were 0.70, 0.78, 0.83, 0.86, 0.89, 0.92, 0.94 and 0.96 for 2-9 months, respectively, while those by cumulative BTDY were 0.69, 0.81, 0.87 and 0.92 for 2, 4, 6 and 8 months, respectively. The accuracy of the fitted regression models (R2) increased with the number of cumulative test days used. The use of 5 cumulative MTDY was considered sufficient for estimating the 305-day first-lactation milk yield, with 80.6% accuracy and a 7% error percentage of estimation. The estimate from MTDY was more accurate than that from BTDY, with a 1.1 to 2% lower error percentage over the same period.
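The regression estimation above can be sketched as a simple linear fit of 305-day yield on cumulative test-day yield. The data below are synthetic stand-ins for the 258-cow records; the coefficients and noise levels are our assumptions, chosen only to illustrate the correlation and error-percentage calculations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: cumulative 5-month test-day yield (kg) vs. actual 305-day
# lactation yield (kg) for 258 hypothetical cows.
cum5 = rng.normal(2000, 300, 258)
actual305 = 1.9 * cum5 + 400 + rng.normal(0, 200, 258)

# Simple linear regression, as in the abstract's estimation method
slope, intercept = np.polyfit(cum5, actual305, 1)
predicted = slope * cum5 + intercept

# Correlation between actual and estimated yield, and error percentage
r = np.corrcoef(actual305, predicted)[0, 1]
error_pct = np.mean(np.abs(predicted - actual305) / actual305) * 100
```

With real records, the same fit would be repeated for each number of cumulative months (2 through 9) to reproduce the increasing correlation series reported in the abstract.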
Face Validity of Test and Acceptance of Generalized Personality Interpretations
ERIC Educational Resources Information Center
Delprato, Dennis J.
1975-01-01
The degree to which variations in the face validity of psychological tests affected students' willingness to accept personality interpretations was studied. Acceptance of personality interpretations was compared for four types of tests which varied in face validity. The relationship between judged accuracy and rated likability of the…
Flight test experience using advanced airborne equipment in a time-based metered traffic environment
NASA Technical Reports Server (NTRS)
Morello, S. A.
1980-01-01
A series of test flights have demonstrated that time-based metering guidance and control was acceptable to pilots and air traffic controllers. The descent algorithm of the technique, with good representation of aircraft performance and wind modeling, yielded arrival time accuracy within 12 sec. It is expected that this will represent significant fuel savings (1) through a reduction of the time error dispersions at the metering fix for the entire fleet, and (2) for individual aircraft as well, through the presentation of guidance for a fuel-efficient descent. Air traffic controller workloads were also reduced, in keeping with the reduction of required communications resulting from the transfer of navigation responsibilities to pilots. A second series of test flights demonstrated that an existing flight management system could be modified to operate in the new mode.
An economical method of analyzing transient motion of gas-lubricated rotor-bearing systems.
NASA Technical Reports Server (NTRS)
Falkenhagen, G. L.; Ayers, A. L.; Barsalou, L. C.
1973-01-01
A method of economically evaluating the hydrodynamic forces generated in a gas-lubricated tilting-pad bearing is presented. The numerical method consists of solving the case of the infinite-width bearing and then converting this solution to the case of the finite bearing by accounting for end leakage. The approximate method is compared to the finite-difference solution of the Reynolds equation and yields acceptable accuracy while running about one hundred times faster. A mathematical model of a gas-lubricated tilting-pad vertical rotor system is developed. The model is capable of analyzing a two-bearing rotor system in which the rotor center of mass is not at midspan, by accounting for gyroscopic moments. The numerical results from the model are compared to actual test data as well as analytical results of other investigators.
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatially and temporally varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for predictive accuracy. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized, and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
The Empirical Foundations of Teledermatology: A Review of the Research Evidence
Shannon, Gary W.; Tejasvi, Trilokraj; Kvedar, Joseph C.; Gates, Michael
2015-01-01
Abstract Introduction: This article presents the scientific evidence for the merit of telemedicine interventions in the diagnosis and management of skin disorders (teledermatology) in the published literature. The impetus for this work derives from the high prevalence of skin disorders, the high cost, the limited availability of dermatologists in certain areas, and the promise of teledermatology to address unmet needs in this area. Materials and Methods: The findings are based on a targeted review of scientific studies published from January 2005 through April 2015. The initial search yielded some 5,020 articles in Google Scholar and 428 in PubMed. A review of the abstracts yielded 71 publications that met the inclusion criteria for this analysis. Evidence is organized according to the following: feasibility and acceptance; intermediate outcomes (use of service, compliance, and diagnostic and treatment concordance and accuracy); outcomes (health improvement and problem resolution); and cost savings. A special section is devoted to studies conducted at the Veterans Health Administration. Results: Definitions of teledermatology varied across a wide spectrum of skin disorders, technologies, diagnostic tools, provider types, settings, and patient populations. Outcome measures included diagnostic concordance, treatment plans, and health. Conclusions: Despite these complexities, sufficient evidence was observed consistently supporting the effectiveness of teledermatology in improving accessibility to specialty care, diagnostic and treatment concordance, and skin care provided by primary care physicians, while also reducing cost. One study reported suboptimal clinical results from teledermatology for patients with pigmented skin lesions. On the other hand, confocal microscopy and advanced dermoscopy improved diagnostic accuracy, especially when rendered by experienced teledermatologists. PMID:26394022
A sensitive chemiluminescent immunoassay to detect Chromotrope FB (Chr FB) in foods.
Xu, Kun; Long, Hao; Xing, Rongge; Yin, Yongmei; Eremin, Sergei A; Meng, Meng; Xi, Rimo
2017-03-01
Chromotrope FB (Chr FB) is a synthetic azo dye permitted for use in foods and medicines. The acceptable daily intake (ADI) of Chr FB in China is 0-0.5 mg/kg. In this study, we synthesized a Chr FB hapten with an amino group to prepare its artificial immunogen. Polyclonal antibodies obtained from New Zealand rabbits were applied to develop an indirect competitive chemiluminescent immunoassay (icCLIA) to detect Chr FB in foods. A horseradish peroxidase (HRP)-luminol-H2O2 system was used to yield the CL signal, with p-iodophenol as an enhancement reagent. The method showed good specificity towards Chr FB and could detect as little as 0.02 ng/mL Chr FB in buffer, 0.07 ng/g in yoghurt candy, 0.07 ng/g in vitamin drink, and 0.13 ng/g in bread. Compared with the HPLC method, the proposed method is more sensitive by two orders of magnitude. The accuracy and precision of this method are acceptable and comparable with the HPLC method. Therefore, the proposed method could be used for rapid screening of Chr FB in the mentioned foodstuffs. Copyright © 2016. Published by Elsevier B.V.
Devito, Dennis P; Kaplan, Leon; Dietl, Rupert; Pfeiffer, Michael; Horne, Dale; Silberstein, Boris; Hardenbrook, Mitchell; Kiriyanthan, George; Barzilay, Yair; Bruskin, Alexander; Sackerer, Dieter; Alexandrovsky, Vitali; Stüer, Carsten; Burger, Ralf; Maeurer, Johannes; Donald, Gordon D; Gordon, Donald G; Schoenmayr, Robert; Friedlander, Alon; Knoller, Nachshon; Schmieder, Kirsten; Pechlivanis, Ioannis; Kim, In-Se; Meyer, Bernhard; Shoham, Moshe
2010-11-15
Retrospective, multicenter study of robotically guided spinal implant insertions. Clinical acceptance of the implants was assessed by intraoperative radiograph and, when available, postoperative computed tomography (CT) scans were used to determine placement accuracy. The aims were to verify the clinical acceptance and accuracy of robotically guided spinal implants and to compare them to those of unguided free-hand procedures. The SpineAssist surgical robot has been used to guide implants and guide-wires to predefined locations in the spine. SpineAssist, which to the best of the authors' knowledge is currently the sole robot providing surgical assistance in positioning tools in the spine, guided over 840 cases in 14 hospitals between June 2005 and June 2009. Clinical acceptance of 3,271 pedicle screws and guide-wires inserted in 635 reported cases was assessed by intraoperative fluoroscopy, while placement accuracy of 646 pedicle screws inserted in 139 patients was measured using postoperative CT scans. Screw placements were found to be clinically acceptable in 98% of the cases when assessed intraoperatively by fluoroscopic images. Measurements derived from postoperative CT scans demonstrated that 98.3% of the screws fell within the safe zone: 89.3% were completely within the pedicle and 9% breached the pedicle by up to 2 mm. The remaining 1.4% of the screws breached between 2 and 4 mm, while only 2 screws (0.3%) deviated by more than 4 mm from the pedicle wall. Neurologic deficits were observed in 4 cases; following revisions, however, no permanent nerve damage was encountered, in contrast to the 0.6% to 5% rate of neurologic damage reported in the literature. SpineAssist offers enhanced performance in spinal surgery when compared to free-hand surgeries, by increasing placement accuracy and reducing neurologic risks. In addition, 49% of the cases reported herein used a percutaneous approach, highlighting the contribution of SpineAssist in procedures without anatomic landmarks.
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims: To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods: Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results: Both algorithms matched point-dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs. 4.0%), with no discernible difference in relative dose distribution accuracy. In our study, convolution-calculated plans yielded a D99% that was 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than that of shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions: Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; its implementation may therefore require a re-evaluation of prescription doses. PMID:29657896
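The gamma analysis mentioned above (1%/1 mm passing criteria) can be illustrated in one dimension. This is a minimal global-gamma sketch over toy dose profiles, not the clinical implementation used in the study; the Gaussian profile and the 0.5% dose offset are our inventions.

```python
import numpy as np

def gamma_1d(ref, evl, dx=1.0, dose_crit=0.01, dist_crit=1.0):
    """Global 1-D gamma index: dose_crit is a fraction of the reference
    maximum (1% here), dist_crit is in mm, dx is the grid spacing in mm."""
    x = np.arange(len(ref)) * dx
    dmax = ref.max()
    g = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(x, ref)):
        dose_term = ((evl - di) / (dose_crit * dmax)) ** 2
        dist_term = ((x - xi) / dist_crit) ** 2
        g[i] = np.sqrt((dose_term + dist_term).min())
    return g

# Toy profiles: evaluated dose is the reference scaled up by 0.5%,
# well inside the 1%/1 mm criteria, so every point should pass (gamma <= 1).
ref = np.exp(-((np.arange(50) - 25) / 8.0) ** 2)
evl = ref * 1.005
passing = (gamma_1d(ref, evl) <= 1).mean() * 100
```

Clinical gamma analysis works on 2D/3D dose grids with interpolation between grid points, but the dose-term/distance-term minimization is the same idea shown here.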
g-Factor of heavy ions: a new access to the fine structure constant.
Shabaev, V M; Glazov, D A; Oreshkina, N S; Volotka, A V; Plunien, G; Kluge, H-J; Quint, W
2006-06-30
A possibility for a determination of the fine structure constant in experiments on the bound-electron g-factor is examined. It is found that studying a specific difference of the g-factors of B- and H-like ions of the same spinless isotope in the Pb region to the currently accessible experimental accuracy of 7 × 10⁻¹⁰ would lead to a determination of the fine structure constant to an accuracy which is better than that of the currently accepted value. Further improvements of the experimental and theoretical accuracy could provide a value of the fine structure constant which is several times more precise than the currently accepted one.
ERIC Educational Resources Information Center
Schumann, Scott; Sibthorp, Jim
2016-01-01
Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skilled development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…
NASA Technical Reports Server (NTRS)
Kranz, David William
2010-01-01
The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles, and to compare and contrast the accuracy of measurements made using these selected materials. The reason for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest rate of accuracy in exacting measurements in comparison to the completed tile bond. The results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments. Hot external aerothermal environments in high-Mach-number flight lead to high structural temperatures. The differences between tile heights from one to another are very critical during these high-Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system. The thermal tiles on the ship are measured to within an accuracy of 0.001 of an inch. The accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that bees' wax yielded a higher rate of accuracy when compared to the baseline test. In addition, testing for experience level in accuracy yielded no evidence of difference. Lastly, the use of the Trammel tool over the Shim Pack yielded variable differences across those tests.
Holz, Elisa Mira; Höhne, Johannes; Staiger-Sälzer, Pit; Tangermann, Michael; Kübler, Andrea
2013-10-01
Connect-Four, a new sensorimotor rhythm (SMR) based brain-computer interface (BCI) gaming application, was evaluated by four severely motor-restricted end-users; two were in the locked-in state and had unreliable eye-movement control. Following the user-centred approach, usability of the BCI prototype was evaluated in terms of effectiveness (accuracy), efficiency (information transfer rate (ITR) and subjective workload) and users' satisfaction. Online performance varied strongly across users and sessions (median accuracy of end-users: A=.65; B=.60; C=.47; D=.77). Our results thus indicated low to medium effectiveness in three end-users and high effectiveness in one end-user. Consequently, ITR was low (0.05-1.44 bits/min). Only two end-users were able to play the game in free mode. Total workload was moderate but varied strongly across sessions. The main sources of workload were mental and temporal demand. Furthermore, frustration contributed to the subjective workload of two end-users. Nevertheless, most end-users accepted the BCI application well and rated satisfaction medium to high. Sources of dissatisfaction were (1) the electrode gel and cap, (2) low effectiveness, (3) time-consuming adjustment and (4) BCI equipment that was not easy to use. All four end-users indicated ease of use as being one of the most important aspects of a BCI. Effectiveness and efficiency are lower compared to applications using the event-related potential as input channel. Nevertheless, the SMR-BCI application was satisfactorily accepted by the end-users, and two of four could imagine using the BCI application in their daily life. Thus, despite moderate effectiveness and efficiency, BCIs might be an option for controlling an application for entertainment. Copyright © 2013 Elsevier B.V. All rights reserved.
Hrabok, Marianne; Brooks, Brian L; Fay-McClymont, Taryn B; Sherman, Elisabeth M S
2014-01-01
The purpose of this article was to investigate the accuracy of the WISC-IV short forms in estimating Full Scale Intelligence Quotient (FSIQ) and General Ability Index (GAI) in pediatric epilepsy. One hundred and four children with epilepsy completed the WISC-IV as part of a neuropsychological assessment at a tertiary-level children's hospital. The clinical accuracy of eight short forms was assessed in two ways: (a) accuracy within +/- 5 index points of FSIQ and (b) the clinical classification rate according to Wechsler conventions. The sample was further subdivided into low FSIQ (≤ 80) and high FSIQ (> 80). All short forms were significantly correlated with FSIQ. Seven-subtest (Crawford et al. [2010] FSIQ) and 5-subtest (BdSiCdVcLn) short forms yielded the highest clinical accuracy rates (77%-89%). Overall, a 2-subtest (VcMr) short form yielded the lowest clinical classification rates for FSIQ (35%-63%). The short form yielding the most accurate estimate of GAI was VcSiMrBd (73%-84%). Short forms show promise as useful estimates. The 7-subtest (Crawford et al., 2010) and 5-subtest (BdSiVcLnCd) short forms yielded the most accurate estimates of FSIQ. VcSiMrBd yielded the most accurate estimate of GAI. Clinical recommendations are provided for use of short forms in pediatric epilepsy.
Using age on clothes size label to estimate weight in emergency paediatric patients.
Elgie, Laura D; Williams, Andrew R
2012-10-01
To study formulae that estimate children's weight from their actual age, and to determine whether the age on their clothes size label can be used in these formulae to estimate weight when the actual age is unknown. The actual age and the age on the clothes label of 188 children were inserted into formulae that estimate children's weight, and the estimates were compared with actual weight. Bland-Altman plots were used to assess the precision and accuracy of each estimate. In all formulae, using the age on the clothes size label provided a more precise estimate than the child's actual age. In emergencies where a child's age is unknown, use of the age on the clothes label in weight-estimating formulae yields acceptable weight estimates. Even where a child's age is known, the age on the clothes label may provide a more accurate and precise weight estimate than the actual age.
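The abstract does not name the weight-estimating formulae used; a widely taught example is the classic APLS rule, weight (kg) = 2 × (age in years + 4). The sketch below pairs it with the Bland-Altman bias and limits of agreement used in the study; the paired weights are hypothetical:

```python
import statistics

def apls_weight_kg(age_years):
    """Classic APLS estimate for roughly ages 1-10: weight (kg) = 2 * (age + 4)."""
    return 2 * (age_years + 4)

def bland_altman(estimates, measured):
    """Mean difference (bias) and 95% limits of agreement for paired values."""
    diffs = [e - m for e, m in zip(estimates, measured)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired data: formula estimates vs. measured weights (kg).
ages = [2, 3, 5, 7, 9]
actual = [12.1, 14.0, 18.5, 22.3, 25.9]
est = [apls_weight_kg(a) for a in ages]
bias, lo, hi = bland_altman(est, actual)
```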
Nagasawa, Shinji; Al-Naamani, Eman; Saeki, Akinori
2018-05-17
Owing to their diverse chemical structures, materials for organic photovoltaic (OPV) applications with a bulk heterojunction framework have evolved greatly over the last two decades, producing numerous organic semiconductors with improved power conversion efficiencies (PCEs). Despite recent rapid progress in materials informatics and data science, data-driven molecular design of OPV materials remains challenging. We report a screening of conjugated molecules for polymer-fullerene OPV applications by supervised learning methods (artificial neural network (ANN) and random forest (RF)). Approximately 1000 experimental parameters, including PCE, molecular weight, and electronic properties, were manually collected from the literature and subjected to machine learning together with digitized chemical structures. In contrast to the low correlation coefficient obtained with the ANN, the RF yields an acceptable accuracy, twice that of random classification. We demonstrate the application of RF screening to the design, synthesis, and characterization of a conjugated polymer, facilitating rapid development of optoelectronic materials.
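As a sketch of the RF screening step, the toy example below trains a random forest to separate "high PCE" from "low PCE" candidates. The descriptors (bandgap_ev, mw_kda), the labelling rule, and the threshold are hypothetical stand-ins for the roughly 1000 hand-collected literature parameters:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
bandgap_ev = rng.uniform(1.2, 2.2, n)   # optical gap (eV), assumed descriptor
mw_kda = rng.uniform(10, 300, n)        # molecular weight (kDa), assumed descriptor
# Toy labelling rule: mid-gap, higher-MW polymers count as "high PCE" (illustrative only).
high_pce = ((np.abs(bandgap_ev - 1.6) < 0.3) & (mw_kda > 80)).astype(int)

X = np.column_stack([bandgap_ev, mw_kda])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, high_pce)
train_acc = clf.score(X, high_pce)
```

On real literature data the honest figure of merit is cross-validated accuracy, not the training accuracy computed here.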
A globally efficient means of distributing UTC time and frequency through GPS
NASA Technical Reports Server (NTRS)
Kusters, John A.; Giffard, Robin P.; Cutler, Leonard S.; Allan, David W.; Miranian, Mihran
1995-01-01
Time and frequency outputs comparable in quality to the best laboratories have been demonstrated on an integrated system suitable for field application on a global basis. The system measures the time difference between 1 pulse-per-second (pps) signals derived from local primary frequency standards and from a multi-channel GPS C/A receiver. The measured data are processed through optimal SA filter algorithms that enhance both the stability and accuracy of GPS timing signals. Experiments were run simultaneously at four different sites. Even with large distances between sites, the overall results show a high degree of cross-correlation of the SA noise. With sufficiently long simultaneous measurement sequences, the data show that determination of the difference in local frequency from an accepted remote standard to better than 1 × 10^-14 is possible. This method yields frequency accuracy, stability, and timing stability comparable to those obtained with more conventional common-view experiments. In addition, this approach provides UTC(USNO MC) in real time to an accuracy better than 20 ns without the problems normally associated with conventional common-view techniques. An experimental tracking loop was also set up to demonstrate the use of enhanced GPS for dissemination of UTC(USNO MC) over a wide geographic area. Properly disciplining a cesium standard with a multi-channel GPS receiver, with additional input from USNO, has been found to permit maintaining a timing precision of better than 10 ns between Palo Alto, CA and Washington, DC.
Weather-based forecasts of California crop yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D B; Cahill, K N; Field, C B
2005-09-26
Crop yield forecasts provide useful information to a range of users. Yields for several crops in California are currently forecast based on field surveys and farmer interviews, while for many crops official forecasts do not exist. As broad-scale crop yields are largely dependent on weather, measurements from existing meteorological stations have the potential to provide a reliable, timely, and cost-effective means to anticipate crop yields. We developed weather-based models of state-wide yields for 12 major California crops (wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios), and tested their accuracy using cross-validation over the 1980-2003 period. Many crops were forecast with high accuracy, as judged by the percent of yield variation explained by the forecast, the number of yields with correctly predicted direction of yield change, or the number of yields with correctly predicted extreme yields. The most successfully modeled crop was almonds, with 81% of yield variance captured by the forecast. Predictions for most crops relied on weather measurements well before harvest time, allowing for lead times that were longer than existing procedures in many cases.
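A minimal version of such a weather-based forecast is a regression of state-wide yield on pre-harvest weather, scored by cross-validation as in the study. Everything below is synthetic; the predictors and coefficients are assumptions chosen only to illustrate the leave-one-out procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 24                               # analogous to the 1980-2003 window
temp = rng.normal(22, 2, years)          # assumed pre-harvest mean temperature (C)
rain = rng.normal(300, 60, years)        # assumed seasonal rainfall (mm)
yield_t = 2.0 - 0.05 * temp + 0.004 * rain + rng.normal(0, 0.05, years)

X = np.column_stack([np.ones(years), temp, rain])
# Leave-one-out cross-validation: predict each year from a model fit on the rest.
preds = np.empty(years)
for i in range(years):
    keep = np.arange(years) != i
    beta, *_ = np.linalg.lstsq(X[keep], yield_t[keep], rcond=None)
    preds[i] = X[i] @ beta
r2_cv = 1 - np.sum((yield_t - preds) ** 2) / np.sum((yield_t - yield_t.mean()) ** 2)
```

The cross-validated R² plays the role of "percent of yield variation explained by the forecast" in the abstract.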
NASA Astrophysics Data System (ADS)
Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary
2014-04-01
Defect review is a time-consuming job, and human error makes results inconsistent. Defects located in don't-care areas, such as dark areas, do not hurt yield and need not be reviewed. Critical-area defects, such as those in clear areas, can impact yield dramatically and require closer attention. With decreasing integrated circuit dimensions, mask inspections routinely detect thousands of defects or more, and traditional manual or simple classification approaches are unable to meet efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution using the image output of Lasertec inspection equipment and Anchor pattern-centric image processing technology. The system can handle large numbers of defects with quick and accurate classification. Our experiments include Die-to-Die and Single-Die modes, in which the classification accuracy reached 87.4% and 93.3%, respectively. No critical or printable defects were missed in our test cases. The rates of missed classifications were 0.25% in Die-to-Die mode and 0.24% in Single-Die mode; such missing rates are encouraging and acceptable for application on a production line. Results can be output and reloaded back into the inspection machine for further review; this step helps users validate uncertain defects with clear, magnified images when the captured images cannot provide enough information for judgment. The system effectively reduces expensive inline defect-review time. As a fully automated inline defect management solution, it is compatible with the current inspection approach and can be integrated with optical simulation, including scoring functions, to guide wafer-level defect inspection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerouali, K; Aubry, J; Doucet, R
2016-06-15
Purpose: To implement the new EBT-XD Gafchromic films for accurate dosimetric and geometric validation of stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) CyberKnife (CK) patient-specific QA. Methods: Film calibration was performed using triple-channel film analysis on an Epson 10000XL scanner. Calibration films were irradiated using a Varian Clinac 21EX flattened beam (0 to 20 Gy) to ensure sufficient dose homogeneity. Films were scanned at a resolution of 0.3 mm, 24 hours post irradiation, following a well-defined protocol. A set of 12 QAs was performed for several types of CK plans: trigeminal neuralgia, brain metastasis, prostate and lung tumors. A custom-made insert for the CK head phantom was manufactured to yield an accurate measured-to-calculated dose registration. When the high-dose region was large enough, absolute dose was also measured with an ionization chamber. Dose calculation was performed using the MultiPlan Ray-tracing algorithm for all cases, since the phantom is mostly made from near water-equivalent plastic. Results: Good agreement (<2%) was found between the dose to the chamber and the film, when a chamber measurement was possible. The average dose difference and standard deviation between film measurements and TPS calculations were 1.75% and 3%, respectively. The geometric accuracy was estimated to be <1 mm, combining robot positioning uncertainty and registration of film to calculated dose. Conclusion: Patient-specific QA measurements using EBT-XD films yielded a full 2D dose plane with high spatial resolution and acceptable dose accuracy. This method is particularly promising for trigeminal neuralgia plan QA, where the positioning of the spatial dose distribution is equally or more important than the absolute delivered dose for achieving clinical goals.
Fendler, Wojciech; Hogendorf, Anna; Szadkowska, Agnieszka; Młynarski, Wojciech
2011-01-01
Self-monitoring of blood glucose (SMBG) is one of the cornerstones of diabetes management. The aims were to evaluate the potential for miscoding of a personal glucometer, to define a target population for a non-coding glucometer among pediatric patients with diabetes, and to assess the accuracy of the Contour TS non-coding system. The potential for miscoding during SMBG was evaluated by means of an anonymous questionnaire, with worst- and best-case scenarios evaluated depending on the response patterns. Testing of the Contour TS system was performed according to guidelines set by the national committee for clinical laboratory standards. The estimated frequency of individuals prone to non-coding ranged from 68.21% (95%CI 60.70-75.72%) to 7.95% (95%CI 3.86-12.31%) for the worst- and best-case scenarios, respectively. Factors associated with an increased likelihood of non-coding were a smaller number of tests per day, a greater number of individuals involved in testing, and self-testing by the patient with diabetes. The Contour TS device showed intra- and inter-assay accuracy of 95%, a linear association with laboratory measurements (R2=0.99, p<0.0001) and a consistent but small bias of -1.12% (95% Confidence Interval -3.27 to 1.02%). Clarke error grid analysis showed 4% of values within the benign error zone (B), with the other measurements yielding an acceptably accurate result (zone A). The Contour TS system showed sufficient accuracy to be safely used in monitoring pediatric diabetic patients. Patients from families with a high throughput of test strips, or where multiple individuals use the same meter for SMBG, are candidates for clinical use of such devices due to an increased risk of calibration errors.
Gjerde, Hallvard; Verstraete, Alain
2010-02-25
To study several methods for estimating the prevalence of high blood concentrations of tetrahydrocannabinol and amphetamine in a population of drug users by analysing oral fluid (saliva). Five methods were compared, including simple calculation procedures dividing the drug concentrations in oral fluid by average or median oral fluid/blood (OF/B) drug concentration ratios or linear regression coefficients, and more complex Monte Carlo simulations. Populations of 311 cannabis users and 197 amphetamine users from the Rosita-2 Project were studied. The results of a feasibility study suggested that the Monte Carlo simulations might give better accuracies than simple calculations if good data on OF/B ratios are available. When using only 20 randomly selected OF/B ratios, a Monte Carlo simulation gave the best accuracy but not the best precision. Dividing by the OF/B regression coefficient gave acceptable accuracy and precision, and was therefore the best method. None of the methods gave acceptable accuracy if the prevalence of high blood drug concentrations was less than 15%. Dividing the drug concentration in oral fluid by the OF/B regression coefficient gave an acceptable estimation of high blood drug concentrations in a population, and may therefore give valuable additional information on possible drug impairment, e.g. in roadside surveys of drugs and driving. If good data on the distribution of OF/B ratios are available, a Monte Carlo simulation may give better accuracy. 2009 Elsevier Ireland Ltd. All rights reserved.
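The Monte Carlo approach described can be sketched as follows: divide each subject's oral-fluid concentration by an OF/B ratio drawn at random from the observed ratio distribution, and repeat many times to get a distribution of prevalence estimates. All distributions and the threshold below are hypothetical stand-ins, not the Rosita-2 values:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical oral-fluid drug concentrations (ng/mL) in a surveyed population.
of_conc = rng.lognormal(mean=3.0, sigma=1.0, size=311)
# Observed OF/B ratios, e.g. 20 randomly sampled values as in the paper's test.
ofb_ratios = rng.lognormal(mean=2.0, sigma=0.5, size=20)
blood_threshold = 3.0  # ng/mL; "high" blood concentration cut-off (assumed)

# Monte Carlo: for each subject, divide the OF level by a ratio drawn at random
# from the observed OF/B sample; each simulation yields one prevalence estimate.
n_sims = 5000
draws = rng.choice(ofb_ratios, size=(n_sims, of_conc.size))
blood_est = of_conc / draws
prevalence = (blood_est > blood_threshold).mean(axis=1)
mean_prev = prevalence.mean()
```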
Charnot-Katsikas, Angella; Tesic, Vera; Boonlayangoor, Sue; Bethel, Cindy; Frank, Karen M
2014-02-01
This study assessed the accuracy of bacterial and yeast identification using the VITEK MS, and the time to reporting of isolates before and after its implementation in routine clinical practice. Three hundred and sixty-two isolates of bacteria and yeast, consisting of a variety of clinical isolates and American Type Culture Collection strains, were tested. Results were compared with reference identifications from the VITEK 2 system and with 16S rRNA sequence analysis. The VITEK MS provided an acceptable identification to species level for 283 (78 %) isolates. Considering organisms for which genus-level identification is acceptable for routine clinical care, 315 isolates (87 %) had an acceptable identification. Six isolates (2 %) were identified incorrectly, five of which were Shigella species. Finally, the time for reporting the identifications was decreased significantly after implementation of the VITEK MS for a total mean reduction in time of 10.52 h (P<0.0001). Overall, accuracy of the VITEK MS was comparable or superior to that from the VITEK 2. The findings were also comparable to other studies examining the accuracy of the VITEK MS, although differences exist, depending on the diversity of species represented as well as on the versions of the databases used. The VITEK MS can be incorporated effectively into routine use in a clinical microbiology laboratory and future expansion of the database should provide improved accuracy for the identification of micro-organisms.
Analysis of pumping tests: Significance of well diameter, partial penetration, and noise
Heidari, M.; Ghiassi, K.; Mehnert, E.
1999-01-01
The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite diameter and partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level in the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis.
Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests were off by a factor of four.
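The NLS machinery itself can be illustrated with the classical Theis solution, which notably neglects the well diameter that this study shows to be important; a finite-diameter model would replace the line-source kernel. Pumping rate, distance, and the true parameters below are assumed for the demonstration:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

Q = 0.01   # pumping rate (m^3/s), assumed
r = 10.0   # distance to observation point (m), assumed

def theis_drawdown(t, T, S):
    """Theis solution for a fully penetrating well of negligible diameter."""
    u = r**2 * S / (4 * T * t)
    return Q / (4 * np.pi * T) * exp1(u)

# Synthetic "observed" drawdowns from known parameters, then recover them by NLS.
T_true, S_true = 1e-3, 1e-4          # transmissivity (m^2/s), storativity (-)
t = np.logspace(2, 5, 30)            # 100 s to about one day
s_obs = theis_drawdown(t, T_true, S_true)
(T_fit, S_fit), _ = curve_fit(theis_drawdown, t, s_obs, p0=(5e-4, 5e-5))
```

With noiseless synthetic drawdowns the fit recovers transmissivity and storativity essentially exactly; the paper's point is that real estimates degrade with noise and with mis-specified well geometry.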
Wong, Charles; Teitge, Braden; Ross, Marshall; Young, Paul; Robertson, Helen Lee; Lang, Eddy
2018-06-01
Point-of-care ultrasound (POCUS) has been suggested as an initial investigation in the management of renal colic. Our objectives were: 1) to determine the accuracy of POCUS for the diagnosis of nephrolithiasis and 2) to assess its prognostic value in the management of renal colic. The review protocol was registered to the PROSPERO database (CRD42016035331). An electronic database search of MEDLINE, Embase, and PubMed was conducted utilizing subject headings, keywords, and synonyms that address our research question. Bibliographies of included studies and narrative reviews were manually examined. Studies of adult emergency department patients with renal colic symptoms were included. Any degree of hydronephrosis was considered a positive POCUS finding. Accepted criterion standards were computed tomography evidence of renal stone or hydronephrosis, direct stone visualization, or surgical findings. Screening of abstracts, quality assessment with the QUADAS-2 instrument, and data extraction were performed by two reviewers, with discrepancies resolved by consensus with a third reviewer. Test performance was assessed by pooled sensitivity and specificity, calculated likelihood ratios, and a summary receiver operator curve (SROC). The secondary objective of prognostic value was reported as a narrative summary. The electronic search yielded 627 unique titles. After relevance screening, 26 papers underwent full-text review, and nine articles met all inclusion criteria. Of these, five high-quality studies (N = 1,773) were included in the meta-analysis for diagnostic accuracy and the remaining yielded data on prognostic value. The pooled results for sensitivity and specificity were 70.2% (95% confidence interval [CI] = 67.1%-73.2%) and 75.4% (95% CI = 72.5%-78.2%), respectively. The calculated positive and negative likelihood ratios were 2.85 and 0.39. The SROC generated did not show evidence of a threshold effect. 
Two of the studies in the meta-analysis found that the finding of moderate or greater hydronephrosis yielded a specificity of 94.4% (95% CI = 92.7%-95.8%). Four studies examining prognostic value noted a higher likelihood of a large stone when positive POCUS findings were present. The largest randomized trial showed lower cumulative radiation exposure and no increase in adverse events in those who received POCUS investigation as the initial renal colic investigation. Point-of-care ultrasound has modest diagnostic accuracy for diagnosing nephrolithiasis. The finding of moderate or severe hydronephrosis is highly specific for the presence of any stone, and the presence of any hydronephrosis is suggestive of a larger (>5 mm) stone in those presenting with renal colic. © 2018 by the Society for Academic Emergency Medicine.
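The reported likelihood ratios follow directly from the pooled sensitivity and specificity; the small discrepancy in LR- against the reported 0.39 reflects rounding of the pooled estimates:

```python
sens = 0.702  # pooled sensitivity from the meta-analysis
spec = 0.754  # pooled specificity

lr_pos = sens / (1 - spec)        # positive likelihood ratio
lr_neg = (1 - sens) / spec        # negative likelihood ratio

def post_test_prob(pretest, lr):
    """Post-test probability from a pre-test probability and a likelihood ratio."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)
```

For example, post_test_prob(0.5, lr_pos) shows how a positive POCUS shifts a 50% pre-test probability to roughly 74%.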
Corn and soybean Landsat MSS classification performance as a function of scene characteristics
NASA Technical Reports Server (NTRS)
Batista, G. T.; Hixson, M. M.; Bauer, M. E.
1982-01-01
In order to fully utilize remote sensing to inventory crop production, it is important to identify the factors that affect the accuracy of Landsat classifications. The objective of this study was to investigate the effect of scene characteristics involving crop, soil, and weather variables on the accuracy of Landsat classifications of corn and soybeans. Segments sampling the U.S. Corn Belt were classified using a Gaussian maximum likelihood classifier on multitemporally registered data from two key acquisition periods. Field size had a strong effect on classification accuracy with small fields tending to have low accuracies even when the effect of mixed pixels was eliminated. Other scene characteristics accounting for variability in classification accuracy included proportions of corn and soybeans, crop diversity index, proportion of all field crops, soil drainage, slope, soil order, long-term average soybean yield, maximum yield, relative position of the segment in the Corn Belt, weather, and crop development stage.
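A Gaussian maximum likelihood classifier of the kind used here fits a mean vector and covariance matrix per class and assigns each pixel to the class with the highest Gaussian log-likelihood. The sketch below uses two synthetic "spectral" classes standing in for corn and soybean pixels; real multitemporal Landsat MSS features would replace them:

```python
import numpy as np

def gaussian_ml_fit(X, y):
    """Per-class mean and covariance for a Gaussian maximum likelihood classifier."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def gaussian_ml_predict(X, params):
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, cov = params[c]
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = X - mu
        # Log-likelihood up to a constant shared by all classes.
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return np.array(classes)[np.argmax(scores, axis=0)]

# Two synthetic two-band "spectral" classes standing in for corn and soybeans.
rng = np.random.default_rng(3)
corn = rng.multivariate_normal([0.30, 0.55], [[0.002, 0], [0, 0.002]], 300)
soy = rng.multivariate_normal([0.45, 0.40], [[0.002, 0], [0, 0.002]], 300)
X = np.vstack([corn, soy])
y = np.array([0] * 300 + [1] * 300)
acc = (gaussian_ml_predict(X, gaussian_ml_fit(X, y)) == y).mean()
```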
Abdelrahman, Mostafa; Al-Sadi, Abdullah M; Pour-Aboughadareh, Alireza; Burritt, David J; Tran, Lam-Son Phan
2018-03-12
Developing more crops able to sustainably produce high yields when grown under biotic/abiotic stresses is an important goal, if crop production and food security are to be guaranteed in the face of ever-increasing human population and unpredictable global climatic conditions. However, conventional crop improvement, through random mutagenesis or genetic recombination, is time-consuming and cannot keep pace with increasing food demands. Targeted genome editing (GE) technologies, especially clustered regularly interspaced short palindromic repeats (CRISPR)/(CRISPR)-associated protein 9 (Cas9), have great potential to aid in the breeding of crops that are able to produce high yields under conditions of biotic/abiotic stress. This is due to their high efficiency, accuracy and low risk of off-target effects, compared with conventional random mutagenesis methods. The use of CRISPR/Cas9 system has grown very rapidly in recent years with numerous examples of targeted mutagenesis in crop plants, including gene knockouts, modifications, and the activation and repression of target genes. The potential of the GE approach for crop improvement has been clearly demonstrated. However, the regulation and social acceptance of GE crops still remain a challenge. In this review, we evaluate the recent applications of the CRISPR/Cas9-mediated GE, as a means to produce crop plants with greater resilience to the stressors they encounter when grown under increasing stressful environmental conditions. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Myakalwar, Ashwin Kumar; Sreedhar, S.; Barman, Ishan; Dingari, Narahara Chari; Rao, S. Venugopal; Kiran, P. Prem; Tewari, Surya P.; Kumar, G. Manoj
2012-01-01
We report the effectiveness of laser-induced breakdown spectroscopy (LIBS) in probing the content of pharmaceutical tablets and also investigate its feasibility for routine classification. This method is particularly beneficial in applications where its exquisite chemical specificity and suitability for remote and on-site characterization significantly improve the speed and accuracy of the quality control and assurance process. Our experiments reveal that in addition to carbon, hydrogen, nitrogen and oxygen, which can be primarily attributed to the active pharmaceutical ingredients, specific inorganic atoms were also present in all the tablets. Initial attempts at classification by a ratiometric approach using oxygen-to-nitrogen compositional values yielded an optimal value (at 746.83 nm) with the least relative standard deviation but nevertheless failed to provide an acceptable classification. To overcome this bottleneck in the detection process, two chemometric algorithms, principal component analysis (PCA) and soft independent modeling of class analogy (SIMCA), were implemented to exploit the multivariate nature of the LIBS data, demonstrating that LIBS has the potential to differentiate and discriminate among pharmaceutical tablets. We report excellent prospective classification accuracy using supervised classification via the SIMCA algorithm, demonstrating its potential for future applications in process analytical technology, especially for fast on-line process control monitoring in the pharmaceutical industry. PMID:22099648
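The PCA step can be sketched in a few lines: centre the spectra, take the SVD, and project onto the leading components, where class structure that a single line ratio misses becomes separable. The spectra below are synthetic, and the nearest-centroid rule is a simplified stand-in for SIMCA, which builds a separate PCA model per class:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy LIBS spectra: 60 channels, two tablet classes differing at several
# correlated emission channels (illustrative, not real line positions).
n, ch = 80, 60
spectra = rng.normal(0, 0.05, (n, ch))
labels = np.repeat([0, 1], n // 2)
spectra[labels == 1, 10:15] += 0.3   # class 1 enhanced at some channels
spectra[labels == 0, 40:44] += 0.3   # class 0 enhanced elsewhere

# PCA by SVD of the mean-centred spectra.
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T               # projection onto the first two components

# Nearest-centroid classification in the 2-D PCA space.
c0 = scores[labels == 0].mean(axis=0)
c1 = scores[labels == 1].mean(axis=0)
pred = (np.linalg.norm(scores - c1, axis=1)
        < np.linalg.norm(scores - c0, axis=1)).astype(int)
acc = (pred == labels).mean()
```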
Modeling of Turbulent Natural Convection in Enclosed Tall Cavities
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.
2017-12-01
It was shown in our previous work (J. Appl. Mech. Tech. Phys. 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two- and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 10^14. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1:4, 1:10, and 1:28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1:28.6), DNS results (aspect ratio of 1:4), and an empirical formula (aspect ratio of 1:10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with the other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for aspect ratios of up to 1:10. For higher aspect ratios, the number of grid cells required to achieve the prescribed accuracy grows significantly.
Yield estimation of corn with multispectral data and the potential of using imaging spectrometers
NASA Astrophysics Data System (ADS)
Bach, Heike
1997-05-01
Within the framework of the Special Yield Estimation, a regular procedure conducted for the European Union to estimate agricultural yield more accurately, a project was carried out for the state ministry for Rural Environment, Food and Forestry of Baden-Wuerttemberg, Germany, to test remote sensing data combined with advanced yield formation models for accuracy and timeliness of corn yield estimation. The methodology employs field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data, and an agrometeorological plant-production model is used for yield prediction. Based solely on 4 LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results independently gathered within the Special Yield Estimation for 23 test fields in the Upper Rhine Valley. The agreement between LANDSAT-based estimates and the Special Yield Estimation shows a relative error of 2.3 percent. Comparison of the results for single fields shows that, six weeks before harvest, the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.
31 CFR 340.8 - Acceptance of bids.
Code of Federal Regulations, 2010 CFR
2010-07-01
... be determined by reference to a specially prepared table of bond yields, a copy of which will be made... representative, will notify any successful bidder of acceptance in the manner and form specified in the public...
Direct behavior rating as a school-based behavior universal screener: replication across sites.
Kilgus, Stephen P; Riley-Tillman, T Chris; Chafouleas, Sandra M; Christ, Theodore J; Welsh, Megan E
2014-02-01
The purpose of this study was to evaluate the utility of Direct Behavior Rating Single Item Scale (DBR-SIS) targets of disruptive, engaged, and respectful behavior within school-based universal screening. Participants included 31 first-, 25 fourth-, and 23 seventh-grade teachers and their 1108 students, sampled from 13 schools across three geographic locations (northeast, southeast, and midwest). Each teacher rated approximately 15 of their students across three measures, including DBR-SIS, the Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007), and the Student Risk Screening Scale (Drummond, 1994). Moderate to high bivariate correlations and area under the curve statistics supported concurrent validity and diagnostic accuracy of DBR-SIS. Receiver operating characteristic curve analyses indicated that although respectful behavior cut scores recommended for screening remained constant across grade levels, cut scores varied for disruptive behavior and academic engaged behavior. Specific cut scores for first grade included 2 or less for disruptive behavior, 7 or greater for academically engaged behavior, and 9 or greater for respectful behavior. In fourth and seventh grades, cut scores changed to 1 or less for disruptive behavior and 8 or greater for academically engaged behavior, and remained the same for respectful behavior. Findings indicated that disruptive behavior was particularly appropriate for use in screening at first grade, whereas academically engaged behavior was most appropriate at both fourth and seventh grades. Each set of cut scores was associated with acceptable sensitivity (.79-.87), specificity (.71-.82), and negative predictive power (.94-.96), but low positive predictive power (.43-.44). 
DBR-SIS multiple gating procedures, through which students were only considered at risk overall if they exceeded cut scores on 2 or more DBR-SIS targets, were also determined acceptable in first and seventh grades, as the use of both disruptive behavior and academically engaged behavior in defining risk yielded acceptable conditional probability indices. Overall, the current findings are consistent with previous research, yielding further support for the DBR-SIS as a universal screener. Limitations, implications for practice, and directions for future research are discussed. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
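The reported pattern of good sensitivity and specificity but low positive predictive power is a base-rate effect, reproducible with Bayes' rule. The prevalence below (18%) is an assumption chosen for illustration; sensitivity and specificity are taken from the reported ranges:

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV from sensitivity, specificity, and base rate via Bayes' rule."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    fn = (1 - sens) * prevalence
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Assumed 18% base rate of behavioral risk; sens/spec from the reported ranges.
ppv, npv = predictive_values(0.83, 0.77, 0.18)
```

This lands PPV near the reported .43-.44 and NPV within the reported .94-.96, despite acceptable sensitivity and specificity, which is the usual outcome when a screener is applied at a modest base rate of risk.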
Anatomy and dry weight yields of two Populus clones grown under intensive culture.
John B. Crist; David H. Dawson
1975-01-01
Two Populus clones grown for short rotations at three dense planting spacings produced some extremely high yields of material of acceptable quality. However, variation in yields and quality illustrates that selection of genetic material and the cultural regime under which a species is grown are significant factors that must be determined in maximum-yield systems....
Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy
2015-01-01
We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total population size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
NASA Astrophysics Data System (ADS)
Krupinski, Elizabeth A.; Berbaum, Kevin S.; Caldwell, Robert; Schartz, Kevin M.
2012-02-01
Radiologists are reading more cases with more images, especially in CT and MRI, and are thus working longer hours than ever before. Concerns have been raised regarding fatigue and whether it impacts diagnostic accuracy. This study measured the impact of reader visual fatigue by assessing symptoms, visual strain via dark focus of accommodation, and diagnostic accuracy. Twenty radiologists and 20 radiology residents were given two diagnostic performance tests searching CT chest sequences for a solitary pulmonary nodule before (rested) and after (tired) a day of clinical reading. 10 cases used free search and navigation, and the other 100 cases used a preset scrolling speed and duration. Subjects filled out the Swedish Occupational Fatigue Inventory (SOFI) and the oculomotor strain subscale of the Simulator Sickness Questionnaire (SSQ) before each session. Accuracy was measured using ROC techniques. Swensson's ROC technique yielded an area = 0.86 rested vs. 0.83 tired, p (one-tailed) = 0.09; Swensson's LROC technique yielded an area = 0.73 rested vs. 0.66 tired, p (one-tailed) = 0.09; and Swensson's localization accuracy technique yielded an area = 0.77 rested vs. 0.72 tired, p (one-tailed) = 0.13. Subjective measures of fatigue increased significantly from early to late reading. To date, the results support our findings with static images and detection of bone fractures. Radiologists at the end of a long work day experience greater levels of measurable visual fatigue or strain, contributing to a decrease in diagnostic accuracy. The decrease in accuracy was not, however, as great as with static images.
Anesthesia Recordkeeping: Accuracy of Recall with Computerized and Manual Entry Recordkeeping
ERIC Educational Resources Information Center
Davis, Thomas Corey
2011-01-01
Introduction: Anesthesia information management systems are rapidly gaining widespread acceptance. Aggressively promoted as an improvement to manual-entry recordkeeping systems in the areas of accuracy, quality improvement, billing and vigilance, these systems record all patient vital signs and parameters, providing a legible hard copy and…
Accuracy of activPAL Self-Attachment Methods
ERIC Educational Resources Information Center
Kringen, Nina L.; Healy, Genevieve N.; Winkler, Elisabeth A. H.; Clark, Bronwyn K.
2016-01-01
This study examined the accuracy of self-attachment of the activPAL activity monitor. A convenience sample of 50 participants self-attached the monitor after being presented with written material only (WMO) and then written and video (WV) instructions; and completed a questionnaire regarding the acceptability of the instructional methods.…
Field design factors affecting the precision of ryegrass forage yield estimation
USDA-ARS?s Scientific Manuscript database
Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...
NASA Technical Reports Server (NTRS)
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
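Acceptance sampling by variables accepts or rejects a lot from the sample mean and standard deviation rather than from a count of defectives; a common single-limit form accepts when (USL − x̄)/s meets an acceptability constant k. A Monte Carlo sketch of such a plan's operating behavior, with a hypothetical plan (n = 10, k = 1.72) and made-up process parameters, not NESC's calculators:

```python
import random
import statistics

def accept_lot(n, k, usl, mu, sigma, rng):
    """One variables-plan decision: accept if (USL - xbar)/s >= k."""
    xs = [rng.gauss(mu, sigma) for _ in range(n)]
    xbar, s = statistics.fmean(xs), statistics.stdev(xs)
    return (usl - xbar) / s >= k

def acceptance_probability(n, k, usl, mu, sigma, trials=20000, seed=1):
    """Monte Carlo estimate of the plan's probability of acceptance."""
    rng = random.Random(seed)
    return sum(accept_lot(n, k, usl, mu, sigma, rng) for _ in range(trials)) / trials

# Hypothetical plan evaluated against a good and a poor process quality:
good = acceptance_probability(n=10, k=1.72, usl=100.0, mu=95.0, sigma=2.0)
poor = acceptance_probability(n=10, k=1.72, usl=100.0, mu=98.0, sigma=2.0)
print(round(good, 2), round(poor, 2))
```

Sweeping the true process mean traces out the plan's operating characteristic curve, which is essentially what an accuracy test of a sampling-plan calculator compares against tabulated values.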
31 CFR 356.20 - How does the Treasury determine auction awards?
Code of Federal Regulations, 2012 CFR
2012-07-01
(Fragmentary excerpt) …the yield or discount rate for the securities we are auctioning… accepting bids at the high yield… at, but not above, par when evaluated at the yield of awards to successful competitive bidders.
Tekin, Eylul; Roediger, Henry L
2017-01-01
Researchers use a wide range of confidence scales when measuring the relationship between confidence and accuracy in reports from memory, with the highest number usually representing the greatest confidence (e.g., 4-point, 20-point, and 100-point scales). The assumption seems to be that the range of the scale has little bearing on the confidence-accuracy relationship. In two old/new recognition experiments, we directly investigated this assumption using word lists (Experiment 1) and faces (Experiment 2) by employing 4-, 5-, 20-, and 100-point scales. Using confidence-accuracy characteristic (CAC) plots, we asked whether confidence ratings would yield similar CAC plots, indicating comparability in use of the scales. For the comparisons, we divided 100-point and 20-point scales into bins of either four or five and asked, for example, whether confidence ratings of 4, 16-20, and 76-100 would yield similar values. The results show that, for both types of material, the different scales yield similar CAC plots. Notably, when subjects express high confidence, regardless of which scale they use, they are likely to be very accurate (even though they studied 100 words and 50 faces in each list in 2 experiments). The scales seem convertible from one to the other, and choice of scale range probably does not affect research into the relationship between confidence and accuracy. High confidence indicates high accuracy in recognition in the present experiments.
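Building a CAC plot amounts to binning responses by confidence and computing the proportion correct within each bin, which is what allows a 100-point scale to be compared against a 4- or 5-point one. A minimal sketch with toy data (the bin count, scale range, and responses are illustrative):

```python
from collections import defaultdict

def cac_points(trials, n_bins=5, scale_max=100):
    """Confidence-accuracy characteristic: mean accuracy per confidence bin.

    trials: list of (confidence, correct) pairs, confidence on 1..scale_max,
    correct coded 1/0.
    """
    width = scale_max / n_bins
    hits, counts = defaultdict(int), defaultdict(int)
    for conf, correct in trials:
        b = min(int((conf - 1) // width), n_bins - 1)  # bin index 0..n_bins-1
        hits[b] += correct
        counts[b] += 1
    return {b: hits[b] / counts[b] for b in sorted(counts)}

# Toy data: high-confidence responses (76-100) are mostly correct.
data = [(90, 1), (85, 1), (95, 1), (80, 0), (30, 0), (25, 1), (10, 0), (55, 1)]
print(cac_points(data))
```

Re-running the same function with n_bins=4 or a different scale_max is the scale-comparability check: if scales are interchangeable, corresponding bins should show similar accuracy.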
Random Forests for Global and Regional Crop Yield Predictions.
Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung
2016-01-01
Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato, with multiple linear regression (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed the MLR benchmarks in all performance statistics compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases, whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales owing to its high accuracy and precision, ease of use, and utility in data analysis. RF may, however, lose accuracy when predicting the extreme ends of, or responses beyond the boundaries of, the training data.
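The RMSE-as-a-percentage-of-mean-observed-yield statistic used above to compare models can be computed directly; the yields and predictions below are hypothetical, chosen only to show an RF-like model beating an MLR-like one on this metric:

```python
import math

def relative_rmse(observed, predicted):
    """RMSE expressed as a percentage of the mean observed yield."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100 * rmse / (sum(observed) / n)

obs = [8.0, 9.5, 7.2, 10.1, 8.8]   # hypothetical observed yields, t/ha
rf  = [8.3, 9.1, 7.5, 9.8, 9.0]    # hypothetical RF-style predictions
mlr = [9.5, 8.0, 8.9, 8.5, 10.2]   # hypothetical MLR-style predictions
print(relative_rmse(obs, rf) < relative_rmse(obs, mlr))  # → True
```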
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
NASA Astrophysics Data System (ADS)
Bach, Heike
1998-07-01
In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the 'Special Yield Estimation', a regular procedure conducted for the European Union to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant production model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and the Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance yield prediction with remote sensing.
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Akgöl, Batuhan; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2017-02-01
Early generation genomic selection is superior to conventional phenotypic selection in line breeding and can be strongly improved by including additional information from preliminary yield trials. The selection of lines that enter resource-demanding multi-environment trials is a crucial decision in every line breeding program, as a large share of resources is allocated to thoroughly testing these potential varietal candidates. We compared conventional phenotypic selection with various genomic selection approaches across multiple years, as well as the merit of integrating phenotypic information from preliminary yield trials into the genomic selection framework. The prediction accuracy using only phenotypic data was rather low (r = 0.21) for grain yield but could be improved by modeling genetic relationships in unreplicated preliminary yield trials (r = 0.33). Genomic selection models were nevertheless superior to conventional phenotypic selection for predicting grain yield performance of lines across years (r = 0.39). We subsequently simplified the problem of predicting untested lines in untested years to predicting tested lines in untested years by combining breeding values from preliminary yield trials with predictions from genomic selection models through a heritability index. This genomic assisted selection led to a 20% increase in prediction accuracy, which could be further enhanced by an appropriate marker selection, for both grain yield (r = 0.48) and protein content (r = 0.63). The easy-to-implement and robust genomic assisted selection thus gave a higher prediction accuracy than either conventional phenotypic or genomic selection alone. The proposed method takes the complex inheritance of both low- and high-heritability traits into account and appears capable of supporting breeders in their selection decisions to develop improved varieties more efficiently.
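The combination step can be pictured as an index that weights the preliminary-trial breeding value by its heritability and gives the remaining weight to the genomic prediction. This is an illustrative simplification of a heritability index, not necessarily the paper's exact formulation, and the numbers are hypothetical:

```python
def blended_prediction(phenotypic_bv, genomic_pred, h2):
    """Blend a line's preliminary-trial breeding value with its genomic
    prediction, weighting the phenotype by the trial heritability h2.
    (Illustrative index; the paper's exact index may differ.)"""
    return h2 * phenotypic_bv + (1 - h2) * genomic_pred

# Hypothetical line: a low-heritability trial leans on the genomic prediction.
print(blended_prediction(phenotypic_bv=1.2, genomic_pred=0.4, h2=0.3))
```

As h2 rises toward 1, the index trusts the observed trial data more; as it falls, the genomic prediction dominates, which matches the intuition behind combining the two sources.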
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMU) which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications, dynamic model compensation (DMC), is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro platform alignment within 0.01 deg and an accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
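A first-order Gauss-Markov process, the unknown-acceleration model referred to above, is simple to simulate: an exponentially correlated random sequence with correlation time tau and steady-state standard deviation sigma. The parameter values below are arbitrary placeholders, not the mission's values:

```python
import math
import random

def gauss_markov(tau, sigma, dt, n, seed=0):
    """Discrete-time first-order Gauss-Markov process: exponentially
    correlated noise with correlation time tau and steady-state
    standard deviation sigma, sampled every dt."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)               # state transition factor
    q = sigma * math.sqrt(1.0 - phi * phi)  # driving-noise scale keeping
                                            # the steady-state std at sigma
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Hypothetical unmodeled acceleration: 10-minute correlation time,
# 1e-6 (arbitrary units) steady-state level, sampled each minute.
path = gauss_markov(tau=600.0, sigma=1e-6, dt=60.0, n=1000)
```

In an estimator, the same phi and q enter the state transition and process-noise matrices, which is how the filter "compensates" for the unmodeled acceleration.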
User acceptance of intelligent avionics: A study of automatic-aided target recognition
NASA Technical Reports Server (NTRS)
Becker, Curtis A.; Hayes, Brian C.; Gorman, Patrick C.
1991-01-01
User acceptance of new support systems has typically been evaluated after the systems were specified, designed, and built. The current study assesses user acceptance of an Automatic-Aided Target Recognition (ATR) system using an emulation of such a proposed system. The detection accuracy and false alarm level of the ATR system were varied systematically, and subjects rated the tactical value of systems exhibiting different performance levels. Both detection accuracy and false alarm level affected the subjects' ratings. The data from two experiments suggest a cut-off point in ATR performance below which the subjects saw little tactical value in the system. An ATR system seems to have obvious tactical value only if it functions at a correct detection rate of 0.7 or better with a false alarm level of 0.167 false alarms per square degree or fewer.
NASA Astrophysics Data System (ADS)
Wijesingha, J. S. J.; Deshapriya, N. L.; Samarakoon, L.
2015-04-01
Billions of people in the world depend on rice as a staple food and as an income-generating crop. Asia is the leader in rice cultivation, and it is necessary to maintain an up-to-date rice-related database to ensure food security as well as economic development. This study investigates the general applicability of the high temporal resolution Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m gridded vegetation product for monitoring rice crop growth, mapping rice crop acreage, and analyzing crop yield at the province level. The MODIS 250 m Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) time series data, field data, and crop calendar information were utilized in this research in Sa Kaeo Province, Thailand. The following methodology was used: (1) data pre-processing and rice plant growth analysis using Vegetation Indices (VI); (2) extraction of rice acreage and start-of-season dates from VI time series data; (3) accuracy assessment; and (4) yield analysis with MODIS VI. The results show a direct relationship between rice plant height and MODIS VI. The crop calendar information and the NDVI time series smoothed with the Whittaker smoother gave accurate rice acreage estimates (86% area accuracy and 75% classification accuracy). Point-level yield analysis showed that the MODIS EVI is highly correlated with rice yield, and yield prediction using the maximum EVI in the rice cycle predicted yield with an average prediction error of 4.2%. This study shows the immense potential of the MODIS gridded vegetation product for keeping an up-to-date Geographic Information System of rice cultivation.
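The Whittaker smoother applied to the NDVI series is a penalized least-squares fit. A self-contained sketch with a first-order difference penalty (the remote-sensing literature more often uses a second-order penalty; first order is used here because it keeps the linear solve tridiagonal), with a made-up NDVI series:

```python
def whittaker_smooth(y, lam):
    """Whittaker smoother with a first-order difference penalty:
    minimizes sum (z-y)^2 + lam * sum (z[i+1]-z[i])^2 by solving the
    tridiagonal system (I + lam*D'D) z = y with the Thomas algorithm."""
    n = len(y)
    # Tridiagonal coefficients of I + lam*D'D for first-order differences
    a = [-lam] * n                    # sub-diagonal (a[0] unused)
    c = [-lam] * n                    # super-diagonal (c[-1] unused)
    b = [1 + 2 * lam] * n
    b[0] = b[-1] = 1 + lam            # boundary rows of D'D have a single 1
    # Thomas algorithm: forward sweep then back substitution
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], y[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (y[i] - a[i] * dp[i - 1]) / m
    z = [0.0] * n
    z[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        z[i] = dp[i] - cp[i] * z[i + 1]
    return z

# Noisy NDVI-like series: smoothing pulls single-date dropouts (e.g. cloud
# contamination) toward their neighbors.
ndvi = [0.2, 0.3, 0.1, 0.5, 0.6, 0.65, 0.3, 0.7, 0.6, 0.4]
smooth = whittaker_smooth(ndvi, lam=5.0)
```

The single parameter lam trades fidelity against smoothness, which is what makes the smoother convenient for batch processing of per-pixel time series.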
High-yield positron systems for linear colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clendenin, J.E.
1989-04-01
Linear colliders, such as the SLC, are among those accelerators for which a high-yield positron source operating at the repetition rate of the accelerator is desired. The SLC, having electron energies up to 50 GeV, presents the possibility of generating positron bunches with useful charge even exceeding that of the initial electron bunch. The exact positron yield to be obtained depends on the particular capture, transport, and damping system employed. Using 31 GeV electrons impinging on a W-type converter, with the phase-space at the target matched to the acceptance of the capture rf section, the SLC source is capable of producing, for every electron, up to two positrons within the acceptance of the positron damping ring. The design of this source and the performance of the positron system as built are described. Also, future prospects and limitations for high-yield positron systems are discussed. 11 refs., 5 figs., 3 tabs.
Search for η-mesic helium using WASA-at-COSY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moskal, P.; Institut fuer Kernphysik and Juelich Center for Hadron Physics, Forschungszentrum Juelich, Juelich
2010-08-05
The installation of the WASA detector at the cooler synchrotron COSY opened the possibility to search for η-mesic helium with high statistics and high acceptance. A search for the ⁴He-η bound state is conducted via an exclusive measurement of the excitation function for the dd → ³He p π⁻ reaction, varying the beam momentum continuously around the threshold for the dd → ⁴He η reaction. Ramping the beam momentum and taking advantage of the large acceptance of the WASA detector make it possible to minimize systematic uncertainties.
Rational calculation accuracy in acousto-optical matrix-vector processor
NASA Astrophysics Data System (ADS)
Oparin, V. V.; Tigin, Dmitry V.
1994-01-01
The high speed of parallel computation in a comparatively small processor, together with acceptable power consumption, makes the acousto-optic matrix-vector multiplier (AOMVM) attractive for processing large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. Reduced accuracy requirements, however, allow considerable simplification of the AOMVM architecture and a reduction of the demands on its components.
1-D grating based SPR biosensor for the detection of lung cancer biomarkers using Vroman effect
NASA Astrophysics Data System (ADS)
Teotia, Pradeep Kumar; Kaler, R. S.
2018-01-01
A grating-based surface plasmon resonance waveguide biosensor is reported for the detection of lung cancer biomarkers using the Vroman effect. The proposed grating-based multilayered biosensor is designed for high detection accuracy for epidermal growth factor receptor (EGFR) and is analysed to show high detection accuracy with acceptable sensitivity for both cancer biomarkers. The introduction of a periodic grating with multilayer metals generates a strong resonance that makes early detection of cancerous cells possible. Using the finite difference time domain method, the resonance wavelength of the biosensor is observed to red-shift with variations of the refractive index due to the presence of either cancer biomarker. The reported detection accuracy and sensitivity of the proposed biosensor are quite acceptable for both lung cancer biomarkers, i.e., carcinoembryonic antigen (CEA) and epidermal growth factor receptor (EGFR), offering label-free early detection of lung cancer.
Genotyping by sequencing for genomic prediction in a soybean breeding population.
Jarquín, Diego; Kocak, Kyle; Posadas, Luis; Hyma, Katie; Jedlicka, Joseph; Graef, George; Lorenz, Aaron
2014-08-29
Advances in genotyping technology, such as genotyping by sequencing (GBS), are making genomic prediction more attractive to reduce breeding cycle times and costs associated with phenotyping. Genomic prediction and selection have been studied in several crop species, but no reports exist in soybean. The objectives of this study were (i) to evaluate prospects for genomic selection using GBS in a typical soybean breeding program and (ii) to evaluate the effect of GBS marker selection and imputation on genomic prediction accuracy. To achieve these objectives, a set of soybean lines sampled from the University of Nebraska Soybean Breeding Program were genotyped using GBS and evaluated for yield and other agronomic traits at multiple Nebraska locations. Genotyping by sequencing scored 16,502 single nucleotide polymorphisms (SNPs) with minor-allele frequency (MAF) > 0.05 and percentage of missing values ≤ 5% on 301 elite soybean breeding lines. When SNPs with up to 80% missing values were included, 52,349 SNPs were scored. Prediction accuracy for grain yield, assessed using cross validation, was estimated to be 0.64, indicating good potential for using genomic selection for grain yield in soybean. Filtering SNPs based on missing data percentage had little to no effect on prediction accuracy, especially when random forest imputation was used to impute missing values. The highest accuracies were observed when random forest imputation was used on all SNPs, but differences were not significant. A standard additive G-BLUP model was robust; modeling additive-by-additive epistasis did not provide any improvement in prediction accuracy. The effect of training population size on accuracy began to plateau around 100, but accuracy steadily climbed until the largest possible size was used in this analysis. Including only SNPs with MAF > 0.30 provided higher accuracies when training populations were smaller.
Using GBS for genomic prediction in soybean holds good potential to expedite genetic gain. Our results suggest that standard additive G-BLUP models can be used on unfiltered, imputed GBS data without loss in accuracy.
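The MAF and missing-data filters described above are straightforward to express. A sketch with a toy genotype matrix (0/1/2 allele-dosage coding, None for missing; the thresholds mirror the MAF > 0.05 and ≤ 5% missing criteria, while the SNP names and calls are made up):

```python
def filter_snps(genotypes, maf_min=0.05, max_missing=0.05):
    """Filter SNP columns by minor-allele frequency and missing-data rate.

    genotypes: dict snp_id -> list of calls coded 0/1/2, with None = missing.
    """
    kept = {}
    for snp, calls in genotypes.items():
        n = len(calls)
        missing_rate = sum(c is None for c in calls) / n
        present = [c for c in calls if c is not None]
        if not present or missing_rate > max_missing:
            continue
        p = sum(present) / (2 * len(present))  # alternate-allele frequency
        maf = min(p, 1 - p)
        if maf > maf_min:
            kept[snp] = calls
    return kept

# Toy matrix: snp2 fails the missingness filter, snp3 the MAF filter.
geno = {
    "snp1": [0, 1, 2, 1, 0, 2, 1, 0, 1, 2, 0, 1, 2, 1, 0, 1, 2, 0, 1, 1],
    "snp2": [0, None, 2, None, 0, 1, None, 0, 1, 2,
             0, 1, 2, 1, 0, 1, 2, 0, 1, 1],
    "snp3": [0] * 20,
}
print(sorted(filter_snps(geno)))  # → ['snp1']
```

The study's observation that such filtering barely moves prediction accuracy (given good imputation) suggests the filter thresholds can be relaxed without penalty.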
NASA Astrophysics Data System (ADS)
Lee, J.; Kang, S.; Jang, K.; Ko, J.; Hong, S.
2012-12-01
Crop productivity is associated with food security, and hence several models have been developed to estimate crop yield by combining remote sensing data with carbon cycle processes. In the present study, we attempted to estimate crop GPP and NPP using an algorithm based on the light use efficiency (LUE) model and a simplified respiration model. The states of Iowa and Illinois, the core of the US Corn Belt, were chosen as the study site for estimating crop yield over a 5-year period (2006-2010). The study focuses on developing crop-specific parameters for corn and soybean to estimate crop productivity and to map yield using satellite remote sensing data. We utilized 10-km spatial resolution daily meteorological data from WRF to provide meteorological variables on cloudy days; on clear-sky days, MODIS-based meteorological data were utilized to estimate daily GPP, NPP, and biomass. County-level statistics on yield, area harvested, and production were used to test the model-predicted crop yield. The estimated input meteorological variables from MODIS and WRF showed good agreement with ground observations from 6 AmeriFlux tower sites in 2006: correlation coefficients ranged from 0.93 to 0.98 for Tmin and Tavg, from 0.68 to 0.85 for daytime mean VPD, and from 0.85 to 0.96 for daily shortwave radiation. We developed county-specific crop conversion coefficients, i.e., the ratio of yield to biomass at DOY 260, and then validated the estimated county-level crop yield against the statistical yield data. The estimated corn and soybean yields at the county level ranged from 671 to 1393 g m-2 y-1 and from 213 to 421 g m-2 y-1, respectively. The county-specific yield estimation mostly showed errors of less than 10%. Furthermore, we estimated crop yields at the state level, which were validated against the statistics data and showed errors of less than 1%. Further analysis of the crop conversion coefficient was conducted for DOY 200 and DOY 280.
For the case of DOY 280, crop yield estimation showed better accuracy for soybean at the county level. Though the case of DOY 200 resulted in lower accuracy (i.e., 20% mean bias), it provides a useful tool for early forecasting of crop yield. We improved the spatial accuracy of estimated crop yield at the county level by developing county-specific crop conversion coefficients. Our results indicate that aboveground crop biomass can be estimated successfully with the simple LUE and respiration models combined with MODIS data, and that the county-specific conversion coefficients differ across counties. Hence, applying a region-specific conversion coefficient is necessary to estimate crop yield with better accuracy.
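The LUE-plus-respiration structure of such a model can be sketched in a few lines; the crop parameters below (maximum LUE, stress scalars, respired fraction) are placeholders, not the study's calibrated values:

```python
def daily_gpp(par, fapar, eps_max, t_scalar, vpd_scalar):
    """Light-use-efficiency model: GPP = eps_max * stress scalars * fAPAR * PAR.
    eps_max is a crop-specific maximum LUE (gC per MJ); t_scalar and
    vpd_scalar are 0-1 reductions for temperature and VPD stress."""
    return eps_max * t_scalar * vpd_scalar * fapar * par

def daily_npp(gpp, resp_fraction):
    """Simplified respiration model: a fixed fraction of GPP is respired."""
    return gpp * (1.0 - resp_fraction)

# Hypothetical mid-season corn day: 12 MJ/m2 of PAR, 80% absorbed by canopy.
gpp = daily_gpp(par=12.0, fapar=0.8, eps_max=3.0, t_scalar=0.9, vpd_scalar=0.85)
npp = daily_npp(gpp, resp_fraction=0.45)
```

Summing daily NPP over the season gives the biomass to which a conversion coefficient (yield/biomass ratio) is applied, which is the role of the county-specific coefficients above.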
Comparing diagnostic tests on benefit-risk.
Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott
2016-01-01
Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit-risk may be more conclusive because the clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure: the average of sensitivity and specificity weighted for prevalence and the relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer with test positive subjects being referred to colonoscopy.
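A diagnostic yield table and a prevalence-weighted accuracy are both short computations. The weighting used below is one common decision-theoretic choice (sensitivity weighted by prevalence scaled against r, the relative cost of false positive vs. false negative errors), not necessarily the paper's exact formula, and the screening-test numbers are hypothetical:

```python
def diagnostic_yield(sens, spec, prevalence, n=100000):
    """Expected TP/FP/TN/FN counts per n screened subjects."""
    diseased = prevalence * n
    healthy = n - diseased
    return {"TP": sens * diseased, "FN": (1 - sens) * diseased,
            "TN": spec * healthy, "FP": (1 - spec) * healthy}

def weighted_accuracy(sens, spec, prevalence, r):
    """Average of sensitivity and specificity, weighted by prevalence and r,
    the relative importance of false positive vs. false negative errors.
    (One common decision-theoretic weighting; illustrative.)"""
    w = prevalence / (prevalence + r * (1 - prevalence))
    return w * sens + (1 - w) * spec

# Hypothetical colorectal-cancer screening test at 0.7% prevalence:
table = diagnostic_yield(sens=0.92, spec=0.87, prevalence=0.007)
```

The table makes the clinical trade-off concrete: even a specific test yields far more false than true positives at low prevalence, so the cost of unnecessary work-up dominates the comparison.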
Freeform solar concentrator with a highly asymmetric acceptance cone
NASA Astrophysics Data System (ADS)
Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly
2014-10-01
A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains >90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and a freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2 mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is +/-1.65°, while the elevation acceptance angle is only +/-0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low-cost slew bearing (without anti-backlash features).
Lee, Hyunju; Lim, Soo Young; Kim, Kyung Hyo
2017-10-01
The World Health Organization (WHO) enzyme-linked immunosorbent assay (ELISA) guideline is currently accepted as the gold standard for the evaluation of immunoglobulin G (IgG) antibodies specific to pneumococcal capsular polysaccharide. We conducted validation of the WHO ELISA for 7 pneumococcal serotypes (4, 6B, 9V, 14, 18C, 19F, and 23F) by evaluating its specificity, precision (reproducibility and intermediate precision), accuracy, spiking recovery test, lower limit of quantification (LLOQ), and stability at the Ewha Center for Vaccine Evaluation and Study, Seoul, Korea. We found that the specificity, reproducibility, and intermediate precision were within acceptance ranges (reproducibility, coefficient of variability [CV] ≤ 15%; intermediate precision, CV ≤ 20%) for all serotypes. Comparisons between the provisional assignments of calibration sera and the results from this laboratory showed a high correlation > 94% for all 7 serotypes, supporting the accuracy of the ELISA. The spiking recovery test also fell within an acceptable range. The quantification limit, calculated using the LLOQ, for each of the serotypes was 0.05-0.093 μg/mL. The freeze-thaw stability and the short-term temperature stability were also within an acceptable range. In conclusion, we showed good performance using the standardized WHO ELISA for the evaluation of serotype-specific anti-pneumococcal IgG antibodies; the WHO ELISA can evaluate the immune response against pneumococcal vaccines with consistency and accuracy. © 2017 The Korean Academy of Medical Sciences.
Šenk, Miroslav; Chèze, Laurence
2010-06-01
Optoelectronic tracking systems are rarely used in 3D studies examining shoulder movements including the scapula. Among the reasons is the considerable slippage of skin markers with respect to the scapula. Methods using electromagnetic tracking devices are validated and frequently applied. The aim of this study was therefore to develop a new method for in vivo optoelectronic scapular capture that matches the accepted accuracy of these validated methods. Eleven arm positions in three anatomical planes were examined in five subjects in static mode. The method was based on local optimisation, and recalculation procedures were performed using a set of five scapular surface markers. The scapular rotations derived from the recalculation-based method yielded RMS errors comparable with those of the frequently used electromagnetic scapular methods (RMS up to 12.6° for 150° arm elevation). The results indicate that the present method can be used, with careful consideration, for 3D kinematic studies examining different shoulder movements.
Chromatic confocal microscopy for multi-depth imaging of epithelial tissue
Olsovsky, Cory; Shelton, Ryan; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.
2013-01-01
We present a novel chromatic confocal microscope capable of volumetric reflectance imaging of microstructure in non-transparent tissue. Our design takes advantage of the chromatic aberration of aspheric lenses that are otherwise well corrected. Strong chromatic aberration, generated by multiple aspheres, longitudinally disperses supercontinuum light onto the sample. The backscattered light detected with a spectrometer is therefore wavelength encoded and each spectrum corresponds to a line image. This approach obviates the need for traditional axial mechanical scanning techniques that are difficult to implement for endoscopy and susceptible to motion artifact. A wavelength range of 590-775 nm yielded a >150 µm imaging depth with ~3 µm axial resolution. The system was further demonstrated by capturing volumetric images of buccal mucosa. We believe these represent the first microstructural images in non-transparent biological tissue using chromatic confocal microscopy that exhibit long imaging depth while maintaining acceptable resolution for resolving cell morphology. Miniaturization of this optical system could bring enhanced speed and accuracy to endomicroscopic in vivo volumetric imaging of epithelial tissue. PMID:23667789
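The core idea of the chromatic confocal scheme above is that depth is encoded in wavelength, so a detected spectrum can be read directly as an axial profile. A minimal sketch of that mapping follows; a linear dispersion of the paper's 590-775 nm band over a ~150 µm depth range is assumed here purely for illustration (real systems are calibrated empirically).

```python
import numpy as np

# Assumed linear wavelength-to-depth calibration (illustrative only).
LAMBDA_MIN, LAMBDA_MAX = 590.0, 775.0   # nm, band from the paper
DEPTH_RANGE = 150.0                      # um, imaging depth from the paper

def wavelength_to_depth(wavelength_nm):
    """Map a wavelength to its focal depth below the shallowest focus."""
    frac = (wavelength_nm - LAMBDA_MIN) / (LAMBDA_MAX - LAMBDA_MIN)
    return frac * DEPTH_RANGE

# Each spectrometer pixel then corresponds to one depth: a spectrum is a line image.
spectrum_nm = np.linspace(LAMBDA_MIN, LAMBDA_MAX, 1024)
depths = wavelength_to_depth(spectrum_nm)
```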
Zurovac, Dejan; Larson, Bruce A.; Skarbinski, Jacek; Slutsker, Laurence; Snow, Robert W.; Hamel, Mary J.
2008-01-01
Using data on clinical practices for outpatients 5 years and older, test accuracy, and malaria prevalence, we model financial and clinical implications of malaria rapid diagnostic tests (RDTs) under the new artemether-lumefantrine (AL) treatment policy in one high and one low malaria prevalence district in Kenya. In the high transmission district, RDTs as actually used would improve malaria treatment (61% less over-treatment but 8% more under-treatment) and lower costs (21% less). Nonetheless, the majority of patients with malaria would not be correctly treated with AL. In the low transmission district, especially because the treatment policy was new and AL was not widely used, RDTs as actually used would yield a minor reduction in under-treatment errors (36% less but the base is small) with 41% higher costs. In both districts, adherence to revised clinical practices with RDTs has the potential to further decrease treatment errors with acceptable costs. PMID:18541764
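The qualitative pattern reported (over-treatment dominates where prevalence is high, while false positives dominate costs where prevalence is low) follows directly from test accuracy and prevalence. The sketch below shows that arithmetic; the sensitivity, specificity, and prevalence values are illustrative, not the Kenyan district estimates.

```python
def treatment_errors(prevalence, sensitivity, specificity, n_patients=1000):
    """If clinicians treat exactly the test-positives, return
    (over_treated, under_treated) counts per n_patients."""
    with_malaria = prevalence * n_patients
    without = n_patients - with_malaria
    under = with_malaria * (1.0 - sensitivity)   # false negatives go untreated
    over = without * (1.0 - specificity)         # false positives get AL
    return over, under

# Illustrative high- vs low-prevalence settings with the same test:
over_hi, under_hi = treatment_errors(prevalence=0.50, sensitivity=0.95, specificity=0.90)
over_lo, under_lo = treatment_errors(prevalence=0.05, sensitivity=0.95, specificity=0.90)
```

With identical test accuracy, the low-prevalence setting produces far more over-treatment relative to true cases, which is why adherence to test results matters most there.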
A Decision Tree for Nonmetric Sex Assessment from the Skull.
Langley, Natalie R; Dudzik, Beatrix; Cloutier, Alesia
2018-01-01
This study uses five well-documented cranial nonmetric traits (glabella, mastoid process, mental eminence, supraorbital margin, and nuchal crest) and one additional trait (zygomatic extension) to develop a validated decision tree for sex assessment. The decision tree was built and cross-validated on a sample of 293 U.S. White individuals from the William M. Bass Donated Skeletal Collection. Ordinal scores from the six traits were analyzed using the partition modeling option in JMP Pro 12. A holdout sample of 50 skulls was used to test the model. The most accurate decision tree includes three variables: glabella, zygomatic extension, and mastoid process. This decision tree yielded 93.5% accuracy on the training sample, 94% on the cross-validated sample, and 96% on a holdout validation sample. Linear weighted kappa statistics indicate acceptable agreement among observers for these variables. Mental eminence should be avoided, and definitions and figures should be referenced carefully to score nonmetric traits. © 2017 American Academy of Forensic Sciences.
Integral method for transient He II heat transfer in a semi-infinite domain
NASA Astrophysics Data System (ADS)
Baudouy, B.
2002-05-01
Integral methods are suited to solving a non-linear system of differential equations where the non-linearity can be found either in the differential equations or in the boundary conditions. Though they are approximate methods, they have proven to give simple solutions with acceptable accuracy for transient heat transfer in He II. Taking into account the temperature dependence of thermal properties, direct solutions are found without the need to adjust a parameter. Previously, we presented a solution for the clamped-heat-flux problem, and in the present study this method is used to accommodate the clamped-temperature problem. In the case of constant thermal properties, this method yields results that are within a few percent of the exact solution for the heat flux at the axis origin. We applied this solution to analyze recovery from burnout and find agreement within 10% at low heat flux, whereas at high heat flux the model deviates from the experimental data, suggesting the need for a more refined thermal model.
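The "within a few percent" accuracy claimed for constant properties can be illustrated with the classical heat-balance integral for ordinary Fourier conduction (a stand-in here for the He II counterflow regime the paper treats). Assuming a quadratic profile T = Ti + (Ts − Ti)(1 − x/δ)² in a semi-infinite solid with clamped surface temperature Ts, the integral energy balance gives a penetration depth δ(t) = √(12αt), and the surface flux comes out a factor √(π/3) ≈ 1.023 from the exact solution.

```python
import math

def surface_flux_integral(k, alpha, dT, t):
    """Surface flux from the heat-balance integral (quadratic profile)."""
    delta = math.sqrt(12.0 * alpha * t)   # penetration depth
    return 2.0 * k * dT / delta           # -k dT/dx at x = 0

def surface_flux_exact(k, alpha, dT, t):
    """Exact semi-infinite clamped-temperature surface flux."""
    return k * dT / math.sqrt(math.pi * alpha * t)

k, alpha, dT, t = 1.0, 1e-6, 1.0, 10.0   # illustrative property values
ratio = surface_flux_integral(k, alpha, dT, t) / surface_flux_exact(k, alpha, dT, t)
# ratio = sqrt(pi/3) ~ 1.023, i.e. within a few percent of exact.
```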
Intrinsic germanium detector used in borehole sonde for uranium exploration
Senftle, F.E.; Moxham, R.M.; Tanner, A.B.; Boynton, G.R.; Philbin, P.W.; Baicker, J.A.
1976-01-01
A borehole sonde (~1.7 m long; 7.3 cm diameter) using a 200 mm2 planar intrinsic germanium detector, mounted in a cryostat cooled by removable canisters of frozen propane, has been constructed and tested. The sonde is especially useful in measuring X- and low-energy gamma-ray spectra (40–400 keV). Laboratory tests in an artificial borehole facility indicate its potential for in-situ uranium analyses in boreholes irrespective of the state of equilibrium in the uranium series. Both natural gamma-ray and neutron-activation gamma-ray spectra have been measured with the sonde. Although the neutron-activation technique yields greater sensitivity, improvements being made in the resolution and efficiency of intrinsic germanium detectors suggest that it will soon be possible to use a similar sonde in the passive mode for measurement of uranium in a borehole down to about 0.1% with acceptable accuracy. Using a similar detector and neutron activation, the sonde can be used to measure uranium down to 0.01%.
Yield advances in peanut - weed control effects
USDA-ARS?s Scientific Manuscript database
Improvements in weed management are a contributing factor to advancements in peanut yield. Widespread use of vacuum planters and increased acceptance of narrow row patterns enhance weed control by lessening bareground caused by skips and promoting quick canopy closure. Cultivation was traditionall...
MODELING UNCERTAINTY OF RUNOFF AND SEDIMENT YIELD IN TWO EXPERIMENTAL WATERSHEDS
Sediment loading from agriculture is adversely impacting surface water quality and ecological conditions. In this regard, the use of distributed hydrologic models has gained acceptance in management of soil erosion and sediment yield from agricultural watersheds. Soil infiltrati...
IMPROVING WILLINGNESS-TO-ACCEPT RESPONSES USING ALTERNATE FORMS OF COMPENSATION
The purpose of this project is to design a pilot survey to investigate why surveys that ask willingness-to-accept compensation questions so often yield unreliable data and whether respondents would find alternate modes of compensation (specifically, public goods) more acceptab...
Navy Controls for Invoice, Receipt, Acceptance, and Property Transfer System Need Improvement
2016-02-25
iRAPT as a web-based system to electronically invoice, receipt, and accept services and products from its contractors and vendors. The iRAPT system...electronically shares documents between DoD and its contractors and vendors to eliminate redundant data entry, increase data accuracy, and reduce...The iRAPT system allows contractors to submit and track invoices and receipt and acceptance documents over the web and allows government personnel to
Hellmuth, Christian; Weber, Martina; Koletzko, Berthold; Peissner, Wolfgang
2012-02-07
Despite their central importance for lipid metabolism, straightforward quantitative methods for determination of nonesterified fatty acid (NEFA) species are still missing. The protocol presented here provides unbiased quantitation of plasma NEFA species by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Simple deproteination of plasma in organic solvent solution yields high accuracy, including both the unbound and initially protein-bound fractions, while avoiding interferences from hydrolysis of esterified fatty acids from other lipid classes. Sample preparation is fast and inexpensive, hence well suited for automation and high-throughput applications. Separation of isotopologic NEFA is achieved using ultrahigh-performance liquid chromatography (UPLC) coupled to triple quadrupole LC-MS/MS detection. In combination with automated liquid handling, total assay time per sample is less than 15 min. The analytical spectrum extends beyond readily available NEFA standard compounds by a regression model predicting all the relevant analytical parameters (retention time, ion path settings, and response factor) of NEFA species based on chain length and number of double bonds. Detection of 50 NEFA species and accurate quantification of 36 NEFA species in human plasma is described, the highest numbers ever reported for an LC-MS application. Accuracy and precision are within widely accepted limits. The use of qualifier ions supports unequivocal analyte verification. © 2012 American Chemical Society
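The regression idea described, predicting an analytical parameter from chain length and number of double bonds, can be sketched as a simple least-squares fit. The retention times below are fabricated so that they follow the typical trend (longer chains elute later, unsaturation elutes earlier); the paper fits analogous models for retention time, ion path settings, and response factors.

```python
import numpy as np

# Fabricated training set: (carbon chain length, double bonds) -> retention time (min)
chain  = np.array([12, 14, 16, 16, 18, 18, 18, 20])
dbonds = np.array([ 0,  0,  0,  1,  0,  1,  2,  4])
rt_min = np.array([3.1, 4.0, 4.9, 4.4, 5.8, 5.3, 4.8, 4.7])

# Linear model: rt = b0 + b1*chain + b2*dbonds, fit by least squares.
X = np.column_stack([np.ones_like(chain), chain, dbonds])
coef, *_ = np.linalg.lstsq(X, rt_min, rcond=None)

def predict_rt(c, d):
    """Predicted retention time for a NEFA with c carbons and d double bonds."""
    return coef[0] + coef[1] * c + coef[2] * d
```

The fitted signs (positive for chain length, negative for double bonds) are what let such a model extrapolate to species without standards.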
Quantitative and descriptive comparison of four acoustic analysis systems: vowel measurements.
Burris, Carlyn; Vorperian, Houri K; Fourakis, Marios; Kent, Ray D; Bolt, Daniel M
2014-02-01
This study examines accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL by using synthesized and natural vowels. Features of AASPs are also described. Synthesized and natural vowels were analyzed using each of the AASP's default settings to secure 9 acoustic measures: fundamental frequency (F0), formant frequencies (F1-F4), and formant bandwidths (B1-B4). The discrepancy between the software measured values and the input values (synthesized, previously reported, and manual measurements) was used to assess comparability and accuracy. Basic AASP features are described. Results indicate that Praat, WaveSurfer, and TF32 generate accurate and comparable F0 and F1-F4 data for synthesized vowels and adult male natural vowels. Results varied by vowel for women and children, with some serious errors. Bandwidth measurements by AASPs were highly inaccurate as compared with manual measurements and published data on formant bandwidths. Values of F0 and F1-F4 are generally consistent and fairly accurate for adult vowels and for some child vowels using the default settings in Praat, WaveSurfer, and TF32. Manipulation of default settings yields improved output values in TF32 and CSL. Caution is recommended especially before accepting F1-F4 results for children and B1-B4 results for all speakers.
Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.
Zhong, Jiandan; Lei, Tao; Yao, Guangle
2017-11-24
Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied in aerial image data due to the fact that the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantage of the deep and shallow convolutional layer, the first network performs well on locating the small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.
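The cascade structure described, a cheap first stage proposing candidate regions and a heavier second stage classifying only those, can be shown schematically. The sketch below is not the paper's CNNs: stage 1 is a local-variance test over a coarse window grid and stage 2 is a placeholder scoring function standing in for the second network.

```python
import numpy as np

def propose_regions(img, win=8, var_thresh=0.005):
    """Stage 1: cheap filter that keeps windows with enough local structure."""
    h, w = img.shape
    boxes = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            if img[y:y+win, x:x+win].var() > var_thresh:
                boxes.append((y, x, win, win))
    return boxes

def classify_region(img, box):
    """Stage 2 placeholder: returns a confidence score for one candidate."""
    y, x, h, w = box
    return float(img[y:y+h, x:x+w].mean())

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[8:16, 16:24] = rng.uniform(0.5, 1.0, (8, 8))  # one textured, "vehicle-like" patch

candidates = propose_regions(img)                  # only textured windows survive
detections = [(b, classify_region(img, b)) for b in candidates]
```

The benefit of the cascade is visible even here: the expensive stage runs on one window instead of sixteen.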
Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time-accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
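The stability contrast that lets the implicit scheme take much larger steps can be seen on the scalar model problem y' = −λy (a standard stand-in, not the paper's Navier-Stokes system): explicit Euler diverges once λΔt exceeds 2, while A-stable Crank-Nicolson stays bounded for any step size.

```python
def explicit_euler(lam, dt, n):
    """n steps of explicit Euler on y' = -lam*y, y(0) = 1."""
    y = 1.0
    for _ in range(n):
        y = y + dt * (-lam * y)
    return y

def crank_nicolson(lam, dt, n):
    """n steps of Crank-Nicolson on the same problem."""
    y = 1.0
    for _ in range(n):
        y = y * (1 - 0.5 * dt * lam) / (1 + 0.5 * dt * lam)
    return y

lam, dt, n = 100.0, 0.1, 50          # dt is 5x the explicit stability limit 2/lam
y_exp = explicit_euler(lam, dt, n)   # amplifies by |1 - lam*dt| = 9 each step
y_cn  = crank_nicolson(lam, dt, n)   # contracts by 2/3 each step
```

Accuracy, of course, still degrades at large λΔt (Crank-Nicolson oscillates in sign here), which is why the paper balances the temporal error against the spatial one rather than pushing the step as far as stability allows.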
NASA Astrophysics Data System (ADS)
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, their applicability having already been verified for a broad range of materials.
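One widely used member of the "simple approximations" family is Manson's universal slopes method, which estimates the whole strain-life curve from monotonic tensile properties alone. A sketch follows; the property values are illustrative, not those of the steel studied, and other formulations of the method use slightly different coefficients.

```python
def strain_range(Nf, Su, E, eps_f):
    """Manson universal slopes estimate of total strain range at Nf cycles.

    Su and E in consistent units (here MPa); eps_f is true fracture ductility.
    """
    elastic = 3.5 * (Su / E) * Nf ** -0.12   # elastic (high-cycle) term
    plastic = eps_f ** 0.6 * Nf ** -0.6      # plastic (low-cycle) term
    return elastic + plastic

# Illustrative properties: Su = 900 MPa, E = 200 GPa, fracture ductility 0.8
d_eps_1e3 = strain_range(1e3, 900.0, 200e3, 0.8)
d_eps_1e6 = strain_range(1e6, 900.0, 200e3, 0.8)
```

The plastic term dominates at short lives and the elastic term at long lives, which is why only monotonic data suffice for a first estimate.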
31 CFR 356.21 - How are awards at the high yield or discount rate calculated?
Code of Federal Regulations, 2010 CFR
2010-07-01
... discount rate calculated? 356.21 Section 356.21 Money and Finance: Treasury Regulations Relating to Money... high yield or discount rate calculated? (a) Awards to submitters. We generally prorate bids at the highest accepted yield or discount rate under § 356.20(a)(2) of this part. For example, if 80.15% is the...
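The proration the regulation describes is straightforward arithmetic: every bid at the highest accepted yield is awarded the same percentage of its amount. The sketch below echoes the regulation's 80.15% example; the bid amounts are made up, and actual Treasury awards are additionally rounded per the auction rules.

```python
def prorate(bids_at_high_yield, proration_pct):
    """Award each bidder proration_pct percent of their bid at the high yield."""
    return {bidder: amount * proration_pct / 100.0
            for bidder, amount in bids_at_high_yield.items()}

# Hypothetical bids (in dollars) submitted at the highest accepted rate:
awards = prorate({"A": 1_000_000, "B": 400_000}, 80.15)
```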
Accuracy of Four Imaging Techniques for Diagnosis of Posterior Pelvic Floor Disorders.
van Gruting, Isabelle M A; Stankiewicz, Aleksandra; Kluivers, Kirsten; De Bin, Riccardo; Blake, Helena; Sultan, Abdul H; Thakar, Ranee
2017-11-01
To establish the diagnostic test accuracy of evacuation proctography, magnetic resonance imaging (MRI), transperineal ultrasonography, and endovaginal ultrasonography for detecting posterior pelvic floor disorders (rectocele, enterocele, intussusception, and anismus) in women with obstructed defecation syndrome and secondarily to identify the most patient-friendly imaging technique. In this prospective cohort study, 131 women with symptoms of obstructed defecation syndrome underwent evacuation proctogram, MRI, and transperineal and endovaginal ultrasonography. Images were analyzed by two blinded observers. In the absence of a reference standard, latent class analysis was used to assess diagnostic test accuracy of multiple tests with area under the curve (AUC) as the primary outcome measure. Secondary outcome measures were interobserver agreement calculated as Cohen's κ and patient acceptability using a visual analog scale. No significant differences in diagnostic accuracy were found among the imaging techniques for all the target conditions. Estimates of diagnostic test accuracy were highest for rectocele using MRI (AUC 0.79) or transperineal ultrasonography (AUC 0.85), for enterocele using transperineal (AUC 0.73) or endovaginal ultrasonography (AUC 0.87), for intussusception using evacuation proctography (AUC 0.76) or endovaginal ultrasonography (AUC 0.77), and for anismus using endovaginal (AUC 0.95) or transperineal ultrasonography (AUC 0.78). Interobserver agreement for the diagnosis of rectocele (κ 0.53-0.72), enterocele (κ 0.54-0.94) and anismus (κ 0.43-0.81) was moderate to excellent, but poor to fair for intussusception (κ -0.03 to 0.37) with all techniques. Patient acceptability was better for transperineal and endovaginal ultrasonography as compared with MRI and evacuation proctography (P<.001). Evacuation proctography, MRI, and transperineal and endovaginal ultrasonography were shown to have similar diagnostic test accuracy. 
Evacuation proctography is not the best available imaging technique. There is no one optimal test for the diagnosis of all posterior pelvic floor disorders. Because transperineal and endovaginal ultrasonography have good test accuracy and patient acceptability, we suggest these could be used for initial assessment of obstructed defecation syndrome. ClinicalTrials.gov, NCT02239302.
ERIC Educational Resources Information Center
Rudner, Lawrence
2016-01-01
In the machine learning literature, it is commonly accepted as fact that as calibration sample sizes increase, Naïve Bayes classifiers initially outperform Logistic Regression classifiers in terms of classification accuracy. Applied to subtests from an on-line final examination and from a highly regarded certification examination, this study shows…
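The machine-learning folklore the study tests can be reproduced on a toy problem: with a small calibration sample, a Gaussian Naive Bayes classifier (fewer effective parameters, stronger assumptions) can outperform logistic regression, which catches up as the sample grows. The synthetic two-class Gaussian data below are an illustration of that folklore only, not of the examinations analyzed in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 1.0, scale=1.0, size=(n, 2))
    return X, y

def fit_gnb(X, y):
    """Gaussian Naive Bayes: per-class feature means/variances and priors."""
    params = [(X[y == c].mean(0), X[y == c].var(0) + 1e-9, (y == c).mean())
              for c in (0, 1)]
    def predict(Xt):
        scores = [np.sum(-0.5 * np.log(2 * np.pi * v)
                         - (Xt - m) ** 2 / (2 * v), axis=1) + np.log(p)
                  for m, v, p in params]
        return np.argmax(scores, axis=0)
    return predict

def fit_logreg(X, y, lr=0.1, iters=500):
    """Logistic regression by plain gradient descent."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return lambda Xt: (np.column_stack([np.ones(len(Xt)), Xt]) @ w > 0).astype(int)

X_test, y_test = make_data(5000)
X_tr, y_tr = make_data(20)                 # deliberately small calibration sample
acc_nb = np.mean(fit_gnb(X_tr, y_tr)(X_test) == y_test)
acc_lr = np.mean(fit_logreg(X_tr, y_tr)(X_test) == y_test)
```

Rerunning with larger calibration samples narrows, and eventually reverses, any gap between the two accuracies.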
Broiler genetic strain and sex effects on meat characteristics.
López, K P; Schilling, M W; Corzo, A
2011-05-01
A randomized complete block design within a factorial arrangement of treatments was used to evaluate the effect of strain and sex on carcass characteristics, meat quality, and sensory acceptability. Two broiler strains were reared: a commercially available strain (strain A) and a strain currently in the test phase (strain B) that has been genetically selected to maximize breast yield. Broilers were harvested in a pilot scale processing plant using commercial prototype equipment at 42 d of age. Carcasses were deboned at 4 h postmortem. The left half of each breast was evaluated for pH, color, cooking loss, shear force, and proximate analysis. The right side of each breast was used for consumer acceptability testing. Thigh meat was evaluated for proximate composition. No interactions were observed throughout the study. Male broilers had a higher (P < 0.05) live BW, carcass weight, and breast weight and lower (P < 0.05) dressing percentage and breast meat yield when compared with females. Broilers from strain B presented a higher (P < 0.05) breast yield and dressing percentage than those broilers corresponding to the commercially available broiler strain. At 24 h postmortem, female broilers presented a lower ultimate pH and higher Commission internationale de l'éclairage yellowness values (ventral side of the pectoralis major) when compared with male broilers. On average, no differences existed (P > 0.05) among treatments with respect to pH decline, cooking loss, shear values, and proximate composition. In addition, no differences (P > 0.05) existed among breast meat from the different strains with respect to consumer acceptability of appearance, texture, flavor, and overall acceptability, but breast meat from strain B was slightly preferred (P < 0.05) over that of strain A with respect to aroma. However, breast meat from both strains received scores in the range of "like slightly to like moderately." 
Overall data suggest that all treatments yielded high quality breast and thigh meat and strain cross did not present variability in terms of consumer acceptability.
NASA Technical Reports Server (NTRS)
Grossman, Bernard
1999-01-01
Compressible and incompressible versions of a three-dimensional unstructured mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated, and the resulting derivatives have been verified by comparisons with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve consistent derivatives. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. It has been found that none of the approximations yielded derivatives of acceptable accuracy, and the resulting derivatives were often of incorrect sign. However, numerical experiments indicate that incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations, this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes.
An efficient surface parameterization based on a free-form deformation technique has been utilized, and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations have been shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing. In order for large-scale optimization to become routine, the benefits of parallel architectures should be exploited. Although the flow solver has been parallelized using compiler directives, the parallel efficiency is under 50 percent. Clearly, parallel versions of the codes will have an immediate impact on the ability to design realistic configurations on fine meshes, and this effort is currently underway.
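The adjoint machinery the abstract relies on can be verified on a linear model problem (a stand-in for the Navier-Stokes adjoint): for a state equation A u = b(p) and objective J = cᵀu, one adjoint solve Aᵀλ = c gives the sensitivity dJ/dp = λᵀ(db/dp), which should agree with a finite difference, exactly the kind of consistency check the paper performs against finite differences.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # state operator (illustrative)
c = np.array([1.0, 2.0])                  # objective weights

def b(p):
    return np.array([np.sin(p), p ** 2])  # parameter-dependent right-hand side

def J(p):
    """Objective J(p) = c^T u where A u = b(p)."""
    return c @ np.linalg.solve(A, b(p))

p0 = 0.7
lam = np.linalg.solve(A.T, c)             # single adjoint solve
db_dp = np.array([np.cos(p0), 2 * p0])
dJ_adjoint = lam @ db_dp                  # adjoint sensitivity

eps = 1e-6
dJ_fd = (J(p0 + eps) - J(p0 - eps)) / (2 * eps)   # central finite difference
```

The appeal of the adjoint route in design is that one extra solve yields the gradient with respect to every parameter at once, instead of one perturbed flow solve per parameter.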
Evaluation of a whole-farm model for pasture-based dairy systems.
Beukes, P C; Palliser, C C; Macdonald, K A; Lancaster, J A S; Levy, G; Thorrold, B S; Wastney, M E
2008-06-01
In the temperate climate of New Zealand, animals can be grazed outdoors all year round. The pasture is supplemented with conserved feed, with the amount being determined by seasonal pasture growth, genetics of the herd, and stocking rate. The large number of factors that affect production makes it impractical and expensive to use field trials to explore all the farm system options. A model of an in situ-grazed pasture system has been developed to provide a tool for developing and testing novel farm systems; for example, different levels of bought-in supplements and different levels of nitrogen fertilizer application, to maintain sustainability or environmental integrity and profitability. It consists of a software framework that links climate information, on a daily basis, with dynamic, mechanistic component-models for pasture growth and animal metabolism, as well as management policies. A unique feature is that the component models were developed and published by other groups, and are retained in their original software language. The aim of this study was to compare the model, called the whole-farm model (WFM) with a farm trial that was conducted over 3 yr and in which data were collected specifically for evaluating the WFM. Data were used from the first year to develop the WFM and data from the second and third year to evaluate the model. The model predicted annual pasture production, end-of-season cow liveweight, cow body condition score, and pasture cover across season with relative prediction error <20%. Milk yield and milksolids (fat + protein) were overpredicted by approximately 30% even though both annual and monthly pasture and supplement intake were predicted with acceptable accuracy, suggesting that the metabolic conversion of feed to fat, protein, and lactose in the mammary gland needs to be refined. 
Because feed growth and intake predictions were acceptable, economic predictions can be made using the WFM, with an adjustment for milk yield, to test different management policies, alterations in climate, or the use of genetically improved animals, pastures, or crops.
Altez-Fernandez, Carlos; Ortiz, Victor; Mirzazadeh, Majid; Zegarra, Luis; Seas, Carlos; Ugarte-Gil, Cesar
2017-06-05
Genitourinary tuberculosis is the third most common form of extrapulmonary tuberculosis. Diagnosis is difficult because of unspecific clinical manifestations and the low accuracy of conventional tests. Unfortunately, the delayed diagnosis impacts the urinary tract severely. Nucleic acid amplification tests yield fast results, and among these, new technologies can also detect drug resistance. There is a lack of consensus regarding the use of these tests in genitourinary tuberculosis; we therefore aimed to assess the accuracy of nucleic acid amplification tests in the diagnosis of genitourinary tuberculosis and to evaluate the heterogeneity between studies. We did a systematic review and meta-analysis of research articles comparing the accuracy of a reference standard and a nucleic acid amplification test for diagnosis of urinary tract tuberculosis. We searched Medline, EMBASE, Web of Science, LILACS, Cochrane Library, and Scopus for articles published between Jan 1, 1990, and Apr 14, 2016. Two investigators identified eligible articles and extracted data for individual study sites. We analyzed data in groups with the same index test. Then, we generated pooled summary estimates (95% CIs) for sensitivity and specificity by use of random-effects meta-analysis when studies were not heterogeneous. We identified eleven relevant studies from ten articles, giving information on PCR, LCR, and Xpert MTB/RIF tests. All PCR studies were "in-house" tests with different gene targets, had several quality concerns, and were highly heterogeneous; we therefore did not proceed with a pooled analysis. Only one study used LCR. Xpert studies were of good quality and not heterogeneous; pooled sensitivity was 0·87 (0·66-0·96) and specificity was 0·91 (0·84-0·95). Among Xpert MTB/RIF studies, specificity was favorable with an acceptable confidence interval; however, future studies could update the meta-analysis and provide more precise estimates.
Further high-quality studies are urgently needed to improve diagnosis of genitourinary tuberculosis. PROSPERO CRD42016039020.
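As a rough illustration of the pooled estimates above, the block below computes sensitivity and specificity with Wilson 95% confidence intervals from a single 2x2 diagnostic table. This is a sketch only: the counts are hypothetical (chosen to land near the reported 0.87/0.91), and the review itself pooled across studies with random-effects meta-analysis rather than collapsing to one table.

```python
# Sensitivity/specificity from a 2x2 diagnostic table, with Wilson 95% CIs.
# Counts below are hypothetical, not data from the review.
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def diagnostic_accuracy(tp, fp, fn, tn):
    """tp/fn: diseased test-positive/negative; tn/fp: healthy test-negative/positive."""
    return {
        "sensitivity": tp / (tp + fn),
        "sensitivity_ci": wilson_ci(tp, tp + fn),
        "specificity": tn / (tn + fp),
        "specificity_ci": wilson_ci(tn, tn + fp),
    }

stats = diagnostic_accuracy(tp=26, fp=9, fn=4, tn=91)
print(f"sensitivity {stats['sensitivity']:.2f}, specificity {stats['specificity']:.2f}")
```

The interval widens as the cell counts shrink, which is why the small Xpert studies produce the broad 0.66-0.96 sensitivity band quoted above.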
The Arrival of Robotics in Spine Surgery: A Review of the Literature.
Ghasem, Alexander; Sharma, Akhil; Greif, Dylan N; Alam, Milad; Maaieh, Motasem Al
2018-04-18
Systematic review. The authors aim to review comparative outcome measures between robotic and free-hand spine surgical procedures, including accuracy of spinal instrumentation, radiation exposure, operative time, hospital stay, and complication rates. Misplacement of pedicle screws in conventional open as well as minimally invasive surgical procedures has prompted the need for innovation and allowed the emergence of robotics in spine surgery. Prior to incorporation of robotic surgery into routine practice, demonstration of improved instrumentation accuracy, operative efficiency, and patient safety is required. A systematic search of the PubMed, OVID-MEDLINE, and Cochrane databases was performed for papers relevant to robotic assistance of pedicle screw placement. Inclusion criteria were English-language randomized controlled trials and prospective and retrospective cohort studies involving robotic instrumentation in the spine. Following abstract, title, and full-text review, 32 articles were selected for inclusion. Intrapedicular accuracy of screw placement and subsequent complications were at least comparable, if not superior, in the robotic surgery cohorts. There is evidence that total operative time is prolonged in robot-assisted surgery compared with conventional free-hand techniques. Radiation exposure was variable between studies; radiation time decreased in the robotic arms as the total number of robotic cases increased, suggesting a learning-curve effect. Multi-level procedures tended toward earlier discharge in patients undergoing robotic spine surgery. The implementation of robotic technology for pedicle screw placement yields an acceptable level of accuracy on a highly consistent basis. Surgeons should remain vigilant about confirming robot-assisted screw trajectory, as drilling pathways have been shown to be altered by soft-tissue pressures, forceful surgical application, and bony surface skiving.
However, the effect of robot assistance on radiation exposure, length of stay, and operative time remains unclear and requires careful examination in future studies. Level of evidence: 4.
Effects on crops of irrigation with treated municipal wastewaters.
Fasciolo, G E; Meca, M I; Gabriel, E; Morábito, J
2002-01-01
The fertilizing potential of treated municipal wastewater (oxidation ditch) and crop sanitary acceptability for direct human consumption were evaluated in Mendoza, Argentina. Two experiments were performed on a pilot plot planted with garlic (1998) and onions (1999) using furrow irrigation with three types of water in 10 random blocks: treated effluent (2.5 x 10^3 MPN Escherichia coli/100 ml, 3 helminth eggs/l, and Salmonella positive) and well water (free of microorganisms), with and without fertilizer. Two responses were evaluated: (1) crop yield, and (2) crop microbiological quality for human consumption at different times after harvest. Crop yields were compared using analysis of variance. The crops' sanitary acceptability was assessed using a two-class sampling program for Salmonella (n=10, c=0) and a three-class program for E. coli (n=5, c=2, M=10^3 and m=10 MPN/g), as proposed by the International Commission on Microbiological Specifications for Foods (ICMSF) for fresh vegetables. Wastewater irrigation performed like well water with fertilizer, increasing garlic and onion yields by 10% and 15%, respectively, compared to irrigation with well water without fertilizer. Wastewater-irrigated garlic reached sanitary acceptability 90 days after harvest, once attached roots and soil were removed. Onions, which were cleaned immediately after harvest, met this qualification earlier than garlic (55 days). Neither the wastewater-irrigated crops nor the control crops were microbiologically acceptable for raw consumption at harvest.
Multigrid methods in structural mechanics
NASA Technical Reports Server (NTRS)
Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.
1986-01-01
Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
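The inter-grid transfers described above can be sketched in a few lines. The block below is a minimal two-grid cycle for a 1D Poisson model problem (a stand-in for the beam equations, not the authors' finite element code), using sequential Gauss-Seidel smoothing, linear prolongation P, full-weighting restriction R = 0.5 * P^T (the transpose relationship the abstract derives from energy conservation), and an exact coarse solve of the Galerkin operator R A P.

```python
# Two-grid V(nu1, nu2) cycle for -u'' = f on (0,1) with u(0) = u(1) = 0.
# Illustrative sketch only; the study used six grid levels on a beam model.
import numpy as np

def two_grid(f, nu1=1, nu2=2, cycles=20):
    n = len(f)                            # fine-grid interior points
    m = (n - 1) // 2                      # coarse interior points (n = 2*m + 1)
    h = 1.0 / (n + 1)
    # Linear prolongation P (n x m); restriction as the scaled transpose.
    P = np.zeros((n, m))
    for j in range(m):
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    R = 0.5 * P.T
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    Ac = R @ A @ P                        # Galerkin coarse-grid operator

    def smooth(u, sweeps):                # Gauss-Seidel, sequential ordering
        for _ in range(sweeps):
            for i in range(n):
                u[i] = (f[i] - A[i] @ u + A[i, i] * u[i]) / A[i, i]
        return u

    u = np.zeros(n)
    for _ in range(cycles):
        u = smooth(u, nu1)                # pre-smoothing
        u += P @ np.linalg.solve(Ac, R @ (f - A @ u))   # coarse correction
        u = smooth(u, nu2)                # post-smoothing
    return u

n = 15
f = np.ones(n)
u = two_grid(f)
x = np.arange(1, n + 1) / (n + 1)
err = np.max(np.abs(u - x * (1 - x) / 2))   # discrete solution is exactly x(1-x)/2
print(err)
```

Replacing R by an operator that is not a multiple of P transposed breaks the symmetry of the coarse operator and, as the abstract reports, degrades the convergence rate.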
A framework for standardized calculation of weather indices in Germany
NASA Astrophysics Data System (ADS)
Möller, Markus; Doms, Juliane; Gerstmann, Henning; Feike, Til
2018-05-01
Climate change has been recognized as a main driver of the increasing occurrence of extreme weather. Weather indices (WIs) are used to assess extreme weather conditions in terms of their impact on crop yields. Designing WIs is challenging, since complex and dynamic crop-climate relationships have to be considered. As a consequence, geodata for WI calculations have to represent both the spatio-temporal dynamics of crop development and the corresponding weather conditions. In this study, we introduce a WI design framework for Germany, which is based on public and open raster data of long-term spatio-temporal availability. The operational process chain enables the dynamic and automatic definition of relevant phenological phases for the main cultivated crops in Germany. Within the temporal bounds, WIs can be calculated for any year and test site in Germany in a reproducible and transparent manner. The workflow is demonstrated using the example of a simple cumulative rainfall index for the phenological phase of shooting in winter wheat, with 16 test sites and the period between 1994 and 2014. Compared to station-based approaches, the major advantage of our approach is the possibility of designing spatial WIs based on raster data characterized by accuracy metrics. Raster data and WIs that fulfill data quality standards can contribute to increased acceptance and farmers' trust in WI products for crop yield modeling or weather index-based insurances (WIIs).
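The cumulative rainfall index used as the demonstration WI reduces to a sum of daily precipitation over a phenologically defined window. A minimal sketch follows; the dates, values, and fixed window here are hypothetical, whereas the framework derives the window per crop, site, and year from phenology rasters.

```python
# Cumulative rainfall weather index: sum daily precipitation over a window.
# All data below are hypothetical placeholders.
from datetime import date

def cumulative_rainfall_index(daily_mm, start, end):
    """daily_mm: mapping date -> precipitation in mm; window is inclusive."""
    return sum(mm for d, mm in daily_mm.items() if start <= d <= end)

rain = {date(2014, 4, 10): 5.0, date(2014, 4, 20): 12.5, date(2014, 6, 1): 8.0}
wi = cumulative_rainfall_index(rain, date(2014, 4, 1), date(2014, 5, 31))
print(wi)  # 17.5 (the June observation falls outside the window)
```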
Transnasal endoscopy: no gagging no panic!
Parker, Clare; Alexandridis, Estratios; Plevris, John; O'Hara, James; Panter, Simon
2016-01-01
Background Transnasal endoscopy (TNE) is performed with an ultrathin scope via the nasal passages and is increasingly used. This review covers the technical characteristics, tolerability, safety and acceptability of TNE and also diagnostic accuracy, use as a screening tool and therapeutic applications. It includes practical advice from an ear, nose, throat (ENT) specialist to optimise TNE practice, identify ENT pathology and manage complications. Methods A Medline search was performed using the terms “transnasal”, “ultrathin”, “small calibre”, “endoscopy”, “EGD” to identify relevant literature. Results There is increasing evidence that TNE is better tolerated than standard endoscopy as measured using visual analogue scales, and the main area of discomfort is nasal during insertion of the TN endoscope, which seems remediable with adequate topical anaesthesia. The diagnostic yield has been found to be similar for detection of Barrett's oesophagus, gastric cancer and GORD-associated diseases. There are some potential issues regarding the accuracy of TNE in detecting small early gastric malignant lesions, especially those in the proximal stomach. TNE is feasible and safe in a primary care population and is ideal for screening for upper gastrointestinal pathology. It has an advantage as a diagnostic tool in the elderly and those with multiple comorbidities due to fewer adverse effects on the cardiovascular system. It has significant advantages for therapeutic procedures, especially negotiating upper oesophageal strictures and insertion of nasoenteric feeding tubes. Conclusions TNE is well tolerated and a valuable diagnostic tool. Further evidence is required to establish its accuracy for the diagnosis of early and small gastric malignancies. There is an emerging role for TNE in therapeutic endoscopy, which needs further study. PMID:28839865
ERIC Educational Resources Information Center
Kistner, Janet A.; David-Ferdon, Corinne F.; Repper, Karla K.; Joiner, Thomas E., Jr.
2006-01-01
Are depressive symptoms in middle childhood associated with more or less realistic social self-perceptions? At the beginning and end of the school year, children in grades 3 through 5 (n = 667) rated how much they liked their classmates, predicted the acceptance ratings they would receive from each of their classmates, and completed self-report…
Multivariate prediction of upper limb prosthesis acceptance or rejection.
Biddiss, Elaine A; Chau, Tom T
2008-07-01
To develop a model for prediction of upper limb prosthesis use or rejection. A questionnaire exploring factors in prosthesis acceptance was distributed internationally to individuals with upper limb absence through community-based support groups and rehabilitation hospitals. A total of 191 participants (59 prosthesis rejecters and 132 prosthesis wearers) were included in this study. A logistic regression model, a C5.0 decision tree, and a radial basis function neural network were developed and compared in terms of sensitivity (prediction of prosthesis rejecters), specificity (prediction of prosthesis wearers), and overall cross-validation accuracy. The logistic regression and neural network provided comparable overall accuracies of approximately 84 +/- 3%, specificity of 93%, and sensitivity of 61%. Fitting time-frame emerged as the predominant predictor. Individuals fitted within two years of birth (congenital) or six months of amputation (acquired) were 16 times more likely to continue prosthesis use. To increase rates of prosthesis acceptance, clinical directives should focus on timely, client-centred fitting strategies and the development of improved prostheses and healthcare for individuals with high-level or bilateral limb absence. Multivariate analyses are useful in determining the relative importance of the many factors involved in prosthesis acceptance and rejection.
NASA Astrophysics Data System (ADS)
Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation is used in spine procedures to guide pedicle screws, improving placement accuracy and, in some cases, better visualizing the patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between the patient's anatomy and the pre- or intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by examining the relationship between the optical density of an acquired 3D point cloud and the corresponding surgical navigation error. A retrospective review of 48 registrations performed using an experimental structured light navigation system developed within our lab was conducted. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error, and the target registration error. The number of points in the point cloud correlated neither with the acceptance or rejection of a registration nor with the system-reported error. However, a negative correlation was observed between the number of points in the point cloud and the corresponding sagittal angular error. Thus, system-reported total registration points and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate registration against anatomical landmarks before commencing surgery.
Cow genotyping strategies for genomic selection in a small dairy cattle population.
Jenko, J; Wiggans, G R; Cooper, T A; Eaglen, S A E; Luff, W G de L; Bichard, M; Pong-Wong, R; Woolliams, J A
2017-01-01
This study compares how different cow genotyping strategies increase the accuracy of genomic estimated breeding values (EBV) in dairy cattle breeds with low numbers. In these breeds, few sires have progeny records, and genotyping cows can improve the accuracy of genomic EBV. The Guernsey breed is a small dairy cattle breed with approximately 14,000 recorded individuals worldwide. Predictions of phenotypes of milk yield, fat yield, protein yield, and calving interval were made for Guernsey cows from England and Guernsey Island using genomic EBV, with training sets including 197 de-regressed proofs of genotyped bulls and cows selected from among 1,440 genotyped cows using different genotyping strategies. Accuracies of predictions were tested using 10-fold cross-validation among the cows. Genomic EBV were predicted using 4 different methods: (1) pedigree BLUP, (2) genomic BLUP using only bulls, (3) univariate genomic BLUP using bulls and cows, and (4) bivariate genomic BLUP. Genotyping cows with phenotypes and using their data for the prediction of single nucleotide polymorphism effects increased the correlation between genomic EBV and phenotypes compared with using only bulls by 0.163±0.022 for milk yield, 0.111±0.021 for fat yield, and 0.113±0.018 for protein yield; a decrease of 0.014±0.010 for calving interval from a low base was the only exception. Genetic correlations between phenotypes from bulls and cows were approximately 0.6 for all yield traits and significantly different from 1. Only a very small change occurred in the correlation between genomic EBV and phenotypes when using the bivariate model. It was always better to genotype all the cows, but when only half of the cows were genotyped, a divergent selection strategy was better than random or directional selection. Divergent selection of 30% of the cows remained superior for the yield traits in 8 of 10 folds. Copyright © 2017 American Dairy Science Association.
Published by Elsevier Inc. All rights reserved.
Cannell, R C; Tatum, J D; Belk, K E; Wise, J W; Clayton, R P; Smith, G C
1999-11-01
An improved ability to quantify differences in the fabrication yields of beef carcasses would facilitate the application of value-based marketing. This study was conducted to evaluate the ability of the Dual-Component Australian VIASCAN to 1) predict fabricated beef subprimal yields as a percentage of carcass weight at each of three fat-trim levels and 2) augment USDA yield grading, thereby improving accuracy of grade placement. Steer and heifer carcasses (n = 240) were evaluated using VIASCAN, as well as by USDA expert and online graders, before fabrication of carcasses to each of three fat-trim levels. Expert yield grade (YG), online YG, VIASCAN estimates, and VIASCAN-estimated ribeye area used to augment actual and expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, and hot carcass weight) accounted for, respectively, 1) 51, 37, 46, and 55% of the variation in fabricated yields of commodity-trimmed subprimals; 2) 74, 54, 66, and 75% of the variation in fabricated yields of closely trimmed subprimals; and 3) 74, 54, 71, and 75% of the variation in fabricated yields of very closely trimmed subprimals. The VIASCAN system predicted fabrication yields more accurately than current online yield grading, and, when certain VIASCAN-measured traits were combined with some USDA yield grade factors in an augmentation system, the accuracy of cutability prediction was improved, at packing plant line speeds, to a level matching that of expert graders applying grades at a comfortable rate.
Shimizu, Sakura; Shinya, Akikazu; Kuroda, Soichi; Gomi, Harunori
2017-07-26
The accuracy of prostheses affects clinical success and is, in turn, affected by the accuracy of the scanner and CAD programs, so their accuracy is important. The first aim of this study was to evaluate the accuracy of an intraoral scanner with active triangulation (Cerec Omnicam), an intraoral scanner with a confocal laser (3Shape Trios), and an extraoral scanner with active triangulation (D810). The second aim was to compare the accuracy of digital crowns designed with two different scanner/CAD combinations. The accuracy of both the intraoral scanners and the extraoral scanner was clinically acceptable. The marginal and internal fit of digital crowns fabricated using the intraoral scanners and CAD programs was inferior to that of crowns fabricated using the extraoral scanner and CAD programs.
Plazzotta, Fernando; Otero, Carlos; Luna, Daniel; de Quiros, Fernan Gonzalez Bernaldo
2013-01-01
Physicians do not always keep the problem list accurate, complete, and updated. This review analyzed natural language processing (NLP) techniques and inference rules as strategies for maintaining the completeness and accuracy of the problem list in EHRs, via a non-systematic literature review in PubMed covering the last 10 years. Strategies to maintain the EHR problem list were analyzed in two directions: inputting problems into the problem list and removing problems from it. NLP and inference rules showed acceptable performance for inputting problems into the problem list; no studies using these techniques for removing problems have been published. Conclusion: both NLP and inference rules have achieved acceptable results as tools for maintaining the completeness and accuracy of the problem list.
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy on a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on their applicability to subject variability in the music-induced emotion classification problem. The results of this study validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
Deuterium-tritium neutron yield measurements with the 4.5 m neutron-time-of-flight detectors at NIF.
Moran, M J; Bond, E J; Clancy, T J; Eckart, M J; Khater, H Y; Glebov, V Yu
2012-10-01
The first several campaigns of laser fusion experiments at the National Ignition Facility (NIF) included a family of high-sensitivity scintillator/photodetector neutron-time-of-flight (nTOF) detectors for measuring deuterium-deuterium (DD) and deuterium-tritium (DT) neutron yields. The detectors provided consistent neutron yield (Yn) measurements from below 10^9 (DD) to nearly 10^15 (DT). The detectors initially demonstrated detector-to-detector Yn precisions better than 5%, but lacked in situ absolute calibrations. Recent experiments at NIF now have provided in situ DT yield calibration data that establish the absolute sensitivity of the 4.5 m differential tissue harmonic imaging (DTHI) detector with an accuracy of ±10% and precision of ±1%. The 4.5 m nTOF calibration measurements have also helped to establish improved detector impulse response functions and data analysis methods, which have contributed to improving the accuracy of the Yn measurements. These advances have also helped to extend the usefulness of nTOF measurements of ion temperature and the downscattered neutron ratio (neutron yield at 10-12 MeV divided by yield at 13-15 MeV) with other nTOF detectors.
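For orientation, the arrival times that separate DD from DT neutrons at a 4.5 m station follow directly from relativistic kinematics. The sketch below is illustrative only and is not the NIF analysis code; the constants are standard physical values.

```python
# Neutron time of flight over a fixed path, relativistic kinematics.
import math

C = 2.99792458e8        # speed of light, m/s
MN_MEV = 939.56542      # neutron rest energy, MeV

def tof_ns(kinetic_mev, path_m):
    """Time of flight in nanoseconds for a neutron of given kinetic energy."""
    gamma = 1.0 + kinetic_mev / MN_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return path_m / (beta * C) * 1e9

t_dt = tof_ns(14.1, 4.5)    # ~88 ns for 14.1 MeV DT neutrons
t_dd = tof_ns(2.45, 4.5)    # ~208 ns for 2.45 MeV DD neutrons
print(round(t_dt, 1), round(t_dd, 1))
```

The wide separation between the two arrival times is what lets a single detector resolve DD and DT yields, and the spread of arrival times around each peak encodes the ion temperature mentioned above.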
Accuracy in inference of nursing diagnoses in heart failure patients.
Pereira, Juliana de Melo Vellozo; Cavalcanti, Ana Carla Dantas; Lopes, Marcos Venícios de Oliveira; da Silva, Valéria Gonçalves; de Souza, Rosana Oliveira; Gonçalves, Ludmila Cuzatis
2015-01-01
Heart failure (HF) is a common cause of hospitalization and requires accuracy in clinical judgment and appropriate nursing diagnoses. Objective: to determine the accuracy of the nursing diagnoses of fatigue, activity intolerance, and decreased cardiac output in hospitalized HF patients. Method: descriptive study applied to nurses with experience in NANDA-I and/or HF nursing diagnoses. Evaluation and accuracy were determined by calculating efficacy (E), false negative (FN), false positive (FP), and trend (T) measures. Nurses who showed acceptable inspection for two diagnoses were selected. Results: the nursing diagnosis of fatigue was the one most commonly mistaken by the nursing evaluators. Conclusion: the search for improved diagnostic accuracy reaffirms the need for continuous and specific training to improve the diagnostic capability of nurses; the training allowed the exercise of clinical judgment and better diagnostic accuracy.
Accuracy of genomic selection in European maize elite breeding populations.
Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C
2012-03-01
Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross-validation. The study was based on experimental data from six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross-validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping in unreplicated field trials at 3-4 locations. As up to three generations per year are feasible for maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
Older Adults' Acceptance of Activity Trackers
Preusse, Kimberly C.; Mitzner, Tracy L.; Fausset, Cara Bailey; Rogers, Wendy A.
2016-01-01
Objective: To assess the usability and acceptance of activity tracking technologies by older adults. Method: First in our multi-method approach, we conducted heuristic evaluations of two activity trackers that revealed potential usability barriers to acceptance. Next, questionnaires and interviews were administered to 16 older adults (mean age = 70, SD = 3.09, range = 65-75) before and after a 28-day field study to understand facilitators and additional barriers to acceptance. These measurements were supplemented with diary and usage data and assessed if and why users overcame usability issues. Results: The heuristic evaluation revealed usability barriers in System Status Visibility; Error Prevention; and Consistency and Standards. The field study revealed additional barriers (e.g., accuracy, format) and acceptance facilitators (e.g., goal-tracking, usefulness, encouragement). Discussion: The acceptance of wellness management technologies, such as activity trackers, may be increased by addressing acceptance barriers during deployment (e.g., providing tutorials on challenging features, communicating usefulness). PMID:26753803
Calus, M P L; de Haas, Y; Veerkamp, R F
2013-10-01
Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, where the analyzed trait, being milk fat or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. Our results were that the developed bivariate Bayesian stochastic search variable selection model was able to analyze 2 traits, even though animals had measurements on only 1 of 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases. 
Our results emphasize that the chosen values of priors in Bayesian genomic prediction models are especially important in small data sets. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Clegg, R. H.; Scherz, J. P.
1975-01-01
Successful aerial photography depends on aerial cameras providing acceptable photographs within cost restrictions of the job. For topographic mapping where ultimate accuracy is required only large format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares resolution, photo interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color infrared photography for environmental mapping purposes.
Cannell, R C; Belk, K E; Tatum, J D; Wise, J W; Chapman, P L; Scanga, J A; Smith, G C
2002-05-01
Objective quantification of differences in wholesale cut yields of beef carcasses at plant chain speeds is important for the application of value-based marketing. This study was conducted to evaluate the ability of a commercial video image analysis system, the Computer Vision System (CVS) to 1) predict commercially fabricated beef subprimal yield and 2) augment USDA yield grading, in order to improve accuracy of grade assessment. The CVS was evaluated as a fully installed production system, operating on a full-time basis at chain speeds. Steer and heifer carcasses (n = 296) were evaluated using CVS, as well as by USDA expert and online graders, before the fabrication of carcasses into industry-standard subprimal cuts. Expert yield grade (YG), online YG, CVS estimated carcass yield, and CVS measured ribeye area in conjunction with expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, hot carcass weight) accounted for 67, 39, 64, and 65% of the observed variation in fabricated yields of closely trimmed subprimals. The dual component CVS predicted wholesale cut yields more accurately than current online yield grading, and, in an augmentation system, CVS ribeye measurement replaced estimated ribeye area in determination of USDA yield grade, and the accuracy of cutability prediction was improved, under packing plant conditions and speeds, to a level close to that of expert graders applying grades at a comfortable rate of speed offline.
NASA Astrophysics Data System (ADS)
Sah, Shagan
An increasingly important application of remote sensing is to provide decision support during emergency response and disaster management efforts. Land cover maps constitute one such useful application product during disaster events; if generated rapidly after any disaster, such map products can contribute to the efficacy of the response effort. In light of recent nuclear incidents, e.g., after the earthquake/tsunami in Japan (2011), our research focuses on constructing rapid and accurate land cover maps of the impacted area in case of an accidental nuclear release. The methodology involves integrating classification results from two different remote sensing modalities, namely coarse spatial resolution multi-temporal imagery and fine spatial resolution imagery, to increase classification accuracy. Although advanced methods have been developed for classification using high spatial or temporal resolution imagery, only a limited amount of work has been done on the fusion of these two remote sensing approaches. The data used included RapidEye and MODIS scenes over the Nine Mile Point Nuclear Power Station in Oswego (New York, USA). The first step in the process was the construction of land cover maps from freely available, high temporal resolution, low spatial resolution MODIS imagery using a time-series approach. We used the variability in the temporal signatures among different land cover classes for classification. The time series-specific features were defined by various physical properties of a pixel, such as variation in vegetation cover and water content over time. The pixels were classified into four land cover classes - forest, urban, water, and vegetation - using Euclidean and Mahalanobis distance metrics.
On the other hand, a high spatial resolution commercial satellite, such as RapidEye, can be tasked to capture images over the affected area in the case of a nuclear event. This imagery served as a second source of data to augment results from the time series approach. The classifications from the two approaches were integrated using an a posteriori probability-based fusion approach. This was done by establishing a relationship between the classes, obtained after classification of the two data sources. Despite the coarse spatial resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80%, when compared with GIS data sets from New York State. This fusion thus contributed to classification accuracy refinement, with a few additional advantages, such as correction for cloud cover and providing for an approach that is robust against point-in-time seasonal anomalies, due to the inclusion of multi-temporal data. We concluded that this approach is capable of generating land cover maps of acceptable accuracy and rapid turnaround, which in turn can yield reliable estimates of crop acreage of a region. The final algorithm is part of an automated software tool, which can be used by emergency response personnel to generate a nuclear ingestion pathway information product within a few hours of data collection.
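The distance-based classification step can be sketched as a nearest-centroid rule supporting both metrics named above. The class centroids, covariance, and pixel features here are synthetic two-dimensional placeholders, not values derived from MODIS.

```python
# Nearest-centroid classification with Euclidean or Mahalanobis distance.
# Synthetic classes and features; a sketch of the time-series classification step.
import numpy as np

def classify(pixels, centroids, cov=None):
    """pixels: (n, d) features; centroids: {label: (d,)}; cov: pooled (d, d)
    covariance. With cov, distances are Mahalanobis; otherwise Euclidean."""
    vi = np.linalg.inv(cov) if cov is not None else np.eye(pixels.shape[1])
    labels = list(centroids)
    dists = np.stack([
        np.einsum("nd,dk,nk->n", pixels - centroids[c], vi, pixels - centroids[c])
        for c in labels
    ], axis=1)                      # squared distances, shape (n, n_classes)
    return [labels[i] for i in np.argmin(dists, axis=1)]

centroids = {"water": np.array([0.1, 0.1]), "forest": np.array([0.8, 0.6])}
pixels = np.array([[0.15, 0.05], [0.75, 0.65]])
print(classify(pixels, centroids))  # ['water', 'forest']
```

The Mahalanobis variant down-weights directions of high within-class variability (e.g., seasonal swings in vegetation indices), which is why both metrics are worth comparing on temporal signatures.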
Prins, Annabel; Bovin, Michelle J; Smolenski, Derek J; Marx, Brian P; Kimerling, Rachel; Jenkins-Guarnieri, Michael A; Kaloupek, Danny G; Schnurr, Paula P; Kaiser, Anica Pless; Leyva, Yani E; Tiet, Quyen Q
2016-10-01
Posttraumatic Stress Disorder (PTSD) is associated with increased health care utilization, medical morbidity, and tobacco and alcohol use. Consequently, screening for PTSD has become increasingly common in primary care clinics, especially in Veteran healthcare settings where trauma exposure among patients is common. The objective of this study was to revise the Primary Care PTSD screen (PC-PTSD) to reflect the new Diagnostic and Statistical Manual of Mental Disorders (DSM-5) criteria for PTSD (PC-PTSD-5) and to examine both the diagnostic accuracy and the patient acceptability of the revised measure. We compared the PC-PTSD-5 results with those from a brief psychiatric interview for PTSD. Participants also rated screening preferences and acceptability of the PC-PTSD-5. A convenience sample of 398 Veterans participated in the study (response rate = 41%). Most of the participants were male, in their 60s, and the majority identified as non-Hispanic White. The PC-PTSD-5 was used as the screening measure, a modified version of the PTSD module of the MINI-International Neuropsychiatric Interview was used to diagnose DSM-5 PTSD, and five brief survey items were used to assess acceptability and preferences. The PC-PTSD-5 demonstrated excellent diagnostic accuracy (AUC = 0.941; 95% CI: 0.912-0.969). A cut score of 3 maximized sensitivity (κ[1] = 0.93; SE = 0.041; 95% CI: 0.849-1.00), a cut score of 4 maximized efficiency (κ[0.5] = 0.63; SE = 0.052; 95% CI: 0.527-0.731), and a cut score of 5 maximized specificity (κ[0] = 0.70; SE = 0.077; 95% CI: 0.550-0.853). Patients found the screen acceptable and indicated a preference for administration by their primary care providers as opposed to by other providers or via self-report. The PC-PTSD-5 demonstrated strong preliminary results for diagnostic accuracy, and was broadly acceptable to patients.
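The trade-off behind choosing a screening cut score can be illustrated with a small sketch. The score distributions below are synthetic, and the study's quality-weighted kappas are not reproduced, only plain sensitivity, specificity, and overall agreement (efficiency):

```python
# Synthetic PC-PTSD-5-style scores (0-5); not the study's data.
def screen_metrics(cases, controls, cut):
    """Metrics for the rule: screen positive if score >= cut."""
    tp = sum(s >= cut for s in cases)
    tn = sum(s < cut for s in controls)
    return tp / len(cases), tn / len(controls), (tp + tn) / (len(cases) + len(controls))

cases = [3, 4, 4, 5, 5, 5, 2, 4, 5, 3]      # interview-confirmed PTSD
controls = [0, 0, 1, 2, 1, 0, 3, 1, 0, 2]   # interview-negative

for cut in range(1, 6):
    sens, spec, acc = screen_metrics(cases, controls, cut)
    print(f"cut >= {cut}: sensitivity={sens:.2f}  specificity={spec:.2f}  efficiency={acc:.2f}")
```

Lowering the cut catches more true cases (sensitivity rises) at the cost of more false positives (specificity falls); the efficiency-maximizing cut sits in between, mirroring the 3/4/5 pattern reported above.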
Hopkins, D L; Safari, E; Thompson, J M; Smith, C R
2004-06-01
A wide selection of lamb types of mixed sex (ewes and wethers) was slaughtered at a commercial abattoir, and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib, 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down to a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R²=0.19, r.s.d.=2.80%), which improved markedly when PGR was replaced by NGR (R²=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures, then greater prediction accuracy could be achieved (R²=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R²=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. The models based on PCs were superior to those based on regression. It is demonstrated that, with appropriate modeling, the VIAScan® system offers a workable method for predicting lean meat yield automatically.
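The industry-style prediction model (lean meat yield regressed on HCW and a tissue-depth measure) is an ordinary least-squares fit; the sketch below fits such a two-predictor model via the normal equations and reports R². The carcass numbers are fabricated for illustration, not taken from the study:

```python
def fit_linear(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + ... via normal equations."""
    n, p = len(X), len(X[0]) + 1
    A = [[1.0] + list(row) for row in X]          # design matrix with intercept
    M = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    v = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):                            # Gaussian elimination, partial pivoting
        piv = max(range(i, p), key=lambda r: abs(M[r][i]))
        M[i], M[piv], v[i], v[piv] = M[piv], M[i], v[piv], v[i]
        for r in range(i + 1, p):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
            v[r] -= f * v[i]
    b = [0.0] * p                                 # back substitution
    for i in reversed(range(p)):
        b[i] = (v[i] - sum(M[i][j] * b[j] for j in range(i + 1, p))) / M[i][i]
    return b

def r_squared(X, y, b):
    pred = [b[0] + sum(bj * xj for bj, xj in zip(b[1:], row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Fabricated carcasses: (HCW kg, GR mm) -> lean meat yield (%), exact linear for the demo
X = [(18, 5), (20, 7), (22, 6), (24, 9), (26, 8), (28, 12)]
y = [40 + 0.2 * h - 0.5 * g for h, g in X]
b = fit_linear(X, y)
```

Replacing the raw predictors with principal components, as the study does, changes only the columns of X, not the fitting machinery.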
Sputum color: potential implications for clinical practice.
Johnson, Allen L; Hampson, David F; Hampson, Neil B
2008-04-01
Respiratory infections with sputum production are a major reason for physician visits, diagnostic testing, and antibiotic prescription in the United States. We sought to determine whether the simple characteristic of sputum color provides information that impacts resource utilization such as laboratory testing and prescription of antibiotics. Out-patient sputum samples submitted to the microbiology laboratory for routine analysis were assigned to one of 8 color categories (green, yellow-green, rust, yellow, red, cream, white, and clear), based on a key made from paint chip color samples. Subsequent Gram stain and culture results were compared to sputum color. Of 289 consecutive samples, 144 (50%) met standard Gram-stain criteria for being acceptable lower-respiratory-tract specimens. In the acceptable Gram-stain group, 60 samples had a predominant organism on Gram stain, and the culture yielded a consistent result in 42 samples (15% of the 289 total specimens). Yield at each level of analysis differed greatly by color. The yield from sputum colors green, yellow-green, yellow, and rust was much higher than the yield from cream, white, or clear. If out-patient sputum is cream, white, or clear, the yield from bacteriologic analysis is extremely low. This information can reduce laboratory processing costs and help minimize unnecessary antibiotic prescription.
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates image accuracy, mostly based on a set of check points of equal accuracy and reliability. However, such a set of check points is difficult to obtain in areas where field measurement is difficult and high-accuracy reference data are scarce, so the horizontal accuracy of the orthophoto image is difficult to test and evaluate. This uncertainty in horizontal accuracy has become a bottleneck for the application of spaceborne high-resolution remote sensing imagery and the expansion of its services. This paper therefore proposes a new method for testing the horizontal accuracy of orthophoto images that uses check points of differing accuracy and reliability, drawn from both high-accuracy reference data and field measurement. The new method solves horizontal accuracy detection for orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
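One simple way to combine check points of unequal reliability is to weight each point's squared error by the inverse variance of its source; the abstract does not give the authors' formulas, so the sketch below is only an illustration of that general idea, with invented numbers:

```python
import math

# Hypothetical inverse-variance weighting of check-point errors; this is an
# illustration of the general idea, not the paper's published method.
def weighted_rmse(errors, sigmas):
    """errors: horizontal error at each check point (m);
    sigmas: a priori standard error of each check point's source (m)."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    return math.sqrt(sum(w * e ** 2 for w, e in zip(weights, errors)) / wsum)

# Field-surveyed points (sigma 0.1 m) mixed with reference-map points (sigma 0.5 m)
errors = [0.8, 1.1, 0.9, 1.5, 2.0]
sigmas = [0.1, 0.1, 0.1, 0.5, 0.5]
rmse = weighted_rmse(errors, sigmas)
```

With equal sigmas this reduces to the ordinary RMSE; unequal sigmas let the less reliable reference-data points contribute without dominating the estimate.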
Vinyard, David J; Zachary, Chase E; Ananyev, Gennady; Dismukes, G Charles
2013-07-01
Forty-three years ago, Kok and coworkers introduced a phenomenological model describing period-four oscillations in O2 flash yields during photosynthetic water oxidation (WOC), which had been first reported by Joliot and coworkers. The original two-parameter Kok model was subsequently extended in its level of complexity to better simulate diverse data sets, including intact cells and isolated PSII-WOCs, but at the expense of introducing physically unrealistic assumptions necessary to enable numerical solutions. To date, analytical solutions have been found only for symmetric Kok models (inefficiencies are equally probable for all intermediates, called "S-states"). However, it is widely accepted that S-state reaction steps are not identical and some are not reversible (by thermodynamic restraints) thereby causing asymmetric cycles. We have developed a mathematically more rigorous foundation that eliminates unphysical assumptions known to be in conflict with experiments and adopts a new experimental constraint on solutions. This new algorithm termed STEAMM for S-state Transition Eigenvalues of Asymmetric Markov Models enables solutions to models having fewer adjustable parameters and uses automated fitting to experimental data sets, yielding higher accuracy and precision than the classic Kok or extended Kok models. This new tool provides a general mathematical framework for analyzing damped oscillations arising from any cycle period using any appropriate Markov model, regardless of symmetry. We illustrate applications of STEAMM that better describe the intrinsic inefficiencies for photon-to-charge conversion within PSII-WOCs that are responsible for damped period-four and period-two oscillations of flash O2 yields across diverse species, while using simpler Markov models free from unrealistic assumptions. Copyright © 2013 Elsevier B.V. All rights reserved.
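The damped period-four behavior that any Kok-type Markov model must reproduce can be seen in a minimal sketch: advance the S-state populations on each flash with a miss probability and record the O2-producing S3→S0 flux. This toy uses a uniform miss factor and a dark-adapted S0 start (real centers dark-adapt to S1 and have state-dependent inefficiencies, which is exactly what STEAMM's asymmetric treatment addresses):

```python
# Toy symmetric Kok cycle: S0 -> S1 -> S2 -> S3 -> S0, O2 released on S3 -> S0.
def flash_yields(n_flashes, miss=0.1):
    s = [1.0, 0.0, 0.0, 0.0]                     # populations in S0..S3
    yields = []
    for _ in range(n_flashes):
        advanced = [(1 - miss) * p for p in s]   # fraction advancing this flash
        yields.append(advanced[3])               # O2-producing S3 -> S0 flux
        s = [s[i] * miss + advanced[i - 1] for i in range(4)]
    return yields

ys = flash_yields(12)   # peaks on flashes 4, 8, 12, damping toward steady state
```

The transition rule is a 4x4 Markov matrix applied once per flash; the misses mix the populations, so successive peaks shrink toward the steady-state yield, the damping that STEAMM extracts from the matrix's eigenvalues.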
Acceptability and feasibility of a virtual counselor (VICKY) to collect family health histories.
Wang, Catharine; Bickmore, Timothy; Bowen, Deborah J; Norkunas, Tricia; Campion, MaryAnn; Cabral, Howard; Winter, Michael; Paasche-Orlow, Michael
2015-10-01
To overcome literacy-related barriers in the collection of electronic family health histories, we developed an animated Virtual Counselor for Knowing your Family History, or VICKY. This study examined the acceptability and accuracy of using VICKY to collect family histories from underserved patients as compared with My Family Health Portrait (MFHP). Participants were recruited from a patient registry at a safety net hospital and randomized to use either VICKY or MFHP. Accuracy was determined by comparing tool-collected histories with those obtained by a genetic counselor. A total of 70 participants completed this study. Participants rated VICKY as easy to use (91%) and easy to follow (92%), would recommend VICKY to others (83%), and were highly satisfied (77%). VICKY identified 86% of first-degree relatives and 42% of second-degree relatives; combined accuracy was 55%. As compared with MFHP, VICKY identified a greater number of health conditions overall (49% with VICKY vs. 31% with MFHP; incidence rate ratio (IRR): 1.59; 95% confidence interval (95% CI): 1.13-2.25; P = 0.008), in particular, hypertension (47 vs. 15%; IRR: 3.18; 95% CI: 1.66-6.10; P = 0.001) and type 2 diabetes (54 vs. 22%; IRR: 2.47; 95% CI: 1.33-4.60; P = 0.004). These results demonstrate that technological support for documenting family history risks can be highly accepted, feasible, and effective.
Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements
NASA Technical Reports Server (NTRS)
Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry
2008-01-01
Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.
Jeong, Seok Hoo; Yoon, Hyun Hwa; Kim, Eui Joo; Kim, Yoon Jae; Kim, Yeon Suk; Cho, Jae Hee
2017-01-01
Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is an accurate diagnostic method for pancreatic masses, and its accuracy is affected by the FNA method and the EUS equipment used. We therefore aimed to elucidate the instrumental and methodologic factors that determine the diagnostic yield of EUS-FNA for pancreatic solid masses without an on-site cytopathology evaluation. We retrospectively reviewed the medical records of 260 patients (265 pancreatic solid masses) who underwent EUS-FNA. We compared historical conventional EUS groups with high-resolution imaging devices and then analyzed the factors affecting EUS-FNA accuracy. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of EUS-FNA for pancreatic solid masses without on-site cytopathology evaluation were 83.4%, 81.8%, 100.0%, 100.0%, and 34.3%, respectively. In comparison with the conventional-image group, the high-resolution-image group showed increased accuracy, sensitivity, and specificity of EUS-FNA (71.3% vs 92.7%, 68.9% vs 91.9%, and 100% vs 100%, respectively). On multivariate analysis of the instrumental and methodologic factors, high-resolution imaging (P = 0.040, odds ratio = 3.28) and 3 or more needle passes (P = 0.039, odds ratio = 2.41) were important factors affecting the diagnostic yield for pancreatic solid masses. High-resolution imaging and 3 or more passes were the most significant factors influencing the diagnostic yield of EUS-FNA in patients with pancreatic solid masses without an on-site cytopathologist. PMID:28079803
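Figures like these follow directly from a 2x2 diagnostic table. The sketch below shows the formulas with invented counts chosen to echo the abstract's pattern (no false positives, hence perfect specificity and PPV, and a low NPV because most sampled lesions are malignant):

```python
# Diagnostic metrics from a 2x2 table; counts are invented, not the study's data.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

m = diagnostic_metrics(tp=90, fp=0, fn=20, tn=10)
```

NPV is prevalence-sensitive: when nearly every sampled mass is malignant, even a modest false-negative count drags NPV far below the other metrics, which is why a 34.3% NPV can coexist with 100% specificity.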
The uncertainty of crop yield projections is reduced by improved temperature response functions
USDA-ARS?s Scientific Manuscript database
Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on cr...
Use of multivariable asymptotic expansions in a satellite theory
NASA Technical Reports Server (NTRS)
Dallas, S. S.
1973-01-01
The initial conditions and the perturbative force acting on the satellite are restricted so as to yield the motion of an equatorial satellite about an oblate body. In this manner, an exact analytic solution exists and can be used as a standard of comparison in numerical accuracy studies. Detailed numerical accuracy studies of the uniformly valid asymptotic expansions were made.
Maeda, Takuma; Hattori, Kohshi; Sumiyoshi, Miho; Kanazawa, Hiroko; Ohnishi, Yoshihiko
2018-06-01
The fourth-generation FloTrac/Vigileo™ improved its algorithm to follow changes in systemic vascular resistance index (SVRI). This revision may improve the accuracy and trending ability of cardiac index (CI) even in patients undergoing abdominal aortic aneurysm (AAA) surgery, which causes drastic changes in SVRI due to aortic clamping. The purpose of this study was to elucidate the accuracy and trending ability of the fourth-generation FloTrac/Vigileo™ in patients undergoing AAA surgery by comparing the FloTrac/Vigileo™-derived CI (CI_FT) with that measured by three-dimensional echocardiography (CI_3D). Twenty-six patients undergoing elective AAA surgery were included in this study. CI_FT and CI_3D were determined simultaneously at eight points, including before and after aortic clamping. We used CI_3D as the reference method. In the Bland-Altman analysis, CI_FT had a wide limit of agreement with CI_3D, showing a percentage error of 46.7%. Subgroup analysis showed that the percentage error between CI_3D and CI_FT was 56.3% in patients with cardiac index < 2.5 L/min/m² and 28.4% in patients with cardiac index ≥ 2.5 L/min/m². SVRI was significantly higher in patients with cardiac index < 2.5 L/min/m² (1703 ± 330 vs. 2757 ± 798; p < 0.001). The trending ability of the fourth-generation FloTrac/Vigileo™ after aortic clamping was not clinically acceptable (26.9%). The degree of accuracy of the fourth-generation FloTrac/Vigileo™ in patients undergoing AAA surgery was not acceptable, and its trending ability after aortic clamping was below the acceptable limit.
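The percentage error used here is the standard Bland-Altman-based criterion: 1.96 times the SD of the paired differences, divided by the mean cardiac index, with roughly 30% conventionally taken as the acceptability limit. A minimal sketch with synthetic paired readings:

```python
import statistics

# Bland-Altman agreement between a reference CI and a test CI.
# The paired values below are synthetic, not the study's data.
def bland_altman(reference, test):
    diffs = [t - r for r, t in zip(reference, test)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    mean_ci = statistics.mean(reference + test)
    pct_error = 100 * 1.96 * sd / mean_ci
    return bias, loa, pct_error

ci_3d = [2.0, 2.5, 3.0, 3.5]   # reference method (L/min/m^2)
ci_ft = [2.2, 2.4, 3.1, 3.3]   # test method
bias, loa, pct_error = bland_altman(ci_3d, ci_ft)
```

A small bias with a large percentage error, as in the study's 46.7%, means the two methods agree on average but scatter too widely around each other to be interchangeable.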
Code of Federal Regulations, 2014 CFR
2014-01-01
... territory or possession of the United States. Subsequent crop means any crop planted after an initial crop... itself to the greatest level of accuracy, as determined by the FSA State committee. USDA means United... history yield means the average of the actual production history yields for each insurable or noninsurable...
Code of Federal Regulations, 2012 CFR
2012-01-01
... territory or possession of the United States. Subsequent crop means any crop planted after an initial crop... itself to the greatest level of accuracy, as determined by the FSA State committee. USDA means United... history yield means the average of the actual production history yields for each insurable or noninsurable...
Admissions-Yield and Persistence Analysis. AIR Forum Paper 1978.
ERIC Educational Resources Information Center
Ramist, Leonard
Data from the College Board's Admissions Test Program (ATP) Summary Reports are used to analyze the student market attraction for ATP report designations, the application rate, the acceptance rate, the enrollment yield, and the dropout rate for 254 different student groups for 251 colleges. Student groups are defined in terms of their College…
ERIC Educational Resources Information Center
Putnam, Susan K.; Lopata, Christopher; Fox, Jeffery D.; Thomeer, Marcus L.; Rodgers, Jonathan D.; Volker, Martin A.; Lee, Gloria K.; Neilans, Erik G.; Werth, Jilynn
2012-01-01
This study compared cortisol concentrations yielded using three saliva collection methods (passive drool, salivette, and sorbette) in both in vitro and in vivo conditions, as well as method acceptability for a sample of children (n = 39) with High Functioning Autism Spectrum Disorders. No cortisol concentration differences were observed between…
An optimized proportional-derivative controller for the human upper extremity with gravity.
Jagodnik, Kathleen M; Blana, Dimitra; van den Bogert, Antonie J; Kirsch, Robert F
2015-10-15
When Functional Electrical Stimulation (FES) is used to restore movement in subjects with spinal cord injury (SCI), muscle stimulation patterns should be selected to generate accurate and efficient movements. Ideally, the controller for such a neuroprosthesis will have the simplest architecture possible, to facilitate translation into a clinical setting. In this study, we used the simulated annealing algorithm to optimize two proportional-derivative (PD) feedback controller gain sets for a 3-dimensional arm model that includes musculoskeletal dynamics and has 5 degrees of freedom and 22 muscles, performing goal-oriented reaching movements. Controller gains were optimized by minimizing a weighted sum of position errors, orientation errors, and muscle activations. After optimization, gain performance was evaluated on the basis of accuracy and efficiency of reaching movements, along with three other benchmark gain sets not optimized for our system, on a large set of dynamic reaching movements for which the controllers had not been optimized, to test ability to generalize. Robustness in the presence of weakened muscles was also tested. The two optimized gain sets were found to have very similar performance to each other on all metrics, and to exhibit significantly better accuracy, compared with the three standard gain sets. All gain sets investigated used physiologically acceptable amounts of muscular activation. It was concluded that optimization can yield significant improvements in controller performance while still maintaining muscular efficiency, and that optimization should be considered as a strategy for future neuroprosthesis controller design. Published by Elsevier Ltd.
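Simulated annealing over controller gains can be sketched on a deliberately tiny problem: a PD law driving a damped one-degree-of-freedom unit-mass load to a target, with a cost mixing squared tracking error and an effort penalty, echoing the weighted objective above. Everything here (plant, weights, cooling schedule) is an illustrative stand-in for the paper's 5-DOF, 22-muscle arm model:

```python
import math, random

def simulate(kp, kd, target=1.0, dt=0.01, steps=300):
    """Cost of a PD-controlled reach on a damped unit-mass load."""
    x, v, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - x) - kd * v                      # PD control law
        a = u - 0.5 * v                                     # unit mass, light viscous damping
        v += a * dt
        x += v * dt
        cost += ((target - x) ** 2 + 1e-4 * u ** 2) * dt    # error + effort penalty
    return cost

def anneal(iters=2000, temp0=1.0, seed=1):
    rng = random.Random(seed)
    current = [10.0, 1.0]                                   # starting gains [Kp, Kd]
    cur_cost = simulate(*current)
    best, best_cost = list(current), cur_cost
    for k in range(iters):
        temp = max(temp0 * (1 - k / iters), 1e-9)           # linear cooling schedule
        cand = [min(100.0, max(0.0, g + rng.gauss(0, 2.0))) for g in current]
        c = simulate(*cand)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cur_cost or rng.random() < math.exp(-(c - cur_cost) / temp):
            current, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = list(cand), c
    return best, best_cost

gains, cost = anneal()
```

Thermal acceptance of occasionally worse gain sets is what lets the search escape local minima of the cost surface; tracking the best-ever candidate separately guarantees the returned gains never underperform the starting point.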
Sex determination of a Tunisian population by CT scan analysis of the skull.
Zaafrane, Malek; Ben Khelil, Mehdi; Naccache, Ines; Ezzedine, Ekbel; Savall, Frédéric; Telmon, Norbert; Mnif, Najla; Hamdoun, Moncef
2018-05-01
It is widely accepted that the estimation of biological attributes in the human skeleton is more accurate when population-specific standards are applied. With the shortage of such data for contemporary North African populations, it is duly required to establish population-specific standards. We present here the first craniometric standards for sex determination of a contemporary Tunisian population. The aim of this study was to analyze the correlation between sex and metric parameters of the skull in this population using CT scan analysis and to generate proper reliable standards for sex determination of a complete or fragmented skull. The study sample comprised cranial multislice computed tomography scans of 510 individuals equally distributed by sex. ASIR™ software on a General Electric™ workstation was used to position 37 landmarks along the volume-rendered images and the multiplanar slices, defining 27 inter-landmark distances. Frontal and parietal bone thickness was also measured for each case. The data were analyzed using basic descriptive statistics and logistic regression with cross-validation of classification results. All of the measurements were sexually dimorphic with male values being higher than female values. A nine-variable model achieved the maximum classification accuracy of 90% with -2.9% sex bias and a six-variable model yielded 85.9% sexing accuracy with -0.97% sex bias. We conclude that the skull is highly dimorphic and represents a reliable bone for sex determination in contemporary Tunisian individuals.
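Classification functions of the kind described are logistic regressions on cranial measurements. The one-variable sketch below fits P(male) = sigmoid(a + b·(x − mean)) by stochastic gradient ascent; the study's best models combine six to nine measurements, and the single measurement and its values here are synthetic, not the Tunisian CT data:

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """One-variable logistic regression fitted by stochastic gradient ascent."""
    mu = sum(xs) / len(xs)                 # center the predictor for stable updates
    a = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(a + b * (x - mu))))
            a += lr * (y - p)              # gradient of the log-likelihood
            b += lr * (y - p) * (x - mu)
    return mu, a, b

def predict_male(model, x):
    mu, a, b = model
    return 1 / (1 + math.exp(-(a + b * (x - mu)))) > 0.5

# Synthetic breadth-type measurement (cm); males tend larger. 1 = male.
xs = [12.1, 12.4, 12.0, 13.5, 13.9, 13.2]
ys = [0, 0, 0, 1, 1, 1]
model = fit_logistic(xs, ys)
```

Cross-validating such a model, as the study does, means refitting on each training split and scoring `predict_male` on the held-out skulls; the sex bias is the difference between male and female error rates.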
NASA Astrophysics Data System (ADS)
Martinez-Torteya, Antonio; Treviño-Alvarado, Víctor; Tamez-Peña, José
2013-02-01
The accurate diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI) confers many clinical research and patient care benefits. Studies have shown that multimodal biomarkers provide better diagnosis accuracy of AD and MCI than unimodal biomarkers, but their construction has been based on traditional statistical approaches. The objective of this work was the creation of accurate AD and MCI diagnostic multimodal biomarkers using advanced bioinformatics tools. The biomarkers were created by exploring multimodal combinations of features using machine learning techniques. Data were obtained from the ADNI database. The baseline information (e.g. MRI analyses, PET analyses and laboratory assays) from AD, MCI and healthy control (HC) subjects with available diagnosis up to June 2012 was mined for case/control candidates. The data mining yielded 47 HC, 83 MCI and 43 AD subjects for biomarker creation. Each subject was characterized by at least 980 ADNI features. A genetic algorithm feature selection strategy was used to obtain compact and accurate cross-validated nearest centroid biomarkers. The biomarkers achieved training classification accuracies of 0.983, 0.871 and 0.917 for HC vs. AD, HC vs. MCI and MCI vs. AD respectively. The constructed biomarkers were relatively compact: from 5 to 11 features. Those multimodal biomarkers included several widely accepted univariate biomarkers and novel image and biochemical features. Multimodal biomarkers constructed from previously and non-previously AD associated features showed improved diagnostic performance when compared to those based solely on previously AD associated features.
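A nearest-centroid biomarker of the kind described classifies a subject by the closest class mean in feature space, and leave-one-out accuracy gives a simple cross-validated estimate. The feature values below are invented stand-ins for the multimodal ADNI features, and the genetic-algorithm feature-selection step is omitted:

```python
import math

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid(train, x):
    """train maps class label -> list of feature vectors."""
    best_label, best_d = None, float("inf")
    for label, rows in train.items():
        d = math.dist(centroid(rows), x)
        if d < best_d:
            best_label, best_d = label, d
    return best_label

def loo_accuracy(train):
    """Leave-one-out cross-validated accuracy of the nearest-centroid rule."""
    correct = total = 0
    for label, rows in train.items():
        for i, x in enumerate(rows):
            held = {l: [r for j, r in enumerate(rs) if (l, j) != (label, i)]
                    for l, rs in train.items()}
            correct += nearest_centroid(held, x) == label
            total += 1
    return correct / total

# Invented 2-feature toy data; real biomarkers used 5-11 selected features.
train = {"HC": [[1.0, 1.0], [1.2, 0.8]], "AD": [[3.0, 3.0], [2.8, 3.2]]}
```

In the study's pipeline, the genetic algorithm searches over which of the ~980 features enter `train`, scoring each candidate subset by exactly this kind of cross-validated accuracy.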
Zγγγ → 0 Processes in SANC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardin, D. Yu., E-mail: bardin@nu.jinr.ru; Kalinovskaya, L. V., E-mail: kalinov@nu.jinr.ru; Uglov, E. D., E-mail: corner@nu.jinr.ru
2013-11-15
We describe the analytic and numerical evaluation of the γγ → γZ process cross section and the Z → γγγ decay rate within the SANC system multi-channel approach at the one-loop accuracy level, with all masses taken into account. The corresponding package for numeric calculations is presented. To check the correctness of the results, we compare them with other independent calculations.
Independent Peer Evaluation of the Large Area Crop Inventory Experiment (LACIE): The LACIE Symposium
NASA Technical Reports Server (NTRS)
1978-01-01
Yield models and crop estimate accuracy are discussed within the Large Area Crop Inventory Experiment. The wheat yield estimates in the United States, Canada, and the U.S.S.R. are emphasized. Experimental results, design, system implementation, data processing systems, and applications were considered.
USDA-ARS?s Scientific Manuscript database
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection is an attractive technology to generate rapid genetic gains in switchgrass and ...
Improving the Accuracy of Structural Fatigue Life Tracking Through Dynamic Strain Sensor Calibration
2011-09-01
strength corrosion resistant 7075-T6 alloy, and included hinge lugs, a bulkhead, spars, and wing skins that were fastened together using welds, rivets... See also ADA580921, International Workshop on Structural Health Monitoring: From Condition-Based... greater than 10% under the same loading conditions [1]. These differences must be accounted for to have acceptable accuracy levels in the ultimate
Accuracy of post-bomb 137Cs and 14C in dating fluvial deposits
Ely, L.L.; Webb, R.H.; Enzel, Y.
1992-01-01
The accuracy and precision of 137Cs and 14C for dating post-1950 alluvial deposits were evaluated for deposits from known floods on two rivers in Arizona. The presence of 137Cs reliably indicates that deposition occurred after intensive above-ground nuclear testing was initiated around 1950. There was a positive correlation between the measured level of 137Cs activity and the clay content of the sediments, although 137Cs was detected even in sandy flood sediments with low clay content. 137Cs is a valuable dating tool in arid environments where organic materials for 14C or tree-ring dating are scarce and observational records are limited. The 14C activity measured in different types of fine organic detritus yielded dates within 1 to 8 yr of a 1980 flood deposit, and the accuracy was species-dependent. However, undifferentiated mixtures of fine organic materials from several post-bomb deposits of various ages repeatedly yielded dates between 1958 and 1962, and detrital charcoal yielded a date range of 1676-1939. In semiarid environments, the residence time of most types of organic debris precludes accurate annual resolution of post-bomb 14C dates. © 1992.
Darmstadt, G L; Kumar, V; Shearer, J C; Misra, R; Mohanty, S; Baqui, A H; Coffey, P S; Awasthi, S; Singh, J V; Santosham, M
2007-10-01
To determine the accuracy and acceptability of a handheld scale prototype designed for nonliterate users to classify newborns into three weight categories (≥2,500 g; 2,000 to 2,499 g; and <2,000 g). Weights of 1,100 newborns in Uttar Pradesh, India, were measured on the test scale and validated against a gold standard. Mothers, family members and community health stakeholders were interviewed to assess the acceptability of the test scale. The test scale was highly sensitive and specific at classifying newborn weight (normal weight: 95.3 and 96.3%, respectively; low birth weight: 90.4 and 99.2%, respectively; very low birth weight: 91.7 and 98.4%, respectively). It was the overall agreement of the community that the test scale was more practical and easier to interpret than the gold standard. The BIRTHweigh III scale accurately identifies low birth weight and very low birth weight newborns to target weight-specific interventions. The scale is extremely practical and useful for resource-poor settings, especially those with low levels of literacy.
Hutchesson, Melinda J; Rollo, Megan E; Callister, Robin; Collins, Clare E
2015-01-01
Adherence and accuracy of self-monitoring of dietary intake influences success in weight management interventions. Information technologies such as computers and smartphones have the potential to improve adherence and accuracy by reducing the burden associated with monitoring dietary intake using traditional paper-based food records. We evaluated the acceptability and accuracy of three different 7-day food record methods (online accessed via computer, online accessed via smartphone, and paper-based). Young women (N=18; aged 23.4±2.9 years; body mass index 24.0±2.2) completed the three 7-day food records in random order with 7-day washout periods between each method. Total energy expenditure (TEE) was derived from resting energy expenditure (REE) measured by indirect calorimetry and physical activity level (PAL) derived from accelerometers (TEE=REE×PAL). Accuracy of the three methods was assessed by calculating absolute (energy intake [EI]-TEE) and percentage difference (EI/TEE×100) between self-reported EI and TEE. Acceptability was assessed via questionnaire. Mean±standard deviation TEE was 2,185±302 kcal/day and EI was 1,729±249 kcal/day, 1,675±287 kcal/day, and 1,682±352 kcal/day for computer, smartphone, and paper records, respectively. There were no significant differences between absolute and percentage differences between EI and TEE for the three methods: computer, -510±389 kcal/day (78%); smartphone, -456±372 kcal/day (80%); and paper, -503±513 kcal/day (79%). Half of participants (n=9) preferred computer recording, 44.4% preferred smartphone, and 5.6% preferred paper-based records. Most participants (89%) least preferred the paper-based record. Because online food records completed on either computer or smartphone were as accurate as paper-based records but more acceptable to young women, they should be considered when self-monitoring of intake is recommended to young women. Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
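The accuracy calculation defined above (TEE = REE × PAL; absolute difference EI − TEE; percentage EI/TEE × 100) can be checked with representative numbers; the REE and PAL values below are illustrative, chosen only so that TEE lands near the reported ~2,185 kcal/day:

```python
# Worked numbers for the paper's accuracy definitions; inputs are illustrative,
# not individual participant data.
def reporting_accuracy(ei_kcal, ree_kcal, pal):
    tee = ree_kcal * pal                       # TEE = REE x PAL
    return ei_kcal - tee, 100 * ei_kcal / tee  # absolute and percentage difference

abs_diff, pct = reporting_accuracy(ei_kcal=1729, ree_kcal=1456, pal=1.5)
```

With these inputs the computed under-reporting (about −455 kcal/day, ~79% of TEE) sits in the same range as the study's computer-record result of −510±389 kcal/day (78%).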
Measurements of e+e- pairs from open heavy flavor in p+p and d+A collisions at √s_NN = 200 GeV
Adare, A.; Afanasiev, S.; Aidala, C.; ...
2017-08-18
In this paper, we report a measurement of e+e- pairs from semileptonic heavy-flavor decays in p+p collisions at √s_NN = 200 GeV. The e+e- pair yield from $b\bar{b}$ and $c\bar{c}$ is separated by exploiting a double differential fit done simultaneously in dielectron invariant mass and p_T. We used three different event generators, pythia, mc@nlo, and powheg, to simulate the e+e- spectra from $c\bar{c}$ and $b\bar{b}$ production. The data can be well described by all three generators within the detector acceptance. However, when using the generators to extrapolate to 4π, significant differences are observed for the total cross section. These differences are less pronounced for $b\bar{b}$ than for $c\bar{c}$. The same model dependence was observed in already published d+A data. Lastly, the p+p data are also directly compared with d+A data in mass and p_T, and within the statistical accuracy no nuclear modification is seen.
On the Nature of Clitics and Their Sensitivity to Number Attraction Effects
Santesteban, Mikel; Zawiszewski, Adam; Erdocia, Kepa; Laka, Itziar
2017-01-01
Pronominal dependencies have been shown to be more resilient to attraction effects than subject-verb agreement. We use this phenomenon to investigate whether antecedent-clitic dependencies in Spanish are computed like agreement or like pronominal dependencies. In Experiment 1, an acceptability judgment self-paced reading task was used. Accuracy data yielded reliable attraction effects in both grammatical and ungrammatical sentences, only in singular (but not plural) clitics. Reading times did not show reliable attraction effects. In Experiment 2, we measured electrophysiological responses to violations, which elicited a biphasic frontal negativity-P600 pattern. Number attraction modulated the frontal negativity but not the amplitude of the P600 component. This differs from ERP findings on subject-verb agreement, since when the baseline matching condition obtained a biphasic pattern, attraction effects only modulated the P600, not the preceding negativity. We argue that these findings support cue-retrieval accounts of dependency resolution and further suggest that the sensitivity to attraction effects shown by clitics resembles more the computation of pronominal dependencies than that of agreement. PMID:28928686
Measurements of e+e- pairs from open heavy flavor in p+p and d+A collisions at √s_NN = 200 GeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adare, A.; Afanasiev, S.; Aidala, C.
In this paper, we report a measurement of e+e- pairs from semileptonic heavy-flavor decays in p+p collisions at √s_NN = 200 GeV. The e+e- pair yield from $b\overline{b}$ and $c\overline{c}$ is separated by exploiting a double differential fit done simultaneously in dielectron invariant mass and p_T. We used three different event generators, pythia, mc@nlo, and powheg, to simulate the e+e- spectra from $c\overline{c}$ and $b\overline{b}$ production. The data can be well described by all three generators within the detector acceptance. However, when using the generators to extrapolate to 4π, significant differences are observed for the total cross section. These differences are less pronounced for $b\overline{b}$ than for $c\overline{c}$. The same model dependence was observed in already published d+A data. Lastly, the p+p data are also directly compared with d+A data in mass and p_T, and within the statistical accuracy no nuclear modification is seen.
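The separation of the two heavy-flavor sources rests on the $c\overline{c}$ and $b\overline{b}$ spectra having different shapes, so their normalizations can be extracted simultaneously from one fit. A minimal, invented illustration of such a template fit (toy exponential shapes and yields, not PHENIX templates or data):

```python
import numpy as np

# Toy two-template decomposition, loosely analogous to splitting the
# e+e- pair yield into charm-like and bottom-like contributions.
# All shapes and normalizations below are invented.
mass = np.linspace(0.5, 5.0, 50)
t_cc = np.exp(-mass / 1.0)   # steeply falling "charm-like" template
t_bb = np.exp(-mass / 2.5)   # harder "bottom-like" template

true_cc, true_bb = 300.0, 40.0
data = true_cc * t_cc + true_bb * t_bb   # noiseless pseudo-data

# Solve for the two normalizations by linear least squares
A = np.column_stack([t_cc, t_bb])
(fit_cc, fit_bb), *_ = np.linalg.lstsq(A, data, rcond=None)
```

With noiseless pseudo-data the fit recovers the input yields; in practice the fit would be done double-differentially in mass and p_T with statistical weights.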
Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard.
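The sample-dependence argument is easy to reproduce numerically. A small sketch with invented 2×2 counts (not data from the study), showing that overall accuracy shifts with the positive/negative mix even when sensitivity and specificity stay fixed:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 agreement metrics between an index test and a reference."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Same sensitivity (0.80) and specificity (0.90), two different
# disease-severity distributions (invented counts):
m_balanced = diagnostic_metrics(tp=80, fp=10, tn=90, fn=20)   # 100 pos / 100 neg
m_skewed   = diagnostic_metrics(tp=8,  fp=90, tn=810, fn=2)   # 10 pos / 900 neg
```

Accuracy is 0.85 in the balanced sample but about 0.90 in the skewed one, with the test itself unchanged, which is the paper's core point.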
Network Candidate Genes in Breeding for Drought Tolerant Crops
Krannich, Christoph Tim; Maletzki, Lisa; Kurowsky, Christina; Horn, Renate
2015-01-01
Climate change leading to increased periods of low water availability as well as increasing demands for food in the coming years makes breeding for drought tolerant crops a high priority. Plants have developed diverse strategies and mechanisms to survive drought stress. However, most of these represent drought escape or avoidance strategies like early flowering or low stomatal conductance that are not applicable in breeding for crops with high yields under drought conditions. Even though a great deal of research is ongoing, especially in cereals, in this regard, not all mechanisms involved in drought tolerance are yet understood. The identification of candidate genes for drought tolerance that have a high potential to be used for breeding drought tolerant crops represents a challenge. Breeding for drought tolerant crops has to focus on acceptable yields under water-limited conditions and not on survival. However, as more and more knowledge about the complex networks and the cross talk during drought is available, more options are revealed. In addition, it has to be considered that conditioning a crop for drought tolerance might require the production of metabolites and might cost the plants energy and resources that cannot be used in terms of yield. Recent research indicates that yield penalty exists and efficient breeding for drought tolerant crops with acceptable yields under well-watered and drought conditions might require uncoupling yield penalty from drought tolerance. PMID:26193269
Cunha, B C N; Belk, K E; Scanga, J A; LeValley, S B; Tatum, J D; Smith, G C
2004-07-01
This study was performed to validate previous equations and to develop and evaluate new regression equations for predicting lamb carcass fabrication yields using outputs from a lamb vision system-hot carcass component (LVS-HCC) and the lamb vision system-chilled carcass LM imaging component (LVS-CCC). Lamb carcasses (n = 149) were selected after slaughter, imaged hot using the LVS-HCC, and chilled for 24 to 48 h at -3 to 1 degrees C. Chilled carcass yield grades (YG) were assigned on-line by USDA graders and by expert USDA grading supervisors with unlimited time and access to the carcasses. Before fabrication, carcasses were ribbed between the 12th and 13th ribs and imaged using the LVS-CCC. Carcasses were fabricated into bone-in subprimal/primal cuts. Yields calculated included 1) saleable meat yield (SMY); 2) subprimal yield (SPY); and 3) fat yield (FY). On-line (whole-number) USDA YG accounted for 59, 58, and 64%; expert (whole-number) USDA YG explained 59, 59, and 65%; and expert (nearest-tenth) USDA YG accounted for 60, 60, and 67% of the observed variation in SMY, SPY, and FY, respectively. The best prediction equation developed in this trial using LVS-HCC output and hot carcass weight as independent variables explained 68, 62, and 74% of the variation in SMY, SPY, and FY, respectively. Addition of output from LVS-CCC improved predictive accuracy of the equations; the combined output equations explained 72 and 66% of the variability in SMY and SPY, respectively. Accuracy and repeatability of measurement of LM area made with the LVS-CCC was also assessed, and results suggested that use of LVS-CCC provided reasonably accurate (R2 = 0.59) and highly repeatable (repeatability = 0.98) measurements of LM area.
Compared with USDA YG, use of the dual-component lamb vision system to predict cut yields of lamb carcasses improved accuracy and precision, suggesting that this system could have an application as an objective means for pricing carcasses in a value-based marketing system.
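The percentages quoted above are coefficients of determination (R²), the fraction of observed yield variation explained by each prediction equation. A minimal sketch of how such an R² is computed (invented numbers, not the study's data):

```python
def r_squared(predicted, observed):
    """Coefficient of determination: fraction of observed variation
    explained by the predictions (1 - SS_residual / SS_total)."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Perfect predictions give R² = 1.0; always predicting the mean gives R² = 0.0, so an R² of 0.72 means the equation explains 72% of the observed variability.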
Smoot, Betty J.; Wong, Josephine F.; Dodd, Marylin J.
2013-01-01
Objective: To compare diagnostic accuracy of measures of breast cancer-related lymphedema (BCRL).
Design: Cross-sectional design comparing clinical measures with the criterion standard of previous diagnosis of BCRL.
Setting: University of California San Francisco Translational Science Clinical Research Center.
Participants: Women older than 18 years and more than 6 months posttreatment for breast cancer (n=141; 70 with BCRL, 71 without BCRL).
Interventions: Not applicable.
Main Outcome Measures: Sensitivity, specificity, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were used to evaluate accuracy.
Results: A total of 141 women were categorized as having (n=70) or not having (n=71) BCRL based on past diagnosis by a health care provider, which was used as the reference standard. Analyses of ROC curves for the continuous outcomes yielded AUCs of .68 to .88 (P<.001); of the physical measures, bioimpedance spectroscopy yielded the highest accuracy, with an AUC of .88 (95% confidence interval, .80-.96) for women whose dominant arm was the affected arm. The lowest accuracy was found using the 2-cm diagnostic cutoff score to identify previously diagnosed BCRL (AUC, .54-.65).
Conclusions: Our findings support the use of bioimpedance spectroscopy in the assessment of existing BCRL. Refining diagnostic cutoff values may improve accuracy of diagnosis and warrant further investigation. PMID:21440706
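The AUC values reported here have a direct probabilistic reading: the AUC equals the probability that a randomly chosen affected woman scores higher on the measure than a randomly chosen unaffected one (the Mann-Whitney interpretation). A minimal sketch with invented scores:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a random positive case outranks a
    random negative case, counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented measurement scores for diagnosed vs. undiagnosed groups
auc = auc_mann_whitney([0.9, 0.8, 0.7], [0.6, 0.75, 0.2])
```

An AUC of .88, as reported for bioimpedance spectroscopy, means the measure ranks an affected arm above an unaffected one 88% of the time.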
Compact Intraoperative MRI: Stereotactic Accuracy and Future Directions.
Markowitz, Daniel; Lin, Dishen; Salas, Sussan; Kohn, Nina; Schulder, Michael
2017-01-01
Intraoperative imaging must supply data that can be used for accurate stereotactic navigation. This information should be at least as accurate as that acquired from diagnostic imagers. The aim of this study was to compare the stereotactic accuracy of an updated compact intraoperative MRI (iMRI) device based on a 0.15-T magnet to standard surgical navigation on a 1.5-T diagnostic MRI scanner and to navigation with an earlier model of the same system. The accuracy of each system was assessed using a water-filled phantom model of the brain. Data collected with the new system were compared to those obtained in a previous study assessing the older system. The accuracy of the new iMRI was measured against standard surgical navigation on a 1.5-T MRI using T1-weighted (W) images. The mean error with the iMRI using T1W images was lower than that based on images from the 1.5-T scan (1.24 vs. 2.43 mm). T2W images from the newer iMRI yielded a lower navigation error than those acquired with the prior model (1.28 vs. 3.15 mm). Improvements in magnet design can yield progressive increases in accuracy, validating the concept of compact, low-field iMRI. Avoiding the need for registration between image and surgical space increases navigation accuracy. © 2017 S. Karger AG, Basel.
Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime
2018-06-14
The present study aims to identify the accuracy of the NBN23® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart with fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a standard-dimensions basketball court. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy results for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. The RMSE across all distances and velocities presented an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems, and considered acceptable for indoor sports. The processing of data with filter correction seemed to reduce the noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using position-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
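The two accuracy measures can be sketched directly. This assumes %VAF is defined as 100·(1 − Var(error)/Var(reference)), which is one common convention; the paper may use a slightly different formulation:

```python
import math

def rmse(measured, reference):
    """Root mean square error between measured and reference positions."""
    n = len(reference)
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)) / n)

def pct_vaf(measured, reference):
    """Percentage of variance accounted for:
    100 * (1 - Var(error) / Var(reference))."""
    n = len(reference)
    mean_r = sum(reference) / n
    errors = [m - r for m, r in zip(measured, reference)]
    mean_e = sum(errors) / n
    var_e = sum((e - mean_e) ** 2 for e in errors) / n
    var_r = sum((r - mean_r) ** 2 for r in reference) / n
    return 100.0 * (1.0 - var_e / var_r)
```

Note the two metrics answer different questions: a constant positional offset inflates RMSE but leaves %VAF at 100, since the error variance is zero.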
Swain, Michael S; Henschke, Nicholas; Kamper, Steven J; Downie, Aron S; Koes, Bart W; Maher, Chris G
2014-01-01
Background: Numerous clinical tests are used in the diagnosis of anterior cruciate ligament (ACL) injury but their accuracy is unclear. The purpose of this study is to evaluate the diagnostic accuracy of clinical tests for the diagnosis of ACL injury.
Methods: Study design: systematic review. The review protocol was registered through PROSPERO (CRD42012002069). Electronic databases (PubMed, MEDLINE, EMBASE, CINAHL) were searched up to 19th of June 2013 to identify diagnostic studies comparing the accuracy of clinical tests for ACL injury to an acceptable reference standard (arthroscopy, arthrotomy, or MRI). Risk of bias was appraised using the QUADAS-2 checklist. Index test accuracy was evaluated using a descriptive analysis of paired likelihood ratios and displayed as forest plots.
Results: A total of 285 full-text articles were assessed for eligibility, from which 14 studies were included in this review. Included studies were deemed to be clinically and statistically heterogeneous, so a meta-analysis was not performed. Nine clinical tests from the history (popping sound at time of injury, giving way, effusion, pain, ability to continue activity) and four from physical examination (anterior draw test, Lachman's test, prone Lachman's test and pivot shift test) were investigated for diagnostic accuracy. Inspection of positive and negative likelihood ratios indicated that none of the individual tests provide useful diagnostic information in a clinical setting. Most studies were at risk of bias and reported imprecise estimates of diagnostic accuracy.
Conclusion: Despite being widely used and accepted in clinical practice, the results of individual history items or physical tests do not meaningfully change the probability of ACL injury. In contrast, combinations of tests have higher diagnostic accuracy; however, the most accurate combination of clinical tests remains an area for future research.
Clinical relevance: Clinicians should be aware of the limitations associated with the use of clinical tests for diagnosis of ACL injury. PMID:25187877
ERIC Educational Resources Information Center
Nurnberg, Peter; Schapiro, Morton; Zimmerman, David
2012-01-01
This paper provides an econometric analysis of the matriculation decisions made by students accepted to Williams College, one of the nation's most highly selective colleges and universities. Using data for the Williams classes of 2008 through 2012 to estimate a yield model, we find that--conditional on the student applying to and being accepted by…
Effects of Family and Friend Support on LGB Youths' Mental Health and Sexual Orientation Milestones
ERIC Educational Resources Information Center
Shilo, Guy; Savaya, Riki
2011-01-01
This study examined the effects of social support components and providers on mental health and sexual orientation (SO) milestones of lesbian, gay, and bisexual (LGB) youths. Data were collected on 461 self-identified LGB adolescents and young adults. Family acceptance and support yielded the strongest positive effect on self-acceptance of SO,…
Generation of pseudo-random numbers
NASA Technical Reports Server (NTRS)
Howell, L. W.; Rheinfurth, M. H.
1982-01-01
Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
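One of the standard practical methods for this task is inverse-transform sampling: apply the inverse cumulative distribution function to a uniform variate. A minimal example for the exponential distribution (chosen for illustration; the report covers a variety of distributions):

```python
import math
import random

def sample_exponential(rate, rng=random.random):
    """Inverse-transform sampling: if U is uniform on [0, 1), then
    X = -ln(1 - U) / rate follows an exponential distribution."""
    u = rng()
    return -math.log(1.0 - u) / rate

# Quick empirical check: the sample mean should approach 1 / rate
random.seed(42)
samples = [sample_exponential(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
```

The method is fast and exact whenever the inverse CDF has a closed form; otherwise rejection or composition methods of the kind the report surveys are typically preferred.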
[An ear thermometer based on infrared thermopiles sensor].
Xie, Haiyuan; Qian, Mingli
2013-09-01
According to the development of body temperature measurement modes, an ear thermometer with an infrared thermopile sensor is designed for body thermometry. Compared with an oral thermometer, the accuracy of the ear thermometer is acceptable.
Accuracy of parameterized proton range models; A comparison
NASA Astrophysics Data System (ADS)
Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.
2018-03-01
An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy loss-curve is best reproduced with the differentiated Bragg-Kleeman equation.
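The Bragg-Kleeman equation mentioned here is the power-law rule R(E) ≈ αE^p. As a sketch of the table-interpolation approach, the snippet below builds a coarse range-energy table from that rule and interpolates it, using commonly quoted approximate fit constants for protons in water; a piecewise-linear lookup stands in for the paper's spline schemes, and the study's own models and tables will differ:

```python
import numpy as np

# Bragg-Kleeman rule for proton range in water: R(E) = alpha * E**p,
# with R in cm and E in MeV. alpha and p are approximate fit values.
ALPHA, P = 2.2e-3, 1.77

def range_bk(energy_mev):
    return ALPHA * energy_mev ** P

# A range-energy "table" sampled every 2 MeV, plus an interpolation
# scheme applied to it (linear lookup as a stand-in for splines)
table_E = np.linspace(10.0, 250.0, 121)
table_R = range_bk(table_E)

test_E = np.linspace(11.0, 249.0, 200)
interp_R = np.interp(test_E, table_E, table_R)
max_rel_err = np.max(np.abs(interp_R - range_bk(test_E)) / range_bk(test_E))
```

Even this simple scheme stays well under 1% relative error on a 2 MeV grid, which illustrates why table interpolation can outperform closed-form approximations in reproduction accuracy.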
Aihara, Hiroyuki; Kumar, Nitin; Thompson, Christopher C
2018-04-19
An education system for narrow band imaging (NBI) interpretation requires sufficient exposure to key features. However, access to didactic lectures by experienced teachers is limited in the United States. To develop and assess the effectiveness of a colorectal lesion identification tutorial. In the image analysis pretest, subjects including 9 experts and 8 trainees interpreted 50 white light (WL) and 50 NBI images of colorectal lesions. Results were not reviewed with subjects. Trainees then participated in an online tutorial emphasizing NBI interpretation in colorectal lesion analysis. A post-test was administered and diagnostic yields were compared to pre-education diagnostic yields. Under the NBI mode, experts showed higher diagnostic yields (sensitivity 91.5% [87.3-94.4], specificity 90.6% [85.1-94.2], and accuracy 91.1% [88.5-93.7] with substantial interobserver agreement [κ value 0.71]) compared to trainees (sensitivity 89.6% [84.8-93.0], specificity 80.6% [73.5-86.3], and accuracy 86.0% [82.6-89.2], with substantial interobserver agreement [κ value 0.69]). The online tutorial improved the diagnostic yields of trainees to the equivalent level of experts (sensitivity 94.1% [90.0-96.6], specificity 89.0% [83.0-93.2], and accuracy 92.0% [89.3-94.7], p < 0.001 with substantial interobserver agreement [κ value 0.78]). This short, online tutorial improved diagnostic performance and interobserver agreement. © 2018 S. Karger AG, Basel.
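The κ values quoted above measure interobserver agreement beyond chance. As a sketch, the snippet below computes Cohen's kappa for two raters; the study pools multiple observers, so its κ may be computed differently (e.g., a Fleiss-style multi-rater statistic):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement between two raters corrected
    for the agreement expected by chance from their label frequencies."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_chance = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (p_obs - p_chance) / (1.0 - p_chance)
```

On the usual interpretive scale, values around 0.61-0.80, like the 0.69-0.78 reported here, are read as "substantial" agreement.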
Favazza, Christopher P; Yu, Lifeng; Leng, Shuai; Kofler, James M; McCollough, Cynthia H
2015-01-01
To compare the computed tomography dose and noise produced by an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies against clinically accepted technique charts and AEC systems designed to allow image noise to vary. A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system and the results were compared with clinical technique charts. For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noises of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant noise-based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects.
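The steep dose scaling reported for constant-noise AEC follows from a simple model: image noise scales as 1/√(detected signal), and the detected signal falls off roughly exponentially with patient width. A toy version of that model (the attenuation coefficient is illustrative, not the paper's fitted value):

```python
import math

# Assumed effective attenuation per cm of water-equivalent width
MU = 0.19

def relative_dose_constant_noise(width_cm, ref_width_cm):
    """Dose, relative to the reference width, needed to hold image noise
    constant when noise ~ 1/sqrt(dose * exp(-MU * width))."""
    return math.exp(MU * (width_cm - ref_width_cm))

def relative_noise(rel_dose, width_cm, ref_width_cm):
    """Noise, relative to the reference setup, for a given relative dose."""
    detected = rel_dose * math.exp(-MU * (width_cm - ref_width_cm))
    return 1.0 / math.sqrt(detected)
```

In this toy model, holding noise constant from a 28-cm reference up to a 50-cm patient requires exp(0.19 × 22), roughly a 65-fold dose increase, which illustrates (with exaggerated numbers) why constant-noise prescriptions become excessive for large patients.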
Field comparison of several commercially available radon detectors.
Field, R W; Kross, B C
1990-01-01
To determine the accuracy and precision of commercially available radon detectors in a field setting, 15 detectors from six companies were exposed to radon and compared to a reference radon level. The detectors from companies that had already passed National Radon Measurement Proficiency Program testing had better precision and accuracy than those detectors awaiting proficiency testing. Charcoal adsorption detectors and diffusion barrier charcoal adsorption detectors performed very well, and the latter detectors displayed excellent time averaging ability. Alternatively, charcoal liquid scintillation detectors exhibited acceptable accuracy but poor precision, and bare alpha registration detectors showed both poor accuracy and precision. The mean radon level reported by the bare alpha registration detectors was 68 percent lower than the radon reference level. PMID:2368851
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method.
It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
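The β-content (0.9) comparator described above can be sketched as follows. This is not the authors' generalized pivotal quantity procedure; the ±15% acceptance limits are an assumed illustration, and the chi-square quantile uses the Wilson-Hilferty approximation rather than an exact table.

```python
from statistics import NormalDist, mean, stdev
import math

def chi2_quantile(p, df):
    # Wilson-Hilferty approximation to the chi-square quantile (adequate here).
    z = NormalDist().inv_cdf(p)
    return df * (1.0 - 2.0 / (9.0 * df) + z * math.sqrt(2.0 / (9.0 * df))) ** 3

def beta_content_tolerance_interval(data, content=0.9, confidence=0.9):
    # Two-sided (content, confidence) tolerance interval via Howe's k-factor.
    n = len(data)
    nu = n - 1
    z = NormalDist().inv_cdf((1.0 + content) / 2.0)
    k = math.sqrt(nu * (1.0 + 1.0 / n) * z * z / chi2_quantile(1.0 - confidence, nu))
    m, s = mean(data), stdev(data)
    return m - k * s, m + k * s

def method_acceptable(total_errors, lower=-15.0, upper=15.0):
    # "Fit for purpose" check: the beta-content(0.9) interval of the total
    # error (e.g., percent bias of test results) must lie within acceptance
    # limits.  The +/-15% limits are an assumption, not from the paper.
    lo, hi = beta_content_tolerance_interval(total_errors)
    return lower <= lo and hi <= upper
```

A method whose total errors cluster tightly around zero passes; one with large scatter fails even if its mean bias is small, which is the sense in which total-error criteria couple accuracy and precision.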
Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues
2014-06-01
The aim of our study was to systematically compare different methodologies in order to establish an evidence-based approach, based on tendon thickness and structure, for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria and for different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess sensitivity, specificity, and accuracy of the different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P<.05). When used as an isolated criterion, MSTT>6.0mm provided the best accuracy (93.7%) compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% accuracy for diagnosis. The best overall yield was obtained by both parallel and sequential testing using either MSTT>6.0mm or ATS as diagnostic criteria in no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among these parallel and sequential tests that provided the best overall yield, additional analysis revealed that sequential testing first evaluating tendon structure required assessment of 258 criteria (vs. 261 for sequential testing first evaluating tendon thickness and 412 for parallel testing) and demanded a mean of 16.1 s to assess diagnostic criteria and reach the diagnosis (vs. 43.3 s for sequential testing first evaluating tendon thickness and 47.4 s for parallel testing). We found that using either MSTT>6.0mm or ATS as diagnostic criteria for both parallel and sequential testing provides the best overall yield for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI.
Among these strategies, a two-step sequential approach first assessing tendon structure was advantageous because it required a lower number of criteria to be assessed and demanded less time to assess diagnostic criteria and reach the diagnosis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
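The either-criterion rule and the assessment-count advantage of sequential testing described above can be illustrated with a small sketch. The data passed in any call are hypothetical, not the study's patients.

```python
def diagnostic_metrics(predictions, truth):
    # Sensitivity, specificity, and accuracy from paired boolean lists.
    tp = sum(p and t for p, t in zip(predictions, truth))
    tn = sum((not p) and (not t) for p, t in zip(predictions, truth))
    fp = sum(p and (not t) for p, t in zip(predictions, truth))
    fn = sum((not p) and t for p, t in zip(predictions, truth))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(truth),
    }

def parallel_or(test_a, test_b):
    # Parallel testing: positive if either criterion is positive.  A
    # sequential strategy (assess B only when A is negative) classifies
    # identically under this OR rule but needs fewer assessments.
    return [a or b for a, b in zip(test_a, test_b)]

def sequential_assessments(first_test_results):
    # First criterion is assessed in everyone; the second only in those
    # negative on the first.
    return len(first_test_results) + sum(1 for a in first_test_results if not a)
```

Because the first-assessed criterion screens out its positives, starting with the more frequently positive criterion minimizes the total number of assessments, which is why assessing tendon structure first was faster in the study.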
Diamond, Kevin R; Farrell, Thomas J; Patterson, Michael S
2003-12-21
Steady-state diffusion theory models of fluorescence in tissue have been investigated for recovering fluorophore concentrations and fluorescence quantum yield. Spatially resolved fluorescence, excitation and emission reflectance were computed using Monte Carlo simulations and measured using a multi-fibre probe on tissue-simulating phantoms containing aluminium phthalocyanine tetrasulfonate (AlPcS4), Photofrin or meso-tetra-(4-sulfonatophenyl)-porphine dihydrochloride (TPPS4). The accuracy of the fluorophore concentration and fluorescence quantum yield recovered by three different models of spatially resolved fluorescence was compared. The models were based on: (a) weighted difference of the excitation and emission reflectance, (b) fluorescence due to a point excitation source or (c) fluorescence due to a pencil beam excitation source. When literature values for the fluorescence quantum yield were used for each of the fluorophores, the fluorophore absorption coefficient (and hence concentration) at the excitation wavelength (mu(a,x,f)) was recovered with a root-mean-square accuracy of 11.4% using the point source model of fluorescence and 8.0% using the more complicated pencil beam excitation model. The accuracy was calculated over a broad range of optical properties and fluorophore concentrations. The weighted difference of reflectance model performed poorly, with a root-mean-square error in concentration of about 50%. Monte Carlo simulations suggest that there are some situations where the weighted difference of reflectance is as accurate as the other two models, although this was not confirmed experimentally. Estimates of the fluorescence quantum yield in multiple scattering media were also made by determining mu(a,x,f) independently from the fitted absorption spectrum and applying the various diffusion theory models.
The fluorescence quantum yields for AlPcS4 and TPPS4 were calculated to be 0.59 +/- 0.03 and 0.121 +/- 0.001 respectively using the point source model, and 0.63 +/- 0.03 and 0.129 +/- 0.002 using the pencil beam excitation model. These results are consistent with published values.
NASA Astrophysics Data System (ADS)
Susanti, Yuliana; Zukhronah, Etik; Pratiwi, Hasih; Respatiwulan; Sri Sulistijowati, H.
2017-11-01
To achieve food resilience in Indonesia, food diversification by exploring the potential of local foods is required. Corn is one alternative staple food of Javanese society. For that reason, corn production needs to be improved by considering the influencing factors. CHAID and CRT are data mining methods which can be used to classify the influencing variables. The present study seeks to characterize the potential local availability of corn in the regencies and cities of Java Island. CHAID analysis yields four classifications with an accuracy of 78.8%, while CRT analysis yields seven classifications with an accuracy of 79.6%.
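A single CRT (CART-style) split step can be sketched as a Gini-impurity search over thresholds of a numeric predictor. CHAID instead builds multiway splits from chi-square tests, so this covers only the CRT side, and the data in any call are hypothetical.

```python
def gini(labels):
    # Gini impurity of a list of class labels.
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_binary_split(values, labels):
    # One CRT step: find the threshold on a numeric predictor that minimizes
    # the weighted Gini impurity of the two child nodes.
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    best = (None, float("inf"))
    for k in range(1, n):
        left = [labels[order[i]] for i in range(k)]
        right = [labels[order[i]] for i in range(k, n)]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            threshold = 0.5 * (values[order[k - 1]] + values[order[k]])
            best = (threshold, score)
    return best  # (threshold, weighted child impurity)
```

Recursing this split on each child node, with a stopping rule, yields the classification tree whose leaf assignments produce the accuracy figures the abstract reports.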
49 CFR 212.201 - General qualifications of State inspection personnel.
Code of Federal Regulations, 2011 CFR
2011-10-01
... acceptable; (3) The ability to record data on standard report forms with a high degree of accuracy; (4) The..., and terminology common to operating and maintenance functions; and (2) The scope and major...
Kim, Young-sun; Trillaud, Hervé; Rhim, Hyunchul; Lim, Hyo K; Mali, Willem; Voogt, Marianne; Barkhausen, Jörg; Eckey, Thomas; Köhler, Max O; Keserci, Bilgin; Mougenot, Charles; Sokka, Shunmugavelu D; Soini, Jouko; Nieminen, Heikki J
2012-11-01
To evaluate the accuracy of the size and location of the ablation zone produced by volumetric magnetic resonance (MR) imaging-guided high-intensity focused ultrasound ablation of uterine fibroids on the basis of MR thermometric analysis and to assess the effects of a feedback control technique. This prospective study was approved by the institutional review board, and written informed consent was obtained. Thirty-three women with 38 uterine fibroids were treated with an MR imaging-guided high-intensity focused ultrasound system capable of volumetric feedback ablation. Size (diameter times length) and location (three-dimensional displacements) of each ablation zone induced by 527 sonications (with [n=471] and without [n=56] feedback) were analyzed according to the thermal dose obtained with MR thermometry. Prospectively defined acceptance ranges of targeting accuracy were ±5 mm in left-right (LR) and craniocaudal (CC) directions and ±12 mm in anteroposterior (AP) direction. Effects of feedback control in 8- and 12-mm treatment cells were evaluated by using a mixed model with repeated observations within patients. Overall mean sizes of ablation zones produced by 4-, 8-, 12-, and 16-mm treatment cells (with and without feedback) were 4.6 mm±1.4 (standard deviation)×4.4 mm±4.8 (n=13), 8.9 mm±1.9×20.2 mm±6.5 (n=248), 13.0 mm±1.2×29.1 mm±5.6 (n=234), and 18.1 mm±1.4×38.2 mm±7.6 (n=32), respectively. Targeting accuracy values (displacements in absolute values) were 0.9 mm±0.7, 1.2 mm±0.9, and 2.8 mm±2.2 in LR, CC, and AP directions, respectively. Of 527 sonications, 99.8% (526 of 527) were within acceptance ranges. Feedback control had no statistically significant effect on targeting accuracy or ablation zone size. However, variations in ablation zone size were smaller in the feedback control group. 
Sonication accuracy of volumetric MR imaging-guided high-intensity focused ultrasound ablation of uterine fibroids appears clinically acceptable and may be further improved by feedback control to produce more consistent ablation zones. © RSNA, 2012
Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav
To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant location on resultant definitive casts were performed using laser scanner and quality control software. The inter-implant distances and interimplant angulations for each implant pair were measured for the reference model and for each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of error in 3D for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both digital and conventional approaches. Statistical analysis was performed using t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done via the assessment of the passivity of a master verification stent for each implant pair, and significance was analyzed using chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) error of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64). 
The verification stent demonstrated acceptable passive fit for 11 out of 20 casts and 18 out of 20 casts for the digital and conventional methods, respectively (P = .01). Definitive casts fabricated using the digital impression approach were less accurate than those fabricated from the conventional impression approach for this simulated clinical scenario. A significant number of definitive casts generated by the digital technique did not meet clinically acceptable accuracy for the fabrication of a multiple implant-supported restoration.
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to use the spectral domain as the explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States, with the area under the curve (AUC) of the receiver operating characteristic as the performance measure. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater.
Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
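Spatial-domain interpolation of accuracy from a test sample to a wall-to-wall map can be sketched with a generic Gaussian kernel. This is an illustrative estimator, not the authors' exact prediction method, and the bandwidth is an assumed parameter.

```python
import math

def gaussian_kernel_accuracy(pixel_xy, sample_xy, sample_correct, bandwidth=5.0):
    # Predict per-pixel accuracy as a kernel-weighted average of the binary
    # correct/incorrect labels of nearby test-sample pixels.  For a spectral-
    # domain variant, the coordinates would be spectral features instead of
    # map positions.
    num = den = 0.0
    px, py = pixel_xy
    for (sx, sy), correct in zip(sample_xy, sample_correct):
        d2 = (px - sx) ** 2 + (py - sy) ** 2
        w = math.exp(-d2 / (2.0 * bandwidth ** 2))
        num += w * correct
        den += w
    return num / den if den else float("nan")
```

Running this for every map pixel yields the wall-to-wall accuracy surface; restricting the sample to one class at a time corresponds to the per-class interpolation the study found preferable.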
O’Cathain, Alicia
2014-01-01
Background. In 2010, a new telephone service, NHS 111, was piloted to improve access to urgent care in England. A unique feature is the use of non-clinical call takers who triage calls with computerized decision support and have access to clinical advisors when necessary. Aim. To explore the acceptability of NHS 111 to users. Design. Cross-sectional postal survey. Setting. Four pilot sites in England. Method. A postal survey of recent users of NHS 111. Results. The response rate was 41% (1769/4265), with 49% offering written comments (872/1769). Sixty-five percent indicated the advice given had been very helpful and 28% quite helpful. The majority of respondents (86%) indicated that they fully complied with advice. Seventy-three percent were very satisfied and 19% quite satisfied with the service overall. Users were less satisfied with the relevance of questions asked, and the accuracy and appropriateness of advice given, than with other aspects of the service. Users who were autorouted to NHS 111 from services such as GP out-of-hours services were less satisfied than direct callers. Conclusion. In pilot services in the first year of operation, NHS 111 appeared to be acceptable to the majority of users. Acceptability could be improved by reassessing the necessity of triage questions used and auditing the accuracy and appropriateness of advice given. User acceptability should be viewed in the context of findings from the wider evaluation, which identified that the NHS 111 pilot services did not improve access to urgent care and indeed increased the use of emergency ambulance services. PMID:24334420
Heidaritabar, M; Wolc, A; Arango, J; Zeng, J; Settar, P; Fulton, J E; O'Sullivan, N P; Bastiaansen, J W M; Fernando, R L; Garrick, D J; Dekkers, J C M
2016-10-01
Most genomic prediction studies fit only additive effects in models to estimate genomic breeding values (GEBV). However, if dominance genetic effects are an important source of variation for complex traits, accounting for them may improve the accuracy of GEBV. We investigated the effect of fitting dominance and additive effects on the accuracy of GEBV for eight egg production and quality traits in a purebred line of brown layers using pedigree or genomic information (42K single-nucleotide polymorphism (SNP) panel). Phenotypes were corrected for the effect of hatch date. Additive and dominance genetic variances were estimated using genomic-based [genomic best linear unbiased prediction (GBLUP)-REML and BayesC] and pedigree-based (PBLUP-REML) methods. Breeding values were predicted using a model that included both additive and dominance effects and a model that included only additive effects. The reference population consisted of approximately 1800 animals hatched between 2004 and 2009, while approximately 300 young animals hatched in 2010 were used for validation. Accuracy of prediction was computed as the correlation between phenotypes and estimated breeding values of the validation animals divided by the square root of the estimate of heritability in the whole population. The proportion of dominance variance to total phenotypic variance ranged from 0.03 to 0.22 with PBLUP-REML across traits, from 0 to 0.03 with GBLUP-REML and from 0.01 to 0.05 with BayesC. Accuracies of GEBV ranged from 0.28 to 0.60 across traits. Inclusion of dominance effects did not improve the accuracy of GEBV, and differences in their accuracies between genomic-based methods were small (0.01-0.05), with GBLUP-REML yielding higher prediction accuracies than BayesC for egg production, egg colour and yolk weight, while BayesC yielded higher accuracies than GBLUP-REML for the other traits. 
In conclusion, fitting dominance effects did not impact accuracy of genomic prediction of breeding values in this population. © 2016 Blackwell Verlag GmbH.
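The validation accuracy defined above, the correlation between phenotypes and estimated breeding values divided by the square root of the heritability estimate, is straightforward to compute; a minimal sketch with caller-supplied data:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def gebv_accuracy(phenotypes, gebvs, heritability):
    # Accuracy as defined in the study: correlation between phenotypes and
    # estimated breeding values of the validation animals, divided by the
    # square root of the heritability estimate.
    return pearson(phenotypes, gebvs) / math.sqrt(heritability)
```

Dividing by the square root of heritability rescales the phenotype-based correlation toward the correlation with true breeding values, which is why reported accuracies can approach or exceed the raw correlation.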
Point-of-care wound visioning technology: Reproducibility and accuracy of a wound measurement app
Anderson, John A. E.; Evans, Robyn; Woo, Kevin; Beland, Benjamin; Sasseville, Denis; Moreau, Linda
2017-01-01
Background Current wound assessment practices are lacking on several measures. For example, the most common method for measuring wound size is using a ruler, which has been demonstrated to be crude and inaccurate. An increase in periwound temperature is a classic sign of infection but skin temperature is not always measured during wound assessments. To address this, we have developed a smartphone application that enables non-contact wound surface area and temperature measurements. Here we evaluate the inter-rater reliability and accuracy of this novel point-of-care wound assessment tool. Methods and findings The wounds of 87 patients were measured using the Swift Wound app and a ruler. The skin surface temperature of 37 patients was also measured using an infrared FLIR™ camera integrated with the Swift Wound app and using the clinically accepted reference thermometer Exergen DermaTemp 1001. Accuracy measurements were determined by assessing differences in surface area measurements of 15 plastic wounds between a digital planimeter of known accuracy and the Swift Wound app. To evaluate the impact of training on the reproducibility of the Swift Wound app measurements, three novice raters with no wound care training, measured the length, width and area of 12 plastic model wounds using the app. High inter-rater reliabilities (ICC = 0.97–1.00) and high accuracies were obtained using the Swift Wound app across raters of different levels of training in wound care. The ruler method also yielded reliable wound measurements (ICC = 0.92–0.97), albeit lower than that of the Swift Wound app. Furthermore, there was no statistical difference between the temperature differences measured using the infrared camera and the clinically tested reference thermometer. Conclusions The Swift Wound app provides highly reliable and accurate wound measurements. 
The FLIR™ infrared camera integrated into the Swift Wound app provides skin temperature readings equivalent to the clinically tested reference thermometer. Thus, the Swift Wound app has the advantage of being a non-contact, easy-to-use wound measurement tool that allows clinicians to image, measure, and track wound size and temperature from one visit to the next. In addition, this tool may also be used by patients and their caregivers for home monitoring. PMID:28817649
A model-based scatter artifacts correction for cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Wei; Zhu, Jun; Wang, Luyao
2016-04-15
Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifact, streaks, as well as reduced contrast and Hounsfield Units (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifacts correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for a clinical image guided radiation therapy were performed. Scatter correction in both projection domain and image domain was conducted and the influences of segmentation method, mismatched attenuation coefficients, and spectrum model as well as parameter selection were also investigated. Results: Results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either projection domain or image domain. For the MC thorax phantom study, four-components segmentation yields the best results, while the results of three-components segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction and the results get improved as K and β increase. It was found that variations in attenuation coefficient accuracies only slightly impact the performance of the proposed processing.
For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for projection domain and image domain, respectively. The contrast of the in vivo human images is greatly improved after correction. Conclusions: The software-based technique has a number of advantages, such as high computational efficiency and accuracy, and the capability of performing scatter correction without modifying the clinical workflow (i.e., no extra scan/measurement data are needed) or modifying the imaging hardware. When implemented practically, this should improve the accuracy of CBCT image quantitation and significantly impact CBCT-based interventional procedures and adaptive radiation therapy.
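Projection-domain correction with a precomputed scatter estimate can be sketched as a subtract-and-clip step. The Poisson-aware denoising of the scatter profile described above is omitted here, and any detector values passed in are hypothetical.

```python
def scatter_correct_projection(measured, scatter_estimate, floor=1e-6):
    # Subtract the estimated scatter profile from each measured detector
    # reading and clip to a small positive floor so that a subsequent log
    # (line-integral) step in CT reconstruction stays defined.
    return [max(m - s, floor) for m, s in zip(measured, scatter_estimate)]
```

Because scatter adds a slowly varying positive offset to the detected signal, removing it before the log transform restores the correct line integrals and hence the correct HU values after reconstruction.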
Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test’s performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard. PMID:29387424
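The abstract's central point, that accuracy shifts with the sample distribution even when the test itself is unchanged, follows directly from the identity accuracy = sensitivity × prevalence + specificity × (1 − prevalence); a minimal sketch:

```python
def overall_accuracy(sensitivity, specificity, prevalence):
    # Overall diagnostic accuracy implied by a given disease prevalence.
    # The same test (fixed sensitivity and specificity) yields a different
    # "accuracy" whenever the underlying sample distribution shifts.
    return sensitivity * prevalence + specificity * (1.0 - prevalence)
```

For a test with 90% sensitivity and 60% specificity, accuracy is 75% in a half-diseased sample but only 63% when prevalence drops to 10%, with no change to the test at all.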
ERIC Educational Resources Information Center
Scullin, Matthew H.; Bonner, Karri
2006-01-01
The current study examined the relations among 3- to 5-year-olds' theory of mind, inhibitory control, and three measures of suggestibility: yielding to suggestive questions (yield), shifting answers in response to negative feedback (shift), and accuracy in response to misleading questions during a pressured interview about a live event. Theory of…
Adjusting site index and age to account for genetic effects in yield equations for loblolly pine
Steven A. Knowe; G. Sam Foster
2010-01-01
Nine combinations of site index curves and age adjustments methods were evaluated for incorporating genetic effects for open-pollinated loblolly pine (Pinus taeda L.) families. An explicit yield system consisting of dominant height, basal area, and merchantable green weight functions was used to compare the accuracy of predictions associated with...
Prieto, D; Das, T K
2016-03-01
Uncertainty about pandemic influenza viruses continues to pose major preparedness challenges for public health policymakers. Decisions to mitigate influenza outbreaks often involve a tradeoff between the social costs of interventions (e.g., school closure) and the cost of uncontrolled spread of the virus. To achieve a balance, policymakers must assess the impact of mitigation strategies once an outbreak begins and the virus characteristics are known. Agent-based (AB) simulation is a useful tool for building highly granular disease spread models incorporating the epidemiological features of the virus as well as the demographic and social behavioral attributes of tens of millions of affected people. Such disease spread models provide an excellent basis on which various mitigation strategies can be tested before they are adopted and implemented by policymakers. However, to serve as a testbed for mitigation strategies, AB simulation models must be operational. A critical requirement for operational AB models is that they be amenable to quick and simple calibration. The calibration process works as follows: the AB model accepts information available from the field and uses it to update its parameters such that some of its outputs in turn replicate the field data. In this paper, we present our epidemiological-model-based calibration methodology, which has low computational complexity and is easy to interpret. Our model accepts a field estimate of the basic reproduction number and then uses it to update (calibrate) the infection probabilities so that their effect, combined with the effects of the given virus epidemiology, demographics, and social behavior, results in an infection pattern yielding a similar value of the basic reproduction number. We evaluate the accuracy of the calibration methodology by applying it to an AB simulation model mimicking a regional outbreak in the US.
The calibrated model is shown to yield infection patterns closely replicating the input estimates of the basic reproduction number. The calibration method is also tested by replicating the initial infection incidence trend of an H1N1 outbreak like that of 2009.
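The calibration loop described above, accepting a field estimate of the basic reproduction number and adjusting the infection probability until the model reproduces it, can be sketched with bisection against a toy stand-in for the agent-based model. The contact rate and infectious duration below are assumed for illustration; this is a generic scheme, not the paper's exact procedure.

```python
def calibrate_infection_probability(target_r0, r0_model, lo=0.0, hi=1.0, tol=1e-6):
    # Find the per-contact infection probability at which the model's
    # reproduction number matches the field estimate.  r0_model is any
    # monotone function mapping probability -> simulated R0 (a stand-in
    # for running the full agent-based simulation).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if r0_model(mid) < target_r0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def toy_r0(p):
    # Toy stand-in for the AB model: R0 = p * (mean contacts per day) *
    # (infectious days); the 12 contacts/day and 5 days are assumptions.
    return p * 12.0 * 5.0
```

In practice each evaluation of r0_model would be a simulation run, so a low-complexity calibration scheme such as the authors' matters far more than it does for this closed-form toy.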
Measurement of diffusion coefficients from solution rates of bubbles
NASA Technical Reports Server (NTRS)
Krieger, I. M.
1979-01-01
The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.
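Under a quasi-static diffusion model of the kind implied above (no convection, surface tension neglected), the squared radius of a dissolving stationary bubble shrinks linearly in time, d(R^2)/dt = -2 D (c_s - c_inf) / rho_g, which gives one textbook route from measured solution rates to a diffusion coefficient. This sketch is illustrative, not the paper's exact analysis.

```python
def diffusion_coefficient_from_r2_slope(times, radii, c_s, c_inf, rho_g):
    # Least-squares slope of R^2 versus t, then D = -slope * rho_g /
    # (2 * (c_s - c_inf)), where c_s is the saturation concentration at the
    # bubble surface, c_inf the bulk dissolved-gas concentration, and rho_g
    # the gas density inside the bubble (consistent units assumed).
    r2 = [r * r for r in radii]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(r2) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, r2))
             / sum((t - tbar) ** 2 for t in times))
    return -slope * rho_g / (2.0 * (c_s - c_inf))
```

Free convection violates the purely diffusive assumption behind this formula, which is why the abstract's measured coefficients exceed literature values and why a gravity-free environment would improve the method.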
Nelson, Sarah C.; Stilp, Adrienne M.; Papanicolaou, George J.; Taylor, Kent D.; Rotter, Jerome I.; Thornton, Timothy A.; Laurie, Cathy C.
2016-01-01
Imputation is commonly used in genome-wide association studies to expand the set of genetic variants available for analysis. Larger and more diverse reference panels, such as the final Phase 3 of the 1000 Genomes Project, hold promise for improving imputation accuracy in genetically diverse populations such as Hispanics/Latinos in the USA. Here, we sought to empirically evaluate imputation accuracy when imputing to a 1000 Genomes Phase 3 versus a Phase 1 reference, using participants from the Hispanic Community Health Study/Study of Latinos. Our assessments included calculating the correlation between imputed and observed allelic dosage in a subset of samples genotyped on a supplemental array. We observed that the Phase 3 reference yielded higher accuracy at rare variants, but that the two reference panels were comparable at common variants. At a sample level, the Phase 3 reference improved imputation accuracy in Hispanic/Latino samples from the Caribbean more than for Mainland samples, which we attribute primarily to the additional reference panel samples available in Phase 3. We conclude that a 1000 Genomes Project Phase 3 reference panel can yield improved imputation accuracy compared with Phase 1, particularly for rare variants and for samples of certain genetic ancestry compositions. Our findings can inform imputation design for other genome-wide association studies of participants with diverse ancestries, especially as larger and more diverse reference panels continue to become available. PMID:27346520
Identification of facilitators and barriers to residents' use of a clinical reasoning tool.
DiNardo, Deborah; Tilstra, Sarah; McNeil, Melissa; Follansbee, William; Zimmer, Shanta; Farris, Coreen; Barnato, Amber E
2018-03-28
While there is some experimental evidence to support the use of cognitive forcing strategies to reduce diagnostic error in residents, the potential usability of such strategies in the clinical setting has not been explored. We sought to test the effect of a clinical reasoning tool on diagnostic accuracy and to obtain feedback on its usability and acceptability. We conducted a randomized behavioral experiment testing the effect of this tool on diagnostic accuracy on written cases among post-graduate year 3 (PGY-3) residents at a single internal medicine residency program in 2014. Residents completed written clinical cases in a proctored setting with and without prompts to use the tool. The tool encouraged reflection on concordant and discordant aspects of each case. We used random effects regression to assess the effect of the tool on diagnostic accuracy of the independent case sets, controlling for case complexity. We then conducted audiotaped structured focus group debriefing sessions and reviewed the tapes for facilitators and barriers to use of the tool. Of 51 eligible PGY-3 residents, 34 (67%) participated in the study. The average diagnostic accuracy increased from 52% to 60% with the tool, a difference that just met the test for statistical significance in adjusted analyses (p=0.05). Residents reported that the tool was generally acceptable and understandable but did not recognize its utility for use with simple cases, suggesting the presence of overconfidence bias. A clinical reasoning tool improved residents' diagnostic accuracy on written cases. Overconfidence bias is a potential barrier to its use in the clinical setting.
Pinsornsak, Piya; Harnroongroj, Thos
2016-11-01
The specialized instrument system used in minimally invasive surgery (MIS) has been developed to reduce soft tissue trauma in total knee arthroplasty (TKA). Compared with front-cutting MIS instruments, side-cutting quadriceps sparing MIS instruments have the advantage of creating a smaller incision and causing less trauma to the quadriceps tendon. However, surgeons are concerned that side-cutting instruments may be less accurate and lead to prosthesis malalignment. To compare the accuracy of side-cutting quadriceps sparing instruments versus front-cutting instruments in MIS-TKA. In this prospective randomized controlled study, we compared the accuracy of side-cutting quadriceps sparing instruments versus front-cutting instruments in MIS-TKA. Sixty knees were included in the study, with 30 knees in each group. All operations were performed by a single surgeon. Coronal alignment (tibiofemoral angle, lateral distal femoral angle, and medial proximal tibial angle) and sagittal alignment (femoral component flexion and tibial posterior slope) were measured and compared. Tibiofemoral angle, lateral distal femoral angle, and medial proximal tibial angle, all of which are considered in the assessment of acceptable coronal radiographic alignment, did not differ between groups (p = 0.353, 0.500, and 0.177, respectively). However, side-cutting quadriceps sparing instruments produced less acceptable sagittal radiographic alignment in both femoral component flexion (63% vs. 93%, p = 0.005) and tibial posterior slope (73% vs. 93%, p = 0.04). Side-cutting quadriceps sparing MIS-TKA instruments had accuracy similar to front-cutting MIS-TKA instruments for coronal alignment but were less accurate for sagittal alignment.
Multicentre knowledge sharing and planning/dose audit on flattening filter free beams for SBRT lung
NASA Astrophysics Data System (ADS)
Hansen, C. R.; Sykes, J. R.; Barber, J.; West, K.; Bromley, R.; Szymura, K.; Fisher, S.; Sim, J.; Bailey, M.; Chrystal, D.; Deshpande, S.; Franji, I.; Nielsen, T. B.; Brink, C.; Thwaites, D. I.
2015-01-01
When implementing new technology into clinical practice, a substantial amount of knowledge must first be gained. The aim of this study was twofold: (I) to audit the treatment planning and dose delivery of Flattening Filter Free (FFF) beam technology for Stereotactic Body Radiation Therapy (SBRT) of lung tumours across a range of treatment planning systems, compared to conventional Flattening Filter (FF) beams, and (II) to investigate how sharing knowledge between centres of different experience levels can improve plan quality. All vendor/treatment planning system (TPS) combinations investigated were able to produce acceptable treatment plans, and the dose accuracy was clinically acceptable for all plans. By sharing knowledge between the different centres, minor protocol violations (MPV) were significantly reduced, from an average of 1.9 MPV per plan to 0.6 after such sharing of treatment planning knowledge. In particular, the average MPV per plan improved for the centres with less SBRT and/or volumetric-modulated arc therapy (VMAT) experience. All vendor/TPS combinations were also able to successfully deliver the FF and FFF SBRT VMAT plans. The plan quality and dose accuracy were found to be clinically acceptable.
Identification and delineation of flood hazard areas using high-accuracy DEM data
NASA Astrophysics Data System (ADS)
Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.
2018-05-01
Flood incidents that frequently occur in Karawang regency need to be mitigated. There are high expectations for technologies that can predict, anticipate, and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High-accuracy DEM data used in modeling will result in better flood models. Processing high-accuracy DEM data yields information about surface morphology that can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by detecting wetland areas using DEM data and Landsat-8 images. High-resolution TerraSAR-X data were used to detect wetlands in the landscape, while land cover was identified from Landsat image data. The Topographic Wetness Index (TWI) method was used to detect and identify wetland areas from the DEM data, while land cover was analyzed using the Tasseled Cap Transformation (TCT) method. TWI modeling yields information about land with flood potential. Overlaying the TWI map with the land cover map shows that, in Karawang regency, the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area delineation in this study was 87%.
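The TWI computation at the core of the wetness analysis can be sketched as below: TWI = ln(a / tan β), where a is the upslope contributing area per unit contour width and β is the local slope. The grid values are hypothetical, not the Karawang DEM:

```python
import numpy as np

# Minimal sketch of the Topographic Wetness Index (TWI) used to flag
# wet, flood-prone cells: TWI = ln(a / tan(beta)). The small arrays
# below are illustrative placeholders for per-cell DEM derivatives.
def twi(upslope_area, slope_rad, eps=1e-6):
    """TWI per cell; eps guards against division by zero on flat cells."""
    return np.log(upslope_area / (np.tan(slope_rad) + eps))

area = np.array([[50.0, 400.0], [1200.0, 30.0]])  # m^2 per unit width
slope = np.radians([[8.0, 3.0], [0.5, 12.0]])     # local slope angles
index = twi(area, slope)
# Larger TWI -> flatter, more convergent terrain -> higher flood hazard
print(index)
```

In practice the upslope area comes from a flow-accumulation routine on the DEM, and a TWI threshold (or overlay with land cover, as above) delineates the hazard zones.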
Accuracy comparison among different machine learning techniques for detecting malicious codes
NASA Astrophysics Data System (ADS)
Narang, Komal
2016-03-01
In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e., zero-day attacks, by analyzing operation codes on the Android operating system. The accuracy of Naïve Bayes, Support Vector Machine (SVM), and Neural Network classifiers for detecting malicious code has been compared for the proposed model. In the experiment, 400 benign files, 100 system files, and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving a sensitivity of 95% and a specificity of 82.8%.
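The reported figures can be reproduced from a confusion matrix. The counts below are a hypothetical split consistent with the stated sensitivity and specificity, not the paper's raw results:

```python
# Sketch (hypothetical counts) of the metrics reported for the malware
# classifiers: accuracy, sensitivity (malicious files detected), and
# specificity (benign files correctly passed), from a confusion matrix.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative split of 500 malicious and 500 benign samples:
acc, sens, spec = metrics(tp=475, tn=414, fp=86, fn=25)
print(f"accuracy={acc:.1%} sensitivity={sens:.1%} specificity={spec:.1%}")
```

With these counts the three values come out to the 88.9%, 95%, and 82.8% quoted above, which shows how a single confusion matrix determines all three metrics jointly.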
Wheat productivity estimates using LANDSAT data
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Colwell, J. E. (Principal Investigator); Rice, D. P.; Bresnahan, P. A.
1977-01-01
The author has identified the following significant results. Large area LANDSAT yield estimates were generated. These results were compared with estimates computed using a meteorological yield model (CCEA). Both of these estimates were compared with Kansas Crop and Livestock Reporting Service (KCLRS) estimates of yield, in an attempt to assess the relative and absolute accuracy of the LANDSAT and CCEA estimates. Results were inconclusive. A large area direct wheat prediction procedure was implemented. Initial results have produced a wheat production estimate comparable with the KCLRS estimate.
Improving Accuracy in Arrhenius Models of Cell Death: Adding a Temperature-Dependent Time Delay.
Pearce, John A
2015-12-01
The Arrhenius formulation for single-step irreversible unimolecular reactions has been used for many decades to describe the thermal damage and cell death processes. Arrhenius predictions are acceptably accurate for structural proteins, for some cell death assays, and for cell death at higher temperatures in most cell lines, above about 55 °C. However, in many cases--and particularly at hyperthermic temperatures, between about 43 and 55 °C--the particular intrinsic cell death or damage process under study exhibits a significant "shoulder" region that constant-rate Arrhenius models are unable to represent with acceptable accuracy. The primary limitation is that Arrhenius calculations always overestimate the cell death fraction, which leads to severely overoptimistic predictions of heating effectiveness in tumor treatment. Several more sophisticated mathematical model approaches have been suggested and show much-improved performance. But simpler models that have adequate accuracy would provide useful and practical alternatives to intricate biochemical analyses. Typical transient intrinsic cell death processes at hyperthermic temperatures consist of a slowly developing shoulder region followed by an essentially constant-rate region. The shoulder regions have been demonstrated to arise chiefly from complex functional protein signaling cascades that generate delays in the onset of the constant-rate region, but may involve heat shock protein activity as well. This paper shows that acceptably accurate and much-improved predictions in the simpler Arrhenius models can be obtained by adding a temperature-dependent time delay. Kinetic coefficients and the appropriate time delay are obtained from the constant-rate regions of the measured survival curves. The resulting predictions are seen to provide acceptably accurate results while not overestimating cell death. The method can be relatively easily incorporated into numerical models. 
Additionally, evidence is presented to support the application of compensation law behavior to the cell death processes; that is, the strong correlation between the kinetic coefficients ln(A) and Ea is confirmed.
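The delayed Arrhenius model described above can be sketched as follows. The kinetic parameters and the delay value are illustrative placeholders, not the paper's fitted coefficients:

```python
import math

# Sketch of an Arrhenius cell-death model with a temperature-dependent
# time delay. Standard Arrhenius rate: k(T) = A * exp(-Ea / (R*T));
# surviving fraction after time t is S = exp(-k*t). Subtracting a delay
# t_d(T) reproduces the "shoulder" that the constant-rate model misses.
# A, Ea, and t_delay below are illustrative, not from the paper.
R = 8.314    # J/(mol K), gas constant
A = 1.0e80   # 1/s, frequency factor (illustrative)
Ea = 5.0e5   # J/mol, activation energy (illustrative)

def k(T):
    """Arrhenius rate coefficient at absolute temperature T (K)."""
    return A * math.exp(-Ea / (R * T))

def survival(T, t, t_delay):
    """Surviving fraction; no death occurs until the delay has elapsed."""
    return 1.0 if t <= t_delay else math.exp(-k(T) * (t - t_delay))

T = 316.15  # 43 degC, the low end of the hyperthermic range
print(survival(T, t=600.0, t_delay=300.0),  # delayed model
      survival(T, t=600.0, t_delay=0.0))    # classic constant-rate model
```

The classic model (zero delay) always predicts a lower surviving fraction than the delayed model at the same time and temperature, which is exactly the overestimation of cell death that the paper corrects.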
NASA Astrophysics Data System (ADS)
Dondurur, Mehmet
The primary objective of this study was to determine the degree to which modern SAR systems can be used to obtain information about the Earth's vegetative resources. Information obtainable from microwave synthetic aperture radar (SAR) data was compared with that obtainable from LANDSAT-TM and SPOT data. Three hypotheses were tested: (a) Classification of land cover/use from SAR data can be accomplished on a pixel-by-pixel basis with the same overall accuracy as from LANDSAT-TM and SPOT data. (b) Classification accuracy for individual land cover/use classes will differ between sensors. (c) Combining information derived from optical and SAR data into an integrated monitoring system will improve overall and individual land cover/use class accuracies. The study was conducted with three data sets for the Sleeping Bear Dunes test site in the northwestern part of Michigan's lower peninsula, including an October 1982 LANDSAT-TM scene, a June 1989 SPOT scene and C-, L- and P-Band radar data from the Jet Propulsion Laboratory AIRSAR. Reference data were derived from the Michigan Resource Information System (MIRIS) and available color infrared aerial photos. Classification and rectification of data sets were done using ERDAS Image Processing Programs. Classification algorithms included Maximum Likelihood, Mahalanobis Distance, Minimum Spectral Distance, ISODATA, Parallelepiped, and Sequential Cluster Analysis. Classified images were rectified as necessary so that all were at the same scale and oriented north-up. Results were analyzed with contingency tables and percent correctly classified (PCC) and Cohen's Kappa (CK) as accuracy indices using CSLANT and ImagePro programs developed for this study. Accuracy analyses were based upon a 1.4 by 6.5 km area with its long axis east-west. 
Reference data for this subscene total 55,770 15 by 15 m pixels with sixteen cover types, including seven level III forest classes, three level III urban classes, two level II range classes, two water classes, one wetland class and one agriculture class. An initial analysis was made without correcting the 1978 MIRIS reference data to the different dates of the TM, SPOT and SAR data sets. In this analysis, highest overall classification accuracy (PCC) was 87% with the TM data set, with both SPOT and C-Band SAR at 85%, a difference statistically significant at the 0.05 level. When the reference data were corrected for land cover change between 1978 and 1991, classification accuracy with the C-Band SAR data increased to 87%. Classification accuracy differed from sensor to sensor for individual land cover classes. Combining sensors into hypothetical multi-sensor systems resulted in higher accuracies than for any single sensor. Combining LANDSAT-TM and C-Band SAR yielded an overall classification accuracy (PCC) of 92%. The results of this study indicate that C-Band SAR data provide an acceptable substitute for LANDSAT-TM or SPOT data when land cover information is desired for areas where cloud cover obscures the terrain. Even better results can be obtained by integrating TM and C-Band SAR data into a multi-sensor system.
Cardiac Auscultation Using Smartphones: Pilot Study.
Kang, Si-Hyuck; Joe, Byunggill; Yoon, Yeonyee; Cho, Goo-Yeong; Shin, Insik; Suh, Jung-Won
2018-02-28
Cardiac auscultation is a cost-effective, noninvasive screening tool that can provide information about cardiovascular hemodynamics and disease. However, with advances in imaging and laboratory tests, the importance of cardiac auscultation is less appreciated in clinical practice. The widespread use of smartphones provides opportunities for nonmedical expert users to perform self-examination before hospital visits. The objective of our study was to assess the feasibility of cardiac auscultation using smartphones with no add-on devices for use at the prehospital stage. We performed a pilot study of patients with normal and pathologic heart sounds. Heart sounds were recorded on the skin of the chest wall using 3 smartphones: the Samsung Galaxy S5 and Galaxy S6, and the LG G3. Recorded heart sounds were processed and classified by a diagnostic algorithm using convolutional neural networks. We assessed diagnostic accuracy, as well as sensitivity, specificity, and predictive values. A total of 46 participants underwent heart sound recording. After audio file processing, 30 of 46 (65%) heart sounds were proven interpretable. Atrial fibrillation and diastolic murmur were significantly associated with failure to acquire interpretable heart sounds. The diagnostic algorithm classified the heart sounds into the correct category with high accuracy: Galaxy S5, 90% (95% CI 73%-98%); Galaxy S6, 87% (95% CI 69%-96%); and LG G3, 90% (95% CI 73%-98%). Sensitivity, specificity, positive predictive value, and negative predictive value were also acceptable for the 3 devices. Cardiac auscultation using smartphones was feasible. Discrimination using convolutional neural networks yielded high diagnostic accuracy. However, using the built-in microphones alone, the acquisition of reproducible and interpretable heart sounds was still a major challenge. 
ClinicalTrials.gov NCT03273803; https://clinicaltrials.gov/ct2/show/NCT03273803 (Archived by WebCite at http://www.webcitation.org/6x6g1fHIu). ©Si-Hyuck Kang, Byunggill Joe, Yeonyee Yoon, Goo-Yeong Cho, Insik Shin, Jung-Won Suh. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 28.02.2018.
Washburn, Micki; Bordnick, Patrick; Rizzo, Albert Skip
2016-10-01
This study presents preliminary feasibility and acceptability data on the use of virtual patient (VP) simulations to develop brief assessment skills within an interdisciplinary care setting. Results support the acceptability of technology-enhanced simulations and offer preliminary evidence for an association between engagement in VP practice simulations and improvements in diagnostic accuracy and clinical interviewing skills. Recommendations and next steps for research on technology-enhanced simulations within social work are discussed.
Exposure Range For Cine Radiographic Procedures
NASA Astrophysics Data System (ADS)
Moore, Robert J.
1980-08-01
Based on the author's experience, state-of-the-art cine radiographic equipment of the type used in modern cardiovascular laboratories for selective coronary arteriography must perform at well-defined levels to produce cine images with acceptable quantum mottle, contrast, and detail, as judged by consensus of a cross section of American cardiologists/radiologists experienced in viewing such images. Accordingly, a "standard" undertable state-of-the-art cine radiographic imaging system is postulated to answer the question of what patient exposure range is necessary to obtain cine images of acceptable quality. It is shown that such a standard system would be expected to produce a tabletop exposure of about 25 milliRoentgens per frame for the "standard" adult patient, plus-or-minus 33% for acceptable variation of system parameters. This means that for cine radiography at 60 frames per second (30 frames per second), the exposure rate range based on this model is 60 to 120 Roentgens per minute (30 to 60 Roentgens per minute). The author contends that studies at exposure levels below these will yield cine images of questionable diagnostic value; studies at exposure levels above these may yield cine images of excellent visual quality but having little additional diagnostic value, at the expense of added patient/personnel radiation exposure and added x-ray tube heat loading.
Real-time self-calibration of a tracked augmented reality display
NASA Astrophysics Data System (ADS)
Baum, Zachary; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor
2016-03-01
PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system. METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit. RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections. CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
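The accuracy metric used above, target registration error (TRE), is the distance between target positions mapped through the calibrated camera/tracking chain and their known reference positions. A minimal sketch, with hypothetical 3-D point coordinates in millimetres:

```python
import numpy as np

# Sketch of target registration error (TRE): per-target Euclidean
# distance between mapped and reference positions, and its mean.
# The coordinates below are hypothetical, not the study's measurements.
def tre(mapped_pts, reference_pts):
    """Return (per-target errors in mm, mean TRE in mm)."""
    errors = np.linalg.norm(mapped_pts - reference_pts, axis=1)
    return errors, errors.mean()

reference = np.array([[0.0, 0.0, 0.0],
                      [10.0, 0.0, 0.0],
                      [0.0, 10.0, 5.0]])
# Mapped positions = reference plus small residual calibration errors:
mapped = reference + np.array([[0.5, 0.2, -0.1],
                               [-0.3, 0.4, 0.2],
                               [0.1, -0.2, 0.3]])
errors, mean_tre = tre(mapped, reference)
print(errors, mean_tre)
```

Comparing the mean TRE (and its spread) between the single-image and multi-image calibrations is what supports the equivalence claim in the abstract.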
Sankey, Joel B.; McVay, Jason C.; Kreitler, Jason R.; Hawbaker, Todd J.; Vaillant, Nicole; Lowe, Scott
2015-01-01
Increased sedimentation following wildland fire can negatively impact water supply and water quality. Understanding how changing fire frequency, extent, and location will affect watersheds and the ecosystem services they supply to communities is of great societal importance in the western USA and throughout the world. In this work we assess the utility of the InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) Sediment Retention Model to accurately characterize erosion and sedimentation of burned watersheds. InVEST was developed by the Natural Capital Project at Stanford University (Tallis et al., 2014) and is a suite of GIS-based implementations of common process models, engineered for high-end computing to allow the faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., USLE – Universal Soil Loss Equation) and determines which areas of the landscape contribute the greatest sediment loads to a hydrological network and, conversely, evaluates the ecosystem service of sediment retention on a watershed basis. In this study, we evaluate the accuracy and uncertainties of InVEST predictions of increased sedimentation after fire, using measured post-fire sediment yields available for many watersheds throughout the western USA from an existing, published large database. We show that the model can be parameterized in a relatively simple fashion to predict post-fire sediment yield with reasonable accuracy. Our ultimate goal is to use the model to accurately predict variability in post-fire sediment yield at a watershed scale as a function of future wildfire conditions.
Charles J. Gatchell; Charles J. Gatchell
1991-01-01
Gang-ripping technology that uses a movable (floating) outer blade to eliminate unusable edgings is described, including new terminology for identifying preferred and minimally acceptable strip widths. Because of the large amount of salvage required to achieve total yields, floating blade gang ripping is not recommended for boards with crook. With crook removed by...
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that, it is required to simultaneously analyze a pair of two outcome measures such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
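The bivariate model mentioned above can be written compactly; this is the standard formulation, sketched here for orientation:

```latex
% Bivariate random-effects model for meta-analysis of diagnostic
% accuracy: for each study i, the logit-transformed sensitivity and
% specificity are drawn jointly from a bivariate normal distribution,
% which captures their negative correlation across studies.
\begin{pmatrix} \operatorname{logit}(Se_i) \\ \operatorname{logit}(Sp_i) \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix},
\begin{pmatrix} \sigma^2_{Se} & \sigma_{SeSp} \\
                \sigma_{SeSp} & \sigma^2_{Sp} \end{pmatrix}\right)
```

The off-diagonal term sigma_SeSp is what lets the model represent the threshold effect (the sensitivity/specificity trade-off) that a pair of independent univariate meta-analyses would miss.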
Strudwick, Gillian
2015-05-01
The benefits of healthcare technologies can only be attained if nurses accept and intend to fully use them. One of the most common models utilized to understand user acceptance of technology is the Technology Acceptance Model. This model and modified versions of it have only recently been applied in the healthcare literature among nurse participants. An integrative literature review was conducted on this topic. Ovid/MEDLINE, PubMed, Google Scholar, and CINAHL were searched yielding a total of 982 references. Upon eliminating duplicates and applying the inclusion and exclusion criteria, the review included a total of four dissertations, three symposium proceedings, and 13 peer-reviewed journal articles. These documents were appraised and reviewed. The results show that a modified Technology Acceptance Model with added variables could provide a better explanation of nurses' acceptance of healthcare technology. These added variables to modified versions of the Technology Acceptance Model are discussed, and the studies' methodologies are critiqued. Limitations of the studies included in the integrative review are also examined.
Code of Federal Regulations, 2011 CFR
2011-10-01
... address, photograph or photo-image, and personal signature. (2) Objectives of the training and assessment... Guard-accepted standards for accuracy, integrity, and availability. (j) Substitution of a training...
Calibration methodology for proportional counters applied to yield measurements of a neutron burst.
Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo
2014-01-01
This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is to be applied when single neutron events cannot be resolved in time by standard nuclear electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on calibration of the counter in pulse mode and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. The model is developed and presented in full detail. The implementation of the methodology is discussed for the measurement of fast neutron yields generated in plasma focus experiments using a moderated proportional counter. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained with this methodology compared to previous calibration methods.
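The charge-to-counts idea above can be sketched minimally: divide the accumulated charge by the calibrated mean charge per event, and propagate both Poisson counting statistics and the per-event charge spread. The calibration values and the simple error model below are assumptions for illustration, not the paper's data or its full statistical model.

```python
def estimate_events(total_charge, mean_charge_per_event, charge_std):
    """Estimate the number of detected neutron events from accumulated charge.
    Sketch: N ~ Q / q_mean, with an approximate uncertainty combining
    Poisson counting (1/N) and the per-event charge dispersion."""
    n_hat = total_charge / mean_charge_per_event
    # relative variance: (1 + (sigma_q / q_mean)^2) / N
    rel_var = (1 + (charge_std / mean_charge_per_event) ** 2) / n_hat
    return n_hat, n_hat * rel_var ** 0.5

# Hypothetical calibration: 1 pC mean charge per event, 40% spread
n, err = estimate_events(total_charge=5.0e-9,
                         mean_charge_per_event=1.0e-12,
                         charge_std=0.4e-12)
print(f"{n:.0f} ± {err:.0f} events")
```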
Breeding of commercially acceptable allelopathic rice cultivars in China.
Kong, Chui-Hua; Chen, Xiong-Hui; Hu, Fei; Zhang, Song-Zhu
2011-09-01
One promising area of paddy weed control is the potential for exploiting the weed-suppressing ability of rice. This study was conducted to develop commercially acceptable allelopathic rice cultivars using crosses between the allelopathic rice variety PI312777 and commercial Chinese cultivars (N2S, N9S, Huahui354, Peiai64S and Tehuazhan35), and to assess their weed suppression and grain yield in paddy fields in relation to their parents. There was positive dominance in the crosses Huahui354 × PI312777 and N2S × PI312777 but recessive or negative dominance in N9S × PI312777, Peiai64S × PI312777 and Tehuazhan35 × PI312777. Huahui354 × PI312777 and N2S × PI312777 showed stronger weed suppression than their parents and the other crosses. Finally, an F8 line with an appearance close to Huahui354 and a magnitude of weed suppression close to PI312777 was obtained from Huahui354 × PI312777. This line, named Huagan-3, was released as the first commercially acceptable allelopathic rice cultivar in China. The grain yield and quality of Huagan-3 met the commercial standard of the local rice industry. Huagan-3 greatly suppressed paddy weeds, although suppression was influenced by year-to-year variation and plant density. Huagan-3 showed no appreciable yield reduction even under slight infestation of barnyard grass in paddy fields. The successful breeding of Huagan-3, combining high yield with strong weed suppression, means it may be incorporated into present rice production systems to minimise the amount of herbicide used. Copyright © 2011 Society of Chemical Industry.
Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; Kofler, James M.; McCollough, Cynthia H.
2015-01-01
Objective To compare computed tomography dose and noise arising from use of an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies with clinically accepted technique charts and AEC systems designed to vary image noise. Materials and Methods A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system and the results were compared with clinical technique charts. Results For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noises of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant noise–based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Conclusions Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects. PMID:25938214
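The steep dose growth reported above follows from basic CT physics: under a simple water-equivalent attenuation model (an assumption here; the paper's model is not reproduced in the abstract), detected signal falls as exp(-μw), so noise scales roughly as exp(μw/2)/sqrt(dose), and holding noise constant requires dose ∝ exp(μw). A minimal sketch with an assumed effective attenuation coefficient:

```python
import math

MU_WATER = 0.02  # mm^-1; rough effective value at CT energies (assumption)

def relative_dose_for_constant_noise(width_mm, ref_width_mm):
    """Relative dose needed to hold image noise constant as patient width
    changes, under the simple model noise ∝ exp(mu*w/2)/sqrt(dose)."""
    return math.exp(MU_WATER * (width_mm - ref_width_mm))

# Going from a 200 mm to a 400 mm patient raises required dose ~55-fold
print(round(relative_dose_for_constant_noise(400, 200), 1))
```

The exponential makes the abstract's finding intuitive: constant-noise AEC over-doses large patients and, run in reverse, under-doses (over-noises) small ones.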
Bernard R. Parresol; Steven C. Stedman
2004-01-01
The accuracy of forest growth and yield forecasts affects the quality of forest management decisions (Rauscher et al. 2000). Users of growth and yield models want assurance that model outputs are reasonable and mimic local/regional forest structure and composition and accurately reflect the influences of stand dynamics such as competition and disturbance. As such,...
He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L
2018-04-01
SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.
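Imputation accuracy of the kind reported above (96.2 to 98.2%) is, at its simplest, a concordance rate between imputed and observed genotype calls; a minimal sketch:

```python
def imputation_accuracy(imputed, observed):
    """Concordance rate between imputed and observed genotype calls
    (coded as 0/1/2 copies of the minor allele)."""
    matches = sum(i == o for i, o in zip(imputed, observed))
    return matches / len(observed)

# Hypothetical calls at five loci: one imputation error -> 80% concordance
print(imputation_accuracy([0, 1, 2, 1, 0], [0, 1, 2, 2, 0]))
```

As the abstract notes, this aggregate rate can hide what matters: errors concentrated in trait-associated SNPs hurt genomic prediction more than the same number of errors elsewhere.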
ERIC Educational Resources Information Center
Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria
2014-01-01
The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…
Paramedic Application of a Triage Sieve: A Paper-Based Exercise.
Cuttance, Glen; Dansie, Kathryn; Rayner, Tim
2017-02-01
Introduction Triage is the systematic prioritization of casualties when there is an imbalance between the needs of these casualties and resource availability. The triage sieve is a recognized process for prioritizing casualties for treatment during mass-casualty incidents (MCIs). While the application of a triage sieve generally is well-accepted, the measurement of its accuracy has been somewhat limited. Obtaining reliable measures for triage sieve accuracy rates is viewed as a necessity for future development in this area. The goal of this study was to investigate how theoretical knowledge acquisition and the practical application of an aide-memoir impacted triage sieve accuracy rates. Two hundred and ninety-two paramedics were randomly allocated to one of four sub-groups: a non-intervention control group and three intervention groups, in which participants received an educational review session, an aide-memoir, or both. Participants were asked to triage sieve 20 casualties using a previously trialed questionnaire. The study showed that the non-intervention control group had a correct accuracy rate of 47%, with a similar proportion of casualties under-triaged (37%) and a significantly lower proportion over-triaged (16%). The provision of either an educational review or an aide-memoir significantly increased the correct triage sieve accuracy rate, to 77% and 90%, respectively. Participants who received both the educational review and the aide-memoir had an overall accuracy rate of 89%. Over-triage rates were found not to differ significantly across any of the study groups. This study supports the use of an aide-memoir for maximizing MCI triage accuracy rates. A "just-in-time" educational refresher provided comparable benefits; however, its practical application to the MCI setting has significant operational limitations. 
In addition, this study provides some guidance on triage sieve accuracy rate measures that can be applied to define acceptable performance of a triage sieve during a MCI. Cuttance G , Dansie K , Rayner T . Paramedic application of a triage sieve: a paper-based exercise. Prehosp Disaster Med. 2017;32(1):3-13.
Sung, Yun J; Gu, C Charles; Tiwari, Hemant K; Arnett, Donna K; Broeckel, Ulrich; Rao, Dabeeru C
2012-07-01
Genotype imputation provides imputation of untyped single nucleotide polymorphisms (SNPs) that are present on a reference panel such as those from the HapMap Project. It is popular for increasing statistical power and comparing results across studies using different platforms. Imputation for African American populations is challenging because their linkage disequilibrium blocks are shorter and also because no ideal reference panel is available due to admixture. In this paper, we evaluated three imputation strategies for African Americans. The intersection strategy used a combined panel consisting of SNPs polymorphic in both CEU and YRI. The union strategy used a panel consisting of SNPs polymorphic in either CEU or YRI. The merge strategy merged results from two separate imputations, one using CEU and the other using YRI. Because recent investigators are increasingly using the data from the 1000 Genomes (1KG) Project for genotype imputation, we evaluated both 1KG-based imputations and HapMap-based imputations. We used 23,707 SNPs from chromosomes 21 and 22 on Affymetrix SNP Array 6.0 genotyped for 1,075 HyperGEN African Americans. We found that 1KG-based imputations provided a substantially larger number of variants than HapMap-based imputations, about three times as many common variants and eight times as many rare and low-frequency variants. This higher yield is expected because the 1KG panel includes more SNPs. Accuracy rates using 1KG data were slightly lower than those using HapMap data before filtering, but slightly higher after filtering. The union strategy provided the highest imputation yield with next highest accuracy. The intersection strategy provided the lowest imputation yield but the highest accuracy. The merge strategy provided the lowest imputation accuracy. We observed that SNPs polymorphic only in CEU had much lower accuracy, reducing the accuracy of the union strategy. 
Our findings suggest that 1KG-based imputations can facilitate discovery of significant associations for SNPs across the whole MAF spectrum. Because the 1KG Project is still under way, we expect that later versions will provide better imputation performance. © 2012 Wiley Periodicals, Inc.
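The intersection and union strategies above reduce to set operations on each panel's polymorphic SNP list (the merge strategy, which combines two separate imputation runs, is not shown). The rs-numbers below are hypothetical placeholders:

```python
# Hypothetical SNPs polymorphic in each reference panel
ceu = {"rs1", "rs2", "rs3", "rs5"}
yri = {"rs2", "rs3", "rs4", "rs6"}

intersection_panel = ceu & yri  # lowest yield, highest accuracy
union_panel = ceu | yri         # highest yield, next-highest accuracy
print(sorted(intersection_panel), sorted(union_panel))
```

The trade-off reported in the abstract falls out directly: the union panel imputes more variants but includes SNPs polymorphic in only one ancestral population, which impute less accurately in admixed samples.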
21 CFR 211.165 - Testing and release for distribution.
Code of Federal Regulations, 2010 CFR
2010-04-01
... products meet each appropriate specification and appropriate statistical quality control criteria as a condition for their approval and release. The statistical quality control criteria shall include appropriate acceptance levels and/or appropriate rejection levels. (e) The accuracy, sensitivity, specificity, and...
46 CFR 176.650 - Alternative Hull Examination Program options: Divers or underwater ROV.
Code of Federal Regulations, 2011 CFR
2011-10-01
... accuracy; (3) Take ultrasonic thickness gaugings at a minimum of 5 points on each plate, evenly spaced; (4... must be accepted by the Officer in Charge, Marine Inspection (OCMI) prior to the survey. If you choose...
NASA Technical Reports Server (NTRS)
Rignot, Eric; Williams, Cynthia; Way, Jobea; Viereck, Leslie
1993-01-01
A maximum a posteriori Bayesian classifier for multifrequency polarimetric SAR data is used to perform a supervised classification of forest types in the floodplains of Alaska. The image classes include white spruce, balsam poplar, black spruce, alder, non-forests, and open water. The authors investigate the effect on classification accuracy of changing environmental conditions and of the frequency and polarization of the signal. The highest classification accuracy (86 percent correctly classified forest pixels, and 91 percent overall) is obtained by combining fully polarimetric L- and C-band data acquired on a date when the forest is just recovering from flooding. The forest map compares favorably with a vegetation map assembled from digitized aerial photos, which took five years to complete and addresses the state of the forest in 1978, ignoring subsequent fires, changes in the course of the river, clear-cutting of trees, and tree growth. HV polarization is the most useful polarization at L- and C-band for classification. C-band VV (ERS-1 mode) and L-band HH (J-ERS-1 mode), alone or combined, yield unsatisfactory classification accuracies. Additional data acquired in the winter season on thawed and frozen days yield classification accuracies 20 percent and 30 percent lower, respectively, due to greater confusion between conifers and deciduous trees. Data acquired at the peak of flooding in May 1991 also yield classification accuracies 10 percent lower because dominant trunk-ground interactions mask finer differences in radar backscatter between tree species. Combining several of these dates does not improve classification accuracy. For comparison, panchromatic optical data acquired by SPOT in the summer season of 1991 are used to classify the same area. 
The classification accuracy (78 percent for the forest types and 90 percent if open water is included) is lower than that obtained with AIRSAR although conifers and deciduous trees are better separated due to the presence of leaves on the deciduous trees. Optical data do not separate black spruce and white spruce as well as SAR data, cannot separate alder from balsam poplar, and are of course limited by the frequent cloud cover in the polar regions. Yet, combining SPOT and AIRSAR offers better chances to identify vegetation types independent of ground truth information using a combination of NDVI indexes from SPOT, biomass numbers from AIRSAR, and a segmentation map from either one.
Ciaramella, Michael A; Kim, Taejo; Avery, Jimmy L; Allen, Peter J; Schilling, M Wes
2016-08-01
Stress during fish culture alters physiological homeostasis and affects fillet quality. Maintenance of high-quality seafood is important to ensure the production of a marketable product. This study assessed how sequential stressors affect the sensory and quality characteristics of catfish (Ictalurus punctatus) fillets. Three stress trials were conducted where temperature (25 or 33 °C) and dissolved oxygen (DO, approximately 2.5 or >5 mg/L) were manipulated followed by socking and transport stress. After each stage of harvest (environmental stress, socking, and transport), fillet yield, consumer acceptability, descriptive evaluation, cook loss, tenderness, and pH were evaluated. Fillet yield decreased with increasing severity of environmental stress. Fillets from the severe stress treatment (33 °C, approximately 2.5 mg/L) received the highest acceptability scores (P < 0.05). Control fillets (25 °C, >5 mg/L) were the least acceptable (P < 0.05). Increased intensity of less favorable flavor attributes commonly associated with catfish resulted in the differences in acceptability among treatments. As fish progressed through the harvest event, cook loss decreased, tenderness increased, and pH increased, indicating that stress induced textural changes. The data suggest that although environmental stress results in slight changes in flavor attributes, its effects on acceptability are minor with fillets from all treatments still liked (>6 on a 9 point scale). Socking and transport were identified to positively affect textural characteristics of catfish fillets. Although the effects observed were not likely to negatively impact consumer acceptance, a strict management plan should be followed to maintain consistency in the product and avoid changes in stressors that might alter quality more drastically. © 2016 Institute of Food Technologists®
Using LabVIEW to facilitate calibration and verification for respiratory impedance plethysmography.
Ellis, W S; Jones, R T
1991-12-01
A system for calibrating the Respitrace impedance plethysmograph was developed with the capacity to quantitatively verify the accuracy of calibration. LabVIEW software was used on a Macintosh II computer to create a user-friendly environment, with the added benefit of reducing development time. The system developed enabled a research assistant to calibrate the Respitrace within 15 min while achieving an accuracy within the normally accepted 10% deviation when the Respitrace output is compared to a water spirometer standard. The system and methods described were successfully used in a study of 10 subjects smoking cigarettes containing marijuana or cocaine under four conditions, calibrating all subjects to 10% accuracy within 15 min.
LaDuke, Mike; Monti, Jon; Cronin, Aaron; Gillum, Bart
2017-03-01
Patients commonly present to emergency rooms and primary care clinics with cellulitic skin infections with or without abscess formation. In military operational units, non-physician medical personnel provide most primary and initial emergency medical care. The objective of this study was to determine if, after minimal training, Army physician assistants and medics could use portable ultrasound (US) machines to detect superficial soft tissue abscesses. This was a single-blinded, randomized, prospective observational study conducted over the course of 2 days at a military installation. Active duty military physician assistants and medics with little or no US experience were recruited as participants. They received a short block of training on abscess detection using both clinical examination skills (inspection/palpation) and US examination. The participants were then asked to provide a yes/no answer regarding abscess presence in a chicken tissue model. Results were analyzed to assess the participants' abilities to detect abscesses, compare the diagnostic accuracy of their clinical examinations with their US examinations, and assess how often US results changed treatment plans initially based on clinical examination findings alone. Twenty-two participants performed a total of 220 clinical examinations and 220 US scans on 10 chicken tissue abscess models. Clinical examination for abscess detection yielded a sensitivity of 73.5% (95% confidence interval [CI], 65.3-80.3%) and a specificity of 77.2% (95% CI, 67.4-84.9%), whereas US examination for abscess detection yielded a sensitivity of 99.2% (95% CI, 95.4-99.9%) and a specificity of 95.5% (95% CI, 88.5-98.6%). Clinical examination yielded a diagnostic accuracy of 75.0% (95% CI, 68.9-80.3%), whereas US examination yielded a diagnostic accuracy of 97.7% (95% CI, 94.6-99.2%), a difference in accuracy of 22.7% favoring US (p < 0.01). US changed the diagnosis in 56 of 220 cases (25.4% of all cases, p = 0.02). 
Of these 56 cases, US led to the correct diagnosis 53 of 56 times (94.6%). Non-physician military medical providers can be trained in a very brief period to use US to detect superficial soft tissue abscesses with excellent accuracy. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
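The three headline figures above come straight from a 2x2 table. A minimal sketch follows; the counts are reconstructed approximations consistent with the reported rates, not the paper's raw table:

```python
def diagnostic_stats(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Approximate counts implied by the clinical-examination results above
sens, spec, acc = diagnostic_stats(tp=86, fn=31, tn=79, fp=24)
print(f"sens={sens:.1%} spec={spec:.1%} acc={acc:.1%}")
```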
Constructing better classifier ensemble based on weighted accuracy and diversity measure.
Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S
2014-01-01
A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, genetic algorithm, and forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to others in most cases.
PMID:24672402
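The harmonic-mean scoring described above can be sketched as follows. The weighted-harmonic-mean form and default weights are assumptions for illustration; the paper's exact parameterization is not given in the abstract.

```python
def wad(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity
    (a sketch of the WAD-style score; weights balance the two factors)."""
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)

# A highly accurate but homogeneous ensemble scores below a balanced one
print(round(wad(0.95, 0.30), 3), round(wad(0.70, 0.70), 3))
```

The harmonic mean is the natural choice here: it is dragged down by whichever of accuracy or diversity is weakest, encoding the paper's premise that an ensemble must have both.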
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry, ITC, is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combination of equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n-independent binding sites. More complex models for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding need to be developed on a case by case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
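The building block of the thermodynamic models described above is the combined equilibrium-constant/mass-balance system; for a single 1:1 binding site it reduces to a quadratic in the bound concentration. A minimal sketch (function name and example concentrations are illustrative; multi-site models couple several such equations and require the iterative regression the paper describes):

```python
import math

def bound_concentration(k_d, m_total, l_total):
    """Solve the 1:1 binding mass balance for [ML] (all quantities in M).
    From Kd = [M][L]/[ML] with [M] = Mt - [ML], [L] = Lt - [ML]:
    [ML]^2 - (Mt + Lt + Kd)[ML] + Mt*Lt = 0; the smaller root is physical."""
    b = m_total + l_total + k_d
    return (b - math.sqrt(b * b - 4 * m_total * l_total)) / 2

ml = bound_concentration(k_d=1e-6, m_total=1e-5, l_total=1e-5)
print(f"{ml:.2e} M bound")
```

In an ITC fit, [ML] computed per injection point yields the model heat (via ΔH and cell volume), which nonlinear regression compares against the measured heats.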
Non-invasive diagnosis of liver fibrosis and cirrhosis
Lurie, Yoav; Webb, Muriel; Cytter-Kuint, Ruth; Shteingart, Shimon; Lederkremer, Gerardo Z
2015-01-01
The evaluation and follow up of liver fibrosis and cirrhosis have been traditionally performed by liver biopsy. However, during the last 20 years, it has become evident that this “gold-standard” is imperfect; even according to its proponents, it is only “the best” among available methods. Attempts at uncovering non-invasive diagnostic tools have yielded multiple scores, formulae, and imaging modalities. All are better tolerated, safer, more acceptable to the patient, and can be repeated essentially as often as required. Most are much less expensive than liver biopsy. Consequently, their use is growing, and in some countries the number of biopsies performed, at least for routine evaluation of hepatitis B and C, has declined sharply. However, the accuracy and diagnostic value of most, if not all, of these methods remains controversial. In this review for the practicing physician, we analyze established and novel biomarkers and physical techniques. We may be witnessing in recent years the beginning of the end of the first phase for the development of non-invasive markers. Early evidence suggests that they might be at least as good as liver biopsy. Novel experimental markers and imaging techniques could produce a dramatic change in diagnosis in the near future. PMID:26556987
GTA weld penetration and the effects of deviations in machine variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giedt, W.H.
1987-07-01
Analytical models for predicting the temperature distribution during GTA welding are reviewed with the purpose of developing a procedure for investigating the effects of deviations in machine parameters. The objective was to determine the accuracy required in machine settings to obtain reproducible results. This review revealed a wide range of published values (21 to 90%) for the arc heating efficiency. Low values (21 to 65%) were associated with evaluation of efficiency using constant-property conduction models. Values from 75 to 90% were determined from calorimetric-type measurements and are applicable for more accurate numerical solution procedures. Although numerical solutions can yield better overall weld zone predictions, calculations are lengthy and complex. In view of this and the indication that acceptable agreement with experimental measurements can be achieved with the moving-point-source solution, it was utilized to investigate the effects of deviations or errors in voltage, current, and travel speed on GTA weld penetration. Variations resulting from welding within current goals for voltage (±0.1 V), current (±3.0 A), and travel speed (±2.0%) were found to be ±2 to 4%, with voltage and current being more influential than travel speed.
NASA Astrophysics Data System (ADS)
Martin, Jeffery W.
2002-04-01
Recent calculations of parity-violating (PV) electroproduction asymmetries for the N→Δ transition and for quasi-elastic electron scattering on the deuteron have led theorists to consider the photoproduction limit of these processes. In the case of the N→Δ transition, it has been proposed that the PV π^± photoproduction asymmetry A_γ^± might be of order 10^-6, from a model based on hyperon weak radiative decays. An accurate measurement of A_γ^± would tightly constrain that model, at the same time reducing the dominant theoretical uncertainty in calculations of the PV N→Δ asymmetry at non-zero Q^2. Estimates for the G^0 experiment at Jefferson Lab for a measurement of A_γ^± will be presented. A measurement of π^- production from deuterium should yield a 47% measurement of A_γ^±, assuming the best theory estimate for A_γ^±. This measurement would be parasitic to a low-energy run that is already planned. Improvements to this accuracy would require tuning the spectrometer for maximum acceptance of pions and/or luminosity upgrades for photoproduction. Possibilities for such improvements will be discussed.
An alternative sensor-based method for glucose monitoring in children and young people with diabetes
Edge, Julie; Acerini, Carlo; Campbell, Fiona; Hamilton-Shield, Julian; Moudiotis, Chris; Rahman, Shakeel; Randell, Tabitha; Smith, Anne; Trevelyan, Nicola
2017-01-01
Objective To determine accuracy, safety and acceptability of the FreeStyle Libre Flash Glucose Monitoring System in the paediatric population. Design, setting and patients Eighty-nine study participants, aged 4–17 years, with type 1 diabetes were enrolled across 9 diabetes centres in the UK. A factory calibrated sensor was inserted on the back of the upper arm and used for up to 14 days. Sensor glucose measurements were compared with capillary blood glucose (BG) measurements. Sensor results were masked to participants. Results Clinical accuracy of sensor results versus BG results was demonstrated, with 83.8% of results in zone A and 99.4% of results in zones A and B of the consensus error grid. Overall mean absolute relative difference (MARD) was 13.9%. Sensor accuracy was unaffected by patient factors such as age, body weight, sex, method of insulin administration or time of use (day vs night). Participants were in the target glucose range (3.9–10.0 mmol/L) ∼50% of the time (mean 12.1 hours/day), with an average of 2.2 hours/day and 9.5 hours/day in hypoglycaemia and hyperglycaemia, respectively. Sensor application, wear/use of the device and comparison to self-monitoring of blood glucose were rated favourably by most participants/caregivers (84.3–100%). Five device related adverse events were reported across a range of participant ages. Conclusions Accuracy, safety and user acceptability of the FreeStyle Libre System were demonstrated for the paediatric population. Accuracy of the system was unaffected by subject characteristics, making it suitable for a broad range of children and young people with diabetes. Trial registration number NCT02388815. PMID:28137708
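The headline accuracy figure above, MARD, is straightforward to compute from paired sensor and reference readings; a minimal sketch with hypothetical values:

```python
def mard(sensor, reference):
    """Mean absolute relative difference between paired sensor and
    reference blood-glucose readings, expressed as a percentage."""
    diffs = [abs(s - r) / r for s, r in zip(sensor, reference)]
    return 100 * sum(diffs) / len(diffs)

# Hypothetical paired readings in mmol/L
print(round(mard([5.2, 7.9, 10.4], [5.0, 8.3, 10.0]), 1))
```

MARD summarizes average relative error in a single number; clinical accuracy assessments pair it with error-grid analysis (as in the consensus error grid results above) because the same MARD can hide clinically different error patterns.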
Yan, Min; Takahashi, Hidekazu; Nishimura, Fumio
2004-12-01
The aim of the present study was to evaluate the dimensional accuracy and surface property of titanium casting obtained using a gypsum-bonded alumina investment. The experimental gypsum-bonded alumina investment with 20 mass% gypsum content mixed with 2 mass% potassium sulfate was used for five cp titanium castings and three Cu-Zn alloy castings. The accuracy, surface roughness (Ra), and reaction layer thickness of these castings were investigated. The accuracy of the castings obtained from the experimental investment ranged from -0.04 to 0.23%, while surface roughness (Ra) ranged from 7.6 to 10.3 µm. A reaction layer of about 150 µm thickness under the titanium casting surface was observed. These results suggested that the titanium casting obtained using the experimental investment was acceptable. Although the reaction layer was thin, surface roughness should be improved.
Measurement accuracies in band-limited extrapolation
NASA Technical Reports Server (NTRS)
Kritikos, H. N.
1982-01-01
The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval L/b ≤ |x| ≤ L, b > 1, the signal must be known within an error ε_N given by ε_N² ≈ (1/4)(2kL')³ [e/(8b) · (L/L')]^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.
Automatic anatomy recognition via multiobject oriented active shape models.
Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A
2010-12-01
This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) multiobject generalization of OASM and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three level dynamic programming algorithm, wherein the first level is at pixel level which aims to find optimal oriented boundary segments between successive landmarks, the second level is at landmark level which aims to find optimal location for the landmarks, and the third level is at the object level which aims to find optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find that pose vector (consisting of translation, rotation, and scale component) for the multiobject model that yields the smallest total boundary cost for all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. 
Typically, a recognition accuracy of > or = 90% yielded a TPVF > or = 95% and FPVF < or = 0.5%. Over the three data sets and over all tested objects, in 97% of the cases, the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. More spread out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.
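The TPVF and FPVF figures quoted above can be computed from voxel sets. A hedged sketch: TPVF is the fraction of ground-truth voxels recovered, and FPVF here normalizes spurious voxels by a non-object reference volume, which is one common convention (the paper's exact normalization region is assumed):

```python
def tpvf_fpvf(segmented, truth, background_volume):
    """True/false positive volume fractions (%) for a delineation,
    given voxel-index sets and the volume of the non-object region."""
    tp = len(segmented & truth)          # correctly delineated voxels
    fp = len(segmented - truth)          # spurious voxels
    tpvf = 100.0 * tp / len(truth)
    fpvf = 100.0 * fp / background_volume
    return tpvf, fpvf

truth = set(range(0, 100))   # 100 ground-truth voxels
seg = set(range(2, 101))     # misses 2 voxels, adds 1 spurious voxel
tpvf, fpvf = tpvf_fpvf(seg, truth, background_volume=10_000)
print(tpvf, fpvf)  # → 98.0 0.01
```

Values like TPVF 97-98% with FPVF 0.1-0.2%, as reported above, indicate nearly complete object recovery with very little leakage into the background.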
Walsworth, Matthew K; Doukas, William C; Murphy, Kevin P; Mielcarek, Billie J; Michener, Lori A
2008-01-01
Glenoid labral tears provide a diagnostic challenge. Combinations of items in the patient history and physical examination will provide stronger diagnostic accuracy to suggest the presence or absence of glenoid labral tear than will individual items. Cohort study (diagnosis); Level of evidence, 1. History and examination findings in patients with shoulder pain (N = 55) were compared with arthroscopic findings to determine diagnostic accuracy and intertester reliability. The intertester reliability of the crank, anterior slide, and active compression tests was 0.20 to 0.24. A combined history of popping or catching and positive crank or anterior slide results yielded specificities of 0.91 and 1.00 and positive likelihood ratios of 3.0 and infinity, respectively. A positive anterior slide result combined with either a positive active compression or crank result yielded specificities of 0.91 and positive likelihood ratios of 2.75 and 3.75, respectively. Requiring only a single positive finding in the combination of popping or catching and the anterior slide or crank yielded sensitivities of 0.82 and 0.89 and negative likelihood ratios of 0.31 and 0.33, respectively. The diagnostic accuracy of individual tests in previous studies is quite variable, which may be explained in part by the modest reliability of these tests. The combination of popping or catching with a positive crank or anterior slide result, or a positive anterior slide result with a positive active compression or crank test result, suggests the presence of a labral tear. The combined absence of popping or catching and a negative anterior slide or crank result suggests the absence of a labral tear.
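The sensitivities, specificities, and likelihood ratios quoted above all derive from the standard 2×2 table of index-test results against the arthroscopic reference. A minimal sketch (the counts below are illustrative, not the study's):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2
    table (index test vs. reference standard)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # LR+ = sens / (1 - spec); infinite when specificity is perfect,
    # matching the "infinity" reported for a specificity of 1.00
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec if spec > 0 else float("inf")
    return sens, spec, lr_pos, lr_neg

# Illustrative counts: 8 TP, 1 FP, 2 FN, 9 TN
sens, spec, lr_pos, lr_neg = diagnostic_stats(8, 1, 2, 9)
```

A high LR+ (as for the combined-findings rules above) argues for the diagnosis when positive; a low LR- argues against it when negative.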
Endelman, Jeffrey B; Carley, Cari A Schmitz; Bethke, Paul C; Coombs, Joseph J; Clough, Mark E; da Silva, Washington L; De Jong, Walter S; Douches, David S; Frederick, Curtis M; Haynes, Kathleen G; Holm, David G; Miller, J Creighton; Muñoz, Patricio R; Navarro, Felix M; Novy, Richard G; Palta, Jiwan P; Porter, Gregory A; Rak, Kyle T; Sathuvalli, Vidyasagar R; Thompson, Asunta L; Yencho, G Craig
2018-05-01
As one of the world's most important food crops, the potato (Solanum tuberosum L.) has spurred innovation in autotetraploid genetics, including in the use of SNP arrays to determine allele dosage at thousands of markers. By combining genotype and pedigree information with phenotype data for economically important traits, the objectives of this study were to (1) partition the genetic variance into additive vs. nonadditive components, and (2) determine the accuracy of genome-wide prediction. Between 2012 and 2017, a training population of 571 clones was evaluated for total yield, specific gravity, and chip fry color. Genomic covariance matrices for additive (G), digenic dominant (D), and additive × additive epistatic (G#G) effects were calculated using 3895 markers, and the numerator relationship matrix (A) was calculated from a 13-generation pedigree. Based on model fit and prediction accuracy, mixed model analysis with G was superior to A for yield and fry color but not specific gravity. The amount of additive genetic variance captured by markers was 20% of the total genetic variance for specific gravity, compared to 45% for yield and fry color. Within the training population, including nonadditive effects improved accuracy and/or bias for all three traits when predicting total genotypic value. When six F1 populations were used for validation, prediction accuracy ranged from 0.06 to 0.63 and was consistently lower (0.13 on average) without allele dosage information. We conclude that genome-wide prediction is feasible in potato and that it will improve selection for breeding value given the substantial amount of nonadditive genetic variance in elite germplasm. Copyright © 2018 by the Genetics Society of America.
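The marker-based covariance matrix G above is built from allele dosages. As a simplified illustration only (the study's potato data are autotetraploid with dosages 0-4 and use several covariance matrices; the sketch below is the familiar diploid VanRaden-style construction, a deliberate simplification):

```python
def vanraden_G(M):
    """VanRaden-style genomic relationship matrix from a diploid
    0/1/2 dosage matrix M (individuals x markers); pure-Python sketch."""
    n, m = len(M), len(M[0])
    # allele frequency per marker
    p = [sum(M[i][j] for i in range(n)) / (2.0 * n) for j in range(m)]
    denom = 2.0 * sum(pj * (1.0 - pj) for pj in p)
    # center dosages by twice the allele frequency
    Z = [[M[i][j] - 2.0 * p[j] for j in range(m)] for i in range(n)]
    # G = Z Z' / (2 * sum p(1-p))
    return [[sum(Z[i][k] * Z[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]

M = [[0, 1, 2],
     [2, 1, 0],
     [1, 1, 1]]   # 3 individuals x 3 markers, hypothetical dosages
G = vanraden_G(M)
```

Off-diagonal entries of G play the role of pedigree relationships in A, but are estimated from observed markers, which is why G can outperform A for traits like yield and fry color.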
Movement amplitude and tempo change in piano performance
NASA Astrophysics Data System (ADS)
Palmer, Caroline
2004-05-01
Music performance places stringent temporal and cognitive demands on individuals that should yield large speed/accuracy tradeoffs. Skilled piano performance, however, shows consistently high accuracy across a wide variety of rates. Movement amplitude may affect the speed/accuracy tradeoff, so that high accuracy can be obtained even at very fast tempi. The contribution of movement amplitude to changes in rate (tempo) is investigated with motion capture. Cameras recorded pianists with passive markers on hands and fingers, who performed on an electronic (MIDI) keyboard. Pianists performed short melodies at faster and faster tempi until they made errors (altering the speed/accuracy function). Variability of finger movements in the three motion planes indicated most change in the plane perpendicular to the keyboard across tempi. Surprisingly, peak amplitudes of motion before striking the keys increased as tempo increased. Increased movement amplitudes at faster rates may reduce or compensate for speed/accuracy tradeoffs. [Work supported by Canada Research Chairs program, NIMH R01 45764.]
Accuracy optimization with wavelength tunability in overlay imaging technology
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna
2018-03-01
As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, overlay budget is accordingly being reduced. Overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging based overlay (IBO) measurements by optimizing accuracy rather than contrast precision, including its effect over the total target performance, using wavelength tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and present their quality in identifying the measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.
A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield
NASA Astrophysics Data System (ADS)
Kazama, Yoriko; Kujirai, Toshihiro
2014-10-01
A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information all together) by reducing the occurrence of multicollinearity, is proposed. Experiments conducted on soybean fields in Brazil with a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. In the case of the SSD-HB model, the mean absolute error between the estimated yield of the target area and the actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.
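The 0.28 t/ha and 0.34 t/ha figures above are mean absolute errors between estimated and observed yields. The metric itself is simple; a minimal sketch (per-field values below are illustrative, not the study's data):

```python
def mean_absolute_error(predicted, actual):
    """Mean absolute error between paired yield estimates (t/ha)."""
    if len(predicted) != len(actual) or not actual:
        raise ValueError("need equal-length, non-empty sequences")
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Illustrative per-field yields in t/ha
est = [3.1, 2.8, 3.5, 2.9]
obs = [3.0, 3.2, 3.3, 3.1]
mae = mean_absolute_error(est, obs)  # 0.225 t/ha
```

Unlike RMSE, MAE weights all errors linearly, so a 0.06 t/ha improvement reflects a uniform tightening of the per-field estimates rather than the removal of a few outliers.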
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlop, W H
It is my pleasure to be here today to participate in this Conference. My thanks to the organizers for preparing such an interesting agenda on a very difficult topic. My effort in preparing my presentation was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48. And as many of you know, Lawrence Livermore National Laboratory is now, as of Oct 1st, under contract to the Lawrence Livermore National Security LLC. There has been a long history of how to view verification of arms control agreements. The basis for verification during the days of SALT was that verification would be based on each country's national technical means. For treaties dealing with strategic missiles this worked well, as the individual items subject to verification were of such a size that they were visible by the National Technical Means available at the time. And it was felt that the counting of missiles and launchers could be verified by our National Technical Means. For nuclear testing treaties, the use of seismic measurements developed into a capability that was reasonably robust for all but the smallest of nuclear tests. However, once we had the Threshold Test Ban Treaty, there was a significant problem in that the fidelity of the measurements was not sufficient to determine if a test was slightly above the 150 kt limit or slightly below the 150 kt limit. This led some in the US to believe that the Soviet Union was not living up to the TTBT agreement. An on-site verification protocol was negotiated in 1988 and 1989 that allowed the US to make hydrodynamic yield measurements on Soviet tests above 50 kt yield and regional seismic measurements on all tests above 35 kt of yield; and the Soviets to make the same type of measurements on US tests to ensure that they were not over 150 kt. These on-site measurements were considered reasonably intrusive.
Again the measurement capability was not perfect, and it was expected that occasionally there might be a verification measurement that was slightly above 150 kt. But the accuracy was much improved over the earlier seismic measurements. In fact, some of this improvement was because, as part of this verification protocol, the US and Soviet Union provided the yields of several past tests to improve seismic calibrations. This actually helped provide a much needed calibration for the seismic measurements. It was also accepted that since nuclear tests were to a large part R&D related, occasionally there might be a test that was slightly above 150 kt, as you could not always predict the yield with high accuracy in advance of the test. While one could hypothesize that the Soviets could do a test at some other location than their test sites, if it were even a small fraction of 150 kt it would clearly be observed and would be a violation of the treaty. So the issue of clandestine tests of significance was easily covered for this particular treaty.
NASA Astrophysics Data System (ADS)
Khattak, Khanzadi Fatima; Simpson, Thomas James; Ihasnullah
2009-03-01
The assurance of microbial quality is necessary to make plant materials suitable for human consumption and commercialization. The aim of the present study was to evaluate the possibility to apply the gamma radiation treatment on the rhizome samples of Nelumbo nucifera for microbial decontamination. The radiation processing was carried out at dose levels of 1, 2, 4 and 6 kGy. The irradiated and control samples were analyzed for microbial load, organoleptic acceptance, extraction yield, proximate composition, phenolic contents and DPPH scavenging activity. The results indicated that gamma radiation treatment significantly reduced microbial load and increased the storability of the irradiated samples. The treated samples were also acceptable sensorically. The extraction yield and phenolic contents increased with the increase of radiation dose. Gamma radiation also enhanced the DPPH scavenging activity.
DOT National Transportation Integrated Search
1988-06-17
Use of nuclear asphalt content gauges for determining the asphalt content of asphaltic concrete pavement is gaining acceptance as an alternative method to the vacuum extraction process. The reasons nuclear asphalt content gauges are considered promising...
Geo-referenced digital data acquisition and processing system using LiDAR technology.
DOT National Transportation Integrated Search
2006-02-01
LiDAR technology, introduced in the late 90s, has received wide acceptance in airborne surveying as a leading tool for obtaining high-quality surface data at decimeter-level vertical accuracy in an unprecedentedly short turnaround time. State-of-...
Procedures on installing, acceptance testing, operating, maintaining and quality assuring three types of ground-based, upper-air meteorological measurement systems are described. The limitations and uncertainties in precision and accuracy measurements associated with these systems...
Increased genomic prediction accuracy in wheat breeding using a large Australian panel.
Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn
2017-12-01
Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom™ Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations and also more accurate compared to prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.
Assessment of progressively delayed prompts on guided skill learning in rats.
Reid, Alliston K; Futch, Sara E; Ball, Katherine M; Knight, Aubrey G; Tucker, Martha
2017-03-01
We examined the controlling factors that allow a prompted skill to become autonomous in a discrete-trials implementation of Touchette's (1971) progressively delayed prompting procedure, but our subjects were rats rather than children with disabilities. Our prompted skill was a left-right lever-press sequence guided by two panel lights. We manipulated (a) the effectiveness of the guiding lights prompt and (b) the presence or absence of a progressively delayed prompt in four groups of rats. The less effective prompt yielded greater autonomy than the more effective prompt. The ability of the progressively delayed prompt procedure to produce behavioral autonomy depended upon characteristics of the obtained delay (trial duration) rather than on the pending prompt. Sequence accuracy was reliably higher in unprompted trials than in prompted trials, and this difference was maintained in the 2 groups that received no prompts but yielded equivalent trial durations. Overall sequence accuracy decreased systematically as trial duration increased. Shorter trials and their greater accuracy were correlated with higher overall reinforcement rates for faster responding. Waiting for delayed prompts (even if no actual prompt was provided) was associated with lower overall reinforcement rate by decreasing accuracy and by lengthening trials. These findings extend results from previous studies regarding the controlling factors in delayed prompting procedures applied to children with disabilities.
Jia, Cang-Zhi; He, Wen-Ying; Yao, Yu-Hua
2017-03-01
Hydroxylation of proline or lysine residues in proteins is a common post-translational modification event, and such modifications are found in many physiological and pathological processes. Nonetheless, the exact molecular mechanism of hydroxylation remains under investigation. Because experimental identification of hydroxylation is time-consuming and expensive, bioinformatics tools with high accuracy represent desirable alternatives for large-scale rapid identification of protein hydroxylation sites. In view of this, we developed a support vector machine-based tool, OH-PRED, for the prediction of protein hydroxylation sites using the adapted normal distribution bi-profile Bayes feature extraction in combination with the physicochemical property indexes of the amino acids. In a jackknife cross validation, OH-PRED yields an accuracy of 91.88% and a Matthews correlation coefficient (MCC) of 0.838 for the prediction of hydroxyproline sites, and yields an accuracy of 97.42% and a MCC of 0.949 for the prediction of hydroxylysine sites. These results demonstrate that OH-PRED increased the prediction accuracy of hydroxyproline and hydroxylysine sites significantly, by 7.37 and 14.09% respectively, when compared with the latest predictor PredHydroxy. In independent tests, OH-PRED also outperforms previously published methods.
Dual-energy CT for the diagnosis of gout: an accuracy and diagnostic yield study.
Bongartz, Tim; Glazebrook, Katrina N; Kavros, Steven J; Murthy, Naveen S; Merry, Stephen P; Franz, Walter B; Michet, Clement J; Veetil, Barath M Akkara; Davis, John M; Mason, Thomas G; Warrington, Kenneth J; Ytterberg, Steven R; Matteson, Eric L; Crowson, Cynthia S; Leng, Shuai; McCollough, Cynthia H
2015-06-01
To assess the accuracy of dual-energy CT (DECT) for diagnosing gout, and to explore whether it can have any impact on clinical decision making beyond the established diagnostic approach using polarising microscopy of synovial fluid (diagnostic yield). Diagnostic single-centre study of 40 patients with active gout, and 41 individuals with other types of joint disease. Sensitivity and specificity of DECT for diagnosing gout was calculated against a combined reference standard (polarising and electron microscopy of synovial fluid). To explore the diagnostic yield of DECT scanning, a third cohort was assembled consisting of patients with inflammatory arthritis and risk factors for gout who had negative synovial fluid polarising microscopy results. Among these patients, the proportion of subjects with DECT findings indicating a diagnosis of gout was assessed. The sensitivity and specificity of DECT for diagnosing gout was 0.90 (95% CI 0.76 to 0.97) and 0.83 (95% CI 0.68 to 0.93), respectively. All false negative patients were observed among patients with acute, recent-onset gout. All false positive patients had advanced knee osteoarthritis. DECT in the diagnostic yield cohort revealed evidence of uric acid deposition in 14 out of 30 patients (46.7%). DECT provides good diagnostic accuracy for detection of monosodium urate (MSU) deposits in patients with gout. However, sensitivity is lower in patients with recent-onset disease. DECT has a significant impact on clinical decision making when gout is suspected, but polarising microscopy of synovial fluid fails to demonstrate the presence of MSU crystals. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Chen, Chih-Ming; Lin, Hsien-Tang
2017-12-01
This study evaluated the supplementary effect of higher concentrations of various disaccharides on the processing yield, major physicochemical properties, and sensory attributes of Chinese-style pork jerky (CSPJ). CSPJ samples were prepared by marinating sliced ham (4 mm) with three disaccharides, namely sucrose, lactose, and maltose, at 0%, 15%, 18%, 21%, and 24%. Subsequently, the CSPJ samples were dried and roasted. The moisture content, water activity, crude protein, moisture-to-protein ratio, pH, processing yield, shear force, color, and sensory attributes of the CSPJ samples were evaluated. The quality characteristics of CSPJ samples prepared with sucrose were the most acceptable, whereas samples prepared with lactose showed the lowest scores. However, processing yield and moisture content were highest for the samples prepared with lactose, which may be associated with reduced production cost. Sucrose and lactose supplementation produced contrasting quality characteristics; for example, samples with sucrose or maltose supplementation had higher sensory scores for color, and for sensory attributes overall, than samples with lactose supplementation. Sucrose supplementation at 21% to 24% was associated with the highest overall acceptability scores (5.19 to 5.80), enhanced quality characteristics, increased processing yield, and reduced production cost.
2011-01-01
Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results The Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and an area under the ROC of Me = 0.90. However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). 
The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
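Press' Q, used above to test whether each classifier beats chance, has a simple closed form: Q = (N − nK)² / (N(K − 1)) for N cases, n correct classifications, and K classes, compared against the χ²(1) critical value 3.84 for p < 0.05. A minimal sketch (counts illustrative):

```python
def press_q(n_total, n_correct, n_classes):
    """Press' Q statistic: tests whether classification accuracy
    beats chance. Compare against the chi-square(1) critical
    value 3.84 for significance at p < 0.05."""
    return ((n_total - n_correct * n_classes) ** 2
            / (n_total * (n_classes - 1)))

# 76% accuracy on 100 two-class cases clearly beats chance
q = press_q(100, 76, 2)  # 27.04 > 3.84
```

Note that Press' Q only establishes better-than-chance performance; as the abstract's results show, it says nothing about the sensitivity/specificity balance, which is where the classifiers actually differed.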
Food-packaging migration models: A critical discussion.
Gavriil, Gavriil; Kanavouras, Antonis; Coutelieris, Frank A
2017-06-14
The widely accepted and used migration models that describe mass transport from polymeric packaging materials to food and food simulants are examined here. A critical review of the most accepted models is presented in detail. Their main advantages and weak points regarding predictive accuracy are discussed and weighed against the extent of their usage. By identifying the specific areas where using such models may not provide a strong correlation between theoretical and actual results, this work also aims to outline some particular directions for further research on food-packaging interactions.
Theelen, A; Martens, J; Bosmans, G; Houben, R; Jager, J J; Rutten, I; Lambin, P; Minken, A W; Baumert, B G
2012-01-01
The goal was to provide a quantitative evaluation of the accuracy of three different fixation systems for stereotactic radiotherapy and to evaluate patients' acceptance of all fixations. A total of 16 consecutive patients with brain tumours undergoing fractionated stereotactic radiotherapy (SCRT) were enrolled after informed consent (ClinicalTrials.gov: NCT00181350). The fixation systems evaluated were the BrainLAB® mask, with and without a custom-made bite-block (fixations S and A), and a homemade neck support with bite-block (fixation B) based on the BrainLAB® frame. The sequence of measurements was evaluated in a randomized manner with a cross-over design, and patients' acceptance by a questionnaire. The mean three-dimensional (3D) displacements and standard deviations were 1.16 ± 0.68 mm for fixation S, and 1.92 ± 1.28 and 1.70 ± 0.83 mm for fixations A and B, respectively. There was a significant improvement of the overall alignment (3D vector) when using the standard fixation instead of fixation A or B in the craniocaudal direction (p = 0.037). Rotational deviations were significantly smaller for the standard fixation S relative to fixations A (p = 0.005) and B (p = 0.03). EPI imaging with off-line correction further improved reproducibility. Five out of 8 patients preferred the neck support with the bite-block. The mask fixation system in conjunction with a bite-block is the most accurate fixation for SCRT, reducing craniocaudal and rotational movements. Patients favoured the more comfortable but less accurate neck support. To optimize the accuracy of SCRT, additional regular portal imaging is warranted.
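The 3D displacement reported above is the Euclidean norm of the per-fraction translational deviations, summarized as mean ± SD. A minimal sketch (shift values illustrative; the sample-SD convention with n − 1 denominator is assumed):

```python
def displacement_3d(dx, dy, dz):
    """Length of the 3D displacement vector (e.g., in mm)."""
    return (dx * dx + dy * dy + dz * dz) ** 0.5

def mean_sd(values):
    """Mean and sample standard deviation (n - 1 denominator)."""
    n = len(values)
    m = sum(values) / n
    sd = (sum((v - m) ** 2 for v in values) / (n - 1)) ** 0.5
    return m, sd

# Per-fraction translational deviations in mm (illustrative values)
shifts = [(0.4, 0.8, 0.6), (1.0, 0.2, 0.9), (0.3, 0.3, 0.5)]
vectors = [displacement_3d(*s) for s in shifts]
mean_vec, sd_vec = mean_sd(vectors)
```

Because the norm combines all three axes, a small mean 3D vector (1.16 mm for fixation S) implies tight control in every direction, not just the craniocaudal one.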
Dosing Accuracy of Insulin Aspart FlexPens After Transport Through the Pneumatic Tube System.
Ward, Leah G; Heckman, Michael G; Warren, Amy I; Tran, Kimberly
2013-01-01
The purpose of this study was to evaluate whether transporting insulin aspart FlexPens via a pneumatic tube system affects the dosing accuracy of the pens. A total of 115 Novo Nordisk FlexPens containing insulin aspart were randomly assigned to be transported via a pneumatic tube system (n = 92) or to serve as the control (n = 23). Each pen was then randomized to 10 international unit (IU) doses (n = 25) or 30 IU doses (n = 67), providing 600 and 603 doses, respectively, for the pneumatic tube group. The control group also received random assignment to 10 IU doses (n = 6) or 30 IU doses (n = 17), providing 144 and 153 doses, respectively. Each dose was expelled using manufacturer instructions. Weights were recorded, corrected for specific gravity, and evaluated based on acceptable International Organization for Standardization (ISO) dosing limits. In the group of pens transported through the pneumatic tube system, none of the 600 doses of 10 IU (0.0%; 95% CI, 0.0 to 0.6) and none of the 603 doses of 30 IU (0.0%; 95% CI, 0.0 to 0.6) fell outside of the range of acceptable weights. Correspondingly, in the control group, none of the 144 doses at 10 IU (0.0%; 95% CI, 0.0 to 2.5) and none of the 153 doses at 30 IU (0.0%; 95% CI, 0.0 to 2.4) were outside of acceptable ISO limits. Transportation via pneumatic tube system does not appear to compromise dosing accuracy. Hospital pharmacies may rely on the pneumatic tube system for timely and accurate transport of insulin aspart FlexPens.
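The acceptance test above weighs each expelled dose, corrects for specific gravity, and checks the mass against a band around the nominal dose. A hedged sketch: it assumes U-100 insulin (1 IU = 10 µL, roughly 10 mg of water) and an illustrative ±5% tolerance, which is not the actual ISO acceptance band:

```python
def dose_within_limits(measured_mg, dose_iu, specific_gravity=1.006,
                       tolerance=0.05):
    """Check one expelled dose against a symmetric acceptance band.
    Assumes U-100 insulin (1 IU = 10 uL, ~10 mg of water) and an
    illustrative +/-5% tolerance -- NOT the actual ISO limits."""
    nominal_mg = dose_iu * 10.0 * specific_gravity
    return abs(measured_mg - nominal_mg) <= tolerance * nominal_mg

# A 10 IU dose weighing 101.0 mg sits inside the illustrative band
ok = dose_within_limits(101.0, 10)
```

The study's finding that 0 of 1203 doses fell outside the limits corresponds to this check returning true for every weighed dose in both the tubed and control groups.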
Supervised segmentation of microelectrode recording artifacts using power spectral density.
Bakstein, Eduard; Schneider, Jakub; Sieger, Tomas; Novak, Daniel; Wild, Jiri; Jech, Robert
2015-08-01
Appropriate detection of clean signal segments in extracellular microelectrode recordings (MER) is vital for maintaining a high signal-to-noise ratio in MER studies. Existing alternatives to manual signal inspection are based on unsupervised change-point detection. We present a method of supervised MER artifact classification based on power spectral density (PSD) and evaluate its performance on a database of 95 labelled MER signals. The proposed method yielded a test-set accuracy of 90%, close to the accuracy of the annotation itself (94%). The unsupervised methods achieved an accuracy of about 77% on both training and testing data.
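The abstract names the feature (PSD) but not the exact pipeline. The stdlib-only sketch below illustrates the general idea of PSD-based supervised artifact classification: a periodogram, band-power features, and a nearest-centroid classifier. The signal frequencies, band count and classifier choice are illustrative assumptions, not details from the paper.

```python
import cmath
import math

def periodogram(x):
    # Naive DFT power spectrum, O(N^2); a production pipeline would use an
    # FFT with Welch averaging, but the stdlib has no FFT.
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

def band_powers(psd, n_bands=4):
    # Mean spectral power in equal-width frequency bands -> short feature vector.
    w = len(psd) // n_bands
    return [sum(psd[i * w:(i + 1) * w]) / w for i in range(n_bands)]

def centroid(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

def classify(features, centroids):
    # Nearest-centroid rule on the band-power features.
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

fs, n = 1000, 200  # sampling rate (Hz) and segment length (samples)

def segment(artifact_amp, f_neural=30.0, f_artifact=200.0):
    # Synthetic MER-like segment: a "neural" tone plus an optional artifact tone.
    return [math.sin(2 * math.pi * f_neural * t / fs)
            + artifact_amp * math.sin(2 * math.pi * f_artifact * t / fs)
            for t in range(n)]

# Supervised step: centroids learned from labelled training segments.
training = {
    "clean": [band_powers(periodogram(segment(0.0))),
              band_powers(periodogram(segment(0.1)))],
    "artifact": [band_powers(periodogram(segment(4.0))),
                 band_powers(periodogram(segment(5.0)))],
}
centroids = {label: feats and centroid(feats) for label, feats in training.items()}
print(classify(band_powers(periodogram(segment(4.5))), centroids))  # -> artifact
```

A real implementation would replace the synthetic tones with labelled MER segments and likely a stronger classifier, but the train-on-labels / classify-by-spectral-features structure is the same.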
Yield estimation of sugarcane based on agrometeorological-spectral models
NASA Technical Reports Server (NTRS)
Rudorff, Bernardo Friedrich Theodor; Batista, Getulio Teixeira
1990-01-01
The objective of this work is to assess the performance of a yield estimation model for sugarcane (Saccharum officinarum). The model uses orbitally gathered spectral data along with yield estimated from an agrometeorological model. The test site includes the sugarcane plantations of the Barra Grande Plant located in the Lencois Paulista municipality in Sao Paulo State. Production data from four crop years were analyzed. Yield data observed in the first crop year (1983/84) were regressed against spectral and agrometeorological data of that same year. This provided the model to predict the yield for the following crop year, i.e., 1984/85. The models to predict the yields of subsequent years (up to 1987/88) were developed similarly, incorporating all previous years' data. The yield estimates obtained from these models explained 69, 54, and 50 percent of the yield variation in the 1984/85, 1985/86, and 1986/87 crop years, respectively. The accuracy of yield estimates based on spectral data only (vegetation index model) and on agrometeorological data only (agrometeorological model) was also investigated.
Incardona, Sandra; Mwancha-Kwasa, Magoma; Rees-Channer, Roxanne R; Albertini, Audrey; Havumaki, Joshua; Chiodini, Peter; Oyibo, Wellington; Gonzalez, Iveth J
2018-01-15
Malaria rapid diagnostic tests (RDTs) are becoming widely adopted for case management at the community level. However, reports and anecdotal observations indicate that the blood transfer step poses a significant challenge to many users. This study sought to evaluate the inverted cup device in the hands of health workers in everyday clinical practice, in comparison with the plastic pipette, and to determine the volume accuracy of the device when made of a lower-cost plastic. The volume accuracy of inverted cup devices made of two plastics, PMMA and SBC, was compared by transferring blood 150 times onto filter paper and comparing the blood spot areas with those produced by 20 reference transfers with a calibrated micropipette. The ease of use, safety and acceptability of the inverted cup device and the pipette were evaluated by 50 health workers in Nigeria. Observations were recorded on pre-designed questionnaires, by the health workers themselves and by trained observers. Focus group discussions were also conducted. The volume accuracy assessment showed that the device made from the low-cost material (SBC) delivered a more accurate volume (mean 5.4 μL, SD 0.48 μL, range 4.5-7.0 μL) than the PMMA device (mean 5.9 μL, SD 0.48 μL, range 4.9-7.2 μL). The observational evaluation demonstrated that the inverted cup device performed better than the pipette in all aspects: higher proportions of health workers achieved successful blood collection (96% vs. 66%), transfer of the required blood volume (90% vs. 58%), and blood deposit without any loss (95% vs. 50%). The majority of health workers also considered it 'very easy' to use (81%) and 'very appropriate' for everyday use (78%), and 50% of them reported that it was their preferred blood transfer device (BTD).
The good volume accuracy and high acceptability of the inverted cup device shown in this study, along with its observed ease of use and safety in the hands of health workers, further strengthen prior findings that demonstrated its higher accuracy compared with other BTDs in a laboratory setting. Altogether, these studies suggest that the inverted cup device should replace other types of devices for day-to-day malaria diagnosis with RDTs.
Retinopathy of Prematurity-assist: Novel Software for Detecting Plus Disease
Pour, Elias Khalili; Pourreza, Hamidreza; Zamani, Kambiz Ameli; Mahmoudi, Alireza; Sadeghi, Arash Mir Mohammad; Shadravan, Mahla; Karkhaneh, Reza; Pour, Ramak Rouhi
2017-01-01
Purpose: To design software with a novel algorithm that analyzes tortuosity and vascular dilatation in fundal images of retinopathy of prematurity (ROP) patients with acceptable accuracy for detecting plus disease. Methods: Eighty-seven well-focused fundal images taken with RetCam were classified into three groups (plus, non-plus, and pre-plus) by agreement between three ROP experts. The automated algorithms in this study were designed based on two methods, the curvature measure and the distance transform, for assessment of tortuosity and vascular dilatation, respectively, as the two major parameters of plus disease detection. Results: Thirty-eight plus, 12 pre-plus, and 37 non-plus images classified by the three experts were tested by the automated algorithm, and the software's grouping of the images was evaluated against the expert voting with three different classifiers: k-nearest neighbor, support vector machine, and multilayer perceptron network. The plus, pre-plus, and non-plus images were analyzed with 72.3%, 83.7%, and 84.4% accuracy, respectively. Conclusions: The new automated algorithm used in this pilot scheme for diagnosis and screening of patients with plus ROP has acceptable accuracy. With further improvements it may become particularly useful, especially in centers without a person skilled in the ROP field. PMID:29022295
Conceptual issues behind the Chinese translations of the term 'Bipolar Disorder'.
Leung, Chi-Ming; Ungvari, Gabor S; Xiang, Yu-Tao
2016-12-01
The paper examines the problems of the existing nomenclature in Chinese psychiatry with special reference to the Chinese translation of bipolar disorder in the context of stigma of mental illness in the Chinese culture. The development of the concept of bipolar disorder is reviewed, followed by a critical examination of the accuracy and validity of the current translation of bipolar disorder in the Chinese psychiatric literature. A new translation is suggested with consideration for literal accuracy and social acceptance. © 2016 John Wiley & Sons Australia, Ltd.
The mathematical model accuracy estimation of the oil storage tank foundation soil moistening
NASA Astrophysics Data System (ADS)
Gildebrandt, M. I.; Ivanov, R. N.; Gruzin, AV; Antropova, L. B.; Kononov, S. A.
2018-04-01
Improving the technologies used to prepare oil storage tank foundations is a relevant objective; achieving it would make it possible to reduce material costs and the time spent preparing the foundation while providing the required operational reliability. The laboratory research revealed the nature of watering a sandy soil layer with a given amount of water. The data obtained made it possible to develop a mathematical model of sandy soil layer moistening. The estimation of the accuracy of this oil storage tank foundation soil moistening model showed acceptable convergence between the experimental and theoretical results.
Feasibility of dual-energy computed tomography in radiation therapy planning
NASA Astrophysics Data System (ADS)
Sheen, Heesoon; Shin, Han-Back; Cho, Sungkoo; Cho, Junsang; Han, Youngyih
2017-12-01
In this study, the noise level, effective atomic number (Z_eff), accuracy of the computed tomography (CT) number, and the CT number to relative electron density (ED) conversion curve were estimated for virtual monochromatic energy and polychromatic energy. These values were compared to the theoretically predicted values to investigate the feasibility of using dual-energy CT in routine radiation therapy planning. The accuracies of the parameters were within the acceptable range. These results can serve as a stepping stone toward the routine use of dual-energy CT in radiotherapy planning.
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
Code of Federal Regulations, 2011 CFR
2011-10-01
... accuracy; (3) Take ultrasonic thickness gaugings at a minimum of 5 points on each plate, evenly spaced; (4... must be accepted by the Officer in Charge, Marine Inspection (OCMI) prior to the survey. If you choose...
NASA Technical Reports Server (NTRS)
Buist, R. J.
1977-01-01
The design and fabrication of a thermoelectric chiller for use in chilling a liquid reservoir is described. Acceptance test results establish the accuracy of the thermal model and predict the unit performance under various conditions required by the overall spacelab program.
Zhang, Yitao; Wang, Hongyuan; Liu, Shen; Lei, Qiuliang; Liu, Jian; He, Jianqiang; Zhai, Limei; Ren, Tianzhi; Liu, Hongbin
2015-05-01
Identification of the critical nitrogen (N) application rate can provide management support for ensuring grain yield and reducing the amount of nitrate leaching to ground water. A five-year (2008-2012) field lysimeter (1 m × 2 m × 1.2 m) experiment with three N treatments (0, 180 and 240 kg N ha⁻¹) was conducted to quantify maize yields and the amount of nitrate leaching from a Haplic Luvisol soil in the North China Plain. The experimental data were used to calibrate and validate the process-based Denitrification-Decomposition (DNDC) model. The model was then used to simulate maize yield and the amount of nitrate leaching under a series of N application rates and to identify the critical N application rate based on acceptable yield and nitrate leaching for this cropping system. The results of model calibration and validation indicated that the model could correctly simulate maize yield and the amount of nitrate leaching, with satisfactory values of the RMSE-observations standard deviation ratio, model efficiency and determination coefficient. The model simulations confirmed the measurements: N application increased maize yield compared with the control, but the high N rate (240 kg N ha⁻¹) did not produce more yield than the low one (180 kg N ha⁻¹), and the amount of nitrate leaching increased with increasing N application rate. The simulation results suggested that the optimal N application rate lies between 150 and 240 kg ha⁻¹, which would keep nitrate leaching below 18.4 kg NO₃⁻-N ha⁻¹ while maintaining an acceptable maize yield above 9410 kg ha⁻¹. Furthermore, 180 kg N ha⁻¹ produced the highest yield (9837 kg ha⁻¹) and comparatively low nitrate leaching (10.0 kg NO₃⁻-N ha⁻¹). This study provides a valuable reference for determining the optimal N application rate (or range) in other crop systems and regions in China. Copyright © 2015 Elsevier B.V. All rights reserved.
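The goodness-of-fit statistics named in this abstract have standard definitions; a small sketch with hypothetical yield data follows, assuming "model efficiency" refers to the commonly used Nash-Sutcliffe efficiency (NSE).

```python
import math
import statistics

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def rsr(obs, sim):
    # RMSE-observations standard deviation ratio; <= 0.70 is a common
    # "satisfactory" threshold in model evaluation guidelines.
    return rmse(obs, sim) / statistics.pstdev(obs)

def nse(obs, sim):
    # Nash-Sutcliffe model efficiency: 1 is perfect, > 0.50 often deemed satisfactory.
    mean_obs = statistics.fmean(obs)
    return 1 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                / sum((o - mean_obs) ** 2 for o in obs))

# Hypothetical observed vs simulated maize yields (kg/ha), for illustration only:
obs = [9100, 9800, 8700, 9400, 10100]
sim = [9000, 9600, 8900, 9500, 9900]

print(f"RMSE = {rmse(obs, sim):.1f} kg/ha, RSR = {rsr(obs, sim):.2f}, "
      f"NSE = {nse(obs, sim):.2f}")
```

With the population standard deviation, NSE = 1 - RSR², so the two criteria are directly linked.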
Video Feedback in Key Word Signing Training for Preservice Direct Support Staff.
Rombouts, Ellen; Meuris, Kristien; Maes, Bea; De Meyer, Anne-Marie; Zink, Inge
2016-04-01
Research has demonstrated that formal training is essential for professionals to learn key word signing. Yet, the particular didactic strategies have not been studied. Therefore, this study compared the effectiveness of verbal and video feedback in a key word signing training for future direct support staff. Forty-nine future direct support staff were randomly assigned to 1 of 3 key word signing training programs: modeling and verbal feedback (classical method [CM]), additional video feedback (+ViF), and additional video feedback and photo reminder (+ViF/R). Signing accuracy and training acceptability were measured 1 week after and 7 months after training. Participants from the +ViF/R program achieved significantly higher signing accuracy compared with the CM group. Acceptability ratings did not differ between any of the groups. Results suggest that at an equal time investment, the programs containing more training components were more effective. Research on the effect of rehearsal on signing maintenance is warranted.
John R. Brooks; Gary W. Miller
2011-01-01
Data from even-aged hardwood stands in four ecoregions across the mid-Appalachian region were used to test projection accuracy for three available growth and yield software systems: SILVAH, the Forest Vegetation Simulator, and the Stand Damage Model. Average root mean squared error (RMSE) ranged from 20 to 140 percent of actual trees per acre while RMSE ranged from 2...
Gibson, Eli; Fenster, Aaron; Ward, Aaron D
2013-10-01
Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
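The abstract does not reproduce the paper's formulae, but the interplay of its three design questions can be illustrated with a standard two-sample power calculation in which registration error contributes an additional, independent variance term. That additive-variance treatment is a simplifying assumption for illustration, not the paper's derivation.

```python
import math
from statistics import NormalDist

def min_detectable_difference(n, sigma_signal, sigma_reg, alpha=0.05, power=0.80):
    """Minimum detectable difference between normal and pathologic region means
    for a two-sided test, with registration error modelled as an independent
    additive variance term (illustrative assumption)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    sigma_total = math.sqrt(sigma_signal ** 2 + sigma_reg ** 2)
    return z * sigma_total * math.sqrt(2.0 / n)

def subjects_required(mdd, sigma_signal, sigma_reg, alpha=0.05, power=0.80):
    # Invert the formula above for the number of subjects per group.
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil(2.0 * (sigma_signal ** 2 + sigma_reg ** 2) * (z / mdd) ** 2)

# Larger registration error -> larger minimum detectable difference:
print(min_detectable_difference(n=20, sigma_signal=1.0, sigma_reg=0.0))
print(min_detectable_difference(n=20, sigma_signal=1.0, sigma_reg=0.5))
```

Holding n fixed, any registration error inflates the minimum detectable difference; equivalently, detecting the same difference requires more subjects.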
Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy
NASA Astrophysics Data System (ADS)
Fu, Juan; Chen, Xiaoqian; Huang, Yiyong
2013-12-01
Gauging the liquid fuel mass in a tank on a spacecraft under microgravity conditions is difficult. Without strong buoyancy, the configuration of the liquid and gas in the tank is uncertain, and more than one bubble may exist in the liquid. All of this affects the measurement accuracy of a liquid mass gauge, especially for the method called Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance and bubble resonance. A ground experimental apparatus was designed and built to validate the gauging method and the influence of different compression frequencies at different fill levels on the measurement accuracy. Harmonic phenomena should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy, and that the measurement accuracy increases as the compression frequency climbs at low fill levels, whereas low compression frequencies are the better choice for high fill levels. Liquid sloshing degrades the measurement accuracy when the surface is excited into waves by external disturbance at the liquid's natural frequency. The measurement accuracy remains acceptable under small-amplitude vibration.
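The CMG principle referenced here infers the ullage gas volume from the pressure response to a small driven volume change; the liquid occupies the rest of the tank. The sketch below shows the small-signal adiabatic relation only; it is illustrative physics, not the paper's apparatus or data, and all numbers are made up.

```python
def gas_volume(p_mean, delta_p, delta_v, gamma=1.4):
    """Ullage gas volume from the pressure amplitude delta_p produced by a
    small driven volume change delta_v, using the adiabatic small-signal
    relation dP = gamma * P * dV / V_gas (illustrative assumption)."""
    return gamma * p_mean * delta_v / delta_p

def liquid_mass(v_tank, p_mean, delta_p, delta_v, rho, gamma=1.4):
    # Liquid volume is whatever the gas does not occupy.
    return rho * (v_tank - gas_volume(p_mean, delta_p, delta_v, gamma))

# Hypothetical tank: 1.0 m^3 total, 100 kPa ullage pressure, 1e-5 m^3
# compression stroke, 7 Pa measured amplitude, propellant density 800 kg/m^3.
print(liquid_mass(v_tank=1.0, p_mean=100e3, delta_p=7.0, delta_v=1e-5, rho=800.0))
```

The smaller the gas volume (i.e. the higher the fill level), the larger the pressure amplitude for the same stroke, which is why compression frequency and fill level interact in the measurement accuracy discussed above.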
Wagener, Emily A; Kerr, William L
2017-10-20
It has been difficult to produce acceptable pecan butters, as the high oil content results in a product that flows and separates too easily. The objective of this work was to create pecan butters with varying oil levels (50-70%) and determine which would give the most acceptable product. Consumers rated pecan butters with 55-60% oil the most acceptable, whether roasted or not. Acceptability varied most in terms of texture and spreadability, but not flavor. Under large deformation, firmness varied from 51.8 g (70% oil) to 4,880 g (50% oil), while spreadability ranged from 19.2 to 7,748 g/s. Samples with 70% oil had the lowest viscosity and were Newtonian. Pecan butters with 50-55% oil had high viscosity and were shear thinning. Yield stress decreased with increasing oil content, ranging from 0.014 to 500 Pa. The storage modulus (G') increased from ∼7 Pa for samples with 70% oil up to 260,000 Pa for those with 50% oil. Correspondingly, tan δ decreased from 1 to 0.07, showing that the products take on much more solid-like behavior as oil is removed. In conclusion, the rheological properties of pecan butter were quite sensitive to the amount of oil in the product. Differences in acceptability were primarily due to texture and spreadability, suggesting there is a limited range of firmness and spreadability that consumers will deem acceptable. There has been considerable demand for butters and spreads made from a variety of culinary nuts. Pecans generally have too much oil (∼70%) to make a product with proper consistency and stability. In this study, some of the oil was removed to overcome this problem. It was found that pecan butter with 55-60% oil was most acceptable to consumers, with the levels of firmness, yield stress, and spreadability most similar to commercial nut butters. The oil was relatively simple to remove from unroasted nuts, thus manufacturers could easily produce a more acceptable pecan butter for the market. © 2017 Wiley Periodicals, Inc.
Automatic force balance calibration system
NASA Technical Reports Server (NTRS)
Ferris, Alice T. (Inventor)
1995-01-01
A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
A Priori Estimation of Organic Reaction Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emami, Fateme S.; Vahid, Amir; Wylie, Elizabeth K.
2015-07-21
A thermodynamically guided calculation of the free energies of substrate and product molecules allows for the estimation of the yields of organic reactions. The non-ideality of the system and the solvent effects are taken into account through activity coefficients calculated at the molecular level by perturbed-chain statistical associating fluid theory (PC-SAFT). The model is iteratively trained using a diverse set of reactions with previously reported yields. This trained model can then estimate a priori the yields of reactions not included in the training set with an accuracy of ca. ±15%. This ability has the potential to translate into significant economic savings through the selection, and then execution, of only those reactions that can proceed in good yields.
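The thermodynamic link between free energies and yield can be illustrated for an ideal A ⇌ B reaction. The reviewed model additionally applies PC-SAFT activity coefficients to handle non-ideality; this sketch omits that refinement and is not the paper's calculation.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equilibrium_yield(delta_g_rxn, temperature=298.15):
    """Equilibrium yield of B for an ideal A <=> B reaction with reaction
    free energy delta_g_rxn (J/mol): K = exp(-dG/RT), yield = K / (1 + K).
    Activity-coefficient corrections (as in PC-SAFT) are omitted here."""
    k_eq = math.exp(-delta_g_rxn / (R * temperature))
    return k_eq / (1.0 + k_eq)

for dg_kj in (10, 0, -10, -20):
    print(f"dG = {dg_kj:+3d} kJ/mol -> yield = {100 * equilibrium_yield(dg_kj * 1e3):.1f}%")
```

Even a modest free-energy difference of a few kJ/mol swings the ideal yield strongly, which is why accurate activity coefficients matter for quantitative predictions.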
Effect of reflection losses on stationary dielectric-filled nonimaging concentrators
NASA Astrophysics Data System (ADS)
Madala, Srikanth; Boehm, Robert F.
2016-10-01
The effect of Fresnel reflection and total internal reflection (TIR) losses on the performance parameters of refractive solar concentrators has often been downplayed, because most refractive solar concentrators are traditionally of the imaging type, yielding a line or point image on the absorber surface when interacting solely with the paraxial etendue ensured by solar tracking. In contrast, for refractive nonimaging solar concentrators that achieve a two-dimensional (rectangular strip) focus or three-dimensional (circular or elliptical) focus through interaction with both paraxial and nonparaxial etendue within the acceptance angle, the Fresnel reflection and TIR losses are significant, as they affect the performance parameters and thereby energy collection. A raytracing analysis has been carried out to illustrate the effects of Fresnel reflection and TIR losses on four types of stationary dielectric-filled nonimaging concentrators, namely the V-trough, compound parabolic concentrator, compound elliptical concentrator, and compound hyperbolic concentrator. The refractive index (RI) of the dielectric fill material determines the acceptance angle of a solid nonimaging collector. Larger refractive indices yield larger acceptance angles and thereby larger energy collection; however, they also increase the Fresnel reflection losses. This paper also assesses the relative benefit of increasing the RI from an energy collection standpoint.
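The trade-off described in the last sentences can be quantified at normal incidence with the standard Fresnel reflectance formula. The full raytracing analysis uses the angle-dependent Fresnel equations; this sketch covers only the entrance-face, normal-incidence case, and the example indices are typical values, not the paper's materials.

```python
def fresnel_normal(n1, n2):
    # Fresnel reflectance at normal incidence between media of index n1 and n2:
    # R = ((n1 - n2) / (n1 + n2))^2, the same for both polarizations.
    return ((n1 - n2) / (n1 + n2)) ** 2

# Entrance-face loss for representative dielectric fills in air (n1 = 1.0):
for n2 in (1.40, 1.49, 1.60):
    print(f"n = {n2}: R = {100 * fresnel_normal(1.0, n2):.1f}%")
```

A higher-index fill widens the acceptance angle but, as the loop shows, also raises the per-face reflection loss, which is exactly the competition the paper evaluates.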
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The Williams-type yield model is based on multiple regression analysis of historical time series data at the CRD level, pooled to the regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and that performance based on root mean square error appears to be acceptable for the intended AgRISTARS large-area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
Sentinel Lymph Node Biopsy in Endometrial Cancer: a New Standard of Care?
Sullivan, Stephanie A; Rossi, Emma C
2017-09-18
Lymph node status is one of the most important factors in determining prognosis and the need for adjuvant treatment in endometrial cancer (EMCA). Unfortunately, full lymphadenectomy bears significant surgical and postoperative risks. The majority of patients with clinical stage I disease will not have metastatic disease; thus, a full lymphadenectomy only increases morbidity in this population. The sentinel lymph node (SLN) biopsy has emerged as an alternative to complete lymphadenectomy in EMCA. By removing the highest-yield lymph nodes, the SLN biopsy has the same diagnostic ability as lymphadenectomy while minimizing morbidity. The FIRES trial (sensitivity of sentinel lymph node identification with robotic fluorescence imaging for detecting metastatic endometrial and cervical cancer), published this year, is the largest prospective, multi-institution trial investigating the accuracy of the SLN biopsy for endometrial and cervical cancer. This trial found an excellent sensitivity (97.2%) and a low false negative rate (3%) for the technique. The conclusions from the FIRES trial, and those of a recent meta-analysis, are that SLN biopsy has acceptable diagnostic accuracy in detecting lymphatic metastases and can replace lymphadenectomy for this diagnostic purpose. Controversy remains surrounding the SLN biopsy in high-risk disease and the use of adjuvant therapy in the setting of low-volume disease detected with ultrastaging. Current data suggest that the technique is accurate in high-risk disease and that the increased detection of metastasis helps guide adjuvant therapy, such that oncologic outcomes are likely not affected by forgoing a full lymphadenectomy. Further prospective study is needed to investigate the impact of low-volume metastatic disease on oncologic outcomes and the need for adjuvant therapy in these patients.
The quality of information about sickle cell disease on the Internet for youth.
Breakey, Vicky R; Harris, Lauren; Davis, Omar; Agarwal, Arnav; Ouellette, Carley; Akinnawo, Elizabeth; Stinson, Jennifer
2017-04-01
Adolescence is a vulnerable time for teens with sickle cell disease (SCD). Although there is evidence to support the use of web-based education to promote self-management skills in patients with chronic illnesses, the quality of SCD-related information on the Internet has not been assessed. A website review was conducted to appraise the quality, content, accuracy, readability, and desirability of online information for adolescents with SCD. Relevant keywords were searched on the most popular search engines, and websites meeting predetermined criteria were reviewed. The quality of information was appraised using the validated DISCERN tool. Two physicians independently rated website completeness and accuracy. Readability was documented using Simple Measure of Gobbledygook (SMOG) scores and the Flesch Reading Ease (FRE). Website features considered desirable by youth were tracked. The searches yielded >600 websites, with 25 unique hits meeting criteria. The overall quality of the information was "fair", and the average DISCERN rating score was 50.1 (±9.3, range 31.0-67.5). Only 12 of 25 (48%) websites had scores >50. The average completeness score was 20 of 29 (±5, range 12-27). No errors were identified. The mean SMOG score was 13.04 (±2.80, range 10.21-22.85) and the mean FRE score was 46.05 (±11.47; range 17.50-66.10), suggesting that the material was written well beyond the acceptable reading level for patient education. The websites were text-heavy and lacked the features that appeal to youth (chat, games, videos, etc.). Given the paucity of high-quality health information available for teens with SCD, it is essential that additional online resources be developed. © 2016 Wiley Periodicals, Inc.
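The two readability scores used here are simple closed-form formulas. Below is a sketch with a crude syllable counter; the heuristic, example text, and word regex are illustrative (real SMOG scoring also samples 30 sentences), so this is not the study's tooling.

```python
import math
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels. Adequate for a
    # sketch, not for production readability scoring.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(n_words, n_sentences, n_syllables):
    # Higher is easier; 60-70 is roughly "plain English".
    return 206.835 - 1.015 * (n_words / n_sentences) - 84.6 * (n_syllables / n_words)

def smog_grade(n_polysyllables, n_sentences):
    # SMOG grade from the count of words with 3+ syllables in the sampled sentences.
    return 1.0430 * math.sqrt(n_polysyllables * 30.0 / n_sentences) + 3.1291

text = ("Sickle cell disease changes the shape of red blood cells. "
        "Painful episodes can happen when blood flow is blocked.")
words = re.findall(r"[A-Za-z]+", text)
syllables = sum(count_syllables(w) for w in words)
print(f"FRE = {flesch_reading_ease(len(words), 2, syllables):.1f}")
```

A mean FRE of 46 and a mean SMOG of 13, as reported above, correspond to college-level text, far above the reading level usually recommended for patient education materials.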
Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.
ERIC Educational Resources Information Center
Kromrey, Jeffrey D.; Hines, Constance V.
1995-01-01
The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…
Comparison of Some Secondary Body Composition Algorithms
ERIC Educational Resources Information Center
Sutton, Robert A.; Miller, Carolyn
2006-01-01
Body composition measurements vary greatly in degree of measurement difficulty and accuracy. Hydrostatic weighing, chemical dilution or their equivalents were the accepted "gold" standards for assessing fat mass. Dual Energy X-ray Absorptiometry (DEXA) is fast replacing these techniques as the preferred standard. However, these direct measurement…
NASA Astrophysics Data System (ADS)
Nyckowiak, Jedrzej; Lesny, Jacek; Haas, Edwin; Juszczak, Radoslaw; Kiese, Ralf; Butterbach-Bahl, Klaus; Olejnik, Janusz
2014-05-01
Modeling of nitrous oxide emissions from soil is very complex. Many different biological and chemical processes take place in soils and determine the amount of nitrous oxide emitted. Additionally, biogeochemical models contain many detailed factors which may determine fluxes and other simulated variables. We used the LandscapeDNDC model to simulate N2O emissions, crop yields and soil physical properties for mineral cultivated soils in Poland. Nitrous oxide emissions were modeled for fields with winter wheat, winter rye, spring barley, triticale, potatoes and alfalfa crops. Simulations were carried out for the plots of the Brody arable experimental station of Poznan University of Life Sciences in western Poland and covered the period 2003-2012. Model accuracy and efficiency were determined by comparing simulation results with measurements of nitrous oxide emissions (taken with static chambers) from about 40 field campaigns. N2O emissions depend strongly on temperature and soil water content, so we also compared simulated soil temperature and soil water content at 10 cm depth with the daily measured values of these driving variables. We also compared simulated yield quantities for each individual experimental plot with yield quantities measured in the period 2003-2012. We conclude that the LandscapeDNDC model is capable of simulating soil N2O emissions, crop yields and physical properties of soil with satisfactory accuracy and efficiency.
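The abstract above evaluates model accuracy and efficiency against chamber measurements without naming the statistic. Nash-Sutcliffe efficiency (NSE) is a common choice for such simulation-vs-observation comparisons; the sketch below shows NSE as an assumption, not necessarily the authors' exact metric:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    # 1.0 is a perfect match; values <= 0 mean the model predicts no
    # better than the observed mean.
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Applied separately to daily N2O fluxes, soil temperature, and soil water content, a statistic like this gives one comparable "efficiency" number per driving variable.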
Development and evaluation of a vision based poultry debone line monitoring system
NASA Astrophysics Data System (ADS)
Usher, Colin T.; Daley, W. D. R.
2013-05-01
Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R2 of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and achieves up to 90% correlation with yield measurements performed manually. This same system is also able to determine the probability of bone chips remaining in the output product. The system is able to determine the presence/absence of clavicle bones with an accuracy of approximately 95% and fan bones with an accuracy of approximately 80%. This paper describes the approach and design of the system in detail, presents results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
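A minimal sketch of the calibration step described above: fit a line between image intensity and meat thickness, report the R^2 of the fit, then sum per-pixel thickness into a volume. The function names, the linear form, and the clipping of negative thicknesses are assumptions for illustration, not GTRI's implementation:

```python
import numpy as np

def fit_intensity_to_thickness(intensity, thickness):
    # Least-squares line thickness ~ a*intensity + b, plus the R^2 of
    # the fit (the abstract reports R^2 = 0.94 for this relationship).
    a, b = np.polyfit(intensity, thickness, 1)
    pred = a * np.asarray(intensity, dtype=float) + b
    ss_res = np.sum((np.asarray(thickness, dtype=float) - pred) ** 2)
    ss_tot = np.sum((thickness - np.mean(thickness)) ** 2)
    return (a, b), 1.0 - ss_res / ss_tot

def estimate_meat_volume(intensity_image, a, b, pixel_area_mm2):
    # Map each pixel's intensity to a thickness via the calibration line,
    # clip physically impossible negative values, and integrate.
    thickness = a * intensity_image + b
    return float(np.clip(thickness, 0, None).sum() * pixel_area_mm2)
```

With a calibrated (a, b), a single backlit frame image yields a volume estimate in one pass, which is consistent with the sub-second measurement time claimed above.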
Egger, Alexander E; Theiner, Sarah; Kornauth, Christoph; Heffeter, Petra; Berger, Walter; Keppler, Bernhard K; Hartinger, Christian G
2014-09-01
Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) was used to study the spatially-resolved distribution of ruthenium and platinum in viscera (liver, kidney, spleen, and muscle) originating from mice treated with the investigational ruthenium-based antitumor compound KP1339 or cisplatin, a potent, but nephrotoxic clinically-approved platinum-based anticancer drug. Method development was based on homogenized Ru- and Pt-containing samples (22.0 and 0.257 μg g(-1), respectively). Averaging yielded satisfactory precision and accuracy for both concentrations (3-15% and 93-120%, respectively); however, when considering only single data points, the highly concentrated Ru sample maintained satisfactory precision and accuracy, while the low-concentration Pt sample yielded low recoveries and precision, which could not be improved by use of internal standards ((115)In, (185)Re or (13)C). Matrix-matched standards were used for quantification in LA-ICP-MS, which yielded comparable metal distributions, i.e., enrichment in the cortex of the kidney in comparison with the medulla, a homogeneous distribution in the liver and the muscle, and areas of enrichment in the spleen. Elemental distributions were assigned to histological structures exceeding 100 μm in size. The accuracy of a quantitative LA-ICP-MS imaging experiment was validated by an independent method using microwave-assisted digestion (MW) followed by direct infusion ICP-MS analysis.
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.
2016-01-01
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
Glycerol particle cigarettes: a less harmful option for chronic smokers.
Sutherland, G; Russell, M A; Stapleton, J A; Feyerabend, C
1993-01-01
In 20 smokers who switched to a new type of virtually tar free cigarette for three days, average nicotine intake was reduced by 44%, carbon monoxide intake increased by 19%, while estimated tar intake was reduced by about 90%. Such cigarettes pose substantially less risk of cancer and chronic obstructive lung disease than conventional cigarettes, and their acceptability and safety could be improved by increasing nicotine yield, reducing carbon monoxide yield, and improving the flavour. PMID:8511737
Pilot production and testing of high efficiency wraparound contact solar cells
NASA Technical Reports Server (NTRS)
Gillanders, M.
1981-01-01
Modifications were made to the process sequence until a device capable of high performance and satisfactory processing yields could be fabricated on a production line. Pilot production resulted in a 2 x 4 cm screen printed dielectric wraparound contact solar cell with average 28°C, Air Mass Zero (AM0) conversion efficiencies of 14.2% and reasonable process yields. This high performance was obtained with two different back contact configurations, making the device acceptable for many applications.
Fan, Yong; Du, Jin Peng; Liu, Ji Jun; Zhang, Jia Nan; Qiao, Huan Huan; Liu, Shi Chang; Hao, Ding Jun
2018-06-01
A miniature spine-mounted robot has recently been introduced to further improve the accuracy of pedicle screw placement in spine surgery. However, the differences in accuracy between the robotic-assisted (RA) technique and the free-hand with fluoroscopy-guided (FH) method for pedicle screw placement are controversial. A meta-analysis was conducted to focus on this problem. Several randomized controlled trials (RCTs) and cohort studies involving RA and FH and published before January 2017 were searched for using the Cochrane Library, Ovid, Web of Science, PubMed, and EMBASE databases. A total of 55 papers were selected. After the full-text assessment, 45 clinical trials were excluded. The final meta-analysis included 10 articles. The accuracy of pedicle screw placement in the RA group was significantly greater than in the FH group (odds ratio for "perfect accuracy": 95% confidence interval 1.38-2.07, P < .01; odds ratio for "clinically acceptable" accuracy: 95% confidence interval 1.17-2.08, P < .01). There are significant differences in accuracy between RA surgery and FH surgery. It was demonstrated that the RA technique is superior to the conventional method in terms of the accuracy of pedicle screw placement.
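The meta-analysis above reports odds ratios with 95% confidence intervals. For a single 2x2 table, a Wald-type interval on the log odds ratio can be sketched as below; pooling across the 10 included studies, as an actual meta-analysis does, requires an additional weighting step (e.g., Mantel-Haenszel) not shown here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a, b = events / non-events in group 1 (e.g., accurately
    # placed / misplaced screws with RA); c, d = the same for group 2 (FH).
    # Returns the odds ratio with a Wald 95% CI computed on the log scale.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

A CI that excludes 1.0, like the 1.38-2.07 interval quoted above, is what makes the accuracy difference statistically significant.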
Keng, Shian-Ling; Ji, Jie Lisa; Moore, Tyler; Minkel, Jared; Dichter, Gabriel S.
2015-01-01
Mood disorders are characterized by impaired emotion regulation abilities, reflected in alterations in frontolimbic brain functioning during regulation. However, little is known about differences in brain function when comparing regulatory strategies. Reappraisal and emotional acceptance are effective in downregulating negative affect, and are components of effective depression psychotherapies. Investigating neural mechanisms of reappraisal vs emotional acceptance in remitted major depressive disorder (rMDD) may yield novel mechanistic insights into depression risk and prevention. Thirty-seven individuals (18 rMDD, 19 controls) were assessed during a functional magnetic resonance imaging task requiring reappraisal, emotional acceptance or no explicit regulation while viewing sad images. Lower negative affect was reported following reappraisal than acceptance, and was lower following acceptance than no explicit regulation. In controls, the acceptance > reappraisal contrast revealed greater activation in left insular cortex and right prefrontal gyrus, and less activation in several other prefrontal regions. Compared with controls, the rMDD group had greater paracingulate and right midfrontal gyrus (BA 8) activation during reappraisal relative to acceptance. Compared with reappraisal, acceptance is associated with activation in regions linked to somatic and emotion awareness, although this activation is associated with less reduction in negative affect. Additionally, a history of MDD moderated these effects. PMID:25617820
New public QSAR model for carcinogenicity
2010-01-01
Background One of the main goals of the new chemical regulation REACH (Registration, Evaluation and Authorization of Chemicals) is to fill the gaps in data on chemical properties affecting human health. (Q)SAR models are accepted as a suitable source of information. The EU funded CAESAR project aimed to develop models for prediction of 5 endpoints for regulatory purposes. Carcinogenicity is one of the endpoints under consideration. Results Models for prediction of carcinogenic potency according to the specific requirements of chemical regulation were developed. The dataset of 805 non-congeneric chemicals extracted from the Carcinogenic Potency Database (CPDBAS) was used. The Counter Propagation Artificial Neural Network (CP ANN) algorithm was implemented. In this article, two alternative models for predicting carcinogenicity are described. The first model employed eight MDL descriptors (model A) and the second twelve Dragon descriptors (model B). CAESAR's models have been assessed according to the OECD principles for the validation of QSAR. Model validity was assessed with a wide series of statistical checks. Models A and B yielded accuracies on the training set (644 compounds) of 91% and 89%, respectively; the accuracies on the test set (161 compounds) were 73% and 69%, while the specificities were 69% and 61%, respectively. Sensitivity in both cases was equal to 75%. The accuracy of the leave-20%-out cross validation for the training set of models A and B was 66% and 62%, respectively. To verify that the models perform correctly on new compounds, external validation was carried out. The external test set was composed of 738 compounds. External validation yielded accuracies of 61.4% and 60.0%, sensitivities of 64.0% and 61.8%, and specificities of 58.9% and 58.4%, respectively, for models A and B.
Conclusion Carcinogenicity is a particularly important endpoint, and it is expected that QSAR models will not replace human experts' opinions and conventional methods. However, we believe that a combination of several methods will provide useful support to the overall evaluation of carcinogenicity. In the present paper, models for classification of carcinogenic compounds using MDL and Dragon descriptors were developed. The models could be used to set priorities among chemicals for further testing. The models at the CAESAR site were implemented in Java and are publicly accessible. PMID:20678182
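The accuracy, sensitivity, and specificity figures quoted in the record above all derive from a binary confusion matrix. A minimal sketch of these metrics (the label convention and function name are illustrative, not the CAESAR code):

```python
def classification_metrics(y_true, y_pred):
    # Binary labels: 1 = carcinogen (positive class), 0 = non-carcinogen.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }
```

Reporting all three matters for this endpoint: a model can reach high accuracy while missing carcinogens (low sensitivity), which is exactly the failure mode regulators care about.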
NASA Astrophysics Data System (ADS)
Vilhena de Moraes, Rodolpho; Cristiane Pardal, Paula; Koiti Kuga, Helio
The problem of orbit determination consists essentially of estimating parameter values that completely specify the body's trajectory in space, processing a set of measurements from this body. Such observations can be collected through a conventional tracking network on Earth or through sensors like GPS. The Global Positioning System (GPS) is a powerful and low-cost way to compute orbits for artificial Earth satellites. The Topex/Poseidon satellite is normally used as a reference for analyzing this system for space positioning. The orbit determination of artificial satellites is a nonlinear problem in which the disturbing forces, such as the geopotential and direct solar radiation pressure, are not easily modeled. Through an onboard GPS receiver it is possible to obtain measurements (pseudo-range and phase) that can be used to estimate the state of the orbit. We analyze the modeling of the orbit of an artificial satellite, using signals of the GPS constellation and least squares algorithms as the estimation method, with the aim of analyzing the performance of the orbit estimation process. Accuracy is not the main goal; rather, we verify how differences in modeling can affect the final accuracy of the orbit determination. To accomplish that, the following effects were considered: perturbations up to high degree and order in the geopotential coefficients; direct solar radiation pressure; Sun attraction; and Moon attraction. The position of the GPS antenna on the satellite body was also considered, which amounts to accounting for the influence of the satellite attitude motion in the orbit determination process. Although not providing the ultimate accuracy, pseudo-range measurements corrected for ionospheric effects were considered sufficient for such analysis. The measurements were used to feed the batch least squares orbit determination process, in order to yield conclusive results about the orbit modeling issue.
An application has been done, using such GPS data, for orbit determination of the Topex/Poseidon satellite, whose accurate ephemerides are freely available at Internet. It is shown that from a poor but acceptable modeling up to all effects included, the accuracy can vary from about 30m to 8m. Test results for short period (2 hours) and for long period (24 hours) are also shown.
Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.
2014-01-01
Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. 
The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281
USDA-ARS?s Scientific Manuscript database
Use of lamb body or chilled carcass weights; live-animal ultrasound or direct carcass measurements of backfat thickness (BF; mm) and LM area (LMA; cm2); and carcass body wall thickness (BWall; mm) to predict carcass yield and value was evaluated using 512 crossbred lambs produced over 3 yr by mating...
Space Station racks weight and CG measurement using the rack insertion end-effector
NASA Technical Reports Server (NTRS)
Brewer, William V.
1994-01-01
The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
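A hedged sketch of the statics underlying the measurement described above: with vertical reactions measured at known support points, the total weight is the sum of the reactions, and the CG is their force-weighted centroid. This is the general principle only; the RIEE's actual sensor placement schemes and inverse models are not reproduced:

```python
import numpy as np

def weight_and_cg(forces, positions):
    # forces: vertical reaction measured at each support (same units).
    # positions: (x, y) location of each support in the rack plane.
    # Static equilibrium: weight = sum of reactions; moment balance about
    # each axis gives the CG as the force-weighted centroid.
    f = np.asarray(forces, dtype=float)
    p = np.asarray(positions, dtype=float)
    w = f.sum()
    cg = (f[:, None] * p).sum(axis=0) / w
    return w, cg
```

Running this forward model with each sensor's rated accuracy perturbing `forces` is one way to estimate the accuracy of an overall sensor scheme, as the abstract describes.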
NASA Astrophysics Data System (ADS)
Müller-Putz, Gernot R.; Scherer, Reinhold; Brauneis, Christian; Pfurtscheller, Gert
2005-12-01
Brain-computer interfaces (BCIs) can be realized on the basis of steady-state evoked potentials (SSEPs). These types of brain signals resulting from repetitive stimulation have the same fundamental frequency as the stimulation but also include higher harmonics. This study investigated how the classification accuracy of a 4-class BCI system can be improved by incorporating visually evoked harmonic oscillations. The current study revealed that the use of three SSVEP harmonics yielded a significantly higher classification accuracy than was the case for one or two harmonics. During feedback experiments, the five subjects investigated reached a classification accuracy between 42.5% and 94.4%.
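A minimal sketch of the harmonic-features idea described above: read spectral power off the FFT bins nearest the stimulation frequency and its harmonics. This is an illustrative assumption, not the authors' classifier front end:

```python
import numpy as np

def harmonic_powers(signal, fs, f0, n_harmonics=3):
    # Power at the stimulation frequency f0 and its harmonics, taken from
    # the FFT bin nearest each k*f0. A 4-class SSVEP BCI would compute
    # this for each of the four flicker frequencies and pick the largest.
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return [spec[np.argmin(np.abs(freqs - k * f0))]
            for k in range(1, n_harmonics + 1)]
```

Using three harmonics instead of one simply extends this feature vector, which is the change the study found to improve classification accuracy.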
Heterogenic Solid Biofuel Sampling Methodology and Uncertainty Associated with Prompt Analysis
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Patiño, David; Collazo, Joaquín
2010-01-01
Accurate determination of the properties of biomass is of particular interest in studies on biomass combustion or cofiring. The aim of this paper is to develop a methodology for prompt analysis of heterogeneous solid fuels with an acceptable degree of accuracy. Special care must be taken with the sampling procedure to achieve an acceptable degree of error and low statistical uncertainty. A sampling and error determination methodology for prompt analysis is presented and validated. Two approaches for the propagation of errors are also given and some comparisons are made in order to determine which may be better in this context. Results show in general low, acceptable levels of uncertainty, demonstrating that the samples obtained in the process are representative of the overall fuel composition. PMID:20559506
Comparing Features for Classification of MEG Responses to Motor Imagery.
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. 
Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system.
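Of the features compared above, common spatial patterns (CSP) can be sketched as whitening followed by eigendecomposition of the class covariances. This is a textbook formulation, not the authors' pipeline, and the trace normalization is one common choice among several:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples), e.g.,
    # left-hand vs right-hand motor imagery epochs.
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in the
    # whitened space; the extreme eigenvalues correspond to the most
    # class-discriminative spatial filters.
    d, u = np.linalg.eigh(ca + cb)
    p = u @ np.diag(1.0 / np.sqrt(d)) @ u.T      # whitening matrix
    d2, v = np.linalg.eigh(p @ ca @ p)
    w = (p @ v).T                                 # spatial filters as rows
    order = np.argsort(d2)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return w[pick]
```

The log-variance of each filtered trial is then the feature fed to the classifier; combinations such as CSP+Morlet in the study replace the plain variance with band-specific power.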
Erbe, M; Hayes, B J; Matukumalli, L K; Goswami, S; Bowman, P J; Reich, C M; Mason, B A; Goddard, M E
2012-07-01
Achieving accurate genomic estimated breeding values for dairy cattle requires a very large reference population of genotyped and phenotyped individuals. Assembling such reference populations has been achieved for breeds such as Holstein, but is challenging for breeds with fewer individuals. An alternative is to use a multi-breed reference population, such that smaller breeds gain some advantage in accuracy of genomic estimated breeding values (GEBV) from information from larger breeds. However, this requires that marker-quantitative trait loci associations persist across breeds. Here, we assessed the gain in accuracy of GEBV in Jersey cattle as a result of using a combined Holstein and Jersey reference population, with either 39,745 or 624,213 single nucleotide polymorphism (SNP) markers. The surrogate used for accuracy was the correlation of GEBV with daughter trait deviations in a validation population. Two methods were used to predict breeding values, either a genomic BLUP (GBLUP_mod), or a new method, BayesR, which used a mixture of normal distributions as the prior for SNP effects, including one distribution that set SNP effects to zero. The GBLUP_mod method scaled both the genomic relationship matrix and the additive relationship matrix to a base at the time the breeds diverged, and regressed the genomic relationship matrix to account for sampling errors in estimating relationship coefficients due to a finite number of markers, before combining the 2 matrices. Although these modifications did result in less biased breeding values for Jerseys compared with an unmodified genomic relationship matrix, BayesR gave the highest accuracies of GEBV for the 3 traits investigated (milk yield, fat yield, and protein yield), with an average increase in accuracy compared with GBLUP_mod across the 3 traits of 0.05 for both Jerseys and Holsteins. 
The advantage was limited for either Jerseys or Holsteins in using 624,213 SNP rather than 39,745 SNP (0.01 for Holsteins and 0.03 for Jerseys, averaged across traits). Even this limited and nonsignificant advantage was only observed when BayesR was used. An alternative panel, which extracted the SNP in the transcribed part of the bovine genome from the 624,213 SNP panel (to give 58,532 SNP), performed better, with an increase in accuracy of 0.03 for Jerseys across traits. This panel captures much of the increased genomic content of the 624,213 SNP panel, with the advantage of a greatly reduced number of SNP effects to estimate. Taken together, using this panel, a combined breed reference and using BayesR rather than GBLUP_mod increased the accuracy of GEBV in Jerseys from 0.43 to 0.52, averaged across the 3 traits. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
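The GBLUP variants discussed above are equivalent to a ridge regression on markers (SNP-BLUP). A minimal sketch under an assumed heritability follows; the lambda choice and centering scheme are simplifying assumptions, not the modified model of Erbe et al.:

```python
import numpy as np

def snp_blup(Z, y, h2=0.5):
    # SNP-BLUP, the marker-effects form of GBLUP: ridge regression of
    # phenotypes on centered marker codes (0/1/2). The ridge parameter is
    # set from an assumed heritability h2 (one common convention).
    n, m = Z.shape
    mu, zbar = y.mean(), Z.mean(axis=0)
    lam = m * (1.0 - h2) / h2
    Zc = Z - zbar
    effects = np.linalg.solve(Zc.T @ Zc + lam * np.eye(m), Zc.T @ (y - mu))
    return effects, mu, zbar

def gebv(Znew, effects, mu, zbar):
    # Genomic estimated breeding values for (possibly new) genotypes,
    # centered with the training-set allele means.
    return mu + (Znew - zbar) @ effects
```

BayesR differs from this by replacing the single Gaussian shrinkage with a mixture prior (including a point mass at zero), which is what allowed it to exploit large-effect loci such as DGAT1 better than GBLUP in the studies above.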
An Overhead Projection Demonstration of Optical Activity
ERIC Educational Resources Information Center
Hill, John W.
1973-01-01
Describes the use of two polarizing lenses, a yellow filter, an oatmeal box, a piece of cardboard, a 1,000 ml beaker, and an overhead projector to demonstrate compound optical activity to large classes. Reports accuracy within 1-2 degrees of generally accepted values. (CC)
Teaching Grammar to Adult English Language Learners: Focus on Form. CAELA Network Brief
ERIC Educational Resources Information Center
Gallup Rodriguez, Amber
2009-01-01
Many adult English language learners place a high value on learning grammar. Perceiving a link between grammatical accuracy and effective communication, they associate excellent grammar with opportunities for employment and promotion, the attainment of educational goals, and social acceptance by native speakers. Reflecting the disagreement that…
45 CFR 261.41 - How will we determine the caseload reduction credit?
Code of Federal Regulations, 2010 CFR
2010-10-01
... ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN..., or other administrative data sources and analyses. (2) We will accept the information and estimates... closures, or other administrative data sources to validate the accuracy of the State estimates. (b) In...
USDA-ARS?s Scientific Manuscript database
The use of automated methods to estimate canopy cover (CC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive CC measurements. Wide acceptance has been delayed because of the limitations of these methods. This work introduces a novel ...
In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must ...
46 CFR 170.185 - Stability test preparations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... REQUIREMENTS FOR ALL INSPECTED VESSELS Determination of Lightweight Displacement and Centers of Gravity § 170... partial filling on the location of the center of gravity and on the displacement can be accurately... acceptable accuracy in calculating the center of gravity and displacement of the unit. (g) The stability test...
46 CFR 170.185 - Stability test preparations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... REQUIREMENTS FOR ALL INSPECTED VESSELS Determination of Lightweight Displacement and Centers of Gravity § 170... partial filling on the location of the center of gravity and on the displacement can be accurately... acceptable accuracy in calculating the center of gravity and displacement of the unit. (g) The stability test...
46 CFR 170.185 - Stability test preparations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... REQUIREMENTS FOR ALL INSPECTED VESSELS Determination of Lightweight Displacement and Centers of Gravity § 170... partial filling on the location of the center of gravity and on the displacement can be accurately... acceptable accuracy in calculating the center of gravity and displacement of the unit. (g) The stability test...
46 CFR 170.185 - Stability test preparations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... REQUIREMENTS FOR ALL INSPECTED VESSELS Determination of Lightweight Displacement and Centers of Gravity § 170... partial filling on the location of the center of gravity and on the displacement can be accurately... acceptable accuracy in calculating the center of gravity and displacement of the unit. (g) The stability test...
The percentage of impervious surface area in a watershed has been widely recognized as a key indicator of terrestrial and aquatic ecosystem condition. Although the use of the impervious indicator is widespread, there is currently no consistent or mutually accepted method of compu...
ERIC Educational Resources Information Center
Kaleba, Frank
2008-01-01
The central problem for the facility manager of large portfolios is not the accuracy of data, but rather data integrity. Data integrity means that it's (1) acceptable to the users; (2) based upon an objective source; (3) reproducible; and (4) internally consistent. Manns and Katsinas, in their January/February 2006 Facilities Manager article…
Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston
2009-01-01
Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
1990-01-01
The accuracy of high-alpha slender-body theory (HASBT) for bodies with elliptical cross-sections is presently demonstrated by means of a comparison with exact solutions for incompressible potential flow over a wide range of ellipsoid geometries and angles of attack and sideslip. The addition of the appropriate trigonometric coefficients to the classical slender-body theory decomposition yields the formally correct HASBT, and results in accuracies previously considered unattainable.
Proceedings of Technical Sessions, Volumes 1 and 2: the LACIE Symposium
NASA Technical Reports Server (NTRS)
1979-01-01
The technical design of the Large Area Crop Inventory Experiment is examined and data acquired over 3 global crop years is analyzed with respect to (1) sampling and aggregation; (2) growth size estimation; (3) classification and mensuration; (4) yield estimation; and (5) accuracy assessment. Seventy-nine papers delivered at conference sessions cover system implementation and operation; data processing systems; experiment results and accuracy; supporting research and technology; and the USDA application test system.
Development of guidelines for pedestrian safety treatments at signalized intersections.
DOT National Transportation Integrated Search
2012-01-01
For intersections with a permissive or protected-permissive left-turn mode, pedestrians cross during the permissive period. This operation requires the left-turn driver to yield to both opposing vehicles and pedestrians, prior to accepting a gap ...
40 CFR 63.526 - Monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... vent. (D) Design analysis based on accepted chemical engineering principles, measurable process.... (i) For the purpose of determining de minimis status for emission points, engineering assessment may... operating conditions expected to yield the highest flow rate and concentration. Engineering assessment...
NASA Astrophysics Data System (ADS)
Qur’ania, A.; Sarinah, I.
2018-03-01
People often misidentify jasmine species by looking only at flower colour: not all white flowers are jasmine, and not all jasmine flowers are white; some jasmine is yellow, and some is white and purple. The aim of this research is to identify jasmine flowers (Jasminum sp.) from the shape of the flower in images, using Sobel edge detection and the k-Nearest Neighbor (k-NN) classifier. Edge detection extracts the flower shape by sharpening the borders in a digital image, while k-NN classifies each test object into the class of the training objects nearest to it. The data used in this study are three types of jasmine: white jasmine (Jasminum sambac), gambir jasmine (Jasminum pubescens), and Japanese jasmine (Pseuderanthemum reticulatum). Testing on jasmine flower images resized to 50 × 50, 100 × 100, and 150 × 150 pixels yielded an accuracy of 84%. Varying the number of nearest neighbors in the k-NN method over 5, 10 and 15 produced different accuracy rates: the 5 and 10 nearest neighbors both yielded 84%, whereas the 15 nearest neighbors yielded a lower accuracy of 65.2%.
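The two components of this pipeline can be sketched compactly. Below is a minimal illustration of Sobel gradient-magnitude edge detection and majority-vote k-NN classification, assuming only NumPy; the function names and data are illustrative, not the authors' implementation:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a 2-D grayscale image (float array)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, h - 1):          # skip the 1-pixel border
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)

def knn_predict(train_X, train_y, x, k):
    """Majority vote among the k training samples closest to x (Euclidean)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

In the study's setting, the edge map of each resized flower image would be flattened into the feature vector passed to `knn_predict`.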
Nikkels, A F; Debrus, S; Sadzot-Delvaux, C; Piette, J; Rentier, B; Piérard, G E
1995-12-01
Early and specific recognition of varicella zoster virus (VZV) infection is of vital concern in immunocompromised patients. The aim of this study was to compare the diagnostic accuracy of histochemical and immunohistochemical identification of the VZV ORF63 encoded protein (IE63) and of the VZV late protein gE on smears and formalin-fixed paraffin-embedded skin sections taken from lesions clinically diagnosed as varicella (n = 15) and herpes zoster (n = 51). Microscopic examinations of Tzanck smears and skin sections yielded a diagnostic accuracy of Herpesviridae infections in 66.7% (10/15) and 92.3% (12/13) of varicella, and 74.4% (29/39) and 87.8% (43/49) of herpes zoster, respectively. Immunohistochemistry applied to varicella provided a type-specific virus diagnostic accuracy of 86.7% (13/15; IE63) and 100% (15/15; gE) on smears, and of 92.3% for both VZV proteins on skin sections. In herpes zoster, the diagnostic accuracy of immunohistochemistry reached 92.3% (36/39; IE63) and 94.9% (37/39; gE) on smears, and 91.7% (44/48; IE63) and 91.8% (45/49; gE) on skin sections. These findings indicate that the immunohistochemical detection of IE63 and gE on both smears and skin sections yields a higher specificity and sensitivity than standard microscopic assessments.
NASA Astrophysics Data System (ADS)
Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively-collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on 2mm grade vs. Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2mm grade, significantly greater among radiologists than surgeon raters. Mean absolute translational/angular accuracies were 1.75mm/3.13° and 1.20mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on post-operative imaging, if reported, may be more reliable if performed in multiple by radiologist raters.
NASA Astrophysics Data System (ADS)
Vrugt, Jasper A.; Beven, Keith J.
2018-04-01
This essay illustrates some recent developments to the DiffeRential Evolution Adaptive Metropolis (DREAM) MATLAB toolbox of Vrugt (2016) to delineate and sample the behavioural solution space of set-theoretic likelihood functions used within the GLUE (Limits of Acceptability) framework (Beven and Binley, 1992, 2014; Beven and Freer, 2001; Beven, 2006). This work builds on the DREAM(ABC) algorithm of Sadegh and Vrugt (2014) and enhances significantly the accuracy and CPU-efficiency of Bayesian inference with GLUE. In particular it is shown how lack of adequate sampling in the model space might lead to unjustified model rejection.
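The limits-of-acceptability idea at the heart of this work can be illustrated with a naive rejection sampler: a parameter set is behavioural only if every simulated output falls within its observation's acceptability bounds. This sketch (all names illustrative) omits the adaptive DREAM proposal machinery, which is precisely what makes sampling such spaces efficient:

```python
import numpy as np

def glue_loa_sample(model, prior_draw, obs, limits, n_draws):
    """Rejection-style sketch of GLUE with limits of acceptability:
    keep parameter sets whose simulated outputs all fall inside the
    per-observation acceptability bounds. (DREAM replaces this blind
    rejection with adaptive MCMC; only the acceptance rule is shown.)"""
    behavioural = []
    for _ in range(n_draws):
        theta = prior_draw()
        sim = model(theta)
        if np.all(np.abs(sim - obs) <= limits):
            behavioural.append(theta)
    return behavioural
```

With a linear toy model `sim = theta * x` and observations generated at `theta = 2`, only draws near 2 survive the acceptability test.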
Soltani, Esmail; Bahrainian, Seyed Abdolmajid; Masjedi Arani, Abbas; Farhoudian, Ali; Gachkar, Latif
2016-06-01
Social anxiety disorder is often related to specific impairment or distress in different areas of life, including occupational, social and family settings. The purpose of the present study was to examine the psychometric properties of the Persian version of the social anxiety-acceptance and action questionnaire (SA-AAQ) in university students. In this descriptive cross-sectional study, 324 students from Shahid Beheshti University of Medical Sciences participated via the cluster sampling method during 2015. Factor analysis by the principal component analysis method, internal consistency analysis, and convergent and divergent validity were conducted to examine the validity of the SA-AAQ. To calculate the reliability of the SA-AAQ, Cronbach's alpha and test-retest reliability were used. The results from factor analysis by the principal component analysis method yielded three factors that were named acceptance, action and non-judging of experience. The three-factor solution explained 51.82% of the variance. Evidence for the internal consistency of the SA-AAQ was obtained via calculating correlations between the SA-AAQ and its subscales. Support for the convergent and discriminant validity of the SA-AAQ via its correlations with the acceptance and action questionnaire - II, social interaction anxiety scale, cognitive fusion questionnaire, believability of anxious feelings and thoughts questionnaire, valued living questionnaire and WHOQOL-BREF was obtained. The reliability of the SA-AAQ via calculating Cronbach's alpha and test-retest coefficients yielded values of 0.84 and 0.84, respectively. The Iranian version of the SA-AAQ has acceptable levels of psychometric properties in university students. The SA-AAQ is a valid and reliable measure to be utilized in research investigations and therapeutic interventions.
NASA Astrophysics Data System (ADS)
Beretta, Gian Paolo; Rivadossi, Luca; Janbozorgi, Mohammad
2018-04-01
Rate-Controlled Constrained-Equilibrium (RCCE) modeling of complex chemical kinetics provides acceptable accuracy with far fewer differential equations than the fully Detailed Kinetic Model (DKM). Since its introduction by James C. Keck, a drawback of the RCCE scheme has been the absence of an automatable, systematic procedure to identify the constraints that most effectively warrant a desired level of approximation for a given range of initial, boundary, and thermodynamic conditions. An optimal constraint identification procedure has recently been proposed. Given a DKM with S species, E elements, and R reactions, the procedure starts by running a probe DKM simulation to compute an S-vector that we call the overall degree of disequilibrium (ODoD), because its scalar product with the S-vector formed by the stoichiometric coefficients of any reaction yields that reaction's degree of disequilibrium (DoD). The ODoD vector evolves in the same (S-E)-dimensional stoichiometric subspace spanned by the R stoichiometric S-vectors. Next we construct the rank-(S-E) matrix of ODoD traces obtained from the probe DKM numerical simulation and compute its singular value decomposition (SVD). By retaining only the first C largest singular values of the SVD and setting all the others to zero, we obtain the best rank-C approximation of the matrix of ODoD traces, whereby its columns span a C-dimensional subspace of the stoichiometric subspace. This in turn yields the best approximation of the evolution of the ODoD vector in terms of only C parameters that we call the constraint potentials. The resulting order-C RCCE approximate model reduces the number of independent differential equations related to species, mass, and energy balances from S+2 to C+E+2, with substantial computational savings when C ≪ S-E.
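The rank-C truncation step described above is a direct application of the Eckart-Young theorem; a minimal NumPy sketch (the matrix name `odod_traces` is an assumption, not the authors' code):

```python
import numpy as np

def best_rank_c(odod_traces, C):
    """Best rank-C approximation (in the Frobenius-norm sense) of the
    ODoD trace matrix, via truncated singular value decomposition.
    The columns of the result span the C-dimensional constraint subspace."""
    U, s, Vt = np.linalg.svd(odod_traces, full_matrices=False)
    return U[:, :C] @ np.diag(s[:C]) @ Vt[:C, :]
```

Keeping C singular values and zeroing the rest is exactly what the abstract describes; the retained left-singular directions parameterize the constraint potentials.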
Accuracy assessment in the Large Area Crop Inventory Experiment
NASA Technical Reports Server (NTRS)
Houston, A. G.; Pitts, D. E.; Feiveson, A. H.; Badhwar, G.; Ferguson, M.; Hsu, E.; Potter, J.; Chhikara, R.; Rader, M.; Ahlers, C.
1979-01-01
The Accuracy Assessment System (AAS) of the Large Area Crop Inventory Experiment (LACIE) was responsible for determining the accuracy and reliability of LACIE estimates of wheat production, area, and yield, made at regular intervals throughout the crop season, and for investigating the various LACIE error sources, quantifying these errors, and relating them to their causes. Some results of using the AAS during the three years of LACIE are reviewed. As the program culminated, AAS was able not only to meet the goal of obtaining accurate statistical estimates of sampling and classification accuracy, but also the goal of evaluating component labeling errors. Furthermore, the ground-truth data processing matured from collecting data for one crop (small grains) to collecting, quality-checking, and archiving data for all crops in a LACIE small segment.
NASA Technical Reports Server (NTRS)
Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)
1979-01-01
A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
NASA Astrophysics Data System (ADS)
Brookman, T. H.; Whittaker, T. E.; King, P. L.; Horton, T. W.
2011-12-01
Stable isotope dendroclimatology is a burgeoning field in palaeoclimate science due to its unique potential to contribute (sub)annually resolved climate records, over millennial timescales, to the terrestrial palaeoclimate record. Until recently, time-intensive methods precluded long-term climate reconstructions. Advances in continuous-flow mass spectrometry and isolation methods for α-cellulose (ideal for palaeoclimate studies as, unlike other wood components, it retains its initial isotopic composition) have made long-term, calendar-dated palaeoclimate reconstructions a viable proposition. The Modified Brendel (mBrendel) α-cellulose extraction method is a fast, cost-effective way of preparing whole-wood samples for stable oxygen and carbon isotope analysis. However, resinous woods often yield incompletely processed α-cellulose using the standard mBrendel approach. As climate signals may be recorded by small (<1%) isotopic shifts, it is important to investigate whether incomplete processing affects the accuracy and precision of tree-ring isotopic records. In an effort to address this methodological issue, we investigated three highly resinous woods: kauri (Agathis australis), ponderosa pine (Pinus ponderosa) and huon pine (Lagarostrobos franklinii). Samples of each species were treated with 16 iterations of the mBrendel, varying reaction temperature, time and reagent volumes. Products were investigated using microscopic and bulk transmission Fourier transform infrared spectroscopy (FTIR) to reveal variations in the level of processing; poorly-digested fibres display a peak at 1520 cm⁻¹ suggesting residual lignin, and a peak at ~1600 cm⁻¹ in some samples suggests retained resin. Despite the different levels of purity, replicate analyses of samples processed by high temperature digestion yielded consistent δ18O within and between experiments.
All α-cellulose samples were 5-7% enriched compared to the whole-wood, suggesting that even incomplete processing at high temperature can provide acceptable δ18O analytical external precision. For kauri, short, lower temperature extractions produced α-cellulose with δ18O consistently ~1% lower than longer, higher temperature kauri experiments. These findings suggest that temperature and time are significant variables that influence the analytical precision of α-cellulose stable isotope analysis and that resinous hardwoods (e.g. kauri) may require longer and/or hotter digestions than softwoods. The effects of mBrendel variants on the carbon isotope ratio precision of α-cellulose extracts will also be presented. Our findings indicate that the standard mBrendel α-cellulose extraction method may not fully remove lignins and resins depending on the type of wood being analysed. Residual impurities can decrease analytical precision and accuracy. Fortunately, FTIR analysis prior to isotopic analysis is a relatively fast and cost effective way to determine α-cellulose extract purity, ultimately improving the data quality, accuracy and utility of tree-ring based stable isotopic climate records.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholey, J. E.; Lin, L.; Ainsley, C. G.
2015-06-15
Purpose: To evaluate the accuracy and limitations of a commercially-available treatment planning system’s (TPS’s) dose calculation algorithm for proton pencil-beam scanning (PBS) and present a novel technique to efficiently derive a clinically-acceptable beam model. Methods: In-air fluence profiles of PBS spots were modeled in the TPS alternately as single-Gaussian (SG) and double-Gaussian (DG) functions, based on fits to commissioning data. Uniform-fluence, single-energy-layer square fields of various sizes and energies were calculated with both beam models and delivered to water. Dose was measured at several depths. Motivated by observed discrepancies in measured-versus-calculated dose comparisons, a third model was constructed based on double-Gaussian parameters contrived through a novel technique developed to minimize these differences (DGC). Eleven cuboid-dose-distribution-shaped fields with varying range/modulation and field size were subsequently generated in the TPS, using each of the three beam models described, and delivered to water. Dose was measured at the middle of each spread-out Bragg peak. Results: For energies <160 MeV, the DG model fit square-field measurements to <2% at all depths, while the SG model could disagree by >6%. For energies >160 MeV, both SG and DG models fit square-field measurements to <1% at <4 cm depth, but could exceed 6% deeper. By comparison, disagreement with the DGC model was always <3%. For the cuboid plans, calculation-versus-measured percent dose differences exceeded 7% for the SG model, being larger for smaller fields. The DG model showed <3% disagreement for all field sizes in shorter-range beams, although >5% differences for smaller fields persisted in longer-range beams. In contrast, the DGC model predicted measurements to <2% for all beams. Conclusion: Neither the TPS’s SG nor DG models, employed as intended, are ideally suited for routine clinical use.
However, via a novel technique to be presented, its DG model can be tuned judiciously to yield acceptable results.
Assessment of energy crops alternative to maize for biogas production in the Greater Region.
Mayer, Frédéric; Gerin, Patrick A; Noo, Anaïs; Lemaigre, Sébastien; Stilmant, Didier; Schmit, Thomas; Leclech, Nathael; Ruelle, Luc; Gennen, Jerome; von Francken-Welz, Herbert; Foucart, Guy; Flammang, Jos; Weyland, Marc; Delfosse, Philippe
2014-08-01
The biomethane yield of various energy crops, selected among potential alternatives to maize in the Greater Region, was assessed. The biomass yield, the volatile solids (VS) content and the biochemical methane potential (BMP) were measured to calculate the biomethane yield per hectare of all plant species. For all species, the dry matter biomass yield and the VS content were the main factors that influence, respectively, the biomethane yield and the BMP. Both values were predicted with good accuracy by linear regressions using the biomass yield and the VS as independent variable. The perennial crop miscanthus appeared to be the most promising alternative to maize when harvested as green matter in autumn and ensiled. Miscanthus reached a biomethane yield of 5.5 ± 1 × 10³ m³ ha⁻¹ during the second year after establishment, as compared to 5.3 ± 1 × 10³ m³ ha⁻¹ for maize under similar crop conditions. Copyright © 2014. Published by Elsevier Ltd.
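The per-hectare biomethane yield discussed above is driven by dry-matter yield and VS content together with the BMP; the arithmetic is simply their product, sketched below with illustrative numbers that are not the study's data:

```python
def biomethane_yield_per_ha(dry_matter_t_ha, vs_fraction, bmp_m3_per_t_vs):
    """Biomethane yield (m3/ha) = dry-matter yield (t/ha)
    x volatile-solids fraction x BMP (m3 CH4 per tonne VS)."""
    return dry_matter_t_ha * vs_fraction * bmp_m3_per_t_vs

# Illustrative values only: 20 t DM/ha, 95% VS, 290 m3 CH4 / t VS
print(biomethane_yield_per_ha(20.0, 0.95, 290.0))  # 5510.0 m3/ha
```

This also makes clear why, as the abstract notes, biomass yield and VS content are the dominant levers on the per-hectare figure.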
NASA Technical Reports Server (NTRS)
Bosi, F.; Pellegrino, S.
2017-01-01
A molecular formulation of the onset of plasticity is proposed to assess temperature and strain rate effects in anisotropic semi-crystalline rubbery films. The presented plane stress criterion is based on the strain rate-temperature superposition principle and the cooperative theory of yielding, where some parameters are assumed to be material constants, while others are considered to depend on specific modes of deformation. An orthotropic yield function is developed for a linear low density polyethylene thin film. Uniaxial and biaxial inflation experiments were carried out to determine the yield stress of the membrane via a strain recovery method. It is shown that the 3% offset method predicts the uniaxial elastoplastic transition with good accuracy. Both the tensile yield points along the two principal directions of the film and the biaxial yield stresses are found to obey the superposition principle. The proposed yield criterion is compared against experimental measurements, showing excellent agreement over a wide range of deformation rates and temperatures.
Wilk, Brian L
2015-01-01
Over the course of the past two to three decades, intraoral digital impression systems have gained acceptance due to high accuracy and ease of use as they have been incorporated into the fabrication of dental implant restorations. The use of intraoral digital impressions enables the clinician to produce accurate restorations without the unpleasant aspects of traditional impression materials and techniques. This article discusses the various types of digital impression systems and their accuracy compared to traditional impression techniques. The cost, time, and patient satisfaction components of both techniques will also be reviewed.
Jayakody, Chatura; Hull-Ryde, Emily A
2016-01-01
Well-defined quality control (QC) processes are used to determine whether a certain procedure or action conforms to a widely accepted standard and/or set of guidelines, and are important components of any laboratory quality assurance program (Popa-Burke et al., J Biomol Screen 14: 1017-1030, 2009). In this chapter, we describe QC procedures useful for monitoring the accuracy and precision of laboratory instrumentation, most notably automated liquid dispensers. Two techniques, gravimetric QC and photometric QC, are highlighted in this chapter. When used together, these simple techniques provide a robust process for evaluating liquid handler accuracy and precision, and critically underpin high-quality research programs.
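Gravimetric QC data are typically reduced to two numbers: accuracy (mean deviation from the target volume) and precision (coefficient of variation). A minimal sketch using only the standard library, with illustrative names and values not drawn from the chapter:

```python
import statistics

def gravimetric_qc(masses_mg, density_mg_per_ul, target_ul):
    """Convert dispensed masses to volumes, then report the mean volume,
    accuracy (% deviation from target) and precision (% CV)."""
    vols = [m / density_mg_per_ul for m in masses_mg]
    mean_v = statistics.fmean(vols)
    accuracy = 100.0 * (mean_v - target_ul) / target_ul
    cv = 100.0 * statistics.stdev(vols) / mean_v
    return mean_v, accuracy, cv

# Example: four replicate 10-uL water dispenses (density ~1.0 mg/uL)
mean_v, acc, cv = gravimetric_qc([9.98, 10.02, 9.96, 10.04], 1.0, 10.0)
```

Photometric QC follows the same reduction, with absorbance-derived volumes in place of mass-derived ones.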
Implementation study of wearable sensors for activity recognition systems.
Rezaie, Hamed; Ghassemian, Mona
2015-08-01
This Letter investigates and reports on a number of activity recognition methods for a wearable sensor system. The authors apply three methods for data transmission, namely 'stream-based', 'feature-based' and 'threshold-based' scenarios to study the accuracy against energy efficiency of transmission and processing power that affects the mote's battery lifetime. They also report on the impact of variation of sampling frequency and data transmission rate on energy consumption of motes for each method. This study leads us to propose a cross-layer optimisation of an activity recognition system for provisioning acceptable levels of accuracy and energy efficiency.
Two-screen single-shot electron spectrometer for laser wakefield accelerated electron beams.
Soloviev, A A; Starodubtsev, M V; Burdonov, K F; Kostyukov, I Yu; Nerush, E N; Shaykin, A A; Khazanov, E A
2011-04-01
The laser wakefield acceleration electron beams can essentially deviate from the axis of the system, which distinguishes them greatly from beams of conventional accelerators. In case of energy measurements by means of a permanent-magnet electron spectrometer, the deviation angle can affect accuracy, especially for high energies. A two-screen single-shot electron spectrometer that correctly allows for variations of the angle of entry is considered. The spectrometer design enables enhancing accuracy of measuring narrow electron beams significantly as compared to a one-screen spectrometer with analogous magnetic field, size, and angular acceptance. © 2011 American Institute of Physics
Redesigned Gas Mass Flow Sensors for Space Shuttle Pressure Control System and Fuel Cell System
NASA Technical Reports Server (NTRS)
1996-01-01
A program was conducted to determine if a state of the art micro-machined silicon solid state flow sensor could be used to replace the existing space shuttle orbiter flow sensors. The rather aggressive goal was to obtain a new sensor which would also be a multi-gas sensor and operate over a much wider flow range and with a higher degree of accuracy than the existing sensors. Two types of sensors were tested. The first type was a venturi throat design and the second was a bypass design. The accuracy of venturi design was found to be marginally acceptable. The bypass sensor was much better although it still did not fully reach the accuracy goal. Two main problems were identified which would require further work.
Simultaneous fitting of genomic-BLUP and Bayes-C components in a genomic prediction model.
Iheshiulor, Oscar O M; Woolliams, John A; Svendsen, Morten; Solberg, Trygve; Meuwissen, Theo H E
2017-08-24
The rapid adoption of genomic selection is due to two key factors: availability of both high-throughput dense genotyping and statistical methods to estimate and predict breeding values. The development of such methods is still ongoing and, so far, there is no consensus on the best approach. Currently, the linear and non-linear methods for genomic prediction (GP) are treated as distinct approaches. The aim of this study was to evaluate the implementation of an iterative method (called GBC) that incorporates aspects of both linear [genomic-best linear unbiased prediction (G-BLUP)] and non-linear (Bayes-C) methods for GP. The iterative nature of GBC makes it less computationally demanding similar to other non-Markov chain Monte Carlo (MCMC) approaches. However, as a Bayesian method, GBC differs from both MCMC- and non-MCMC-based methods by combining some aspects of G-BLUP and Bayes-C methods for GP. Its relative performance was compared to those of G-BLUP and Bayes-C. We used an imputed 50 K single-nucleotide polymorphism (SNP) dataset based on the Illumina Bovine50K BeadChip, which included 48,249 SNPs and 3244 records. Daughter yield deviations for somatic cell count, fat yield, milk yield, and protein yield were used as response variables. GBC was frequently (marginally) superior to G-BLUP and Bayes-C in terms of prediction accuracy and was significantly better than G-BLUP only for fat yield. On average across the four traits, GBC yielded a 0.009 and 0.006 increase in prediction accuracy over G-BLUP and Bayes-C, respectively. Computationally, GBC was very much faster than Bayes-C and similar to G-BLUP. Our results show that incorporating some aspects of G-BLUP and Bayes-C in a single model can improve accuracy of GP over the commonly used method: G-BLUP. Generally, GBC did not statistically perform better than G-BLUP and Bayes-C, probably due to the close relationships between reference and validation individuals. 
Nevertheless, it is a flexible tool, in the sense that it simultaneously incorporates some aspects of linear and non-linear models for GP, thereby exploiting family relationships while also accounting for linkage disequilibrium between SNPs and genes with large effects. The application of GBC in GP merits further exploration.
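The G-BLUP component named above is equivalent to ridge regression on SNP effects (RR-BLUP, with G proportional to XX'). A minimal sketch of that equivalence, with illustrative names and no claim to reproduce the authors' GBC implementation:

```python
import numpy as np

def rr_blup(X, y, lam):
    """SNP-effect ridge regression (RR-BLUP), equivalent to G-BLUP with
    a genomic relationship matrix G proportional to XX'. X is the
    (individuals x SNPs) genotype matrix, lam the shrinkage parameter."""
    p = X.shape[1]
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    return beta  # predicted breeding values are then X_new @ beta
```

Bayes-C differs by placing a point-mass/slab prior on each SNP effect instead of the single Gaussian shrinkage implied by `lam`; GBC, as described above, blends aspects of both.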
Nissan, Aviram; Protic, Mladjan; Bilchik, Anton J; Howard, Robin S; Peoples, George E; Stojadinovic, Alexander
2012-09-01
Our randomized controlled trial previously demonstrated improved staging accuracy with targeted nodal assessment and ultrastaging (TNA-us) in colon cancer (CC). Our objective was to test the hypothesis that TNA-us improves disease-free survival (DFS) in CC. In this randomized trial, targeted nodal assessment and ultrastaging resulted in enhanced lymph node diagnostic yield associated with improved staging accuracy, which was further associated with improved disease-free survival in early colon cancer. Clinical parameters of the control (n = 94) and TNA-us (n = 98) groups were comparable. Median (interquartile range) lymph node yield was higher in the TNA-us arm: 16 (12-22) versus 13 (10-18); P = 0.002. Median follow-up was 46 (29-70) months. Overall 5-year DFS was 61% in the control arm and 71% in the TNA-us arm (P = 0.11). Clinical parameters of node-negative patients in the control (n = 51) and TNA-us (n = 55) groups were comparable. Lymph node yield was higher in the TNA-us arm: 15 (12-21) versus 13 (8-18); P = 0.03. Five-year DFS differed significantly between groups with node-negative CC (control 71% vs TNA-us 86%; P = 0.04). Survival among stage II CC alone was higher in the TNA-us group, 83% versus 65%; P = 0.03. Adjuvant chemotherapy use was nearly identical between groups. TNA-us stratified CC prognosis; DFS differed significantly between ultrastaged and conventionally staged node-negative patients [control pN0 72% vs TNA-us pN0(i-) 87%; P = 0.03]. Survival varied according to lymph node yield in patients with node-negative CC [5-year DFS: <12 lymph nodes = 57% vs 12+ lymph nodes = 85%; P = 0.011] but not in stage III CC. TNA-us is associated with improved nodal diagnostic yield and enhanced staging accuracy (stage migration), which is further associated with improved DFS in early CC. This study is registered at clinicaltrials.gov under the registration number: NCT01623258.
Barrera, Terri L; Szafranski, Derek D; Ratcliff, Chelsea G; Garnaat, Sarah L; Norton, Peter J
2016-03-01
One of the primary differences between Cognitive Behavioral Therapy (CBT) and Acceptance and Commitment Therapy (ACT) for anxiety is the approach to managing negative thoughts. CBT focuses on challenging the accuracy of dysfunctional thoughts through cognitive restructuring exercises, whereas ACT attempts to foster acceptance of such thoughts through cognitive defusion exercises. Previous research suggests that both techniques reduce the distress associated with negative thoughts, though questions remain regarding the benefit of these techniques above and beyond exposure to feared stimuli. In the present study, we conducted a brief experimental intervention to examine the utility of cognitive defusion + in-vivo exposure, cognitive restructuring + in-vivo exposure, and in-vivo exposure alone in reducing the impact of negative thoughts in patients with social anxiety disorder. All participants completed a brief public speaking exposure and those in the cognitive conditions received training in the assigned cognitive technique. Participants returned a week later to complete a second exposure task and self-report measures. All three conditions resulted in similar decreases in discomfort related to negative thoughts. ANOVA models failed to find an interaction between change in accuracy or importance and assignment to condition in predicting decreased distress of negative thoughts. These preliminary results suggest that changes in perceived importance and accuracy of negative thoughts may not be the mechanisms by which cognitive defusion and cognitive restructuring affect distress in the short-term.
Design, implementation and accuracy of a prototype for medical augmented reality.
Pandya, Abhilash; Siadat, Mohammad-Reza; Auner, Greg
2005-01-01
This paper is focused on prototype development and accuracy evaluation of a medical Augmented Reality (AR) system. The accuracy of such a system is of critical importance for medical use, and is hence considered in detail. We analyze the individual error contributions and the system accuracy of the prototype. A passive articulated arm is used to track a calibrated end-effector-mounted video camera. The live video view is superimposed in real time with the synchronized graphical view of CT-derived segmented object(s) of interest within a phantom skull. The AR accuracy mostly depends on the accuracy of the tracking technology, the registration procedure, the camera calibration, and the image scanning device (e.g., a CT or MRI scanner). The accuracy of the Microscribe arm was measured to be 0.87 mm. After mounting the camera on the tracking device, the AR accuracy was measured to be 2.74 mm on average (standard deviation = 0.81 mm). After using data from a 2-mm-thick CT scan, the AR error remained essentially the same at an average of 2.75 mm (standard deviation = 1.19 mm). For neurosurgery, the acceptable error is approximately 2-3 mm, and our prototype approaches these accuracy requirements. The accuracy could be increased with a higher-fidelity tracking system and improved calibration and object registration. The design and methods of this prototype device can be extrapolated to current medical robotics (due to the kinematic similarity) and neuronavigation systems.
Grubbs, J.W.; Pittman, J.R.
1997-01-01
Water flow and quality data were collected from December 1994 to September 1995 to evaluate variations in discharge, water quality, and chemical fluxes (loads) through Perdido Bay, Florida. Data were collected at a cross section parallel to the U.S. Highway 98 bridge. Discharges measured with an acoustic Doppler current profiler (ADCP) and computed from stage-area and velocity ratings varied roughly between -10,000 and +10,000 cubic feet per second during a typical tidal cycle. Large reversals in flow direction occurred rapidly (less than 1 hour), and complete reversals (resulting in near peak net-upstream or downstream discharges) occurred within a few hours of slack water. Observations of simultaneous upstream and downstream flow (bidirectional flow) were quite common in the ADCP measurements, with opposing directions of flow occurring predominantly in vertical layers. Continuous (every 15 minutes) discharge data were computed for the period from August 18, 1995, to September 28, 1995, and filtered daily mean discharge values were computed for the period from August 19 to September 26, 1995. Data were not computed prior to August 18, 1995, either because of missing data or because the velocity rating was poorly defined (because of insufficient data) for the period prior to landfall of Hurricane Erin (August 3, 1995). The results of the study indicate that acoustical techniques can yield useful estimates of continuous (instantaneous) discharge in Perdido Bay. Useful estimates of average daily net flow rates can also be obtained, but the accuracy of these estimates will be limited by small rating shifts that introduce bias into the instantaneous values that are used to compute the net flows. Instantaneous loads of total nitrogen ranged from -180 to 220 grams per second for the samples collected during the study, and instantaneous loads of total phosphorus ranged from -10 to 11 grams per second (negative loads indicate net upstream transport).
The chloride concentrations from the water samples collected from Perdido Bay indicated a significant amount of mixing of saltwater and freshwater. Mixing effects could greatly reduce the accuracy of estimates of net loads of nutrients or other substances. The study results indicate that acoustical techniques can yield acceptable estimates of instantaneous loads in Perdido Bay. However, estimates of net loads should be interpreted with great caution and may have unacceptably large errors, especially when saltwater and freshwater concentrations differ greatly.
Measurement of Clinical Performance of Nurses: A Literature Review.
ERIC Educational Resources Information Center
Robb, Yvonne; Fleming, Valerie; Dietert, Christine
2002-01-01
A research review (n=12) yielded a number of tools for assessing nurses' clinical competence but none that is universally accepted. The review did identify methods that could be used to develop a useful instrument. (Contains 23 references.) (SK)
Evaluating the Impact of CETA on Participant Earnings.
ERIC Educational Resources Information Center
Bryant, Edward C.; Rupp, Kalman
1987-01-01
Estimates of the Comprehensive Employment and Training Act's net impact on participant earnings, using Continuous Longitudinal Manpower Survey data, were compared to a similar sample from the Current Population Survey. The use of multivariate matching and weighting yielded acceptable results. (GDC)
40 CFR 63.526 - Monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... (D) Design analysis based on accepted chemical engineering principles, measurable process parameters... purpose of determining de minimis status for emission points, engineering assessment may be used to... expected to yield the highest flow rate and concentration. Engineering assessment includes, but is not...
40 CFR 63.526 - Monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (D) Design analysis based on accepted chemical engineering principles, measurable process parameters... purpose of determining de minimis status for emission points, engineering assessment may be used to... expected to yield the highest flow rate and concentration. Engineering assessment includes, but is not...
A quantitative approach to combine sources in stable isotope mixing models
Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
Prediction of beta-turns from amino acid sequences using the residue-coupled model.
Guruprasad, K; Shukla, S
2003-04-01
We evaluated the prediction of beta-turns from amino acid sequences using the residue-coupled model with an enlarged representative protein data set selected from the Protein Data Bank. Our results show that the probability values derived from a data set comprising 425 protein chains yielded an overall beta-turn prediction accuracy of 68.74%, compared with 94.7% reported earlier on a data set of 30 proteins using the same method. However, we noted that the overall beta-turn prediction accuracy using probability values derived from the 30-protein data set falls to 40.74% when tested on the data set comprising 425 protein chains. In contrast, using probability values derived from the 425-chain data set used in this analysis, the overall beta-turn prediction accuracy was consistent when tested on the 30-protein data set used earlier (64.62%), on a more recent representative data set comprising 619 protein chains (64.66%), and on a jackknife data set comprising 476 representative protein chains (63.38%). We therefore recommend the use of probability values derived from the 425 representative protein chains reported here, which gives more realistic and consistent predictions of beta-turns from amino acid sequences.
Meinel, Felix G; Schoepf, U Joseph; Townsend, Jacob C; Flowers, Brian A; Geyer, Lucas L; Ebersberger, Ullrich; Krazinski, Aleksander W; Kunz, Wolfgang G; Thierfelder, Kolja M; Baker, Deborah W; Khan, Ashan M; Fernandes, Valerian L; O'Brien, Terrence X
2018-06-15
We aimed to determine the diagnostic yield and accuracy of coronary CT angiography (CCTA) in patients referred for invasive coronary angiography (ICA) based on clinical concern for coronary artery disease (CAD) and an abnormal nuclear stress myocardial perfusion imaging (MPI) study. We enrolled 100 patients (84 male, mean age 59.6 ± 8.9 years) with an abnormal MPI study and subsequent referral for ICA. Each patient underwent CCTA prior to ICA. We analyzed the prevalence of potentially obstructive CAD (≥50% stenosis) on CCTA and calculated the diagnostic accuracy of ≥50% stenosis on CCTA for the detection of clinically significant CAD on ICA (defined as any ≥70% stenosis or ≥50% left main stenosis). On CCTA, 54 patients had at least one ≥50% stenosis. With ICA, 45 patients demonstrated clinically significant CAD. A positive CCTA had 100% sensitivity and 84% specificity with a 100% negative predictive value and 83% positive predictive value for clinically significant CAD on a per patient basis in MPI positive symptomatic patients. In conclusion, almost half (48%) of patients with suspected CAD and an abnormal MPI study demonstrate no obstructive CAD on CCTA.
Decay Properties of K-Vacancy States in Fe X-Fe XVII
NASA Technical Reports Server (NTRS)
Mendoza, C.; Kallman, T. R.; Bautista, M. A.; Palmeri, P.
2003-01-01
We report extensive calculations of the decay properties of fine-structure K-vacancy levels in Fe X-Fe XVII. A large set of level energies, wavelengths, radiative and Auger rates, and fluorescence yields has been computed using three different standard atomic codes, namely Cowan's HFR, AUTOSTRUCTURE and the Breit-Pauli R-matrix package. This multi-code approach is used to study the effects of core relaxation, configuration interaction and the Breit interaction, and enables estimates of statistical accuracy ratings. The Ksigma and KLL Auger widths have been found to be nearly independent of both the outer-electron configuration and electron occupancy, keeping a constant ratio of 1.53 ± 0.06. By comparing with previous theoretical and measured wavelengths, the accuracy of the present set is determined to be within 2 mÅ. Also, the good agreement found among the different radiative and Auger data sets that have been computed allows us to propose with confidence an accuracy rating of 20% for line fluorescence yields greater than 0.01. Emission and absorption spectral features are predicted, and they correlate well with measurements in both laboratory and astrophysical plasmas.
Self-Calibrating Respiratory-Flowmeter Combination
NASA Technical Reports Server (NTRS)
Westenskow, Dwayne R.; Orr, Joseph A.
1990-01-01
Dual flowmeters ensure accuracy over full range of human respiratory flow rates. System for measurement of respiratory flow employs two flowmeters; one compensates for deficiencies of other. Combination yields easily calibrated system accurate over wide range of gas flow.
NASA Astrophysics Data System (ADS)
Nakano, Hayato; Hakoyama, Tomoyuki; Kuwabara, Toshihiko
2017-10-01
Hole expansion forming of a cold rolled steel sheet is investigated both experimentally and analytically to clarify the effects of material models on the predictive accuracy of finite element analyses (FEA). The multiaxial plastic deformation behavior of a cold rolled steel sheet with a thickness of 1.2 mm was measured using a servo-controlled multiaxial tube expansion testing machine for the range of strain from initial yield to fracture. Tubular specimens were fabricated from the sheet sample by roller bending and laser welding. Many linear stress paths in the first quadrant of stress space were applied to the tubular specimens to measure the contours of plastic work in stress space up to a reference plastic strain of 0.24, along with the directions of the plastic strain rates. The anisotropic parameters and exponent of the Yld2000-2d yield function (Barlat et al., 2003) were optimized to approximate the contours of plastic work and the directions of the plastic strain rates. The hole expansion forming simulations were performed using different model identifications based on the Yld2000-2d yield function. It is concluded that the yield function that best captures both the plastic work contours and the directions of the plastic strain rates leads to the most accurate FEA predictions.
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
In recent works we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM) is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.
Li, Hang; He, Junting; Liu, Qin; Huo, Zhaohui; Liang, Si; Liang, Yong
2011-03-01
A tandem solid-phase extraction method (SPE) of connecting two different cartridges (C(18) and MCX) in series was developed as the extraction procedure in this article, which provided better extraction yields (>86%) for all analytes and more appropriate sample purification from endogenous interference materials compared with a single cartridge. Analyte separation was achieved on a C(18) reversed-phase column at the wavelength of 265 nm by high-performance liquid chromatography (HPLC). The method was validated in terms of extraction yield, precision and accuracy. These assays gave mean accuracy values higher than 89% with RSD values that were always less than 3.8%. The method has been successfully applied to plasma samples from rats after oral administration of target compounds. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Accuracy of least-squares methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bochev, Pavel B.; Gunzburger, Max D.
1993-01-01
Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. It is found, despite the failure of standard methods for deriving error estimates, that computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.
Robust control of electrostatic torsional micromirrors using adaptive sliding-mode control
NASA Astrophysics Data System (ADS)
Sane, Harshad S.; Yazdi, Navid; Mastrangelo, Carlos H.
2005-01-01
This paper presents high-resolution control of torsional electrostatic micromirrors beyond their inherent pull-in instability using robust sliding-mode control (SMC). The objectives of this paper are twofold: first, to demonstrate the applicability of SMC for MEMS devices; second, to present a modified SMC algorithm that yields improved control accuracy. SMC enables compact realization of a robust controller tolerant of device characteristic variations and nonlinearities. Robustness of the control loop is demonstrated through extensive simulations and measurements on MEMS devices with a wide range of characteristics. Control of two-axis gimbaled micromirrors beyond their pull-in instability with overall 10-bit pointing accuracy is confirmed experimentally. In addition, this paper presents an analysis of the sources of errors in discrete-time implementation of the control algorithm. To minimize these errors, we present an adaptive version of the SMC algorithm that yields substantial performance improvement without considerably increasing implementation complexity.
Static yields and quality issues: Is the agri-environment program the primary driver?
Peltonen-Sainio, Pirjo; Salo, Tapio; Jauhiainen, Lauri; Lehtonen, Heikki; Sieviläinen, Elina
2015-10-01
The Finnish agri-environmental program (AEP) has been in operation for 20 years with >90 % farmer commitment. This study aimed to establish whether reduced nitrogen (N) and phosphorus (P) use has impacted spring cereal yields and quality based on comprehensive follow-up studies and long-term experiments. We found that the gap between genetic yield potential and attained yield has increased after the AEP was imposed. However, many contemporary changes in agricultural practices, driven by changes in prices and farm subsidies, also including the AEP, were likely reasons, together with reduced N, but not phosphorus use. Such overall changes in crop management coincided with stagnation or decline in yields and adverse changes in quality, but yield-removed N increased and residual N decreased. Further studies are needed to assess whether all the changes are environmentally, economically, and socially sustainable, and acceptable, in the long run. The concept of sustainable intensification is worth considering as a means to develop northern European agricultural systems to combine environmental benefits with productivity.
Validation of the Edinburgh Postnatal Depression Scale (EPDS) on the Thai–Myanmar border
Ing, Harriet; Fellmeth, Gracia; White, Jitrachote; Stein, Alan; Simpson, Julie A; McGready, Rose
2017-01-01
Postnatal depression is common and may have severe consequences for women and their children. Locally validated screening tools are required to identify at-risk women in marginalised populations. The Edinburgh Postnatal Depression Scale (EPDS) is one of the most frequently used tools globally. This cross-sectional study assessed the validity and acceptability of the EPDS in Karen and Burmese among postpartum migrant and refugee women on the Thai–Myanmar border. The EPDS was administered to participants and results compared with a diagnostic interview. Local staff provided feedback on the acceptability of the EPDS through a focus group discussion. Results from 670 women showed high accuracy and reasonable internal consistency of the EPDS. However, acceptability to local staff was low, limiting the utility of the EPDS in this setting despite its good psychometrics. Further work is required to identify a tool that is acceptable and sensitive to cultural manifestations of depression in this vulnerable population. PMID:28699396
Louis Essen and the Velocity of Light: From Wartime Radar to Unit of Length
NASA Astrophysics Data System (ADS)
Essen, Ray
2010-03-01
Louis Essen (1908-1997), working at the National Physical Laboratory in Teddington, England, was the first scientist to realize that the value for the velocity of light used widely during World War II was incorrect. In 1947 he published his first determination of it, which was 16 kilometers per second higher than the accepted value, causing a great deal of controversy in the scientific community. His new value was not accepted for several years, until it was shown that it improved the precision of range-finding by radar. Essen’s result has remained as the internationally accepted value despite a number of attempts to improve on it. I discuss Essen’s work and also examine other optical and nonoptical determinations that were made in the United States, and their limits of accuracy. I also identify the reasons why it took so long for Essen’s new value to be accepted, and how it led to changes in the definition of the units of length and time.
The Accuracy of Point-of-Care Glucose Measurements
Rebel, Annette; Rice, Mark A.; Fahy, Brenda G.
2012-01-01
Control of blood glucose (BG) in an acceptable range is a major therapy target for diabetes patients in both the hospital and outpatient environments. This review focuses on the state of point-of-care (POC) glucose monitoring and the accuracy of the measurement devices. The accuracy of the POC glucose monitor depends on device methodology and other factors, including sample source and collection and patient characteristics. Patient parameters capable of influencing measurements include variations in pH, blood oxygen, hematocrit, changes in microcirculation, and vasopressor therapy. These elements alone or when combined can significantly impact BG measurement accuracy with POC glucose monitoring devices (POCGMDs). In general, currently available POCGMDs exhibit the greatest accuracy within the range of physiological glucose levels but become less reliable at the lower and higher ranges of BG levels. This issue raises serious safety concerns and the importance of understanding the limitations of POCGMDs. This review will discuss potential interferences and shortcomings of the current POCGMDs and stress when these may impact the reliability of POCGMDs for clinical decision-making. PMID:22538154
Presentation accuracy of the web revisited: animation methods in the HTML5 era.
Garaizar, Pablo; Vadillo, Miguel A; López-de-Ipiña, Diego
2014-01-01
Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes them preferable to more obsolete technologies.
A new accuracy measure based on bounded relative error for time series forecasting
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
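A minimal sketch of the measure as described in the abstract, assuming the commonly stated formulation: each bounded relative absolute error is the forecast error divided by the sum of forecast and benchmark errors, their mean (MBRAE) is then unscaled as MBRAE/(1-MBRAE). Names and the zero-error handling here are illustrative, not taken from the paper.

```python
def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (sketch).

    BRAE_t = |e_t| / (|e_t| + |e*_t|) lies in [0, 1], where e_t is the
    forecast error and e*_t the benchmark error at time t. MBRAE is the
    mean BRAE, and UMBRAE = MBRAE / (1 - MBRAE) restores an interpretable
    scale (1.0 means "as accurate as the benchmark").
    """
    braes = []
    for a, f, b in zip(actual, forecast, benchmark):
        e, e_star = abs(a - f), abs(a - b)
        if e + e_star == 0:  # both errors zero: no information, skip
            continue
        braes.append(e / (e + e_star))
    mbrae = sum(braes) / len(braes)
    return mbrae / (1 - mbrae)
```

A forecast exactly as wrong as the benchmark gives UMBRAE = 1; a perfect forecast against an imperfect benchmark gives 0.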
An analytical model with flexible accuracy for deep submicron DCVSL cells
NASA Astrophysics Data System (ADS)
Valiollahi, Sepideh; Ardeshir, Gholamreza
2018-07-01
Differential cascoded voltage switch logic (DCVSL) cells are among the best candidates of circuit designers for a wide range of applications due to advantages such as low input capacitance, high switching speed, small area and noise immunity; nevertheless, a proper model has not yet been developed to analyse them. This paper analyses deep submicron DCVSL cells based on a flexible accuracy-simplicity trade-off, including the following key features: (1) the model is capable of producing closed-form expressions with acceptable accuracy; (2) the model equations can be solved numerically to offer higher accuracy; (3) the short-circuit currents occurring in high-low/low-high transitions are accounted for in the analysis; and (4) the changes in the operating modes of transistors during transitions, together with an efficient submicron I-V model that incorporates the most important non-ideal short-channel effects, are considered. The accuracy of the proposed model is validated in IBM 0.13 µm CMOS technology through comparisons with the accurate physically based BSIM3 model. The maximum error of the analytical solutions is below 10%, while that of the numerical solutions is below 7%.
Automatic interpretation of ERTS data for forest management
NASA Technical Reports Server (NTRS)
Kirvida, L.; Johnson, G. R.
1973-01-01
Automatic stratification of forested land from ERTS-1 data provides a valuable tool for resource management. The results are useful for wood product yield estimates, recreation and wildlife management, forest inventory, and forest condition monitoring. Automatic procedures based on both multispectral and spatial features are evaluated. With five classes, training and testing on the same samples, a classification accuracy of 74% was achieved using the MSS multispectral features. When texture computed from 8 x 8 arrays was added, a classification accuracy of 99% was obtained.
Uranium Measurement Improvements at the Savannah River Technology Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shick, C. Jr.
Uranium isotope ratio and isotope dilution methods by mass spectrometry are used to achieve sensitivity, precision and accuracy for various applications. This report presents recent progress made at SRTC in the analysis of minor isotopes of uranium. Routine measurements of NBL-certified uranium (U005a) were compared between the SRTC Three Stage Mass Spectrometer (3SMS) and the SRTC Single Stage Mass Spectrometer (SSMS). As expected, the three stage mass spectrometer yielded superior sensitivity, precision, and accuracy for this application.
NASA Astrophysics Data System (ADS)
Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki
2018-01-01
A method for improving accuracy in Wigner-Ville distribution (WVD)-based particle size measurements from inline holograms using flip and replication technique (FRT) is proposed. The FRT extends the length of hologram signals being analyzed, yielding better spatial-frequency resolution of the WVD output. Experimental results verify reduction in measurement error as the length of the hologram signals increases. The proposed method is suitable for particle sizing from holograms recorded using small-sized image sensors.
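The flip-and-replication step can be sketched as follows. This is an assumption about the technique based on its name and stated effect (a longer analyzed record yields finer spatial-frequency resolution in the WVD); the exact mirroring used by the authors may differ.

```python
import numpy as np

def flip_and_replicate(signal):
    # Hypothetical FRT sketch: append a mirrored copy of the 1-D hologram
    # record. The doubled record length increases the spatial-frequency
    # sampling available to a subsequent Wigner-Ville distribution, while
    # the mirror symmetry avoids a discontinuity at the join.
    signal = np.asarray(signal)
    return np.concatenate([signal, signal[::-1]])
```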
Genomic Prediction of Testcross Performance in Canola (Brassica napus)
Jan, Habib U.; Abbadi, Amine; Lücke, Sophie; Nichols, Richard A.; Snowdon, Rod J.
2016-01-01
Genomic selection (GS) is a modern breeding approach in which genome-wide single-nucleotide polymorphism (SNP) marker profiles are simultaneously used to estimate the performance of untested genotypes. In this study, the potential of genomic selection methods to predict testcross performance for hybrid canola breeding was evaluated for various agronomic traits based on genome-wide marker profiles. A total of 475 genetically diverse spring-type canola pollinator lines were genotyped at 24,403 single-copy, genome-wide SNP loci. In parallel, the 950 F1 testcross combinations between the pollinators and two representative testers were evaluated for a number of important agronomic traits including seedling emergence, days to flowering, lodging, oil yield and seed yield, along with essential seed quality characters including seed oil content and seed glucosinolate content. A ridge-regression best linear unbiased prediction (RR-BLUP) model was applied in combination with 500 cross-validations for each trait to predict testcross performance, both across the whole population and within individual subpopulations or clusters, based solely on SNP profiles. Subpopulations were determined using multidimensional scaling and K-means clustering. Genomic prediction accuracy across the whole population was highest for seed oil content (0.81), followed by oil yield (0.75), and lowest for seedling emergence (0.29). For seed yield, seed glucosinolate, lodging resistance and days to onset of flowering (DTF), prediction accuracies were 0.45, 0.61, 0.39 and 0.56, respectively. Prediction accuracies could be increased for some traits by treating subpopulations separately, a strategy which only led to moderate improvements for some traits with low heritability, like seedling emergence. No useful or consistent increase in accuracy was obtained by inclusion of a population substructure covariate in the model.
Testcross performance prediction using genome-wide SNP markers shows considerable potential for pre-selection of promising hybrid combinations prior to resource-intensive field testing over multiple locations and years. PMID:26824924
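A rough illustration of the RR-BLUP idea described above: fit a ridge-regression marker model and score prediction accuracy as the correlation between predicted and observed phenotypes on a hold-out set. All data, dimensions, and the `rr_blup_predict` helper are synthetic and invented for illustration; this is not the study's pipeline.

```python
import numpy as np

def rr_blup_predict(X_train, y_train, X_test, alpha=1.0):
    # Ridge solution: solve (X'X + alpha*I) b = X'y, then predict X_test @ b
    n_markers = X_train.shape[1]
    A = X_train.T @ X_train + alpha * np.eye(n_markers)
    b = np.linalg.solve(A, X_train.T @ y_train)
    return X_test @ b

# Synthetic data: 200 lines genotyped at 50 biallelic SNPs coded 0/1/2
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 50)).astype(float)
true_effects = rng.normal(0.0, 1.0, size=50)       # additive marker effects
y = X @ true_effects + rng.normal(0.0, 2.0, size=200)

# Hold-out validation: prediction accuracy as corr(predicted, observed)
pred = rr_blup_predict(X[:150], y[:150], X[150:])
accuracy = float(np.corrcoef(pred, y[150:])[0, 1])
```

In practice the study used 500 cross-validation replicates per trait; a single hold-out split is shown here only to keep the sketch short.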
Transthoracic needle biopsy of the lung
DiBardino, David M.; Yarmus, Lonny B.
2015-01-01
Background Image-guided transthoracic needle aspiration (TTNA) is a valuable tool used for the diagnosis of countless thoracic diseases. Computed tomography (CT) is the most common imaging modality used for guidance, followed by ultrasound (US) for lesions abutting the pleural surface. Novel approaches using virtual CT guidance have recently been introduced. The objective of this review is to examine the current literature for TTNA biopsy of the lung, focusing on diagnostic accuracy and safety. Methods MEDLINE was searched from inception to October 2015 for all case series examining image-guided TTNA. Articles focusing on fluoroscopic guidance, as well as on the influence of rapid on-site evaluation (ROSE) on yield, were excluded. The diagnostic accuracy, defined as the number of true positives divided by the number of biopsies done, as well as the complication rate [pneumothorax (PTX), bleeding], was examined for CT-guided TTNA, US-guided TTNA and CT-guided electromagnetic navigational TTNA (E-TTNA). Of the 490 articles recovered, 75 were included in our analysis. Results The overall pooled diagnostic accuracy for CT-guided TTNA, using 48 articles that met the inclusion and exclusion criteria, was 92.1% (9,567/10,383). A similar yield of 88.7% (446/503) was obtained examining ten articles using US-guided TTNA. E-TTNA, being a new modality, had only one pilot study, citing a diagnostic accuracy of 83% (19/23). Pooled PTX and hemorrhage rates were 20.5% and 2.8%, respectively, for CT-guided TTNA. The PTX rate was lower in US-guided TTNA at a pooled rate of 4.4%. E-TTNA showed a similar rate of PTX at 20%, with no incidence of bleeding in the single pilot study available. Conclusions Image-guided TTNA is a safe and accurate modality for the biopsy of lung pathology. This study found similar yield and safety profiles with the three imaging modalities examined. PMID:26807279
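The pooled diagnostic accuracy used in this review is true positives divided by biopsies performed, summed across studies. A minimal sketch, using the totals reported in the abstract; the `pooled_accuracy` helper name is ours:

```python
def pooled_accuracy(counts):
    # counts: iterable of (true_positives, total_biopsies) per study or pool
    tp = sum(t for t, _ in counts)
    total = sum(n for _, n in counts)
    return tp / total

# Totals reported in the review:
ct_acc = pooled_accuracy([(9567, 10383)])   # CT-guided TTNA
us_acc = pooled_accuracy([(446, 503)])      # US-guided TTNA
```

With several studies, the per-study (tp, n) pairs would simply be listed together in `counts` and pooled the same way.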
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal rule set that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization (PSO). Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rule sets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy.
Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
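The abstract does not give the formula for Jval, so the sketch below uses a hypothetical weighted accuracy-versus-ruleset-size trade-off purely to illustrate the kind of metric described; the `jval_like` name and the weight `w` are invented, not the paper's definition.

```python
def jval_like(accuracy, n_rules, max_rules, w=0.8):
    # Hypothetical stand-in: reward classification accuracy, penalize
    # rule-base size relative to the largest rule set under comparison.
    return w * accuracy + (1 - w) * (1.0 - n_rules / max_rules)

# Scoring the reported liver-disorders results (C4.5 vs. WSO),
# normalizing rule counts by the largest reported rule set (98):
c45_score = jval_like(0.6289, 43, 98)
wso_score = jval_like(0.6460, 19, 98)
```

Under any such metric that rewards accuracy and conciseness together, the WSO result (higher accuracy, fewer rules) scores above C4.5, which is the trade-off the paper optimizes.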
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foucher, J.; Faurie, P.; Dourthe, L.
2011-11-10
The measurement accuracy is becoming one of the major components that have to be controlled in order to guarantee sufficient production yield. Already at the R and D level, we have to come up with accurate measurements of sub-40 nm dense trenches and contact holes coming from 193 nm immersion lithography or E-Beam lithography. Current production CD (Critical Dimension) metrology techniques such as CD-SEM (CD-Scanning Electron Microscope) and OCD (Optical Critical Dimension) are limited in relative accuracy for various reasons (e.g., electron proximity effects, output parameter correlation, stack influence, electron interaction with materials). Therefore, time for R and D is increasing, process windows degrade and, finally, production yield can decrease, because you cannot manufacture correctly if you are unable to measure correctly. A new high volume manufacturing (HVM) CD metrology solution has to be found in order to improve the relative accuracy of the production environment; otherwise current CD metrology solutions will very soon run out of steam. In this paper, we will present a potential hybrid CD metrology solution that smartly combines 3D-AFM (3D-Atomic Force Microscope) and CD-SEM data in order to add accuracy both in R and D and production. The final goal for 'chip makers' is to improve yield and save R and D and production costs through a real-time feedback loop implemented in CD metrology routines. Such a solution can be implemented and extended to any kind of CD metrology technique. In a second part we will discuss and present results regarding a new 3D-AFM probe breakthrough with the introduction of full carbon tips made with an E-Beam Deposition process. The goal is to overcome the current limitations of conventional flared silicon tips, which are definitely not suitable for sub-32 nm node production.
Estimating national crop yield potential and the relevance of weather data sources
NASA Astrophysics Data System (ADS)
Van Wart, Justin
2011-12-01
To determine where, when, and how to increase yields, researchers often analyze the yield gap (Yg), the difference between actual current farm yields and crop yield potential. Crop yield potential (Yp) is the yield of a crop cultivar grown under specific management limited only by temperature and solar radiation, and also by precipitation for water-limited yield potential (Yw). Yp and Yw are critical components of Yg estimations, but are very difficult to quantify, especially at larger scales, because management data and especially daily weather data are scarce. A protocol was developed to estimate Yp and Yw at national scales using site-specific weather, soils and management data. Protocol procedures and inputs were evaluated to determine how to improve accuracy of Yp, Yw and Yg estimates. The protocol was also used to evaluate raw, site-specific and gridded weather database sources for use in simulations of Yp or Yw. The protocol was applied to estimate crop Yp in US irrigated maize and Chinese irrigated rice, and Yw in US rainfed maize and German rainfed wheat. These crops and countries account for >20% of global cereal production. The results have significant implications for past and future studies of Yp, Yw and Yg. Accuracy of national long-term average Yp and Yw estimates was significantly improved if (i) >7 years of simulations were performed for irrigated sites and >15 years for rainfed sites, (ii) >40% of nationally harvested area was within 100 km of all simulation sites, (iii) observed weather data coupled with satellite-derived solar radiation data were used in simulations, and (iv) planting and harvesting dates were specified within +/- 7 days of farmers' actual practices. These are much higher standards than have been applied in national estimates of Yp and Yw, and this protocol is a substantial step in making such estimates more transparent, robust, and straightforward.
Finally, this protocol may be a useful tool for understanding yield trends and directing research and development efforts aimed at providing for a secure and stable future food supply.
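The yield gap arithmetic underlying the protocol is simple: Yg is the potential (Yp, or Yw for rainfed crops) minus the actual farm yield. A minimal sketch with invented numbers, not results from the study:

```python
def yield_gap(potential, actual):
    # Yg = Yp (or Yw for rainfed crops) minus actual farm yield, in t/ha
    return potential - actual

def relative_gap(potential, actual):
    # Fraction of the potential yield not realized on farms
    return (potential - actual) / potential

# Illustrative numbers only: irrigated maize, t/ha
gap = yield_gap(14.0, 10.5)
rel = relative_gap(14.0, 10.5)
```

The hard part, as the abstract stresses, is estimating `potential` credibly at national scale; the gap itself is a subtraction once Yp or Yw is simulated.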
A Remote Sensing-Derived Corn Yield Assessment Model
NASA Astrophysics Data System (ADS)
Shrestha, Ranjay Man
Agricultural studies and food security have become critical research topics due to continuous growth in human population and simultaneous shrinkage in agricultural land. In spite of modern technological advancements to improve agricultural productivity, more studies on crop yield assessments and food productivity are still necessary to fulfill the constantly increasing food demands. Besides human activities, natural disasters such as flood and drought, along with rapid climate changes, also inflict an adverse effect on food productivity. Understanding the impact of these disasters on crop yield and making early impact estimations could help planning for any national or international food crisis. Similarly, the United States Department of Agriculture (USDA) Risk Management Agency (RMA) insurance management utilizes appropriately estimated crop yield and damage assessment information to sustain farmers' practice through timely and proper compensations. Through the County Agricultural Production Survey (CAPS), the USDA National Agricultural Statistical Service (NASS) uses traditional methods of field interviews and farmer-reported survey data to perform annual crop condition monitoring and production estimations at the regional and state levels. As these manual approaches to yield estimation are highly inefficient and produce very limited samples to represent the entire area, NASS requires supplemental spatial data that provide continuous and timely information on crop production and annual yield. Compared to traditional methods, remote sensing data and products offer wider spatial extent, more accurate location information, higher temporal resolution and data distribution, and lower data cost, thus providing a complementary option for estimation of crop yield information.
Remote sensing derived vegetation indices such as the Normalized Difference Vegetation Index (NDVI) provide measurable statistics of potential crop growth based on spectral reflectance and can be further associated with the actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products within agricultural studies and crop yield assessments. In this study, a regression-based approach was proposed to estimate annual corn yield through changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions as South, West North Central, East North Central, and Central, respectively, within the U.S. Corn Belt area. The model's goodness of fit was well defined, with a high coefficient of determination (R² > 0.81). Similarly, using 2015 yield data for validation, an average accuracy of 92% demonstrated the model's performance in estimating corn yield at the county level. Besides providing county-level corn yield estimations, the derived model was also accurate enough to estimate yield at a finer spatial resolution (field level). The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016.
A total of over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining the changes in corn yield due to flood events during the growing season. Using the 2011 Missouri River flood event as a case study, a field-level flood impact map on corn yield throughout the flooded regions was produced, and an overall agreement of over 82.2% was achieved when compared with the reference impact map. A future direction of this dissertation research would be to examine other major crops outside the Corn Belt region of the U.S.
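A minimal sketch of the regression-based NDVI-to-yield idea on synthetic data. The helper names and numbers are illustrative; the dissertation's actual model uses daily 250 m MODIS NDVI time series rather than the single season-peak summary assumed here.

```python
import numpy as np

def fit_ndvi_yield_model(ndvi, yields):
    # Ordinary least squares: yield = b0 + b1 * NDVI summary
    X = np.column_stack([np.ones(len(ndvi)), ndvi])
    coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return coef

def predict_yield(coef, ndvi):
    return coef[0] + coef[1] * ndvi

# Synthetic county records: season-peak NDVI vs. corn yield (bu/ac)
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.4, 0.9, size=60)
yields = 20 + 180 * ndvi + rng.normal(0, 5, size=60)

coef = fit_ndvi_yield_model(ndvi, yields)
pred = predict_yield(coef, ndvi)
r2 = 1 - np.sum((yields - pred) ** 2) / np.sum((yields - yields.mean()) ** 2)
```

The reported R² > 0.81 corresponds to this kind of goodness-of-fit statistic, computed against held-out county yields in the actual study.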
Hoeft, Kristin S; Rios, Sarah M; Pantoja Guzman, Estela; Barker, Judith C
2015-09-03
Latino children experience more prevalent and severe tooth decay than non-Hispanic white and non-Hispanic black children. Few theory-based, evaluated and culturally appropriate interventions target parents of this vulnerable population. To fill this gap, the Contra Caries Oral Health Education Program, a theory-based, promotora-led education program for low-income, Spanish-speaking parents of children aged 1-5 years, was developed. This article describes qualitative findings on the acceptability of curriculum content and activities, presents the process of refining the curriculum by engaging the target population and promotoras, and presents results from the evaluation assessing the acceptability of the curriculum once implemented. Focus groups were conducted with low-income Spanish-speaking parents of children 1-5 years living in a city in an agricultural area of California. Interviews were digitally recorded, translated and transcribed, and checked for accuracy, and the resulting data were thematically coded and analyzed using a social constructionist approach. The Contra Caries Oral Health Education Program was then implemented with a separate but similar sample, and after completing the program, participants were administered surveys asking about the acceptability and favorite activities of the education program. Data were entered into a database and checked for accuracy; open-ended questions were categorized, and responses to closed-ended questions were counted. Twelve focus groups were conducted (N = 51), 105 parents attended the Contra Caries Oral Health Education Program, and 83 parents filled out surveys. Rates of complete attendance and retention were high (89% and 90%, respectively). This study found that parents consider their children's oral health a high priority. Parents were not only interested in, but actually attended, classes focused on increasing their knowledge and skills with respect to early childhood oral health.
The Contra Caries content and format was perceived as acceptable by parents. Strong opinions about curriculum content were expressed for including information on how caries starts and progresses, weaning from the bottle, oral health care for children and adults, motivational strategies for children's tooth brushing, dental visits and cavity restorations. The Contra Caries Oral Health Education Program was acceptable to low-income, Spanish-speaking parents of children 1-5 years. Participating in the curriculum development and revision process likely played an important role in the parents' high acceptability of the program.
Physical impairment aware transparent optical networks
NASA Astrophysics Data System (ADS)
Antona, Jean-Christophe; Morea, Annalisa; Zami, Thierry; Leplingard, Florence
2009-11-01
As illustrated by optical fiber and optical amplification, optical telecommunications have appeared over the last ten years as one of the most promising candidates to increase transmission capacities. More recently, the concept of optical transparency has been investigated and introduced: it consists of the optical routing of Wavelength Division Multiplexed (WDM) channels without systematic optoelectronic processing at nodes, as long as propagation impairments remain acceptable [1]. This allows achieving less power-consuming, more scalable and flexible networks, and today partial optical transparency has become a reality in deployed systems. However, because of the evolution of traffic features, optical networks are facing new challenges such as demand for higher transmitted capacity, further upgradeability, and more automation. Making all these evolutions compliant on the same current network infrastructure with a minimum of upgrades is one of the main issues for equipment vendors and operators. Hence, automatic and efficient management of the network needs a control plane aware of the expected Quality of Transmission (QoT) of the connections to set up, with respect to numerous parameters such as: the services demanded by the customers in terms of protection/restoration; the modulation rate and format of the connection under test and also of its adjacent WDM channels; and the engineering rules of the network elements traversed, with an accurate knowledge of the associated physical impairments. Whatever the method and/or the technology used to collect this information, the issue of its accuracy is one of the main concerns of network system vendors, because inaccurate knowledge could yield sub-optimal dimensioning and thus additional costs when installing the network in the field. Previous studies [1], [2] illustrated the impact of this knowledge accuracy on the ability to predict connection feasibility.
After describing the usual methods for building performance estimators, this paper reports on this impact at the global network level, quantifying the importance of accounting for these uncertainties from the early network planning step; it also proposes an improvement in the accuracy of the Quality of Transmission (QoT) estimator to reduce the increase in planned resources caused by these uncertainties.
Avidan, Alexander; Weissman, Charles
2012-03-01
Use of an anesthesia information management system (AIMS) does not ensure record completeness and data accuracy. Mandatory data-entry fields can be used to assure data completeness. However, they are not suited for data that are mandatory depending on the clinical situation (context sensitive). For example, information on equal breath sounds should be mandatory with tracheal intubation, but not with mask ventilation. It was hypothesized that employing context-sensitive mandatory data-entry fields can ensure high data completeness and accuracy while maintaining usability. A commercial off-the-shelf AIMS was enhanced using its built-in VBScript programming tool to build event-driven forms with context-sensitive mandatory data-entry fields. One year after introduction of the system, all anesthesia records were reviewed for data completeness. Data concordance, used as a proxy for accuracy, was evaluated using verifiable age-related data. Additionally, an anonymous satisfaction survey on general acceptance and usability of the AIMS was performed. During the initial 12 months of AIMS use, 12,241 (99.6%) of 12,290 anesthesia records had complete data. Concordance of entered data (weight, size of tracheal tubes, laryngoscopy blades and intravenous catheters) with patients' ages was 98.7-99.9%. The AIMS implementation was deemed successful by 98% of the anesthesiologists. Users rated the AIMS usability in general as very good and the data-entry forms in particular as comfortable. Due to the complexity and high costs of implementing an anesthesia information management system, it was not possible to compare various system designs (for example, with or without context-sensitive mandatory data-entry fields). Therefore, it is possible that a different or simpler design would have yielded the same or even better results.
This refers also to the evaluation of usability, since users did not have the opportunity to work with different design approaches or even different computer programs. Using context-sensitive mandatory fields in an anesthesia information management system was associated with high record completeness rate and data concordance. In addition, the system's usability was rated as very good by its users. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
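A minimal sketch of the context-sensitive mandatory-field idea, using the abstract's own example (equal breath sounds required with intubation but not mask ventilation). The field names and record structure are hypothetical, and the actual AIMS was scripted in VBScript; Python is used here only for illustration.

```python
def missing_mandatory_fields(record):
    # Fields that are always mandatory, plus fields mandatory only in a
    # given clinical context (keyed by the airway-management event).
    always = ["patient_id", "weight_kg"]
    context = {
        "intubation": ["tube_size", "equal_breath_sounds"],
        "mask_ventilation": [],
    }
    required = always + context.get(record.get("airway"), [])
    return [f for f in required if record.get(f) in (None, "")]

# An intubation record missing its breath-sounds confirmation:
rec = {"patient_id": "A1", "weight_kg": 72, "airway": "intubation",
       "tube_size": 7.5}
gaps = missing_mandatory_fields(rec)
```

An event-driven form would refuse to close the record until `gaps` is empty, which is how the abstract's 99.6% completeness rate was enforced.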
Edge, Julie; Acerini, Carlo; Campbell, Fiona; Hamilton-Shield, Julian; Moudiotis, Chris; Rahman, Shakeel; Randell, Tabitha; Smith, Anne; Trevelyan, Nicola
2017-06-01
To determine accuracy, safety and acceptability of the FreeStyle Libre Flash Glucose Monitoring System in the paediatric population. Eighty-nine study participants, aged 4-17 years, with type 1 diabetes were enrolled across 9 diabetes centres in the UK. A factory calibrated sensor was inserted on the back of the upper arm and used for up to 14 days. Sensor glucose measurements were compared with capillary blood glucose (BG) measurements. Sensor results were masked to participants. Clinical accuracy of sensor results versus BG results was demonstrated, with 83.8% of results in zone A and 99.4% of results in zones A and B of the consensus error grid. Overall mean absolute relative difference (MARD) was 13.9%. Sensor accuracy was unaffected by patient factors such as age, body weight, sex, method of insulin administration or time of use (day vs night). Participants were in the target glucose range (3.9-10.0 mmol/L) ∼50% of the time (mean 12.1 hours/day), with an average of 2.2 hours/day and 9.5 hours/day in hypoglycaemia and hyperglycaemia, respectively. Sensor application, wear/use of the device and comparison to self-monitoring of blood glucose were rated favourably by most participants/caregivers (84.3-100%). Five device related adverse events were reported across a range of participant ages. Accuracy, safety and user acceptability of the FreeStyle Libre System were demonstrated for the paediatric population. Accuracy of the system was unaffected by subject characteristics, making it suitable for a broad range of children and young people with diabetes. NCT02388815. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
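The headline accuracy statistic above, mean absolute relative difference (MARD), averages the absolute sensor-versus-reference deviation as a percentage of the reference. A minimal sketch; the paired readings are invented, not study data:

```python
def mard(sensor, reference):
    # Mean absolute relative difference (%) of sensor vs. reference glucose
    assert len(sensor) == len(reference) and len(reference) > 0
    rel = [abs(s - r) / r for s, r in zip(sensor, reference)]
    return 100 * sum(rel) / len(rel)

# Illustrative paired readings in mmol/L (sensor, capillary BG reference)
m = mard([5.2, 7.9, 10.4], [5.0, 8.5, 10.0])
```

The study's reported overall MARD of 13.9% is this statistic computed over all paired sensor and capillary BG measurements.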
Performance of two updated blood glucose monitoring systems: an evaluation following ISO 15197:2013.
Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Jendrike, Nina; Haug, Cornelia; Freckmann, Guido
2016-05-01
Objective For patients with diabetes, regular self-monitoring of blood glucose (SMBG) is essential to ensure adequate glycemic control. Therefore, accurate and reliable blood glucose measurements with SMBG systems are necessary. The international standard ISO 15197 describes requirements for SMBG systems, such as limits within which 95% of glucose results have to fall to reach acceptable system accuracy. The 2013 version of this standard sets higher demands, especially regarding system accuracy, than the earlier edition that currently remains valid. ISO 15197 can be applied by manufacturers to receive a CE mark for their system. Research design and methods This study was an accuracy evaluation following ISO 15197:2013 section 6.3 of two recently updated SMBG systems (Contour and Contour TS; Bayer Consumer Care AG, Basel, Switzerland) with an improved algorithm, to investigate whether the systems fulfill the requirements of the new standard. For this purpose, capillary blood samples of approximately 100 participants were measured with three test strip lots of both systems, and deviations from glucose values obtained with a hexokinase-based comparison method (Cobas Integra 400 plus; Roche Instrument Center, Rotkreuz, Switzerland) were determined. Percentages of values within the acceptance criteria of ISO 15197:2013 were calculated. This study was registered at clinicaltrials.gov (NCT02358408). Main outcome Both updated systems fulfilled the system accuracy requirements of ISO 15197:2013, as 98.5% to 100% of the results were within the stipulated limits. Furthermore, all results were within the clinically non-critical zones A and B of the consensus error grid for type 1 diabetes. Conclusions The technical improvement of the systems ensured compliance with ISO 15197 in the hands of healthcare professionals, even in its more stringent 2013 version. Alternative presentation of system accuracy results in radar plots provides additional information with certain advantages.
In addition, the surveillance error grid offers a modern tool to assess a system's clinical performance.
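The ISO 15197:2013 system accuracy criterion referenced above requires 95% of results to fall within ±15 mg/dL of the reference for reference values below 100 mg/dL, and within ±15% otherwise. A minimal sketch of that per-result check (helper names are ours; the example pairs are invented):

```python
def within_iso_15197_2013(meter_mgdl, ref_mgdl):
    # Per-result limit: ±15 mg/dL below 100 mg/dL reference, ±15% at or above
    if ref_mgdl < 100:
        return abs(meter_mgdl - ref_mgdl) <= 15
    return abs(meter_mgdl - ref_mgdl) <= 0.15 * ref_mgdl

def system_accuracy(pairs):
    # Fraction of (meter, reference) pairs meeting the limit; >=0.95 required
    ok = sum(within_iso_15197_2013(m, r) for m, r in pairs)
    return ok / len(pairs)
```

The study's 98.5% to 100% figures are this fraction computed per test strip lot over roughly 100 participants' samples.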
Assessing new patient access to mental health providers in HMO networks.
Barry, Colleen L; Venkatesh, Mohini; Busch, Susan H
2008-12-01
This study examined access to mental health providers in health maintenance organization (HMO) networks. A telephone survey was conducted with a stratified random sample of mental health providers listed as being in a network for at least one of six HMOs operating in Connecticut (response rate=72%; N=366). Data were collected between December 2006 and March 2007. Measures included the accuracy of network listings, acceptance rates of new patients, and reasons for not accepting new patients. Acceptance of new patients was defined as scheduling an appointment within two weeks from the time of the initial contact. Logistic regression was used to examine acceptance rates of new patients while controlling for type of provider (social worker, nurse, psychologist, or psychiatrist) and practice characteristics. Findings indicate that 17% of sampled HMO network listings were inaccurate. Among the providers with an accurate listing, 73% were accepting new HMO patients and 76% were accepting new self-pay patients. These aggregate acceptance rates of new patients mask differences among providers, with psychiatrists significantly less likely than other providers to accept new patients (55% of psychiatrists were accepting new patients). The most common reason for not accepting new patients was the lack of available appointments. Results indicate that access to mental health providers in HMO networks varied by type of provider. For HMO enrollees seeking treatment for mental health problems from a provider with a master's degree in social work (M.S.W. degree), network access was not a major problem. Scheduling an appointment with a psychiatrist, particularly a psychiatrist treating children only, was more difficult.
Arnould, Valérie M. R.; Reding, Romain; Bormann, Jeanne; Gengler, Nicolas; Soyeurt, Hélène
2015-01-01
Simple Summary Reducing the frequency of milk recording decreases the costs of official milk recording. However, this approach can negatively affect the accuracy of predicting daily yields. Equations to predict daily yield from morning or evening data were developed in this study for fatty milk components from traits recorded easily by milk recording organizations. The correlation values ranged from 96.4% to 97.6% (96.9% to 98.3%) when the daily yields were estimated from the morning (evening) milkings. The simplicity of the proposed models which do not include the milking interval should facilitate their use by breeding and milk recording organizations. Abstract Reducing the frequency of milk recording would help reduce the costs of official milk recording. However, this approach could also negatively affect the accuracy of predicting daily yields. This problem has been investigated in numerous studies. In addition, published equations take into account milking intervals (MI), and these are often not available and/or are unreliable in practice. The first objective of this study was to propose models in which the MI was replaced by a combination of data easily recorded by dairy farmers. The second objective was to further investigate the fatty acids (FA) present in milk. Equations to predict daily yield from AM or PM data were based on a calibration database containing 79,971 records related to 51 traits [milk yield (expected AM, expected PM, and expected daily); fat content (expected AM, expected PM, and expected daily); fat yield (expected AM, expected PM, and expected daily; g/day); levels of seven different FAs or FA groups (expected AM, expected PM, and expected daily; g/dL milk), and the corresponding FA yields for these seven FA types/groups (expected AM, expected PM, and expected daily; g/day)]. These equations were validated using two distinct external datasets. 
The results obtained from the proposed models were compared to previously published results for models which included a MI effect. The corresponding correlation values ranged from 96.4% to 97.6% when the daily yields were estimated from the AM milkings and ranged from 96.9% to 98.3% when the daily yields were estimated from the PM milkings. The simplicity of these proposed models should facilitate their use by breeding and milk recording organizations. PMID:26479379
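A minimal sketch of the MI-free idea above: predict daily yield from a single (AM) milking plus an easily recorded covariate, instead of using the milking interval. The exact model terms are not listed in the abstract, so the covariate choice (days in milk) and all data here are assumptions on synthetic numbers.

```python
import numpy as np

def fit_daily_from_am(am, cov, daily):
    # OLS: daily yield ~ intercept + AM yield + easily recorded covariate
    X = np.column_stack([np.ones(len(am)), am, cov])
    coef, *_ = np.linalg.lstsq(X, daily, rcond=None)
    return coef

# Synthetic records: AM milking (kg), days in milk, and observed daily yield
rng = np.random.default_rng(2)
am = rng.uniform(8, 20, size=100)
dim = rng.uniform(5, 300, size=100)
daily = 1.9 * am + 0.002 * dim + rng.normal(0, 0.8, size=100)

coef = fit_daily_from_am(am, dim, daily)
pred = coef[0] + coef[1] * am + coef[2] * dim
corr = float(np.corrcoef(pred, daily)[0, 1])
```

The study's reported 96.4% to 98.3% correlations between estimated and observed daily yields are this kind of correlation, computed on external validation datasets.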
de Froment, Adrian J.; Rubenstein, Daniel I.; Levin, Simon A.
2014-01-01
The standard view in biology is that all animals, from bumblebees to human beings, face a trade-off between speed and accuracy as they search for resources and mates, and attempt to avoid predators. For example, the more time a forager spends out of cover gathering information about potential food sources the more likely it is to make accurate decisions about which sources are most rewarding. However, when the cost of time spent out of cover rises (e.g. in the presence of a predator) the optimal strategy is for the forager to spend less time gathering information and to accept a corresponding decline in the accuracy of its decisions. We suggest that this familiar picture is missing a crucial dimension: the amount of effort an animal expends on gathering information in each unit of time. This is important because an animal that can respond to changing time costs by modulating its level of effort per-unit-time does not have to accept the same decrease in accuracy that an animal limited to a simple speed-accuracy trade-off must bear in the same situation. Instead, it can direct additional effort towards (i) reducing the frequency of perceptual errors in the samples it gathers or (ii) increasing the number of samples it gathers per-unit-time. Both of these have the effect of allowing it to gather more accurate information within a given period of time. We use a modified version of a canonical model of decision-making (the sequential probability ratio test) to show that this ability to substitute effort for time confers a fitness advantage in the face of changing time costs. We predict that the ability to modulate effort levels will therefore be widespread in nature, and we lay out testable predictions that could be used to detect adaptive modulation of effort levels in laboratory and field studies. 
Our understanding of decision-making in all species, including our own, will be improved by this more ecologically-complete picture of the three-way tradeoff between time, effort per-unit-time and accuracy. PMID:25522281
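The canonical model mentioned above, the sequential probability ratio test, accumulates a log-likelihood ratio per sample and stops at Wald's thresholds. A minimal sketch for Bernoulli samples; the foraging framing and parameter values are illustrative, not the authors' modified model.

```python
import math

def sprt(sample, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    # Wald's thresholds from the target error rates alpha (false accept of
    # p1) and beta (false accept of p0)
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for i, x in enumerate(sample, 1):
        # Each sample shifts the log-likelihood ratio toward p1 or p0
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept p1", i
        if llr <= lower:
            return "accept p0", i
    return "undecided", len(sample)

# A maximally informative stream (all successes) reaches a decision quickly
decision, n_samples = sprt([1] * 20)
```

The paper's point maps onto the parameters here: raising per-sample effort lowers perceptual error (p0 and p1 separate further) or raises the sampling rate, so the thresholds are reached in less time without loosening alpha or beta.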
de Froment, Adrian J; Rubenstein, Daniel I; Levin, Simon A
2014-12-01
The standard view in biology is that all animals, from bumblebees to human beings, face a trade-off between speed and accuracy as they search for resources and mates, and attempt to avoid predators. For example, the more time a forager spends out of cover gathering information about potential food sources the more likely it is to make accurate decisions about which sources are most rewarding. However, when the cost of time spent out of cover rises (e.g. in the presence of a predator) the optimal strategy is for the forager to spend less time gathering information and to accept a corresponding decline in the accuracy of its decisions. We suggest that this familiar picture is missing a crucial dimension: the amount of effort an animal expends on gathering information in each unit of time. This is important because an animal that can respond to changing time costs by modulating its level of effort per-unit-time does not have to accept the same decrease in accuracy that an animal limited to a simple speed-accuracy trade-off must bear in the same situation. Instead, it can direct additional effort towards (i) reducing the frequency of perceptual errors in the samples it gathers or (ii) increasing the number of samples it gathers per-unit-time. Both of these have the effect of allowing it to gather more accurate information within a given period of time. We use a modified version of a canonical model of decision-making (the sequential probability ratio test) to show that this ability to substitute effort for time confers a fitness advantage in the face of changing time costs. We predict that the ability to modulate effort levels will therefore be widespread in nature, and we lay out testable predictions that could be used to detect adaptive modulation of effort levels in laboratory and field studies. 
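The decision model the authors modify is Wald's sequential probability ratio test (SPRT). Below is a minimal sketch of the unmodified SPRT for Bernoulli observations; the hypothesis parameters and error rates are illustrative, not taken from the paper.

```python
import math

def sprt(samples, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli data: accumulate log-likelihood ratios
    until a decision threshold derived from the error rates is crossed."""
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1 (p = p1)
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0 (p = p0)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# A run of successes: the test should stop early and accept H1.
decision, n_used = sprt([1] * 20)
```

In the paper's terms, "effort per-unit-time" maps onto the quality or rate of the samples fed into such a test: cleaner or more frequent samples let the accumulated ratio cross a threshold sooner without loosening the error rates.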
Smoski, Moria J; Keng, Shian-Ling; Ji, Jie Lisa; Moore, Tyler; Minkel, Jared; Dichter, Gabriel S
2015-09-01
Mood disorders are characterized by impaired emotion regulation abilities, reflected in alterations in frontolimbic brain functioning during regulation. However, little is known about differences in brain function when comparing regulatory strategies. Reappraisal and emotional acceptance are effective in downregulating negative affect, and are components of effective depression psychotherapies. Investigating neural mechanisms of reappraisal vs emotional acceptance in remitted major depressive disorder (rMDD) may yield novel mechanistic insights into depression risk and prevention. Thirty-seven individuals (18 rMDD, 19 controls) were assessed during a functional magnetic resonance imaging task requiring reappraisal, emotional acceptance or no explicit regulation while viewing sad images. Lower negative affect was reported following reappraisal than acceptance, and was lower following acceptance than no explicit regulation. In controls, the acceptance > reappraisal contrast revealed greater activation in left insular cortex and right prefrontal gyrus, and less activation in several other prefrontal regions. Compared with controls, the rMDD group had greater paracingulate and right midfrontal gyrus (BA 8) activation during reappraisal relative to acceptance. Compared with reappraisal, acceptance is associated with activation in regions linked to somatic and emotion awareness, although this activation is associated with less reduction in negative affect. Additionally, a history of MDD moderated these effects. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
A hierarchical spatial model for well yield in complex aquifers
NASA Astrophysics Data System (ADS)
Montgomery, J.; O'sullivan, F.
2017-12-01
Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
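As a sketch of the Markov Chain Monte Carlo machinery such a hierarchical Bayesian framework relies on, here is a minimal random-walk Metropolis sampler applied to a toy log-yield posterior. The prior, likelihood, and data are invented for illustration; they are not the authors' model.

```python
import math
import random

def metropolis(log_post, x0, n_steps=2000, step=0.5, seed=1):
    """Random-walk Metropolis: the basic MCMC building block."""
    rng = random.Random(seed)
    x, lp, samples = x0, log_post(x0), []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior for a mean log-yield mu: N(0, 10^2) prior, unit-variance
# normal likelihood around five observed log-yields (invented numbers).
obs = [2.1, 2.4, 1.9, 2.2, 2.0]

def log_post(mu):
    return -0.5 * (mu / 10.0) ** 2 - 0.5 * sum((y - mu) ** 2 for y in obs)

chain = metropolis(log_post, x0=0.0)
posterior_mean = sum(chain[500:]) / len(chain[500:])
```

In the hierarchical setting the abstract describes, the same accept/reject loop runs over many coupled parameters, with geostatistical and physical constraints entering through the prior.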
Chernyshev, Oleg Y; Garami, Zsolt; Calleja, Sergio; Song, Joon; Campbell, Morgan S; Noser, Elizabeth A; Shaltoni, Hashem; Chen, Chin-I; Iguchi, Yasuyuki; Grotta, James C; Alexandrov, Andrei V
2005-01-01
We routinely perform an urgent bedside neurovascular ultrasound examination (NVUE) with carotid/vertebral duplex and transcranial Doppler (TCD) in patients with acute cerebral ischemia. We aimed to determine the yield and accuracy of NVUE to identify lesions amenable for interventional treatment (LAITs). NVUE was performed with portable carotid duplex and TCD using standardized fast-track (<15 minutes) insonation protocols. Digital subtraction angiography (DSA) was the gold standard for identifying LAIT. These lesions were defined as proximal intra- or extracranial occlusions, near-occlusions, > or =50% stenoses or thrombus in the symptomatic artery. One hundred and fifty patients (70 women, mean age 66+/-15 years) underwent NVUE at a median of 128 minutes after symptom onset. Fifty-four patients (36%) received intravenous or intra-arterial thrombolysis (median National Institutes of Health Stroke Scale (NIHSS) score 14, range 4 to 29; 81% had NIHSS > or =10 points). NVUE demonstrated LAITs in 98% of patients eligible for thrombolysis, 76% of acute stroke patients ineligible for thrombolysis (n=63), and 42% of patients with transient ischemic attack (n=33), P<0.001. Urgent DSA was performed in 30 patients on average 230 minutes after NVUE. Compared with DSA, NVUE predicted LAIT presence with 100% sensitivity and 100% specificity, although individual accuracy parameters for TCD and carotid duplex specific to occlusion location ranged from 75% to 96% because of the presence of tandem lesions and a 10% rate of absent temporal windows. Bedside neurovascular ultrasound examination, combining carotid/vertebral duplex with TCD, yields a substantial proportion of LAITs in excellent agreement with urgent DSA.
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; ...
2016-02-11
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
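A minimal sketch of genomic prediction of this flavor, assuming a ridge-regression (RR-BLUP-like) predictor and synthetic marker data. The `decorrelate` step is one hypothetical reading of "transforming the marker data through a marker correlation matrix", not the paper's actual transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 150 half-sib families x 80 markers, 5 causal markers.
X = rng.standard_normal((150, 80))
beta_true = np.zeros(80)
beta_true[:5] = 1.0
y = X @ beta_true + 0.5 * rng.standard_normal(150)

def ridge_fit_predict(X_tr, y_tr, X_te, lam=10.0):
    """Ridge regression on markers (an RR-BLUP-like predictor)."""
    p = X_tr.shape[1]
    beta = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ y_tr)
    return X_te @ beta

def decorrelate(X):
    """Project markers onto decorrelated, rescaled axes of the marker
    correlation matrix (a hypothetical stand-in for the paper's transform)."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    keep = vals > 1e-8
    return X @ vecs[:, keep] / np.sqrt(vals[keep])

train, test = slice(0, 100), slice(100, 150)
acc_raw = np.corrcoef(ridge_fit_predict(X[train], y[train], X[test]),
                      y[test])[0, 1]
Z = decorrelate(X)
acc_dec = np.corrcoef(ridge_fit_predict(Z[train], y[train], Z[test]),
                      y[test])[0, 1]
```

Prediction accuracy here is, as in the study, the correlation between predicted and observed values. In a real analysis the transform would be estimated on training data only, to avoid leakage.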
Vázquez Blanco, E; López Mahía, P; Muniategui Lorenzo, S; Prada Rodríguez, D; Fernández Fernández, E
2000-02-01
Microwave energy was applied to extract polycyclic aromatic hydrocarbons (PAHs) and linear aliphatic hydrocarbons (LAHs) from marine sediments. The influence of experimental conditions, such as different extracting solvents and mixtures, microwave power, irradiation time and number of samples extracted per run has been tested using real marine sediment samples; volume of the solvent, sample quantity and matrix effects were also evaluated. The yield of extracted compounds obtained by microwave irradiation was compared with that obtained using the traditional Soxhlet extraction. The best results were achieved with a mixture of acetone and hexane (1:1), and recoveries ranged from 92 to 106%. The extraction time is dependent on the irradiation power and the number of samples extracted per run, so when the irradiation power was set to 500 W, the extraction times varied from 6 min for 1 sample to 18 min for 8 samples. Analytical determinations were carried out by high-performance liquid chromatography (HPLC) with an ultraviolet-visible photodiode-array detector for PAHs and gas chromatography (GC) using a FID detector for LAHs. To test the accuracy of the microwave-assisted extraction (MAE) technique, optimized methodology was applied to the analysis of standard reference material (SRM 1941), obtaining acceptable results.
Cue-Dependent Interference in Comprehension
ERIC Educational Resources Information Center
Van Dyke, Julie A.; McElree, Brian
2011-01-01
The role of interference as a primary determinant of forgetting in memory has long been accepted, however its role as a contributor to poor comprehension is just beginning to be understood. The current paper reports two studies, in which speed-accuracy tradeoff and eye-tracking methodologies were used with the same materials to provide converging…
20 CFR 655.1308 - Offered wage rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... Recruitment for this purpose begins when the job order is accepted by the SWA for posting. (d) Wage offer. The... job offers for beginning level employees who have a basic understanding of the occupation. These... monitored and reviewed for accuracy. (2) Level II wage rates are assigned to job offers for employees who...
Commentary: Using Mixed Methods to Transform Special Education Research
ERIC Educational Resources Information Center
Trainor, Audrey A.
2011-01-01
Klingner and Boardman (this issue) offer a cogent and compelling argument for opening the door for the acceptance and use of mixed methods in special education research. Self-identifying as pragmatists, they embody this paradigmatic view by focusing on the utility, efficacy, and accuracy of mixed methods, an argument that should appeal to the…
The Use of Protocols as Educational Tools for House Officers.
ERIC Educational Resources Information Center
Dworin, Aaron M.; Stross, Jeoffrey K.
1979-01-01
A project was initiated to determine whether protocols for the management of dysuria would be accepted by clinic house officers and whether these protocols could influence the care given to patients. Positive results were obtained in both areas. Significant improvement in documentation of patient history and diagnostic accuracy were noted. (JMD)
Considerations for Test Selection: How Do Validity and Reliability Impact Diagnostic Decisions?
ERIC Educational Resources Information Center
Friberg, Jennifer C.
2010-01-01
Nine preschool and school-age language assessment tools found to have acceptable levels of identification accuracy were evaluated to determine their overall levels of psychometric validity for use in diagnosing the presence/absence of language impairment. Eleven specific criteria based on those initially devised by McCauley and Swisher (1984) were…
The Danger of Inadequate Conceptualisation in PISA for Education Policy
ERIC Educational Resources Information Center
Gaber, Slavko; Cankar, Gregor; Umek, Ljubica Marjanovic; Tasner, Veronika
2012-01-01
Due to the broad acceptance of the Programme for International Student Assessment (PISA) and other comparative studies as instruments of policymaking, its accuracy is essential. This article attempts to demonstrate omissions in the conceptualisation, and consequently in calculation and interpretation, of one of the central points of PISA 2006 and…
USDA-ARS?s Scientific Manuscript database
Most efforts to harness the power of big data for ecology and environmental sciences focus on data and metadata sharing, standardization, and accuracy. However, many scientists have not accepted the data deluge as an integral part of their research because the current scientific method is not scalab...
A CFO's Perspective on the Quality Revolution.
ERIC Educational Resources Information Center
Norton, Alan J.
1994-01-01
The chief financial officer (CFO) of St. John Fisher College (New York) analyzes the costs associated with the implementation of quality management at St. John Fisher and outlines one way to determine whether the investment is yielding an acceptable internal rate of return. (DB)
78 FR 2370 - New England Fishery Management Council (NEFMC); Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-11
... p.m. to address employment matters. Tuesday, January 29, 2013 Following introductions and any... catch based on Scientific and Statistical Committee advice, management uncertainty, optimum yield and a...'s Scientific and Statistical Committee will report on its acceptable biological catch...
Reducing Error Rates for Iris Image using higher Contrast in Normalization process
NASA Astrophysics Data System (ADS)
Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa
2017-08-01
Iris recognition is among the most secure and fastest means of identification and authentication. However, iris recognition systems suffer from blurring, low contrast and poor illumination due to low-quality images, which compromises the accuracy of the system. The acceptance or rejection rate for a verified user depends solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore this paper adopts a histogram equalization technique to address the problems of False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. Histogram equalization enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the histogram equalization technique reduced FRR and FAR compared to the existing techniques.
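Histogram equalization itself is a standard transform: each grey level is mapped through the normalized cumulative histogram, spreading a compressed intensity range across the full scale. A minimal sketch on a synthetic low-contrast image (the image and bin count are illustrative):

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Classic histogram equalization for an 8-bit grayscale image:
    map each grey level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[img]

# A synthetic low-contrast strip: intensities squeezed into [100, 120].
low = np.linspace(100, 120, 64 * 64).reshape(64, 64).astype(np.uint8)
eq = equalize_hist(low)
```

After equalization the narrow [100, 120] range is stretched to span the full 0-255 scale, which is the contrast boost the abstract applies at the normalization stage.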
Automatic multi-organ segmentation using learning-based segmentation and level set optimization.
Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin
2011-01-01
We present a novel generic segmentation system for the fully automatic multi-organ segmentation from CT medical images. Thereby we combine the advantages of learning-based approaches on point cloud-based shape representations, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys we show that the proposed system yields average surface errors of 1.17-2.89 mm. Thereby the level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.
Hillarp, A; Friedman, K D; Adcock-Funk, D; Tiefenbacher, S; Nichols, W L; Chen, D; Stadler, M; Schwartz, B A
2015-11-01
The ability of von Willebrand factor (VWF) to bind platelet GP Ib and promote platelet plug formation is measured in vitro using the ristocetin cofactor (VWF:RCo) assay. Automated assay systems make testing more accessible for diagnosis, but do not necessarily improve sensitivity and accuracy. We assessed the performance of a modified automated VWF:RCo assay protocol for the Behring Coagulation System (BCS®) compared to other available assay methods. Results from different VWF:RCo assays in a number of specialized commercial and research testing laboratories were compared using plasma samples with varying VWF:RCo activities (0-1.2 IU mL⁻¹). Samples were prepared by mixing VWF concentrate or plasma standard into VWF-depleted plasma. Commercially available lyophilized standard human plasma was also studied. Emphasis was put on the low measuring range. VWF:RCo accuracy was calculated based on the expected values, whereas precision was obtained from repeated measurements. In the physiological concentration range, most of the automated tests resulted in acceptable accuracy, with varying reproducibility dependent on the method. However, several assays were inaccurate in the low measuring range. Only the modified BCS protocol showed acceptable accuracy over the entire measuring range with improved reproducibility. A modified BCS® VWF:RCo method can improve sensitivity and thus enhances the measuring range. Furthermore, the modified BCS® assay displayed good precision. This study indicates that the specific modifications - namely the combination of increased ristocetin concentration, reduced platelet content, VWF-depleted plasma as on-board diluent and a two-curve calculation mode - reduce the issues seen with current VWF:RCo activity assays. © 2015 John Wiley & Sons Ltd.
Nikolac Gabaj, Nora; Miler, Marijana; Vrtarić, Alen; Hemar, Marina; Filipi, Petra; Kocijančić, Marija; Šupak Smolčić, Vesna; Ćelap, Ivana; Šimundić, Ana-Maria
2018-04-25
The aim of our study was to perform verification of serum indices on three clinical chemistry platforms. This study was done on three analyzers: Abbott Architect c8000, Beckman Coulter AU5800 (BC) and Roche Cobas 6000 c501. The following analytical specifications were verified: precision (two patient samples), accuracy (the sample with the highest concentration of interferent was serially diluted and measured values compared to theoretical values), comparability (120 patient samples) and cross-reactivity (samples with increasing concentrations of interferent were divided into two aliquots and the remaining interferents were added to each aliquot; measurements were done before and after adding interferents). The best results for precision were obtained for the H index (0.72%-2.08%). Accuracy for the H index was acceptable for Cobas and BC, while on Architect, deviations in the high concentration range were observed (y=0.02 [0.01-0.07]+1.07 [1.06-1.08]x). All three analyzers showed acceptable results in evaluating accuracy of the L index and unacceptable results for the I index. The H index was comparable between BC and both Architect (Cohen's κ [95% CI]=0.795 [0.692-0.898]) and Roche (Cohen's κ [95% CI]=0.825 [0.729-0.922]), while Roche and Architect were not comparable. The I index was not comparable between all analyzer combinations, while the L index was only comparable between Abbott and BC. Cross-reactivity analysis mostly showed that serum indices measurement is affected when a combination of interferences is present. There is heterogeneity between analyzers in hemolysis, icterus, lipemia (HIL) quality performance. Verification of serum indices in routine work is necessary to establish analytical specifications.
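Cohen's κ, used above to compare analyzers, has a short closed form: observed agreement minus chance agreement, scaled by the maximum possible excess over chance. A sketch with invented analyzer flags (not the study's data):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: rater agreement corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Two analyzers flagging samples as hemolyzed (H) or normal (N);
# they disagree on one of ten samples.
analyzer_1 = list("HHHNNNNNHN")
analyzer_2 = list("HHHNNNNNNN")
kappa = cohens_kappa(analyzer_1, analyzer_2)
```

Here observed agreement is 0.9 and chance agreement 0.54, giving κ ≈ 0.78, in the same "substantial agreement" band as the H-index comparisons reported above.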
From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation
NASA Astrophysics Data System (ADS)
Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj
2007-09-01
In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performance of the proposed IDCTs along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4 is also demonstrated.
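The fixed-point parameters in such architectures are dyadic rationals k/2^b standing in for irrational cosine factors, and raising the bit depth b tightens the approximation. A toy illustration of this accuracy-vs-wordlength trade-off (not the paper's actual lifting coefficients):

```python
import math

def dyadic_approx(x, bits):
    """Nearest dyadic rational k / 2**bits to x, with its absolute error."""
    k = round(x * (1 << bits))
    approx = k / (1 << bits)
    return approx, abs(approx - x)

c = math.cos(math.pi / 8)  # an irrational butterfly factor in the DCT
_, err16 = dyadic_approx(c, 16)
_, err30 = dyadic_approx(c, 30)
```

Nearest rounding bounds the error by 2^-(bits+1), so each extra bit roughly halves the coefficient error; accumulated over many butterflies, this is what separates the drifting behaviour of 16-bit and 32-bit variants.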
Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge.
Lau, Jey Han; Clark, Alexander; Lappin, Shalom
2017-07-01
The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgements are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic. Copyright © 2016 Cognitive Science Society, Inc.
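One family of acceptability measures of the kind described normalizes a sentence's log-probability by subtracting its unigram (lexical-frequency) log-probability and dividing by length, in the style of SLOR. A toy version with hand-picked probabilities (illustrative numbers, not the paper's models):

```python
import math

def slor(logp_sentence, unigram_logps):
    """SLOR-style score: subtract the unigram log-probability and divide
    by length, removing length and lexical-frequency confounds."""
    return (logp_sentence - sum(unigram_logps)) / len(unigram_logps)

# Same four word frequencies for both orders, so any score difference
# reflects the model's sentence probability, not length or word frequency.
uni = [math.log(p) for p in (0.01, 0.002, 0.01, 0.005)]
score_natural = slor(math.log(1e-8), uni)     # well-formed order
score_scrambled = slor(math.log(1e-12), uni)  # scrambled order
```

Because both word strings share the same unigram term and length, the normalized scores differ only through the language model's probabilities, which is exactly the confound removal the abstract describes.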
Development of a semi-automated combined PET and CT lung lesion segmentation framework
NASA Astrophysics Data System (ADS)
Rossi, Farli; Mokri, Siti Salasiah; Rahni, Ashrani Aizzuddin Abd.
2017-03-01
Segmentation is one of the most important steps in automated medical diagnosis applications, which affects the accuracy of the overall system. In this paper, we propose a semi-automated segmentation method for extracting lung lesions from thoracic PET/CT images by combining low level processing and active contour techniques. The lesions are first segmented in PET images which are first converted to standardised uptake values (SUVs). The segmented PET images then serve as an initial contour for subsequent active contour segmentation of corresponding CT images. To evaluate its accuracy, the Jaccard Index (JI) was used as a measure of the accuracy of the segmented lesion compared to alternative segmentations from the QIN lung CT segmentation challenge, which is possible by registering the whole body PET/CT images to the corresponding thoracic CT images. The results show that our proposed technique has acceptable accuracy in lung lesion segmentation with JI values of around 0.8, especially when considering the variability of the alternative segmentations.
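The Jaccard index used for evaluation is simply intersection over union of two binary masks; a minimal sketch on synthetic masks:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Two overlapping square "lesions" on a 10x10 grid (16 pixels each,
# 9 pixels shared), so JI = 9 / (16 + 16 - 9).
m1 = np.zeros((10, 10), dtype=bool)
m1[2:6, 2:6] = True
m2 = np.zeros((10, 10), dtype=bool)
m2[3:7, 3:7] = True
ji = jaccard(m1, m2)
```

A JI of around 0.8, as reported above, means the segmented and reference lesions share about four-fifths of their combined voxels.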
A Comparison of Machine Learning Approaches for Corn Yield Estimation
NASA Astrophysics Data System (ADS)
Kim, N.; Lee, Y. W.
2017-12-01
Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, the climate data of the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracies in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
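The correlation coefficient used to rank the models can be sketched directly; the yield numbers below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def pearson_r(pred, obs):
    """Correlation coefficient between predicted and observed yields."""
    p, o = pred - pred.mean(), obs - obs.mean()
    return (p * o).sum() / np.sqrt((p ** 2).sum() * (o ** 2).sum())

rng = np.random.default_rng(42)
obs = rng.normal(170.0, 20.0, 100)         # synthetic county yields
skilful = obs + rng.normal(0.0, 8.0, 100)  # predictions tracking truth
uninformed = rng.normal(170.0, 20.0, 100)  # predictions ignoring truth
r_skilful = pearson_r(skilful, obs)
r_uninformed = pearson_r(uninformed, obs)
```

Comparing candidate models (SVM, RF, DNN) then reduces to computing this statistic for each model's held-out predictions and ranking the results.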
Factor regression for interpreting genotype-environment interaction in bread-wheat trials.
Baril, C P
1992-05-01
The French INRA wheat (Triticum aestivum L. em Thell.) breeding program is based on multilocation trials to produce high-yielding, adapted lines for a wide range of environments. Differential genotypic responses to variable environment conditions limit the accuracy of yield estimations. Factor regression was used to partition the genotype-environment (GE) interaction into four biologically interpretable terms. Yield data were analyzed from 34 wheat genotypes grown in four environments using 12 auxiliary agronomic traits as genotypic and environmental covariates. Most of the GE interaction (91%) was explained by the combination of only three traits: 1,000-kernel weight, lodging susceptibility and spike length. These traits are easily measured in breeding programs, therefore factor regression model can provide a convenient and useful prediction method of yield.
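The core of factor regression here is double-centering the two-way yield table to isolate the GE interaction, then regressing each environment's interaction column on a genotypic covariate. A toy sketch in which the interaction is generated by a single covariate (invented data, not the INRA trials):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy trial: 12 genotypes x 4 environments, with the GE interaction
# generated entirely by one genotypic covariate (e.g. 1,000-kernel weight).
cov = rng.normal(0.0, 1.0, 12)                # genotypic covariate
sens = np.array([0.0, 1.0, -1.0, 2.0])        # environment sensitivities
Y = (5.0
     + rng.normal(0.0, 0.5, 12)[:, None]      # genotype main effects
     + rng.normal(0.0, 0.5, 4)[None, :]       # environment main effects
     + np.outer(cov, sens))                   # covariate-driven GE term

# Double-centre the table: main effects vanish, interaction remains.
GE = Y - Y.mean(1, keepdims=True) - Y.mean(0, keepdims=True) + Y.mean()

# Regress each environment's interaction column on the centred covariate;
# the explained share is the GE fraction attributed to the covariate.
c = cov - cov.mean()
slopes = GE.T @ c / (c ** 2).sum()
share = (np.outer(c, slopes) ** 2).sum() / (GE ** 2).sum()
```

In this constructed example the covariate explains all of the interaction (share = 1); in the study, three measured traits jointly accounted for 91% of the GE sum of squares.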
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detailed and approximate coefficients as well as relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high-density (128-channel) EEG dataset was used to validate the proposed method on two classes: (1) EEG signals recorded during complex cognitive tasks using Raven's Advanced Progressive Matrices (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via the SVM classifier for approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for detailed coefficients (D5), deriving from the sub-band range 3.90-7.81 Hz, were 98.57 and 98.39% for SVM and KNN, respectively. Accuracy rates for MLP and NB classifiers were comparable at 97.11-89.63% and 91.60-81.07% for A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied on a public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance using machine learning classifiers compared to extant quantitative feature extraction. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a higher degree of accuracy.
PMID:29209190
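A minimal sketch of the wavelet-energy features described above, using a hand-rolled Haar decomposition on a synthetic signal. The real study uses a different wavelet family and high-density EEG, not a pure sine; this only illustrates the relative-energy computation.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar transform: averages + differences."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

def relative_wavelet_energy(signal, levels=3):
    """Energy share of each detail sub-band plus the final approximation."""
    energies, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in approx))
    total = sum(energies)
    return [e / total for e in energies]

# A slow oscillation: energy should concentrate in the coarse bands.
sig = [math.sin(2 * math.pi * i / 64) for i in range(256)]
rwe = relative_wavelet_energy(sig)
```

Because the orthonormal Haar transform preserves energy, the shares sum to one, and a slow oscillation (analogous to low-frequency EEG rhythms) loads almost entirely on the coarse approximation band, mirroring how the A5 band carried the most discriminative information above.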
Hsieh, Chi-Wen; Liu, Tzu-Chiang; Wang, Jui-Kai; Jong, Tai-Lang; Tiu, Chui-Mei
2011-08-01
The Tanner-Whitehouse III (TW3) method is popular for assessing children's bone age, but it is time-consuming in clinical settings; to simplify this, a grouped-TW algorithm (GTA) was developed. A total of 534 left-hand roentgenograms of subjects aged 2-15 years, including 270 training and 264 testing datasets, were evaluated by a senior pediatrician. Next, GTA was used to choose the appropriate candidate of radius, ulna, and short bones and to classify the bones into three groups by data mining. Group 1 was composed of the maturity pattern of the radius and the middle phalange of the third and fifth digits and three weights were obtained by data mining, yielding a result similar to that of TW3. Subsequently, new bone-age assessment tables were constructed for boys and girls by linear regression and fuzzy logic. In addition, the Bland-Altman plot was utilized to compare accuracy between the GTA, the Greulich-Pyle (GP), and the TW3 method. The relative accuracy between the GTA and the TW3 was 96.2% in boys and 95% in girls, with an error of 1 year, while that between the assessment results of the GP and TW3 was about 87%, with an error of 1 year. However, even if the three weights were not optimally processed, GTA yielded a marginal result with an accuracy of 78.2% in boys and 79.6% in girls. GTA can efficiently simplify the complexity of the TW3 method, while maintaining almost the same accuracy. The relative accuracy between the assessment results of GTA and GP can also be marginal. © 2011 The Authors. Pediatrics International © 2011 Japan Pediatric Society.
Quantitative falls risk estimation through multi-sensor assessment of standing balance.
Greene, Barry R; McGrath, Denise; Walsh, Lorcan; Doheny, Emer P; McKeown, David; Garattini, Chiara; Cunningham, Clodagh; Crosby, Lisa; Caulfield, Brian; Kenny, Rose A
2012-12-01
Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities, a pressure-sensitive platform sensor and a body-worn inertial sensor mounted on the lower back, from 120 community-dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. A support vector machine using one model to classify all data points yielded a mean classification accuracy of 71.52% (95% CI: 68.82-74.28) in classifying falls history. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85-77.17) and 73.33% (95% CI: 69.88-76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42%; 95% CI: 56.96-61.88). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments.
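A confidence interval like those reported above can be attached to a classification accuracy in several ways; one standard generic choice for a proportion is the Wilson score interval, sketched below with hypothetical counts (this is not necessarily how the authors computed their CIs, which may derive from repeated cross-validation):

```python
import math

def wilson_interval(correct, total, z=1.96):
    """95% Wilson score interval for a proportion, e.g. classification accuracy."""
    p = correct / total
    denom = 1.0 + z * z / total
    centre = (p + z * z / (2.0 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / total
                                   + z * z / (4.0 * total * total))
    return centre - half, centre + half

# Hypothetical: 86 of 120 participants classified correctly (~71.7% accuracy)
lo, hi = wilson_interval(86, 120)
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] and behaves sensibly for accuracies near 0% or 100%.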
Nonportable computed radiography of the chest--radiologists' acceptance
NASA Astrophysics Data System (ADS)
Gennari, Rose C.; Gur, David; Miketic, Linda M.; Campbell, William L.; Oliver, James H., III; Plunkett, Michael B.
1994-04-01
Following a large ROC study to assess the diagnostic accuracy of PA chest computed radiography (CR) images displayed in a variety of formats, we asked nine experienced radiologists to subjectively assess their acceptance of and preferences for display modes in the primary diagnosis of erect PA chest images. Our results indicate that radiologists felt somewhat less comfortable interpreting CR images displayed on either laser-printed films or workstations as compared to conventional films. The use of four minified images was thought to somewhat decrease diagnostic confidence, as well as to increase the time of interpretation. The reverse-mode (black bone) images increased radiologists' confidence level in the detection of soft tissue abnormalities.
Palm, Peter; Josephson, Malin; Mathiassen, Svend Erik; Kjellberg, Katarina
2016-06-01
We evaluated the intra- and inter-observer reliability and criterion validity of an observation protocol, developed in an iterative process involving practicing ergonomists, for assessment of working technique during cash register work for the purpose of preventing upper extremity symptoms. Two ergonomists independently assessed seventeen 15-min videos of cash register work on two occasions each, as a basis for examining reliability. Criterion validity was assessed by comparing these assessments with meticulous video-based analyses by researchers. Intra-observer reliability was acceptable (i.e. proportional agreement >0.7 and kappa >0.4) for 10/10 questions. Inter-observer reliability was acceptable for only 3/10 questions. An acceptable inter-observer reliability combined with an acceptable criterion validity was obtained for only one working technique aspect, 'Quality of movements'. Thus, major elements of the cashiers' working technique could not be assessed with acceptable accuracy from short periods of observation by one observer, such as is often desired by practitioners. Practitioner Summary: We examined an observation protocol for assessing working technique in cash register work. It was feasible in use, but inter-observer reliability and criterion validity were generally not acceptable when working technique aspects were assessed from short periods of work. We recommend that the protocol be used for educational purposes only.
Resnick, Daniel K
2003-06-01
Fluoroscopy-based frameless stereotactic systems provide feedback to the surgeon using virtual fluoroscopic images. The real-life accuracy of these virtual images has not been compared with traditional fluoroscopy in a clinical setting. We prospectively studied 23 consecutive cases. In two cases, registration errors precluded the use of virtual fluoroscopy. Pedicle probes placed with virtual fluoroscopic imaging were imaged with traditional fluoroscopy in the remaining 21 cases. The position of the probes was judged to be ideal, acceptable but not ideal, or not acceptable based on the traditional fluoroscopic images. Virtual fluoroscopy was used to place probes in 97 pedicles from L1 to the sacrum. Eighty-eight probes were judged to be in ideal position, eight were judged to be acceptable but not ideal, and one probe was judged to be in an unacceptable position. This probe was angled toward an adjacent disc space. Therefore, 96 of 97 probes placed using virtual fluoroscopy were found to be in an acceptable position. The positive predictive value for acceptable screw placement with virtual fluoroscopy compared with traditional fluoroscopy was 99%. A probe placed with virtual fluoroscopic guidance will be judged to be in an acceptable position when imaged with traditional fluoroscopy 99% of the time.
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to capture the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
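The stability-selection half of this idea can be sketched in a few lines: fit a lasso on many random half-subsamples and keep the markers whose coefficients are repeatedly nonzero. The following is an illustrative sketch on synthetic data, not the PROMISE procedure itself (the alpha, subsample count, and 0.8 threshold are arbitrary choices for the example):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
# Markers 0 and 1 are truly informative; the other 48 are noise
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

def selection_frequencies(X, y, alpha=0.1, n_subsamples=50):
    """Fraction of random half-subsamples in which each feature's lasso
    coefficient is nonzero: the core quantity in stability selection."""
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        coef = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx]).coef_
        counts += coef != 0
    return counts / n_subsamples

freq = selection_frequencies(X, y)
selected = np.where(freq >= 0.8)[0]   # markers stable across subsamples
```

PROMISE's contribution is choosing the penalty so that a CV-style prediction criterion and an SS-style stability criterion are satisfied together, rather than fixing `alpha` in advance as done here.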
The uncertainty of crop yield projections is reduced by improved temperature response functions.
Wang, Enli; Martre, Pierre; Zhao, Zhigan; Ewert, Frank; Maiorano, Andrea; Rötter, Reimund P; Kimball, Bruce A; Ottman, Michael J; Wall, Gerard W; White, Jeffrey W; Reynolds, Matthew P; Alderman, Phillip D; Aggarwal, Pramod K; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andrew J; De Sanctis, Giacomo; Doltra, Jordi; Fereres, Elias; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A; Izaurralde, Roberto C; Jabloun, Mohamed; Jones, Curtis D; Kersebaum, Kurt C; Koehler, Ann-Kristin; Liu, Leilei; Müller, Christoph; Naresh Kumar, Soora; Nendel, Claas; O'Leary, Garry; Olesen, Jørgen E; Palosuo, Taru; Priesack, Eckart; Eyshi Rezaei, Ehsan; Ripoche, Dominique; Ruane, Alex C; Semenov, Mikhail A; Shcherbak, Iurii; Stöckle, Claudio; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wallach, Daniel; Wang, Zhimin; Wolf, Joost; Zhu, Yan; Asseng, Senthold
2017-07-17
Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on crop yield, but have large uncertainty in yield simulations. Here, we show that variations in the mathematical functions currently used to simulate temperature responses of physiological processes in 29 wheat models account for >50% of uncertainty in simulated grain yields for mean growing season temperatures from 14 °C to 33 °C. We derived a set of new temperature response functions that when substituted in four wheat models reduced the error in grain yield simulations across seven global sites with different temperature regimes by 19% to 50% (42% average). We anticipate the improved temperature responses to be a key step to improve modelling of crops under rising temperature and climate change, leading to higher skill of crop yield projections.
12 CFR 337.6 - Brokered deposits.
Code of Federal Regulations, 2010 CFR
2010-01-01
... offered by other insured depository institutions in such depository institution's normal market area. (6... prevailing effective yields on insured deposits of comparable maturity in such institution's normal market... period, the institution may not accept, renew or roll over any brokered deposit. (e) A market is any...
A simplified method for monomeric carbohydrate analysis of corn stover biomass
USDA-ARS?s Scientific Manuscript database
Constituent determination of biomass for theoretical ethanol yield (TEY) estimation requires the removal of non-structural carbohydrates prior to analysis to prevent interference with the analytical procedure. According to the accepted U.S. Dept. of Energy-National Renewable Energy Laboratory (NREL)...
NASA Astrophysics Data System (ADS)
Alver, B.; Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Busza, W.; Carroll, A.; Chai, Z.; Chetluru, V.; Decowski, M. P.; García, E.; Gburek, T.; George, N.; Gulbrandsen, K.; Halliwell, C.; Hamblen, J.; Hauer, M.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Khan, N.; Kulinich, P.; Kuo, C. M.; Li, W.; Lin, W. T.; Loizides, C.; Manly, S.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Reed, C.; Roland, C.; Roland, G.; Sagerer, J.; Seals, H.; Sedykh, I.; Smith, C. E.; Stankiewicz, M. A.; Steinberg, P.; Stephans, G. S. F.; Sukhanov, A.; Tonjes, M. B.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Vaurynovich, S. S.; Verdier, R.; Veres, G. I.; Walters, P.; Wenger, E.; Wolfs, F. L. H.; Wosiek, B.; Woźniak, K.; Wysłouch, B.
2010-02-01
A measurement of two-particle correlations with a high transverse momentum trigger particle (pT,trig > 2.5 GeV/c) is presented for Au+Au collisions at √sNN = 200 GeV over the uniquely broad longitudinal acceptance of the PHOBOS detector (-4 < Δη < 2). A broadening of the away-side azimuthal correlation compared to elementary collisions is observed at all Δη. As in p+p collisions, the near side is characterized by a peak of correlated partners at small angle relative to the trigger particle. However, in central Au+Au collisions an additional correlation extended in Δη and known as the “ridge” is found to reach at least |Δη| ≈ 4. The ridge yield is largely independent of Δη over the measured range, and it decreases towards more peripheral collisions. For the chosen pT,trig cut, the ridge yield is consistent with zero for events with fewer than roughly 100 participating nucleons.
Porcelain surface conditioning protocols and shear bond strength of orthodontic brackets.
Lestrade, Ashley M; Ballard, Richard W; Xu, Xiaoming; Yu, Qingzhao; Kee, Edwin L; Armbruster, Paul C
2016-05-01
The objective of the present study was to determine which of six bonding protocols yielded a clinically acceptable shear bond strength (SBS) of metal orthodontic brackets to CAD/CAM lithium disilicate porcelain restorations. A secondary aim was to determine which bonding protocol produced the least surface damage at debond. Sixty lithium disilicate samples were fabricated to replicate the facial surface of a mandibular first molar using a CEREC CAD/CAM machine. The samples were split into six test groups, each of which received different mechanical/chemical pretreatment protocols to roughen the porcelain surface prior to bonding a molar orthodontic attachment. Shear bond strength testing was conducted using an Instron machine. The mean, maximum, minimal, and standard deviation SBS values for each sample group including an enamel control were calculated. A t-test was used to evaluate the statistical significance between the groups. No significant differences were found in SBS values, with the exception of surface roughening with a green stone prior to HFA and silane treatment. This protocol yielded slightly higher bond strength which was statistically significant. Chemical treatment alone with HFA/silane yielded SBS values within an acceptable clinical range to withstand forces applied by orthodontic treatment and potentially eliminates the need to mechanically roughen the ceramic surface.
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although this accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologues are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. 
Therefore, the accuracy of the presented method may be much higher in reality.
Fuzzy image processing in sun sensor
NASA Technical Reports Server (NTRS)
Mobasser, S.; Liebe, C. C.; Howard, A.
2003-01-01
This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided, showing that fuzzy image processing yields better accuracy than conventional image processing.
Curating and sharing structures and spectra for the environmental community
The increasing popularity of high mass accuracy non-target mass spectrometry methods has yielded extensive identification efforts based on spectral and chemical compound databases in the environmental community and beyond. Increasingly, new methods are relying on open data resour...
Reference-based phasing using the Haplotype Reference Consortium panel.
Loh, Po-Ru; Danecek, Petr; Palamara, Pier Francesco; Fuchsberger, Christian; A Reshef, Yakir; K Finucane, Hilary; Schoenherr, Sebastian; Forer, Lukas; McCarthy, Shane; Abecasis, Goncalo R; Durbin, Richard; L Price, Alkes
2016-11-01
Haplotype phasing is a fundamental problem in medical and population genetics. Phasing is generally performed via statistical phasing in a genotyped cohort, an approach that can yield high accuracy in very large cohorts but attains lower accuracy in smaller cohorts. Here we instead explore the paradigm of reference-based phasing. We introduce a new phasing algorithm, Eagle2, that attains high accuracy across a broad range of cohort sizes by efficiently leveraging information from large external reference panels (such as the Haplotype Reference Consortium; HRC) using a new data structure based on the positional Burrows-Wheeler transform. We demonstrate that Eagle2 attains a ∼20× speedup and ∼10% increase in accuracy compared to reference-based phasing using SHAPEIT2. On European-ancestry samples, Eagle2 with the HRC panel achieves >2× the accuracy of 1000 Genomes-based phasing. Eagle2 is open source and freely available for HRC-based phasing via the Sanger Imputation Service and the Michigan Imputation Server.
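The positional Burrows-Wheeler transform at the heart of Eagle2 maintains, at each site, an ordering of haplotypes by their reversed prefixes, built with one stable counting pass per site. A simplified sketch of that core ordering idea follows (this illustrates the data structure, not Eagle2's actual implementation):

```python
def pbwt_orders(haplotypes):
    """Positional prefix arrays of the PBWT: after k sites, haplotype indices
    are sorted by the reverse of their first k alleles. One stable partition
    per site makes the whole construction O(M * N) for M haplotypes, N sites."""
    M = len(haplotypes)
    order = list(range(M))        # order before any site is seen
    orders = [order]
    for k in range(len(haplotypes[0])):
        zeros = [i for i in order if haplotypes[i][k] == 0]
        ones = [i for i in order if haplotypes[i][k] == 1]
        order = zeros + ones      # stable partition by the allele at site k
        orders.append(order)
    return orders

# Four toy haplotypes over three biallelic sites
haps = [[0, 1, 0],
        [1, 0, 1],
        [0, 0, 1],
        [1, 1, 0]]
orders = pbwt_orders(haps)
```

Because haplotypes sharing a long match up to site k sit adjacent in the k-th ordering, matching a target against a large reference panel reduces to scanning neighbours in these arrays, which is what makes reference-based phasing against panels like the HRC fast.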
Orbit Determination Accuracy for Comets on Earth-Impacting Trajectories
NASA Technical Reports Server (NTRS)
Kay-Bunnell, Linda
2004-01-01
The results presented show the level of orbit determination accuracy obtainable for long-period comets discovered approximately one year before collision with Earth. Preliminary orbits are determined from simulated observations using Gauss' method. Additional measurements are incorporated to improve the solution through the use of a Kalman filter, and include non-gravitational perturbations due to outgassing. Comparisons between observatories in several different circular heliocentric orbits show that observatories in orbits with radii less than 1 AU result in increased orbit determination accuracy for short tracking durations due to increased parallax per unit time. However, an observatory at 1 AU will perform similarly if the tracking duration is increased, and accuracy is significantly improved if additional observatories are positioned at the Sun-Earth Lagrange points L3, L4, or L5. A single observatory at 1 AU capable of both optical and range measurements yields the highest orbit determination accuracy in the shortest amount of time when compared to other systems of observatories.
Tarrasch, Ricardo; Margalit-Shalom, Lilach; Berger, Rony
2017-01-01
The present study assessed the effects of the mindfulness/compassion-cultivating program “Call to Care-Israel” on performance in visual perception (VP) and motor accuracy, as well as on anxiety levels and self-reported mindfulness, among 4th- and 5th-grade students. One hundred and thirty-eight children participated in the program for 24 weekly sessions, while 78 children served as controls. Repeated-measures ANOVAs yielded significant interactions between time of measurement and group for VP, motor accuracy, reported mindfulness, and anxiety. Post hoc tests revealed significant improvements in the four aforementioned measures in the experimental group only. In addition, significant correlations were obtained between the improvement in motor accuracy and both the reduction in anxiety and the increase in mindfulness. Since VP and motor accuracy are basic skills associated with quantifiable academic characteristics, such as reading and mathematical abilities, the results may suggest that mindfulness practice has the ability to improve academic achievement. PMID:28286492
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological test battery and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics, including measures of sensitivity, specificity, positive and negative predictive power, and positive likelihood ratio, were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
NASA Technical Reports Server (NTRS)
Stoner, E. R.; May, G. A.; Kalcic, M. T. (Principal Investigator)
1981-01-01
Sample segments of ground-verified land cover data collected in conjunction with the USDA/ESS June Enumerative Survey were merged with LANDSAT data and served as a focus for unsupervised spectral class development and accuracy assessment. Multitemporal data sets were created from single-date LANDSAT MSS acquisitions from a nominal scene covering an eleven-county area in north central Missouri. Classification accuracies for the four land cover types predominant in the test site showed significant improvement in going from unitemporal to multitemporal data sets. Transformed LANDSAT data sets did not significantly improve classification accuracies. Regression estimators yielded mixed results for different land covers. Misregistration of the two LANDSAT data sets by as much as one and one-half pixels did not significantly alter overall classification accuracies. Existing algorithms for scene-to-scene overlay proved adequate for multitemporal data analysis as long as statistical class development and accuracy assessment were restricted to field interior pixels.
Ferragina, A; Cipolat-Gotet, C; Cecchinato, A; Bittante, G
2013-01-01
Cheese yield is an important technological trait in the dairy industry in many countries. The aim of this study was to evaluate the effectiveness of Fourier-transform infrared (FTIR) spectral analysis of fresh unprocessed milk samples for predicting cheese yield and nutrient recovery traits. A total of 1,264 model cheeses were obtained from 1,500-mL milk samples collected from individual Brown Swiss cows. Individual measurements of 7 new cheese yield-related traits were obtained from the laboratory cheese-making procedure, including the fresh cheese yield, total solid cheese yield, and the water retained in curd, all as a percentage of the processed milk, and nutrient recovery (fat, protein, total solids, and energy) in the curd as a percentage of the same nutrient contained in the milk. All individual milk samples were analyzed using a MilkoScan FT6000 over the spectral range from 5,000 to 900 cm(-1). Two spectral acquisitions were carried out for each sample and the results were averaged before data analysis. Different chemometric models were fitted and compared with the aim of improving the accuracy of the calibration equations for predicting these traits. The most accurate predictions were obtained for total solid cheese yield and fresh cheese yield, which exhibited coefficients of determination between the predicted and measured values in cross-validation (1-VR) of 0.95 and 0.83, respectively. A less favorable result was obtained for water retained in curd (1-VR=0.65). Promising results were obtained for recovered protein (1-VR=0.81), total solids (1-VR=0.86), and energy (1-VR=0.76), whereas recovered fat exhibited a low accuracy (1-VR=0.41). 
As FTIR spectroscopy is a rapid, cheap, high-throughput technique that is already used to collect standard milk recording data, these FTIR calibrations for cheese yield and nutrient recovery highlight additional potential applications of the technique in the dairy industry, especially for monitoring cheese-making processes and milk payment systems. In addition, the prediction models can be used to provide breeding organizations with information on new phenotypes for cheese yield and milk nutrient recovery, potentially allowing these traits to be enhanced through selection. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Poškus, A.
2016-09-01
This paper evaluates the accuracy of the single-event (SE) and condensed-history (CH) models of electron transport in MCNP6.1 when simulating characteristic Kα, total K (=Kα + Kβ) and Lα X-ray emission from thick targets bombarded by electrons with energies from 5 keV to 30 keV. It is shown that the MCNP6.1 implementation of the CH model for the K-shell impact ionization leads to underestimation of the K yield by 40% or more for the elements with atomic numbers Z < 15 and overestimation of the Kα yield by more than 40% for the elements with Z > 25. The Lα yields are underestimated by more than an order of magnitude in CH mode, because MCNP6.1 neglects X-ray emission caused by electron-impact ionization of L, M and higher shells in CH mode (the Lα yields calculated in CH mode reflect only X-ray fluorescence, which is mainly caused by photoelectric absorption of bremsstrahlung photons). The X-ray yields calculated by MCNP6.1 in SE mode (using ENDF/B-VII.1 library data) are more accurate: the differences of the calculated and experimental K yields are within the experimental uncertainties for the elements C, Al and Si, and the calculated Kα yields are typically underestimated by (20-30)% for the elements with Z > 25, whereas the Lα yields are underestimated by (60-70)% for the elements with Z > 49. It is also shown that agreement of the experimental X-ray yields with those calculated in SE mode is additionally improved by replacing the ENDF/B inner-shell electron-impact ionization cross sections with the set of cross sections obtained from the distorted-wave Born approximation (DWBA), which are also used in the PENELOPE code system. The latter replacement causes a decrease of the average relative difference of the experimental X-ray yields and the simulation results obtained in SE mode to approximately 10%, which is similar to accuracy achieved with PENELOPE. 
This confirms that the DWBA inner-shell impact ionization cross sections are significantly more accurate than the corresponding ENDF/B cross sections when the energy of the incident electrons is of the order of the binding energy.
Kane, J.S.
1988-01-01
A study is described that identifies the optimum operating conditions for the accurate determination of Co, Cu, Mn, Ni, Pb, Zn, Ag, Bi and Cd using simultaneous multi-element atomic absorption spectrometry. Accuracy was measured in terms of the percentage recoveries of the analytes based on certified values in nine standard reference materials. In addition to identifying optimum operating conditions for accurate analysis, conditions resulting in serious matrix interferences, and the magnitude of those interferences, were determined. The listed elements can be measured with acceptable accuracy in a lean to stoichiometric flame at measurement heights of approximately 5-10 mm above the burner.
Implementation study of wearable sensors for activity recognition systems
Ghassemian, Mona
2015-01-01
This Letter investigates and reports on a number of activity recognition methods for a wearable sensor system. The authors apply three methods for data transmission, namely ‘stream-based’, ‘feature-based’ and ‘threshold-based’ scenarios to study the accuracy against energy efficiency of transmission and processing power that affects the mote's battery lifetime. They also report on the impact of variation of sampling frequency and data transmission rate on energy consumption of motes for each method. This study leads us to propose a cross-layer optimisation of an activity recognition system for provisioning acceptable levels of accuracy and energy efficiency. PMID:26609413
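A 'threshold-based' scenario of the kind compared above can be sketched simply: the mote transmits only when the acceleration magnitude crosses a threshold, instead of streaming every raw sample. The following is an illustrative sketch; the authors' actual pipeline, sampling rates, and threshold values are not specified in this abstract:

```python
import math

def threshold_events(samples, threshold=1.5):
    """Indices at which the acceleration magnitude (in g) crosses the
    threshold upward: the only moments a threshold-based mote transmits."""
    events = []
    above = False
    for i, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            events.append(i)
        above = mag > threshold
    return events

# A mostly-still trace (1 g gravity) with two movement bursts:
# only two transmissions are needed instead of fifteen.
trace = ([(0.0, 0.0, 1.0)] * 5 + [(1.2, 0.5, 1.0)] * 3
         + [(0.0, 0.0, 1.0)] * 5 + [(0.9, 0.9, 1.0)] * 2)
events = threshold_events(trace)
```

This is the energy-accuracy trade-off the Letter quantifies: on-mote thresholding slashes radio traffic, the dominant battery cost, at the price of discarding the raw signal that a 'stream-based' scenario would make available for richer recognition.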
Zheng, Meixun; Bender, Daniel
2018-03-13
Computer-based testing (CBT) has made progress in health sciences education. In 2015, the authors led implementation of a CBT system (ExamSoft) at a dental school in the U.S. Guided by the Technology Acceptance Model (TAM), the purposes of this study were to (a) examine dental students' acceptance of ExamSoft; (b) understand factors impacting acceptance; and (c) evaluate the impact of ExamSoft on students' learning and exam performance. Survey and focus group data revealed that ExamSoft was well accepted by students as a testing tool and acknowledged by most for its potential to support learning. Regression analyses showed that perceived ease of use and perceived usefulness of ExamSoft significantly predicted student acceptance. Prior CBT experience and computer skills did not significantly predict acceptance of ExamSoft. Students reported that ExamSoft promoted learning in the first program year, primarily through timely and rich feedback on examination performance. t-Tests yielded mixed results on whether students performed better on computerized or paper examinations. The study contributes to the literature on CBT and the application of the TAM model in health sciences education. Findings also suggest ways in which health sciences institutions can implement CBT to maximize its potential as an assessment and learning tool.
Klaver, Jacqueline M; Palo, Amanda D; DiLalla, Lisabeth F
2014-01-01
The authors examined problem behaviors in preschool children as a function of perceived competence. Prior research has demonstrated a link between inaccuracy of self-perceptions and teacher-reported externalizing behaviors in preschool-aged boys. This study extended past research by adding data collected from observed behaviors in a laboratory setting, as well as parent reports of internalizing and externalizing behaviors. Five-year-old children completed the Pictorial Scale of Perceived Competence and Social Acceptance for Young Children (PSPCSA) in the lab, participated in a 10-min puzzle interaction task with their co-twin and mother, and completed a short task assessing cognitive abilities. Children were grouped into 3 self-esteem categories (unrealistically low, realistic, and unrealistically high) based on comparisons of self-reported (PSPCSA) versus actual competencies for maternal acceptance, peer acceptance, and cognitive competence. Results showed that children who overreported their maternal acceptance and peer acceptance had significantly more parent-reported externalizing problems as well as internalizing problems. There were no significant differences in accuracy for cognitive competence. The findings from this study underscore the negative impact of unrealistically high self-appraisal on problem behaviors in young children.
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
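The attributes (binomial) plans discussed above are characterized by their operating-characteristic (OC) curve, which fixes the Type I and Type II risks. A minimal sketch of one OC-curve point; the plan parameters (n = 50, c = 2) and the quality levels used below are illustrative, not taken from the paper:

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """OC-curve point for a single-sampling attributes plan:
    probability that a lot with fraction defective p is accepted
    when n items are inspected and at most c may be defective."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative plan: inspect 50 items, accept the lot if at most 2 are defective.
alpha = 1 - prob_accept(50, 2, 0.01)  # Type I risk: rejecting a good lot (p = 1%)
beta = prob_accept(50, 2, 0.10)       # Type II risk: accepting a bad lot (p = 10%)
```

Variables plans aim to achieve comparable alpha/beta with a smaller n by exploiting the measured distribution, which is the trade-off the empirical tests examine.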
AIRS Version 6 Products and Data Services at NASA GES DISC
NASA Astrophysics Data System (ADS)
Ding, F.; Savtchenko, A. K.; Hearty, T. J.; Theobald, M. L.; Vollmer, B.; Esfandiari, E.
2013-12-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the Atmospheric Infrared Sounder (AIRS) mission. The AIRS mission is entering its 11th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing longwave radiation, cloud properties, and trace gases. The GES DISC, in collaboration with the AIRS Project, released data from the Version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. Among the most substantial advances are: improved soundings of Tropospheric and Sea Surface Temperatures; larger improvements with increasing cloud cover; improved retrievals of surface spectral emissivity; near-complete removal of spurious temperature bias trends seen in earlier versions; substantially improved retrieval yield (i.e., number of soundings accepted for output) for climate studies; AIRS-Only retrievals with comparable accuracy to AIRS+AMSU (Advanced Microwave Sounding Unit) retrievals; and more realistic hemispheric seasonal variability and global distribution of carbon monoxide. The GES DISC is working to bring the distribution services up-to-date with these new developments. Our focus is on popular services, like variable subsetting and quality screening, which are impacted by the new elements in Version 6. Other developments in visualization services, such as Giovanni, Near-Real Time imagery, and a granule-map viewer, are progressing along with the introduction of the new data; each service presents its own challenge. This presentation will demonstrate the most significant improvements in Version 6 AIRS products, such as newly added variables (higher resolution outgoing longwave radiation, new cloud property products, etc.), the new quality control schema, and improved retrieval yields. 
We will also demonstrate the various distribution and visualization services for AIRS data products. The cloud properties, model physics, and water and energy cycles research communities are invited to take advantage of the improvements in Version 6 AIRS products and the various services at GES DISC which provide them.
Richardson, Jessica; Datta, Abhishek; Dmochowski, Jacek; Parra, Lucas C; Fridriksson, Julius
2015-01-01
Transcranial direct current stimulation (tDCS) enhances treatment outcomes post-stroke. Feasibility and tolerability of high-definition (HD) tDCS (a technique that increases current focality and intensity) for consecutive weekdays as an adjuvant to behavioral treatment in a clinical population has not been demonstrated. To determine HD-tDCS feasibility outcomes: 1) ability to implement study as designed, 2) acceptability of repeated HD-tDCS administration to patients, and 3) preliminary efficacy. Eight patients with chronic post-stroke aphasia participated in a randomized crossover trial with two arms: conventional sponge-based (CS) tDCS and HD-tDCS. Computerized anomia treatment was administered for five consecutive days during each treatment arm. Individualized modeling/targeting procedures and an 8-channel HD-tDCS device were developed. CS-tDCS and HD-tDCS were comparable in terms of implementation, acceptability, and outcomes. Naming accuracy and response time improved for both stimulation conditions. Change in accuracy of trained items was numerically higher (but not statistically significant) for HD-tDCS compared to CS-tDCS for most patients. Regarding feasibility, HD-tDCS treatment studies can be implemented when designed similarly to documented CS-tDCS studies. HD-tDCS is likely to be acceptable to patients and clinicians. Preliminary efficacy data suggest that HD-tDCS effects, using only 4 electrodes, are at least comparable to CS-tDCS. PMID:25547776
CT colonography: accuracy, acceptance, safety and position in organised population screening.
de Haan, Margriet C; Pickhardt, Perry J; Stoker, Jaap
2015-02-01
Colorectal cancer (CRC) is the second most common cancer and second most common cause of cancer-related deaths in Europe. The introduction of CRC screening programmes using stool tests and flexible sigmoidoscopy has been shown to reduce CRC-related mortality substantially. In several European countries, population-based CRC screening programmes are ongoing or being rolled out. Stool tests like faecal occult blood testing are non-invasive and simple to perform, but are primarily designed to detect early invasive cancer. More invasive tests like colonoscopy and CT colonography (CTC) aim at accurately detecting both CRC and cancer precursors, thus providing for cancer prevention. This review focuses on the accuracy, acceptance and safety of CTC as a CRC screening technique and on the current position of CTC in organised population screening. Based on the detection characteristics and acceptability of CTC screening, it might be a viable screening test. The potential disadvantage of radiation exposure is probably overemphasised, especially with newer technology. At this time point, it is not entirely clear whether the detection of extracolonic findings at CTC is of net benefit and is cost effective, but with responsible handling, this may be the case. Future efforts will seek to further improve the technique, refine appropriate diagnostic algorithms and study cost-effectiveness. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Acceptability of the rainwater harvesting system to the slum dwellers of Dhaka City.
Islam, M M; Chou, F N-F; Kabir, M R
2010-01-01
Urban areas like Dhaka City, Bangladesh, face a scarcity of safe drinking water, one of the most basic human needs. This study explored the acceptability of harvested rainwater in a densely populated city like Dhaka, using a simple, low-cost technology. A total of 200 randomly selected people from four slums of water-scarce Dhaka City were surveyed to determine the dwellers' perception of rainwater and its acceptability as a source of drinking water. The questionnaire was aimed at capturing socio-economic conditions and information on family housing, sanitation, health, the existing water supply, knowledge about rainwater, willingness to accept rainwater as a drinking source, etc. A Yield before Spillage (YBS) model was developed to estimate actual rainwater availability and storage conditions, which were used to justify an effective tank size. Cost-benefit and feasibility analyses were performed using the survey results and the research findings. The survey and overall study found that the low-cost rainwater harvesting technique was acceptable to the slum dwellers as the only potential alternative source of safe drinking water.
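The abstract names the Yield before Spillage (YBS) model but not its equations. A common YBS water-balance formulation, sketched below with illustrative parameter values that are not taken from the study, meets each period's demand from storage plus inflow before any excess spills:

```python
def ybs_simulation(rainfall_mm, roof_area_m2, tank_capacity_l,
                   demand_l, runoff_coeff=0.8):
    """Yield-before-spillage tank balance: per period,
    Y_t = min(D, S_{t-1} + Q_t);  S_t = min(S_{t-1} + Q_t - Y_t, C)."""
    storage = 0.0
    total_yield = 0.0
    for rain in rainfall_mm:
        inflow = rain * roof_area_m2 * runoff_coeff  # 1 mm over 1 m^2 = 1 L
        supplied = min(demand_l, storage + inflow)   # yield drawn before spillage
        total_yield += supplied
        storage = min(storage + inflow - supplied, tank_capacity_l)
    return total_yield, storage

# Two illustrative months: 100 mm of rain, then a dry month.
total, carryover = ybs_simulation([100, 0], roof_area_m2=10,
                                  tank_capacity_l=200, demand_l=500)
```

Running such a balance over a rainfall record yields the supplied volume and storage trajectory from which an effective tank size can be justified.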
Acceptance and purchase intent of US consumers for nonwheat rice butter cakes.
Sae-Eaw, A; Chompreeda, P; Prinyawiwatkul, W; Haruthaithanasan, V; Suwonsichon, T; Saidu, J E; Xu, Z
2007-03-01
This study evaluated consumer acceptance and purchase intent of nonwheat butter cake formulations prepared with Thai jasmine rice flour. Three nonwheat rice butter cakes were prepared with varying amounts of powdered emulsifier (propylene glycol ester:diacetyl tartaric acid ester of monoglyceride, 8:2) at 0% (product A), 7.5% (product B), and 15% (product C) of the margarine content (15%) in the cake formulation. A commercial wheat-based butter cake served as the control. Consumers (n= 400) evaluated acceptability of 9 sensory attributes using a 9-point hedonic scale. Overall acceptance and purchase intent were determined with a binomial (yes/no) scale. At least 81% of consumers accepted products B and C, of which 42.1% and 47%, respectively, would purchase the products if commercially available. Product A was neither liked nor disliked with an overall liking score of 5.39. The butter cake products were differentiated by textural acceptability (overall texture, softness, and moistness) with a canonical correlation of 0.71 to 0.79. Overall liking and taste influenced overall acceptance and purchase intent. Odor influenced purchase intent (P= 0.0014), but not overall acceptance. The odds ratio of overall liking for purchase intent was 3.462, indicating that the odds of the product being purchased were 3.462 times higher with every 1-unit increase of the overall liking score (P < 0.0001). Based on the logit model, overall acceptance and purchase intent could be predicted with 89.3% and 83.3% accuracy, respectively. The study demonstrated feasibility of completely substituting wheat flour with Thai jasmine rice flour for production of butter cake products acceptable to American consumers.
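The reported odds ratio of 3.462 corresponds to a logit coefficient of ln(3.462). A sketch of how such a logit model maps liking scores to a purchase probability; the intercept below is hypothetical, since the paper does not report it:

```python
from math import exp, log

REPORTED_ODDS_RATIO = 3.462  # per 1-unit increase in overall liking (from the study)

def purchase_probability(liking, intercept):
    """Logit model P(purchase) = 1 / (1 + e^-(b0 + b1*liking)),
    with slope b1 = ln(odds ratio); the intercept b0 is hypothetical."""
    b1 = log(REPORTED_ODDS_RATIO)
    return 1.0 / (1.0 + exp(-(intercept + b1 * liking)))
```

By construction, moving the liking score from 6 to 7 multiplies the odds of purchase by exactly 3.462, whatever the intercept.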
Utilizing Peer Interactions to Promote Learning through a Web-Based Peer Assessment System
ERIC Educational Resources Information Center
Li, Lan; Steckelberg, Allen L.; Srinivasan, Sribhagyam
2008-01-01
Peer assessment is an instructional strategy in which students evaluate each other's performance for the purpose of improving learning. Despite its accepted use in higher education, researchers and educators have reported concerns such as students' time on task, the impact of peer pressure on the accuracy of marking, and students' lack of ability…
ERIC Educational Resources Information Center
Lang, W. Steve; Wilkerson, Judy R.
2008-01-01
The National Council for Accreditation of Teacher Education (NCATE, 2002) requires teacher education units to develop assessment systems and evaluate both the success of candidates and unit operations. Because of a stated, but misguided, fear of statistics, NCATE fails to use accepted terminology to assure the quality of institutional evaluative…
Code of Federal Regulations, 2010 CFR
2010-07-01
...) To improve the quality, accuracy, or completeness of the data or analysis of data contained in the... include the data, analysis, and documentation on which the proposal is based, and, where feasible, include... the petitioner requests access to data from the Postal Service to support the assertions or...
Spring performance tester for miniature extension springs
Salzbrenner, Bradley; Boyce, Brad
2017-05-16
A spring performance tester and method of testing a spring are disclosed that have improved accuracy and precision over prior art spring testers. The tester can perform static and cyclic testing. The spring tester can provide validation for product acceptance as well as test for cyclic degradation of springs, such as changes in the spring rate and fatigue failure.
Using a spatially explicit analysis model to evaluate spatial variation of corn yield
USDA-ARS?s Scientific Manuscript database
Spatial irrigation of agricultural crops using site-specific variable-rate irrigation (VRI) systems is beginning to have wide-spread acceptance. However, optimizing the management of these VRI systems to conserve natural resources and increase profitability requires an understanding of the spatial ...
50 CFR 648.200 - Specifications.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Limit (OFL), Acceptable Biological Catch (ABC), Annual Catch Limit (ACL), Optimum yield (OY), domestic... (BT), the sub-ACL for each management area, including seasonal periods as specified at § 648.201(d... (from 0 to 3 percent of the sub-ACL from any management area). Recommended specifications shall be...
50 CFR 648.200 - Specifications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Limit (OFL), Acceptable Biological Catch (ABC), Annual Catch Limit (ACL), Optimum yield (OY), domestic... (BT), the sub-ACL for each management area, including seasonal periods as specified at § 648.201(d... (from 0 to 3 percent of the sub-ACL from any management area). Recommended specifications shall be...
50 CFR 648.200 - Specifications.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Limit (OFL), Acceptable Biological Catch (ABC), Annual Catch Limit (ACL), Optimum yield (OY), domestic... (BT), the sub-ACL for each management area, including seasonal periods as specified at § 648.201(d... (from 0 to 3 percent of the sub-ACL from any management area). Recommended specifications shall be...
50 CFR 648.200 - Specifications.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Limit (OFL), Acceptable Biological Catch (ABC), Annual Catch Limit (ACL), Optimum yield (OY), domestic... (BT), the sub-ACL for each management area, including seasonal periods as specified at § 648.201(d... (from 0 to 3 percent of the sub-ACL from any management area). Recommended specifications shall be...
Comparing Features for Classification of MEG Responses to Motor Imagery
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Background Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. Methods MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. Results The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. 
Conclusions We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system. PMID:27992574
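Of the compared feature extractors, CSP admits a compact sketch. The whitening-based formulation below is the textbook version, not the authors' code; array shapes are assumed to be (trials, channels, samples):

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=2):
    """Common spatial patterns: returns 2*n_pairs spatial filters that
    maximize the variance ratio between the two classes X1 and X2."""
    def mean_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    # whiten the composite covariance
    d, V = np.linalg.eigh(C1 + C2)
    P = V @ np.diag(d ** -0.5) @ V.T
    # eigendecompose the whitened class-1 covariance
    e, U = np.linalg.eigh(P @ C1 @ P.T)
    W = U.T @ P                    # rows are spatial filters
    order = np.argsort(e)          # extreme eigenvalues discriminate best
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return W[picks]

def csp_features(W, X):
    """Log-variance features of spatially filtered trials."""
    return np.array([np.log(np.var(W @ x, axis=1)) for x in X])
```

In a pipeline like the one in the study, these log-variance features would then feed the single-trial classifiers inside the 5-fold cross-validation loop.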
McCormick, Peter A.; Francis, Lori
2005-01-01
There is debate over the mechanisms that govern the orienting of attention. Some argue that the enhanced performance observed at a cued location is the result of increased perceptual sensitivity or preferential access to decision-making processes. It has also been suggested that these effects may result from observers trading speed for accuracy. In the present study, observers performed either an exogenous or an endogenous orienting of attention task under both normal instructions (respond as quickly and as accurately as possible) and speeded instructions that used a deadline procedure to limit the amount of time observers had to complete a choice reaction time (CRT) task. An examination of the speed-accuracy operating characteristics (SAOCs) yielded evidence against the notion that CRT precuing effects are due primarily to a tradeoff of accuracy for speed. PMID:15759078
BRAIN TUMOR SEGMENTATION WITH SYMMETRIC TEXTURE AND SYMMETRIC INTENSITY-BASED DECISION FORESTS.
Bianchi, Anthony; Miller, James V; Tan, Ek Tsoon; Montillo, Albert
2013-04-01
Accurate automated segmentation of brain tumors in MR images is challenging due to overlapping tissue intensity distributions and amorphous tumor shape. However, a clinically viable solution providing precise quantification of tumor and edema volume would enable better pre-operative planning, treatment monitoring and drug development. Our contributions are threefold. First, we design efficient gradient and LBP-TOP based texture features which improve classification accuracy over standard intensity features. Second, we extend our texture and intensity features to symmetric texture and symmetric intensity which further improve the accuracy for all tissue classes. Third, we demonstrate further accuracy enhancement by extending our long range features from 100 mm to a full 200 mm. We assess our brain segmentation technique on 20 patients in the BraTS 2012 dataset. Impact from each contribution is measured and the combination of all the features is shown to yield state-of-the-art accuracy and speed.
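LBP-TOP extends the basic local binary pattern to three orthogonal planes of a volume. The 2-D sketch below illustrates only the underlying LBP encoding, not the authors' feature pipeline:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 local binary pattern: each pixel is encoded by
    thresholding its 8 neighbors against the center intensity."""
    # neighbor offsets, clockwise from top-left; each contributes one bit
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        out |= (neigh >= center).astype(np.uint8) << bit
    return out
```

Histograms of such codes over local windows form the texture descriptor; LBP-TOP computes them on the XY, XT and YT planes of the image volume.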
Gravity field, geoid and ocean surface by space techniques
NASA Technical Reports Server (NTRS)
Anderle, R. J.
1978-01-01
Knowledge of the earth's gravity field continued to increase during the last four years. Altimetry data from the GEOS-3 satellite have provided the geoid over most of the ocean to an accuracy of about one meter. Increasing amounts of laser data have permitted the solution for 566 terms in the gravity field with which orbits of the GEOS-3 satellite have been computed to an accuracy of about one to two meters. The combination of satellite tracking data, altimetry and gravimetry has yielded a solution for 1360 terms in the earth's gravity field. A number of problems remain to be solved to increase the accuracy of the gravity field determination. New satellite systems would provide gravity data in unsurveyed areas, and corrections for topographic features of the ocean and improved computational procedures, together with a more extensive laser network, will considerably improve the accuracy of the results.
NASA Astrophysics Data System (ADS)
Moriya, Gentaro; Chikatsu, Hirofumi
2011-07-01
Recently, the pixel counts and functionality of consumer-grade digital cameras have been increasing remarkably thanks to modern semiconductor and digital technology, and many low-priced consumer-grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras holds great promise in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star-tracker problem, and many target location algorithms have been proposed. It is widely accepted that the least-squares model with ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close-range photogrammetry: reconfirmation of target location algorithms with subpixel accuracy for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motive, empirical testing of several algorithms for subpixel target location and an indicator for estimating accuracy are investigated in this paper using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
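Among subpixel target-location estimators, the intensity-weighted centroid is the simplest baseline against which least-squares ellipse fitting is usually compared. A sketch of that baseline (not the authors' implementation):

```python
import numpy as np

def weighted_centroid(patch):
    """Intensity-weighted centroid of an image patch containing the
    target: returns the (x, y) position with subpixel resolution."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return (xs * patch).sum() / total, (ys * patch).sum() / total
```

Ellipse fitting instead locates subpixel edge points around the circular target's boundary and fits an ellipse to them by least squares, taking the ellipse center as the target location; the number of edge points available is one of the accuracy factors the paper examines.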
Surveys suck: Consumer preferences when purchasing genetically engineered foods.
Powell, Douglas A
2013-01-01
Many studies have attempted to gauge consumers' acceptance of genetically engineered or modified (GM) foods. Surveys, asking people about attitudes and intentions, are easy-to-collect proxies of consumer behavior. However, participants tend to respond as citizens of society, not discrete individuals, thereby inaccurately portraying their potential behavior. The Theory of Planned Behavior improved the accuracy of self-reported information, but its limited capacity to account for intention variance has been attributed to the hypothetical scenarios to which survey participants must respond. Valuation methods, asking how much consumers may be willing to pay or accept for GM foods, have revealed that consumers are usually willing to accept them at some price, or in some cases willing to pay a premium. Ultimately, it's consumers' actual--not intended--behavior that is of most interest to policy makers and business decision-makers. Real choice experiments offer the best avenue for revealing consumers' food choices in normal life.
ATC simulation of helicopter IFR approaches into major terminal areas using RNAV, MLS, and CDTI
NASA Technical Reports Server (NTRS)
Tobias, L.; Lee, H. Q.; Peach, L. L.; Willett, F. M., Jr.; Obrien, P. J.
1981-01-01
The introduction of independent helicopter IFR routes at hub airports was investigated in a real time air traffic control system simulation involving a piloted helicopter simulator, computer generated air traffic, and air traffic controllers. The helicopter simulator was equipped to fly area navigation (RNAV) routes and microwave landing system approaches. Problems studied included: (1) pilot acceptance of the approach procedure and tracking accuracy; (2) ATC procedures for handling a mix of helicopter and fixed wing traffic; and (3) utility of the cockpit display of traffic information (CDTI) for the helicopter in the hub airport environment. Results indicate that the helicopter routes were acceptable to the subject pilots and were noninterfering with fixed wing traffic. Merging and spacing maneuvers using CDTI were successfully carried out by the pilots, but controllers had some reservations concerning the acceptability of the CDTI procedures.
Baumeister, H; Nowoczin, L; Lin, J; Seifferth, H; Seufert, J; Laubner, K; Ebert, D D
2014-07-01
To (1) determine diabetes patients' acceptance of Internet-based interventions (IBIs) for depression, to (2) examine the effectiveness of an acceptance facilitating intervention (AFI) and to (3) explore subgroup specific effects. 141 diabetes patients from two inpatient rehabilitation units and one outpatient clinic in Germany were randomly allocated to an intervention group (IG) and a no-intervention control group (CG). The IG received an AFI consisting of a personal information session before filling out a questionnaire on patients' acceptance of IBIs, predictors of acceptance (performance expectancy, effort expectancy, social influence, facilitating conditions, and Internet anxiety) as well as sociodemographic, depression-related and diabetes-related variables. The CG filled out the questionnaire immediately. Patients' acceptance of IBIs was measured with a four-item scale (sum-score ranging from 4 to 20). The CG showed a low (50.7%) to medium (40.8%) acceptance with only 8.5% of all diabetes patients reporting a high acceptance of IBIs for depression. The AFI had no significant effect on acceptance (IG: M=10.55, SD=4.69, n=70; CG: M=9.65, SD=4.27, n=71; d=0.20 [95%-CI: -0.13;0.53]) and the predictors of acceptance. Yet, subgroup analyses yielded a trend for depressed, diabetes-related distressed, female and younger (<59) participants and for those who do not frequently use the Internet to profit from the AFI. Diabetes patients show a rather low acceptance toward IBIs for depression. Findings indicate that the AFI is likely to be effective in the subgroup of depressed, diabetes-related distressed, female or younger diabetes patients, but not in the whole target population. Hence, AFIs might need to be tailored to the specific needs of subpopulations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
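The reported effect size can be reproduced from the summary statistics with a pooled-SD Cohen's d:

```python
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups,
    using the pooled standard deviation."""
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# IG: M=10.55, SD=4.69, n=70; CG: M=9.65, SD=4.27, n=71 (from the abstract)
d = cohens_d(10.55, 4.69, 70, 9.65, 4.27, 71)  # ≈ 0.20, as reported
```

The small d with a confidence interval spanning zero is what underlies the "no significant effect" conclusion, despite the subgroup trends.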
Optimisation of the digital radiographic imaging of suspected non-accidental injury
NASA Astrophysics Data System (ADS)
Offiah, Amaka
Aim: To optimise the digital (radiographic) imaging of children presenting with suspected non-accidental injury (NAI). Objectives: (i) To evaluate existing radiographic quality criteria, and to develop a more suitable system if these are found to be inapplicable to skeletal surveys obtained in suspected NAI. (ii) To document differences in image quality between conventional film-screen and the recently installed Fuji5000R computed radiography (CR) system at Great Ormond Street Hospital for Children. (iii) To document the extent of variability in the standard of skeletal surveys obtained in the UK for suspected NAI. (iv) To determine those radiographic parameters which yield the highest diagnostic accuracy, while still maintaining acceptable radiation dose to the child. (v) To determine how varying degrees of edge-enhancement affect diagnostic accuracy. (vi) To establish the accuracy of soft compared to hard copy interpretation of images in suspected NAI. Materials and Methods: (i) and (ii) Retrospective analysis of 286 paediatric lateral spine radiographs by two observers based on the Commission of European Communities (CEC) quality criteria. (iii) Review of the skeletal surveys of 50 consecutive infants referred from hospitals throughout the United Kingdom (UK) with suspected NAI. (iv) Phantom studies: Leeds TO.10 and TO.16 test objects were used to compare the relationship between film density, exposure parameters and visualisation of object details. (iv) Clinical study: Anteroposterior and lateral post mortem skull radiographs of six consecutive infants were obtained at various exposures. Six observers independently scored the images based on visualisation of five criteria. (v) and (vi) A study of diagnostic accuracy in which six observers independently interpreted 50 radiographs from printed copies (with varying degrees of edge-enhancement) and from a monitor. 
Results: The CEC criteria are useful for optimisation of imaging parameters and allow the detection of differences in quality of film-screen and digital images. There is much variability in the quality and number of radiographs performed as part of skeletal surveys in the UK for suspected NAI. The Leeds test objects are either not sensitive enough (TO.10) or perhaps over-sensitive (TO.16) for the purposes of this project. Furthermore, the minimum spatial resolution required for digital imaging in NAI has not been established. Therefore the objective interpretation of phantom studies is difficult. There is scope for reduction of radiation dose to children with no effect on image quality. Diagnostic accuracy (fracture detection) in suspected NAI is generally low, and is not affected by image display modality. Conclusions: The CEC quality criteria are not applicable to the assessment of clinical image quality. A national protocol for skeletal surveys in NAI is required. Dedicated training, close supervision, collaboration and consistent exposure of radiologists to cases of NAI should improve diagnostic accuracy. The potential exists for dose reduction when performing skeletal surveys in children and infants with suspected NAI. Future studies should address this issue.