[In vivo model to evaluate the accuracy of complete-tooth spectrophotometer for dental clinics].
Liu, Feng; Yang, Jian; Xu, Tong-Kai; Xu, Ming-Ming; Ma, Yu
2011-02-01
To measure the color difference (ΔE) between values measured by the Crystaleye complete-tooth spectrophotometer and the reference values, and to evaluate the accuracy rate of the spectrophotometer. Twenty prosthodontists participated in the study. Each used the Vita 3D-Master shade guide to perform visual shade matching, and used the Crystaleye complete-tooth spectrophotometer (before and after training) to measure the middle region of eight fixed tabs from the shade guide in a dark box. The results of shade matching and of the spectrophotometer were recorded, and the accuracy rates of shade matching and of the spectrophotometer before and after training were calculated. The average accuracy rate of shade matching was 49%. The average accuracy rates of the spectrophotometer before and after training were 83% and 99%, respectively. The accuracy of the spectrophotometer was significantly higher than that of visual shade matching, and training improved the accuracy rate.
The accuracy of pain and fatigue items across different reporting periods
Broderick, Joan E.; Schwartz, Joseph E.; Vikingstad, Gregory; Pribbernow, Michelle; Grossman, Steven; Stone, Arthur A.
2008-01-01
The length of the reporting period specified for items assessing pain and fatigue varies among instruments. How the length of recall impacts the accuracy of symptom reporting is largely unknown. This study investigated the accuracy of ratings for reporting periods ranging from 1 day to 28 days for several items from widely used pain and fatigue measures (SF36v2, Brief Pain Inventory, McGill Pain Questionnaire, Brief Fatigue Inventory). Patients from a community rheumatology practice (N=83) completed momentary pain and fatigue items an average of 5.4 times per day for a month using an electronic diary. Averaged momentary ratings formed the basis for comparison with recall ratings interspersed throughout the month referencing 1-day, 3-day, 7-day, and 28-day periods. As found in previous research, recall ratings were consistently inflated relative to averaged momentary ratings. Across most items, 1-day recall corresponded well to the averaged momentary assessments for the day. Several, but not all, items demonstrated substantial correlations across the different reporting periods. An additional task requiring day-by-day recall of the previous 7 days suggested that patients have increasing difficulty actually remembering symptom levels beyond the past several days. These data were collected while patients were receiving usual care and may not generalize to conditions where new interventions are being introduced and outcomes evaluated. Reporting periods can influence the accuracy of retrospective symptom reports and should be a consideration in study design. PMID:18455312
ERIC Educational Resources Information Center
Morris, Darrell; Pennell, Ashley M.; Perney, Jan; Trathen, Woodrow
2018-01-01
This study compared reading rate to reading fluency (as measured by a rating scale). After listening to first graders read short passages, we assigned an overall fluency rating (low, average, or high) to each reading. We then used predictive discriminant analyses to determine which of five measures--accuracy, rate (objective); accuracy, phrasing,…
Gains in accuracy from averaging ratings of abnormality
NASA Astrophysics Data System (ADS)
Swensson, Richard G.; King, Jill L.; Gur, David; Good, Walter F.
1999-05-01
Six radiologists used continuous scales to rate 529 chest-film cases for likelihood of five separate types of abnormalities (interstitial disease, nodules, pneumothorax, alveolar infiltrates and rib fractures) in each of six replicated readings, yielding 36 separate ratings of each case for the five abnormalities. Analyses for each type of abnormality estimated the relative gains in accuracy (area below the ROC curve) obtained by averaging the case-ratings across: (1) six independent replications by each reader (30% gain), (2) six different readers within each replication (39% gain) or (3) all 36 readings (58% gain). Although accuracy differed among both readers and abnormalities, ROC curves for the median ratings showed similar relative gains in accuracy. From a latent-variable model for these gains, we estimate that about 51% of a reader's total decision variance consisted of random (within-reader) errors that were uncorrelated between replications, another 14% came from that reader's consistent (but idiosyncratic) responses to different cases, and only about 35% could be attributed to systematic variations among the sampled cases that were consistent across different readers.
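The gain from averaging can be illustrated with a minimal simulation of the latent-variable model sketched in the abstract. The variance split below (35% case, 14% reader-by-case, 51% within-reader noise) is taken from the reported estimates; everything else (signal separation, sample sizes) is an assumption for illustration.

```python
# Minimal sketch: a rating = case signal + reader-by-case idiosyncrasy +
# within-reader noise; averaging ratings cancels the noise components,
# raising the ROC area. Parameters other than the variance split are made up.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases, n_readers, n_reps = 500, 6, 6
truth = rng.integers(0, 2, n_cases)                                # 1 = abnormal
case = truth * 1.0 + rng.normal(0, np.sqrt(0.35), n_cases)         # consistent case variation
reader_case = rng.normal(0, np.sqrt(0.14), (n_readers, n_cases))   # idiosyncratic, replicable
noise = rng.normal(0, np.sqrt(0.51), (n_reps, n_readers, n_cases)) # within-reader error
ratings = case + reader_case + noise                               # (reps, readers, cases)

auc_single = roc_auc_score(truth, ratings[0, 0])                  # one reading
auc_reps = roc_auc_score(truth, ratings[:, 0].mean(axis=0))       # 6 replications, 1 reader
auc_all = roc_auc_score(truth, ratings.mean(axis=(0, 1)))         # all 36 readings
print(auc_single, auc_reps, auc_all)
```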
El-Amrawy, Fatema
2015-01-01
Objectives: The new wave of wireless technologies, fitness trackers, and body sensor devices can have a great impact on healthcare systems and quality of life. However, there have not been enough studies to establish the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen currently available wearable devices compared with direct observation of step counts and heart rate monitoring. Methods: Each participant in this study wore three trackers at a time, running the three corresponding applications on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data were recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers that support heart rate monitoring and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results: The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. The MisFit Shine showed the highest accuracy and precision (along with the Qualcomm Toq), the Samsung Gear 2 showed the lowest accuracy, and the Jawbone UP showed the lowest precision. However, the Xiaomi Mi Band offered the best overall package for its price. Conclusions: The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039
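A sketch of the accuracy and precision definitions implied by the abstract (the exact formulas are assumptions: accuracy compares the mean recorded count with the true count, and precision is the coefficient of variation of repeated trials):

```python
# Hypothetical step-count metrics for one tracker over repeated walks.
import numpy as np

def tracker_metrics(counts, true_steps):
    counts = np.asarray(counts, dtype=float)
    mean, sd = counts.mean(), counts.std(ddof=1)
    accuracy = 100.0 * (1.0 - abs(mean - true_steps) / true_steps)  # % accuracy
    cv = 100.0 * sd / mean                                          # % coefficient of variation
    return mean, sd, accuracy, cv

print(tracker_metrics([497, 502, 489, 510, 495], true_steps=500))
```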
Kroll, Ryan R; Boyd, J Gordon; Maslove, David M
2016-09-20
As the sensing capabilities of wearable devices improve, there is increasing interest in their application in medical settings. Capabilities such as heart rate monitoring may be useful in hospitalized patients as a means of enhancing routine monitoring or as part of an early warning system to detect clinical deterioration. To evaluate the accuracy of heart rate monitoring by a personal fitness tracker (PFT) among hospital inpatients. We conducted a prospective observational study of 50 stable patients in the intensive care unit who each completed 24 hours of heart rate monitoring using a wrist-worn PFT. Accuracy of heart rate recordings was compared with gold standard measurements derived from continuous electrocardiographic (cECG) monitoring. The accuracy of heart rates measured by pulse oximetry (Spo2.R) was also measured as a positive control. On a per-patient basis, PFT-derived heart rate values were slightly lower than those derived from cECG monitoring (average bias of -1.14 beats per minute [bpm], with limits of agreement of 24 bpm). By comparison, Spo2.R recordings produced more accurate values (average bias of +0.15 bpm, limits of agreement of 13 bpm, P<.001 as compared with PFT). Personal fitness tracker device performance was significantly better in patients in sinus rhythm than in those who were not (average bias -0.99 bpm vs -5.02 bpm, P=.02). Personal fitness tracker-derived heart rates were slightly lower than those derived from cECG monitoring in real-world testing and not as accurate as Spo2.R-derived heart rates. Performance was worse among patients who were not in sinus rhythm. Further clinical evaluation is indicated to see if PFTs can augment early warning systems in hospitals. ClinicalTrials.gov NCT02527408; https://clinicaltrials.gov/ct2/show/NCT02527408 (Archived by WebCite at http://www.webcitation.org/6kOFez3on).
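The bias and limits-of-agreement figures above follow the Bland-Altman convention; a minimal sketch of that computation, with made-up per-patient heart rates standing in for the study data:

```python
# Bland-Altman-style agreement: bias is the mean device-minus-reference
# difference; limits of agreement are bias +/- 1.96 * SD of the differences.
import numpy as np

def bias_and_loa(device_hr, reference_hr):
    diff = np.asarray(device_hr, float) - np.asarray(reference_hr, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

pft = [72, 80, 65, 90, 77]    # hypothetical PFT heart rates (bpm)
cecg = [74, 81, 68, 92, 76]   # matching cECG reference values
print(bias_and_loa(pft, cecg))
```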
Road sign recognition with fuzzy adaptive pre-processing models.
Lin, Chien-Chuan; Wang, Ming-Shi
2012-01-01
A road sign recognition system based on adaptive image pre-processing models using two fuzzy inference schemes is proposed. The first fuzzy inference scheme checks changes in light illumination and rich red color in a frame image within designated checking areas. The other checks the variance of the vehicle's speed and steering-wheel angle to select an adaptive size and position for the detection area. An AdaBoost classifier was employed to detect road sign candidates in an image, and the support vector machine technique was employed to recognize the content of the road sign candidates. Prohibitory and warning road traffic signs are the processing targets of this research. The detection rate in the detection phase is 97.42%. In the recognition phase, the recognition rate is 93.04%. The total accuracy rate of the system is 92.47%. For video sequences, the best accuracy rate is 90.54%, and the average accuracy rate is 80.17%. The average computing time is 51.86 milliseconds per frame. The proposed system can not only overcome problems of low illumination and rich red color around road signs but also offer high detection rates and high computing performance.
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
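A numerical sketch of the idea under assumed forms (the paper's exact formulation is not reproduced here): take an instantaneous rate J that attenuates as exp(-tau/cos(zenith angle)), average it over the solar day by quadrature, and express the result as a multiplication factor times the instantaneous rate at an appropriate sun angle (here, local noon).

```python
# Daily-average rate as factor * instantaneous rate; J0, tau, latitude, and
# declination are illustrative assumptions, not values from the paper.
import numpy as np

J0, tau = 1.0, 0.3                        # assumed rate scale and optical depth
lat, decl = np.radians(40.0), np.radians(10.0)

h = np.linspace(-np.pi, np.pi, 2001)      # hour angle over the 24-h day
cosz = np.sin(lat)*np.sin(decl) + np.cos(lat)*np.cos(decl)*np.cos(h)
J = np.where(cosz > 0, J0*np.exp(-tau/np.clip(cosz, 1e-6, None)), 0.0)

daily_avg = np.sum(0.5*(J[1:] + J[:-1])*np.diff(h)) / (2*np.pi)  # trapezoid rule
J_noon = J0*np.exp(-tau/cosz.max())       # instantaneous rate at local noon
print(daily_avg, daily_avg / J_noon)      # average rate and multiplication factor
```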
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, the number of humidity and temperature levels, the humidity and temperature ranges, and the average humidity and temperature on the accuracy and precision of kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature ranges were changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, our proposed method saves time, labor, and materials.
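A hypothetical illustration of extracting kinetic parameters from initial average rates (the paper's full humidity-temperature design is richer than this): if the initial rate r equals the rate constant k = A·exp(-Ea/(R·T)), then ln(r) versus 1/T is linear with slope -Ea/R.

```python
# Arrhenius fit from assumed initial average degradation rates at three
# stress temperatures; all numbers are illustrative, not from the study.
import numpy as np

R = 8.314                                   # J/(mol*K)
T = np.array([313.15, 323.15, 333.15])      # assumed stress temperatures (K)
r = np.array([0.8e-3, 2.1e-3, 5.0e-3])      # assumed initial average rates (%/h)

slope, intercept = np.polyfit(1.0 / T, np.log(r), 1)
Ea = -slope * R                             # activation energy (J/mol)
print(Ea / 1000.0, np.exp(intercept))       # Ea in kJ/mol, pre-exponential A
```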
A Novel Energy-Efficient Approach for Human Activity Recognition.
Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru
2017-09-08
In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is less than the activity frequency, i.e., when the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and six females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper.
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process with information about the current weather status at certain coordinates of each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is calculated as the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
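A simplified sketch of a BMA-style combination and its MSE score (the setup is assumed; the paper's training procedure is not shown here). Weights are posterior model probabilities, approximated below from each member's likelihood on a training window:

```python
# Toy Bayesian Model Averaging: weight two member forecasts by their Gaussian
# likelihood against observations, combine, and score with MSE. All data and
# the error scale sigma are hypothetical.
import numpy as np

obs = np.array([24.1, 23.8, 25.0, 24.6])            # observed temperatures
members = np.array([[24.5, 23.5, 25.3, 24.0],       # forecasts from model 1
                    [23.6, 24.2, 24.7, 25.1]])      # forecasts from model 2

sigma = 0.8                                         # assumed forecast error scale
loglik = -0.5 * np.sum((members - obs)**2, axis=1) / sigma**2
w = np.exp(loglik - loglik.max()); w /= w.sum()     # normalized BMA weights

bma_forecast = w @ members                          # weighted combination
mse = np.mean((bma_forecast - obs)**2)
print(w, bma_forecast, mse)
```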
Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien
2016-01-01
A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were used as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to strict recognition constraints.
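ITR figures like those above are commonly computed with the Wolpaw formula; a sketch assuming that definition (the study's exact trial timing is not given, so the trial rate below is hypothetical):

```python
# Wolpaw bits/trial for an N-target BCI with accuracy p, times trials/min.
import math

def wolpaw_bits_per_trial(n_targets, p):
    if p >= 1.0:
        return math.log2(n_targets)
    return (math.log2(n_targets) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_targets - 1)))

bits_fm = wolpaw_bits_per_trial(4, 0.878)   # fast-recognition mode accuracy
bits_am = wolpaw_bits_per_trial(4, 0.92)    # accuracy-recognition mode accuracy
trials_per_min = 30                          # hypothetical trial rate
print(bits_fm * trials_per_min, bits_am * trials_per_min)  # bits/min
```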
Fukuyama, Atsushi; Isoda, Haruo; Morita, Kento; Mori, Marika; Watanabe, Tomoya; Ishiguro, Kenta; Komori, Yoshiaki; Kosugi, Takafumi
2017-01-01
Introduction: We aim to elucidate the effect of spatial resolution of three-dimensional cine phase contrast magnetic resonance (3D cine PC MR) imaging on the accuracy of the blood flow analysis, and examine the optimal setting for spatial resolution using flow phantoms. Materials and Methods: The flow phantom has five types of acrylic pipes that represent human blood vessels (inner diameters: 15, 12, 9, 6, and 3 mm). The pipes were fixed with 1% agarose containing 0.025 mol/L gadolinium contrast agent. A blood-mimicking fluid with human blood property values was circulated through the pipes at a steady flow. Magnetic resonance (MR) images (three-directional phase images with speed information and magnitude images for information of shape) were acquired using the 3-Tesla MR system and receiving coil. Temporal changes in spatially-averaged velocity and maximum velocity were calculated using hemodynamic analysis software. We calculated the error rates of the flow velocities based on the volume flow rates measured with a flowmeter and examined measurement accuracy. Results: When the acrylic pipe was the size of the thoracicoabdominal or cervical artery and the ratio of pixel size for the pipe was set at 30% or lower, spatially-averaged velocity measurements were highly accurate. When the pixel size ratio was set at 10% or lower, maximum velocity could be measured with high accuracy. It was difficult to accurately measure maximum velocity of the 3-mm pipe, which was the size of an intracranial major artery, but the error for spatially-averaged velocity was 20% or less. Conclusions: Flow velocity measurement accuracy of 3D cine PC MR imaging for pipes with inner sizes equivalent to vessels in the cervical and thoracicoabdominal arteries is good. The flow velocity accuracy for the pipe with a 3-mm-diameter that is equivalent to major intracranial arteries is poor for maximum velocity, but it is relatively good for spatially-averaged velocity. PMID:28132996
Ghelani, Karen; Sidhu, Robindra; Jain, Umesh; Tannock, Rosemary
2004-11-01
Reading comprehension is a very complex task that requires different cognitive processes and reading abilities over the life span. There are fewer studies of reading comprehension relative to investigations of word reading abilities. Reading comprehension difficulties, however, have been identified in two common and frequently overlapping childhood disorders: reading disability (RD) and attention-deficit/hyperactivity disorder (ADHD). The nature of reading comprehension difficulties in these groups remains unclear. The performance of four groups of adolescents (RD, ADHD, comorbid ADHD and RD, and normal controls) was compared on reading comprehension tasks as well as on reading rate and accuracy tasks. Adolescents with RD showed difficulties across most reading tasks, although their comprehension scores were average. Adolescents with ADHD exhibited adequate single word reading abilities. Subtle difficulties were observed, however, on measures of text reading rate and accuracy as well as on silent reading comprehension, but scores remained in the average range. The comorbid group demonstrated similar difficulties to the RD group on word reading accuracy and on reading rate but experienced problems on only silent reading comprehension. Implications for reading interventions are outlined, as well as the clinical relevance for diagnosis.
ERIC Educational Resources Information Center
Niedo, Jasmin; Lee, Yen-Ling; Breznitz, Zvia; Berninger, Virginia W.
2014-01-01
Fourth graders whose silent word reading and/or sentence reading rate was, on average, two-thirds standard deviation below their oral reading of real and pseudowords and reading comprehension accuracy were randomly assigned to treatment ("n" = 7) or wait-listed ("n" = 7) control groups. Following nine sessions combining…
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy.
Krause, Jean C; Tessler, Morgan P
2016-10-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings.
The systematic component of phylogenetic error as a function of taxonomic sampling under parsimony.
Debry, Ronald W
2005-06-01
The effect of taxonomic sampling on phylogenetic accuracy under parsimony is examined by simulating nucleotide sequence evolution. Random error is minimized by using very large numbers of simulated characters. This allows estimation of the consistency behavior of parsimony, even for trees with up to 100 taxa. Data were simulated on 8 distinct 100-taxon model trees and analyzed as stratified subsets containing either 25 or 50 taxa, in addition to the full 100-taxon data set. Overall accuracy decreased in a majority of cases when taxa were added. However, the magnitude of change in the cases in which accuracy increased was larger than the magnitude of change in the cases in which accuracy decreased, so, on average, overall accuracy increased as more taxa were included. A stratified sampling scheme was used to assess accuracy for an initial subsample of 25 taxa. The 25-taxon analyses were compared to 50- and 100-taxon analyses that were pruned to include only the original 25 taxa. On average, accuracy for the 25 taxa was improved by taxon addition, but there was considerable variation in the degree of improvement among the model trees and across different rates of substitution.
Utility of an Algorithm to Increase the Accuracy of Medication History in an Obstetrical Setting.
Corbel, Aline; Baud, David; Chaouch, Aziz; Beney, Johnny; Csajka, Chantal; Panchaud, Alice
2016-01-01
In an obstetrical setting, inaccurate medication histories at hospital admission may result in failure to identify potentially harmful treatments for patients and/or their fetus(es). This prospective study was conducted to assess average concordance rates between (1) a medication list obtained with a one-page structured medication history algorithm developed for the obstetrical setting and (2) the medication list reported in medical records and obtained by open-ended questions based on standard procedures. Both lists were scored as concordance rates against a best possible medication history used as the reference (information obtained from interviews with patients, prescribers, and community pharmacists). The algorithm-based method achieved a higher average concordance rate than the standard method, with concordance rates of 90.2% [95% CI 85.8-94.3] versus 24.6% [95% CI 15.3-34.4], respectively (p<0.01). Our algorithm-based method strongly enhanced the accuracy of the medication history in our obstetric population without using substantial resources. Its implementation is an effective first step in the medication reconciliation process, which has been recognized as a very important component of patients' drug safety.
Accurate Reading with Sequential Presentation of Single Letters
Price, Nicholas S. C.; Edwards, Gemma L.
2012-01-01
Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated if reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 wpm and accuracies of over 90% with the single letter reading (SLR) method and naive participants achieved average reading rates over 30 wpm with greater than 90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words in the lexicon that occur more frequently were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or for scenarios with limited spatial and temporal resolution such as patients with low vision or prostheses. PMID:23115548
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we show that model averaging methods such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI), and Average Lending Rate (ALR) of Malaysia.
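A minimal sketch of forecast weight averaging as described: take the weight vectors produced by several weighting schemes and average them before forming the combined forecast. The individual schemes below are stand-ins, not the paper's exact constructions:

```python
# Average the weight vectors from several model-averaging schemes, then
# combine the member point forecasts. All numbers are hypothetical.
import numpy as np

model_forecasts = np.array([2.1, 2.4, 1.9])     # point forecasts from 3 models

w_simple = np.ones(3) / 3                       # simple model averaging
w_variance = np.array([0.5, 0.3, 0.2])          # e.g. variance-based weights
w_stderr = np.array([0.45, 0.35, 0.20])         # e.g. standard-error-based weights

w_combined = (w_simple + w_variance + w_stderr) / 3.0   # forecast weight averaging
print(w_combined @ model_forecasts)
```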
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
Improving coding accuracy in an academic practice.
Nguyen, Dana; O'Mara, Heather; Powell, Robert
2017-01-01
Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study population: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small-group case review, and large-group discussion. Main outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small-group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
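The reported comparison is a standard paired t test; a sketch with made-up per-provider accuracy data:

```python
# Paired t test on per-provider coding accuracy before and after the
# intervention; the data below are hypothetical placeholders.
import numpy as np
from scipy import stats

before = np.array([0.22, 0.31, 0.18, 0.29, 0.35, 0.24, 0.26])
after = np.array([0.25, 0.28, 0.20, 0.31, 0.33, 0.22, 0.29])

t, p = stats.ttest_rel(after, before)
print(f"t = {t:.3f}, p = {p:.2f}")   # a near-zero t mirrors the study's null result
```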
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements, with simple and practical implementation, to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection) against gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm, compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy, while the uncertainty of the conventional method was 3.4 mm on average, with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method.
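A geometric sketch of the measurement-based idea, under stated assumptions (the needle length, template coordinates, and identity transformation below are all hypothetical; the paper's calibrated transformation factor would replace them):

```python
# Tip depth = full needle length - measured residual length outside the
# template; a pre-calibrated transform maps template coordinates into the
# TRUS image frame. All constants are illustrative.
import numpy as np

def needle_tip_on_trus(template_xy, residual_len_mm,
                       needle_len_mm=240.0,       # assumed full needle length
                       transform=np.eye(3)):       # pre-established factor (placeholder)
    depth = needle_len_mm - residual_len_mm        # insertion depth along the needle
    template_point = np.array([template_xy[0], template_xy[1], depth])
    return transform @ template_point              # position in image coordinates

print(needle_tip_on_trus((10.0, 25.0), residual_len_mm=182.5))
```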
TH-A-9A-10: Prostate SBRT Delivery with Flattening-Filter-Free Mode: Benefit and Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T; Yuan, L; Sheng, Y
Purpose: Flattening-filter-free (FFF) beam mode offered on the TrueBeam™ linac enables delivering IMRT at a 2400 MU/min dose rate. This study investigates the benefit and delivery accuracy of using a high dose rate in the context of prostate SBRT. Methods: 8 prostate SBRT patients were retrospectively studied. In 5 cases treated with a 600 MU/min dose rate, continuous prostate motion data acquired during radiation-beam-on was used to analyze motion range. In addition, the initial 1/3 of the prostate motion trajectories during each radiation-beam-on was separated to simulate the motion range if 2400 MU/min were used. To analyze delivery accuracy in FFF mode, MLC trajectory log files from an additional 3 cases treated at 2400 MU/min were acquired. These log files record MLC expected and actual positions every 20 ms, and therefore can be used to reveal delivery accuracy. Results: (1) Benefit. On average, treatment at 600 MU/min takes 30 s per beam, whereas 2400 MU/min requires only 11 s. When shortening delivery time to ~1/3, the prostate motion range was significantly smaller (p<0.001). The largest motion reduction occurred in the Sup-Inf direction, from [−3.3 mm, 2.1 mm] to [−1.7 mm, 1.7 mm], followed by a reduction from [−2.1 mm, 2.4 mm] to [−1.0 mm, 2.4 mm] in the Ant-Pos direction. No change was observed in the LR direction [−0.8 mm, 0.6 mm]. The combined motion amplitude (vector norm) confirms that average motion and ranges are significantly smaller when beam-on was limited to the first 1/3 of the actual delivery time. (2) Accuracy. Trajectory log file analysis showed excellent delivery accuracy at 2400 MU/min. Most leaf deviations during beam-on were within 0.07 mm (99th percentile). Maximum leaf-opening deviations during each beam-on were all under 0.1 mm for all leaves. Dose rate was maintained at 2400 MU/min during beam-on without dipping. Conclusion: Delivering prostate SBRT at 2400 MU/min is both beneficial and accurate. High dose rates significantly reduced both treatment time and intra-beam prostate motion range. Excellent delivery accuracy was confirmed with very small leaf motion deviation.
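A sketch of the trajectory-log deviation analysis described above (the log format details are assumptions; logs sampled every 20 ms give expected and actual leaf positions):

```python
# 99th-percentile and maximum absolute MLC leaf deviation from stand-in data.
import numpy as np

expected = np.random.default_rng(1).normal(50.0, 5.0, 10000)        # leaf positions (mm)
actual = expected + np.random.default_rng(2).normal(0, 0.03, 10000) # small tracking error

dev = np.abs(actual - expected)
print(np.percentile(dev, 99), dev.max())
```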
Accuracy improvement of the ice flow rate measurements on Antarctic ice sheet by DInSAR method
NASA Astrophysics Data System (ADS)
Shiramizu, Kaoru; Doi, Koichiro; Aoyama, Yuichi
2015-04-01
DInSAR (Differential Interferometric Synthetic Aperture Radar) is an effective tool to measure the flow rate of slow-flowing ice streams on the Antarctic ice sheet with high resolution. In the flow rate measurement by the DInSAR method, we use a Digital Elevation Model (DEM) at two stages of the estimation process. First, we use it to remove topographic fringes from InSAR images. Then, it is used to project the obtained displacements along the Line-Of-Sight (LOS) direction onto the actual flow direction. The ASTER GDEM, widely used for InSAR processing of polar-region data, has many errors, especially in the inland ice sheet area; these errors yield irregular flow rates and directions. Therefore, the quality of the DEM has a substantial influence on the ice flow rate measurement. In this study, we created a new DEM (resolution 10 m; hereinafter referred to as PRISM-DEM) based on ALOS/PRISM images and compared PRISM-DEM and ASTER GDEM. The study area is around Skallen, 90 km south of Syowa Station, in the southern part of the Sôya Coast, East Antarctica. For making DInSAR images, we used 13 pairs of ALOS/PALSAR data (Path 633, Rows 571-572), observed during the period from November 23, 2007 through January 16, 2011. PRISM-DEM covering the PALSAR scene was created from nadir and backward view images of ALOS/PRISM (observation date: 2009/1/18) by applying stereo processing with digital mapping equipment; the automatically created primary DEM was then corrected manually to make a final DEM. The number of irregular values of actual ice flow rate was reduced by applying PRISM-DEM compared with ASTER GDEM. Additionally, an averaged displacement of approximately 0.5 cm was obtained by applying PRISM-DEM over outcrop areas, where no crustal displacement is considered to occur during the recurrence period of ALOS/PALSAR (46 days), while an averaged displacement of approximately 1.65 cm was observed by applying ASTER GDEM. Since displacements over outcrop areas are considered to be apparent ones, the average can serve as a measure of the accuracy of flow rate estimation by DInSAR. Therefore, it is concluded that the accuracy of the ice flow rate measurement can be improved by using PRISM-DEM. In this presentation, we will show the results of the estimated flow rate of ice streams in the region of interest, and discuss further accuracy improvements to this method.
Rezaei-Darzi, Ehsan; Farzadfar, Farshad; Hashemi-Meshkini, Amir; Navidi, Iman; Mahmoudi, Mahmoud; Varmaghani, Mehdi; Mehdipour, Parinaz; Soudi Alamdari, Mahsa; Tayefi, Batool; Naderimagham, Shohreh; Soleymani, Fatemeh; Mesdaghinia, Alireza; Delavari, Alireza; Mohammad, Kazem
2014-12-01
This study aimed to evaluate and compare the prediction accuracy of two data mining techniques, decision tree and neural network models, in labeling gastrointestinal prescriptions with diagnoses in Iran. The study was conducted in three phases: data preparation, training, and testing. A sample from a database of 23 million pharmacy insurance claim records from 2004 to 2011 was used, in which a total of 330 prescriptions were assessed and used to train and test the models simultaneously. In the training phase, the selected prescriptions were assessed by a physician and a pharmacist separately and assigned a diagnosis. To test the performance of each model, k-fold stratified cross-validation was conducted in addition to measuring their sensitivity and specificity. Generally, the two methods had very similar accuracies. Considering the weighted averages of the true positive rate (sensitivity) and true negative rate (specificity), the decision tree had slightly higher accuracy in correct classification (83.3% and 96% versus 80.3% and 95.1%, respectively). However, when the weighted average of the ROC area (AUC between each class and all other classes) was measured, the ANN displayed higher accuracy in predicting the diagnosis (93.8% compared with 90.6%). According to the results of this study, the artificial neural network and decision tree models represent similar accuracy in labeling GI prescriptions with diagnoses.
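A sketch of this kind of comparison using scikit-learn stand-ins (the features, labels, fold count, and hyperparameters below are placeholders, not the study's setup):

```python
# Stratified k-fold comparison of a decision tree and a neural network on
# synthetic multi-class data, scoring accuracy and weighted one-vs-rest AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=330, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("neural network", MLPClassifier(max_iter=2000, random_state=0))]:
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc_ovr_weighted")
    print(name, acc.mean(), auc.mean())
```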
Calculating Time-Integral Quantities in Depletion Calculations
Isotalo, Aarno
2016-06-02
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
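The core idea can be sketched on a toy two-nuclide chain: appending a row of constant weights W to the burnup matrix makes the extra component equal to the time integral of W·n(t), so one matrix exponential yields both the end-of-step densities and the step integral. The matrix values below are made up for illustration:

```python
# Tally-nuclide sketch: augment dn/dt = A n with dy/dt = W n, y(0) = 0;
# then y(dt) is the integral of W n over the step, and y(dt)/dt its average.
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.10, 0.00],      # toy depletion matrix (1/s)
              [ 0.10, -0.05]])
W = np.array([[2.0, 1.0]])        # constant weights defining the tallied quantity

A_aug = np.block([[A, np.zeros((2, 1))],
                  [W, np.zeros((1, 1))]])

n0 = np.array([1.0, 0.0, 0.0])    # initial densities plus zeroed tally component
dt = 30.0                          # step length (s)
n_end = expm(A_aug * dt) @ n0

integral = n_end[2]                # integral of W @ n over the step
print(n_end[:2], integral, integral / dt)   # densities, step integral, step average
```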
On determining dose rate constants spectroscopically.
Rodriguez, M; Rogers, D W O
2013-01-01
To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of (125)I and (103)Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)] including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated (125)I and (103)Pd sources. Spectra generated by 14 (125)I and 6 (103)Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm(3) voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council of Radiation Protection and Measurements) Report 58 for the (125)I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for (103)Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. The ratio of the intensity of the 31 keV line relative to that of the main peak in (125)I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with TG-43U1 initial spectrum. The (103)Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra with average discrepancies of 0.9% and 1.7% for the (125)I and (103)Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2%, in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
Determination of rain rate from a spaceborne radar using measurements of total attenuation
NASA Technical Reports Server (NTRS)
Meneghini, R.; Eckerman, J.; Atlas, D.
1981-01-01
Studies show that path-integrated rain rates can be determined by means of a direct measurement of attenuation. For ground-based radars this is done by measuring the backscattering cross section of a fixed target in the presence and absence of rain along the radar beam. A ratio of the two measurements yields a factor proportional to the attenuation, from which the average rain rate is deduced. The technique is extended to spaceborne radars by choosing the ground as the reference target. The technique is also generalized so that both the average and range-profiled rain rates are determined. The accuracies of the resulting estimates are evaluated for a narrow-beam radar located on a low earth orbiting satellite.
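A sketch of the surface-reference idea with an assumed k-R power law (the coefficients a and b below are illustrative and frequency dependent; treating the path-averaged rate as a single inversion of the power law is itself an approximation):

```python
# Path-averaged rain rate from the drop in surface return: the two-way
# attenuation over path length L gives a specific attenuation k (dB/km),
# inverted through k = a * R**b. All numbers are hypothetical.
import numpy as np

def path_avg_rain_rate(sigma0_rain_db, sigma0_clear_db, path_km, a=0.03, b=1.1):
    atten_db = sigma0_clear_db - sigma0_rain_db   # two-way attenuation (dB)
    k = atten_db / (2.0 * path_km)                # one-way specific attenuation
    return (k / a) ** (1.0 / b)                   # rain rate R in mm/h

print(path_avg_rain_rate(sigma0_rain_db=4.0, sigma0_clear_db=10.0, path_km=5.0))
```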
The Effect of Rate of Presentation on Digit Serial Recall in Reading Retarded Children.
ERIC Educational Resources Information Center
Gan, Jennifer; Tymchuk, Alexander J.
This study examined the effect of presentation rate on accuracy of digit serial recall and on serial position curves of digit strings of different lengths with 18 boys classified as reading retarded and a comparison group of children (ages for both groups averaged 11 years) who read at grade level. The results indicated that normal children…
A robust omnifont open-vocabulary Arabic OCR system using pseudo-2D-HMM
NASA Astrophysics Data System (ADS)
Rashwan, Abdullah M.; Rashwan, Mohsen A.; Abdel-Hameed, Ahmed; Abdou, Sherif; Khalil, A. H.
2012-01-01
Recognizing old documents is highly desirable since the demand for quickly searching millions of archived documents has recently increased. Using Hidden Markov Models (HMMs) has proven to be a good solution to tackle the main problems of recognizing typewritten Arabic characters. Although these attempts achieved remarkable success for omnifont OCR under very favorable conditions, they did not achieve the same performance under practical conditions, i.e., noisy documents. In this paper we present an omnifont, large-vocabulary Arabic OCR system using a Pseudo Two-Dimensional Hidden Markov Model (P2DHMM), which is a generalization of the HMM. The P2DHMM offers a more efficient way to model Arabic characters; such a model offers both minimal dependency on font size/style (omnifont) and a high level of robustness against noise. The evaluation results of this system are very promising compared to a baseline HMM system and the best OCRs available in the market (Sakhr and NovoDynamics). The recognition accuracy of the P2DHMM classifier was measured against the classic HMM classifier; the average word accuracy rates for the P2DHMM and HMM classifiers are 79% and 66%, respectively. The overall system accuracy was measured against the Sakhr and NovoDynamics OCR systems; the average word accuracy rates for P2DHMM, NovoDynamics, and Sakhr are 74%, 71%, and 61%, respectively.
Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai
2017-12-26
Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporally screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
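A sketch of the SVM-RFE backbone with a simple cross-validated accuracy cut-off standing in for the proposed accuracy-plus-overlap (OA) criterion (the data, kernel, and subset-size range are placeholders):

```python
# Rank features with SVM-RFE, then pick the subset size that maximizes
# cross-validated accuracy (a stand-in for the OA criterion above).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=50, n_informative=6,
                           random_state=0)
svm = SVC(kernel="linear")

ranking = RFE(svm, n_features_to_select=1).fit(X, y).ranking_  # 1 = best feature
order = np.argsort(ranking)                                    # best features first

best_k, best_score = 1, -np.inf
for k in range(1, 21):                       # candidate subset sizes
    score = cross_val_score(svm, X[:, order[:k]], y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)
```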
Matsuba, Shinji; Tabuchi, Hitoshi; Ohsugi, Hideharu; Enno, Hiroki; Ishitobi, Naofumi; Masumoto, Hiroki; Kiuchi, Yoshiaki
2018-05-09
To predict exudative age-related macular degeneration (AMD), we combined a deep convolutional neural network (DCNN), a machine-learning algorithm, with Optos, an ultra-wide-field fundus imaging system. First, to evaluate the diagnostic accuracy of the DCNN, 364 photographic images (AMD: 137) were augmented, and the area under the curve (AUC), sensitivity, and specificity were examined. Furthermore, to compare the diagnostic abilities of the DCNN and six ophthalmologists, we prepared a test set of 84 images comprising 50% normal and 50% wet-AMD data and calculated the accuracy, specificity, sensitivity, and response times. The DCNN exhibited 100% sensitivity and 97.31% specificity for wet-AMD images, with an average AUC of 99.76%. Moreover, in the comparison of diagnostic abilities between the DCNN and the six ophthalmologists, the average accuracy of the DCNN was 100%. On the other hand, the accuracy of the ophthalmologists, determined only from Optos images without a fundus examination, was 81.9%. A combination of the DCNN with Optos images is not better than a medical examination; however, it can identify exudative AMD with a high level of accuracy. Our system is considered useful for screening and telemedicine.
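The reported metrics (AUC, sensitivity, specificity) can be computed from model scores and labels as follows; the scores and threshold below are hypothetical:

```python
# Diagnostic metrics from classifier scores with scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = wet AMD (hypothetical labels)
y_score = np.array([0.98, 0.10, 0.91, 0.88, 0.30, 0.05, 0.99, 0.45])

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(auc, sensitivity, specificity)
```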
Lessons in molecular recognition. 2. Assessing and improving cross-docking accuracy.
Sutherland, Jeffrey J; Nandigam, Ravi K; Erickson, Jon A; Vieth, Michal
2007-01-01
Docking methods are used to predict the manner in which a ligand binds to a protein receptor. Many studies have assessed the success rate of programs in self-docking tests, whereby a ligand is docked into the protein structure from which it was extracted. Cross-docking, or using a protein structure from a complex containing a different ligand, provides a more realistic assessment of a docking program's ability to reproduce X-ray results. In this work, cross-docking was performed with CDocker, Fred, and Rocs using multiple X-ray structures for eight proteins (two kinases, one nuclear hormone receptor, one serine protease, two metalloproteases, and two phosphodiesterases). While average cross-docking accuracy is not encouraging, it is shown that using the protein structure from the complex that contains the bound ligand most similar to the docked ligand increases docking accuracy for all methods ("similarity selection"). Identifying the most successful protein conformer ("best selection") and similarity selection substantially reduce the difference between self-docking and average cross-docking accuracy. We identify universal predictors of docking accuracy (i.e., showing consistent behavior across most protein-method combinations), and show that models for predicting docking accuracy built using these parameters can be used to select the most appropriate docking method.
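The "similarity selection" rule reduces to a one-liner once a ligand similarity function is available. A sketch follows; the fingerprint/Tanimoto similarity is an assumed stand-in, not necessarily the authors' exact metric.

```python
def similarity_select(docked_ligand, complexes, similarity):
    """Choose the receptor conformer whose co-crystallized ligand is most
    similar to the ligand about to be docked.

    complexes  : list of (protein_structure, bound_ligand) pairs
    similarity : callable returning e.g. a Tanimoto score in [0, 1]
    """
    protein, _ = max(complexes, key=lambda c: similarity(docked_ligand, c[1]))
    return protein   # dock into this structure
```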
Relationship of sediment discharge to streamflow
Colby, B.R.
1956-01-01
The relationship between rate of sediment discharge and rate of water discharge at a cross section of a stream is frequently expressed by an average curve. This curve is the sediment rating curve. It has been widely used in the computation of average sediment discharge from water discharge for periods when sediment samples were not collected. This report discusses primarily the applications of sediment rating curves for periods during which at least occasional sediment samples were collected. Because sediment rating curves are of many kinds, the selection of the correct kind for each use is important. Each curve should be carefully prepared. In particular, the correct dependent variable must be used or the slope of the sediment rating curve may be incorrect for computing sediment discharges. Sediment rating curves and their applications were studied for the following gaging stations: 1. Niobrara River near Cody, Nebr. 2. Colorado River near Grand Canyon, Ariz. 3. Rio Grande at San Marcial, N. Mex. 4. Rio Puerco near Bernardo, N. Mex. 5. White River near Kadoka, S. Dak. 6. Sandusky River near Fremont, Ohio. Except for the Sandusky River and the Rio Puerco, which transport mostly fine sediment, one instantaneous sediment rating curve was prepared for the discharge of suspended sands at each station, and another for the discharge of sediment finer than 0.082 millimeter. Each curve was studied separately, and by trial-and-error multiple correlation some of the factors that cause scatter from the sediment rating curves were determined. Average velocity at the cross section, water temperature, and erratic fluctuations in concentration seemed to be the three major factors that caused departures from the sediment rating curves for suspended sands. The concentration of suspended sands varied with about the 2.8 power of the mean velocity for the four sediment rating curves for suspended sands. The effect of water temperature was not so consistent as that of velocity and theoretically should vary considerably with differences in the size composition of the suspended sands. Scatter from the sediment rating curves for sediments finer than 0.082 millimeter seemed to be caused by changes in supply of these sediments. Some of the scatter could be explained by seasonal variations, by a pattern of change in concentration of fine sediment following a rise, or by source of the runoff as indicated by the measured relative flows of certain tributaries. Daily or instantaneous sediment rating curves adjusted for factors that account for some of the scatter from an average curve often can be used to compute approximate daily, monthly, and annual sediment discharges. Accuracy of the computed sediment discharges should be better than average for streams that transport mostly sands rather than fine sediments and for some ephemeral or intermittent streams, such as the Rio Puerco, in semiarid regions. Accuracy of computed sediment discharges can be much improved for many streams by shifting the sediment rating curve on the basis of 2 or 4 measurements of sediment discharge per month. Of 26 annual sediment discharges that were computed by shifting sediment rating curves to either 2 or 4 measured sediment discharges per month, 18 were within 10 percent of the annual sediment discharges that were computed on the basis of a daily sampling program.
Monthly and daily sediment discharges computed from daily or instantaneous sediment rating curves, either shifted or unshifted, were less accurate than similarly computed annual sediment discharges. Even so, the difference in cost between occasional sediment samples and daily samples is so great that the added accuracy from daily sampling may not justify the added cost. Monthly and annual sediment rating curves can be applied simply, with adjustments if required, to compute monthly and annual sediment discharges with reasonably good accuracy for gaging stations like the Rio Puerco near Bernardo,
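A sediment rating curve is conventionally fit as a power law on log-log axes. A minimal sketch with hypothetical paired observations follows; note the report's warning that sediment discharge must be the dependent variable, or the fitted slope will be biased.

```python
import numpy as np

# Hypothetical paired samples: water discharge Q (cfs) and suspended-sand
# discharge Qs (tons/day) from occasional sediment measurements.
Q  = np.array([120., 340., 560., 900., 1500., 2400.])
Qs = np.array([ 15.,  90., 210., 520., 1400., 3600.])

# Fit log10(Qs) = log10(a) + b*log10(Q), with Qs as the dependent variable.
b, log_a = np.polyfit(np.log10(Q), np.log10(Qs), 1)
a = 10 ** log_a

def rated_sediment_discharge(q):
    """Approximate sediment discharge (tons/day) for water discharge q."""
    return a * q ** b
```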
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christian, Mark H; Hadjerioua, Boualem; Lee, Kyutae
2015-01-01
The following paper presents the results of an investigation into the impact of the number and placement of Current Meter (CM) flow sensors on the accuracy with which the overall flow rate can be predicted. Flow measurement accuracy is of particular importance in multiunit plants because it plays a pivotal role in determining the operational efficiency characteristics of each unit, allowing the operator to select the unit (or combination of units) which most efficiently meets demand. Several case studies have demonstrated that optimization of unit dispatch has the potential to increase plant efficiencies by between 1 and 4.4 percent [2][3]. Unfortunately, current industry standards do not have an established methodology to measure the flow rate through hydropower units with short converging intakes (SCI); the only direction provided is that CM sensors should be used. The most common application of CM is horizontal, along a trolley which is incrementally lowered across a measurement cross section. As such, the measurement resolution is defined horizontally and vertically by the number of CM and the number of measurement increments, respectively. There has not been any published research on the role of resolution in either direction on the accuracy of flow measurement. The work below investigates the effectiveness of flow measurement in a SCI by performing a case study in which point velocity measurements were extracted from a physical plant and then used to calculate a series of reference flow distributions. These distributions were then used to perform sensitivity studies on the relation between the number of CM and the accuracy to which the flow rate was predicted. The research uncovered that a minimum of 795 plants contain SCI, a quantity which represents roughly 12% of total domestic hydropower capacity. In regard to measurement accuracy, it was determined that accuracy ceases to increase considerably with further increases in vertical resolution beyond the application of 49 transects. Moreover, the research uncovered that the application of 5 CM (when applied at 49 vertical transects) resulted in an average accuracy of 95.6%, and the application of additional sensors resulted in a linear increase in accuracy up to 17 CM, which had an average accuracy of 98.5%. Beyond 17 CM, incremental increases in accuracy due to the addition of CM were found to decrease exponentially. Future work in this area will investigate the use of computational fluid dynamics to acquire a broader range of flow fields within SCI.
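The quantity being estimated is simply the velocity field integrated over the measurement plane. A sketch of how a CM grid (transects by meters) would be reduced to a discharge is given below, assuming the sampled velocities are normal to the plane.

```python
import numpy as np
from scipy.integrate import trapezoid

def flow_rate(velocities, y, z):
    """Integrate point velocities over a rectangular measurement plane.

    velocities : (n_transects, n_meters) normal velocity samples (m/s)
    y          : (n_transects,) vertical positions of the transects (m)
    z          : (n_meters,)  horizontal positions of the meters (m)
    """
    per_transect = trapezoid(velocities, z, axis=1)   # m^2/s per transect
    return trapezoid(per_transect, y)                 # total discharge, m^3/s
```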
Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed
2018-04-01
The function of a sewage treatment plant is to treat the sewage to acceptable standards before being discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R²) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also prove that the SVM model's frequency of errors above 10% or below −10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
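The initial assessment measures are standard; a sketch of RMSE, R², and the relative-error screen is given below (the paper's PFC and LFC definitions are not reproduced here).

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error."""
    return np.sqrt(np.mean((obs - pred) ** 2))

def r_squared(obs, pred):
    """Coefficient of determination."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def relative_error_exceedances(obs, pred, tol=0.10):
    """Fraction of predictions whose relative error falls outside +/-tol."""
    re = (pred - obs) / obs
    return np.mean(np.abs(re) > tol)
```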
Detection of stress factors in crop and weed species using hyperspectral remote sensing reflectance
NASA Astrophysics Data System (ADS)
Henry, William Brien
The primary objective of this work was to determine if stress factors such as moisture stress or herbicide injury stress limit the ability to distinguish between weeds and crops using remotely sensed data. Additional objectives included using hyperspectral reflectance data to measure moisture content within a species, and to measure crop injury in response to drift rates of non-selective herbicides. Moisture stress did not reduce the ability to discriminate between species. Regardless of analysis technique, the trend was that as moisture stress increased, so too did the ability to distinguish between species. Signature amplitudes (SA) of the top 5 bands, discrete wavelet transforms (DWT), and multiple indices were promising analysis techniques. Discriminant models created from one year's data set and validated on additional data sets provided, on average, approximately 80% accurate classification among weeds and crop. This suggests that these models are relatively robust and could potentially be used across environmental conditions in field scenarios. Distinguishing between leaves grown at high-moisture stress and no-stress was met with limited success, primarily because there was substantial variation among samples within the treatments. Leaf water potential (LWP) was measured, and these were classified into three categories using indices. Classification accuracies were as high as 68%. The 10 bands most highly correlated to LWP were selected; however, there were no obvious trends or patterns in these top 10 bands with respect to time, species or moisture level, suggesting that LWP is an elusive parameter to quantify spectrally. In order to address herbicide injury stress and its impact on species discrimination, discriminant models were created from combinations of multiple indices. The model created from the second experimental run's data set and validated on the first experimental run's data provided an average of 97% correct classification of soybean and an overall average classification accuracy of 65% for all species. This suggests that these models are relatively robust and could potentially be used across a wide range of herbicide applications in field scenarios. From the pooled data set, a single discriminant model was created with multiple indices that discriminated soybean from weeds 88%, on average, regardless of herbicide, rate or species. Several analysis techniques including multiple indices, signature amplitude with spectral bands as features, and wavelet analysis were employed to distinguish between herbicide-treated and nontreated plants. Classification accuracy using signature amplitude (SA) analysis of paraquat injury on soybean was better than 75% for both 1/2 and 1/8X rates at 1, 4, and 7 DAA. Classification accuracy of paraquat injury on corn was better than 72% for the 1/2X rate at 1, 4, and 7 DAA. These data suggest that hyperspectral reflectance may be used to distinguish between healthy plants and injured plants to which herbicides have been applied; however, the classification accuracies remained at 75% or higher only when the higher rates of herbicide were applied. (Abstract shortened by UMI.)
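The discriminant models built from spectral indices are straightforward to reproduce in outline; a sketch with placeholder data follows (the study's specific indices, bands, and class labels are assumptions, and validation should use a held-out year or run as the study does).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.random((120, 6))                      # per-leaf index values (placeholder)
y = np.repeat(["soybean", "weed_a", "weed_b"], 40)

lda = LinearDiscriminantAnalysis().fit(X, y)
accuracy = lda.score(X, y)   # substitute a held-out data set in practice
```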
Semaan, Hassan; Bazerbashi, Mohamad F; Siesel, Geoffrey; Aldinger, Paul; Obri, Tawfik
2018-03-01
To determine the accuracy and non-detection rate of cancer-related findings (CRFs) on follow-up non-contrast-enhanced CT (NECT) versus contrast-enhanced CT (CECT) images of the abdomen in patients with a known cancer diagnosis. A retrospective review was performed of 352 consecutive CTs of the abdomen obtained with and without IV contrast between March 2010 and October 2014 for follow-up of cancer. Two radiologists independently assessed the NECT portions of the studies. The reader was provided the primary cancer diagnosis and access to the most recent prior NECT study. The accuracy and non-detection rates were determined by comparing our results to the archived reports as a gold standard. A total of 383 CRFs were found in the archived reports of the 352 abdominal CTs. The average non-detection rate for the NECTs compared to the CECTs was 3.0% (11.5/383), with an accuracy of 97.0% (371.5/383) in identifying CRFs. The most commonly missed finding was vascular thrombosis, with a non-detection rate of 100%. The accuracy for non-vascular CRFs was 99.1%. Follow-up NECT abdomen studies are highly accurate in the detection of CRFs in patients with an established cancer diagnosis, except in cases where vascular involvement is suspected.
Towards SSVEP-based, portable, responsive Brain-Computer Interface.
Kaczmarek, Piotr; Salomon, Pawel
2015-08-01
A Brain-Computer Interface in motion control applications requires high system responsiveness and accuracy. An SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier is presented in this paper. The observed stimulus is recognized based on a canonical correlation calculated in a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) is proposed for recognition. The obtained results suggest that the T-H classifier significantly increases classifier performance (resulting in an accuracy of 76%, while maintaining an average false-positive detection rate for stimuli other than the observed one of 2-13%, depending on stimulus frequency). It is shown that the parameters of the T-H classifier that maximize the true-positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, preliminary results obtained on a test group (N=4) suggest that for the T-H classifier there exists a certain set of parameters for which the system accuracy is similar to the accuracy obtained for a user-trained classifier.
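A sketch of the two ingredients follows, assuming a 2-channel window sampled at fs: the canonical correlation against sine/cosine references at each stimulus frequency, and a hysteresis rule that switches on above one threshold and releases only below a lower one. The threshold values are placeholders, not the paper's fitted parameters.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(window, freq, fs):
    """Canonical correlation of a 1-s EEG window (n_samples, 2) with
    sine/cosine references at freq and its 2nd harmonic."""
    t = np.arange(window.shape[0]) / fs
    ref = np.column_stack([f(2 * np.pi * k * freq * t)
                           for k in (1, 2) for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit(window, ref).transform(window, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def hysteresis_step(score, active, t_on=0.55, t_off=0.40):
    """Stimulus stays 'detected' until its score drops below t_off."""
    return score >= (t_off if active else t_on)
```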
Hand hygiene compliance rates: Fact or fiction?
McLaws, Mary-Louise; Kwok, Yen Lee Angela
2018-05-16
The mandatory national hand hygiene program requires Australian public hospitals to use direct human auditing to establish compliance rates. To establish the magnitude of the Hawthorne effect, we compared direct human audit rates with concurrent automated surveillance rates. A large tertiary Australian teaching hospital previously trialed automated surveillance while simultaneously performing mandatory human audits for 20 minutes daily on a medical and a surgical ward. Subtracting automated surveillance rates from human audit rates provided differences in percentage points (PPs) for each of the 3 quarterly reporting periods for 2014 and 2015. Direct human audit rates for the medical ward were inflated by an average of 55 PPs in 2014 and 64 PPs in 2015, 2.8-3.1 times higher than automated surveillance rates. The rates for the surgical ward were inflated by an average of 32 PPs in 2014 and 31 PPs in 2015, 1.6 times higher than automated surveillance rates. Over the 6 mandatory reporting quarters, human audits collected an average of 255 opportunities, whereas automation collected 578 times more data, averaging 147,308 opportunities per quarter. The magnitude of the Hawthorne effect on direct human auditing was not trivial and produced highly inflated compliance rates. Mandatory compliance necessitates accuracy that only automated surveillance can achieve, whereas daily hand hygiene ambassadors or reminder technology could harness clinicians' ability to hyperrespond to produce habitual compliance. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
Lee, Hyunwoo; Lee, Hana; Whang, Mincheol
2018-01-15
Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments thanks to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could support such a monitoring system. Although SCG has shown lower accuracy, this novel cardiac indicator has steadily been proposed as an alternative to traditional methods such as electrocardiography (ECG). Thus, it is necessary to develop an enhanced method by combining the significant cardiac indicators. In this study, the six-axis signals of an accelerometer and gyroscope were measured and integrated by the L2 normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The waveforms of the accelerometer and gyroscope were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions. Their SCG was measured during the task. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. The three main contributions are as follows: (1) the ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with previous SCG methods that employ fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.
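A minimal sketch of the fusion-then-spectrum idea, assuming raw six-axis windows at sampling rate fs; the paper's MKCG combination is richer than the plain L2-norm/ensemble averaging shown here.

```python
import numpy as np

def heart_rate_bpm(acc, gyr, fs):
    """Estimate heart rate from six-axis SCG.

    acc, gyr : (n_samples, 3) accelerometer / gyroscope windows
    fs       : sampling rate in Hz
    """
    # L2-norm each modality, standardize, then ensemble-average them
    channels = [np.linalg.norm(acc, axis=1), np.linalg.norm(gyr, axis=1)]
    channels = [(c - c.mean()) / c.std() for c in channels]
    fused = np.mean(channels, axis=0)

    # dominant frequency within a plausible cardiac band (0.7-3 Hz)
    spectrum = np.abs(np.fft.rfft(fused))
    freqs = np.fft.rfftfreq(fused.size, d=1 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```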
Teaching acute care nurses cognitive assessment using LOCFAS: what's the best method?
Flannery, J; Land, K
2001-02-01
The Levels of Cognitive Functioning Assessment Scale (LOCFAS) is a behavioral checklist used by nurses in the acute care setting to assess the level of cognitive functioning in severely brain-injured patients in the early post-trauma period. Previous research studies have supported the reliability and validity of LOCFAS. For LOCFAS to become a more firmly established method of cognitive assessment, nurses must become familiar with and proficient in the use of this instrument. The purpose of this study was to find the most effective method of instruction by comparing three methods: a self-directed manual, a teaching video, and a classroom presentation. Videotaped vignettes of actual brain-injured patients were presented at the end of each training session, and participants were required to categorize these videotaped patients by using LOCFAS. High levels of reliability were observed for both the self-directed manual group and the teaching video group, but an overall lower level of reliability was observed for the classroom presentation group. Examination of the accuracy of overall LOCFAS ratings revealed a significant difference for instructional groups; the accuracy of the classroom presentation group was significantly lower than that of either the self-directed manual group or the teaching video group. The three instructional groups also differed on the average accuracy of ratings of the individual behaviors; the accuracy of the classroom presentation group was significantly lower than that of the teaching video group, whereas the self-directed manual group fell in between. Nurses also rated the instructional methods across a number of evaluative dimensions on a 5-point Likert-type scale. Evaluative statements ranged from average to good, with no significant differences among instructional methods.
NASA Astrophysics Data System (ADS)
Rantz, William Gene
This study examined whether pilots completed airplane digital or paper checklists more accurately when they received post-flight graphic and verbal feedback. Participants were 6 college student pilots with instrument ratings. The task consisted of flying flight patterns using a Frasca 241 Flight Training Device, which emulates a Cirrus SR20 aircraft. The main dependent variable was the number of checklist items completed correctly per flight. An alternating-treatment, multiple-baseline design across pairs with reversal was used. During baseline, the average percentage of correctly completed items per flight varied considerably across participants, ranging from 13% to 57% for traditional paper checklists and from 11% to 67% for digital checklists. Checklist performance increased to an average of 90% for paper checklists and 89% for digital checklists after participants were given feedback and praise, and continued to improve to an average of nearly 100% for paper checklists and 99% for digital checklists after the feedback and praise were removed. A slight decrement in performance was observed during a post-experiment probe conducted 60-90 days later. Visual inspection and statistical analysis of the data suggest that paper checklist accuracy does not differ significantly from digital checklist accuracy. The results suggest that graphic feedback and praise can be used to increase the extent to which pilots use both digital and paper checklists accurately during normal workload conditions.
Playing vs. nonplaying aerobic training in tennis: physiological and performance outcomes.
Pialoux, Vincent; Genevois, Cyril; Capoen, Arnaud; Forbes, Scott C; Thomas, Jordan; Rogowski, Isabelle
2015-01-01
This study compared the effects of playing and nonplaying high intensity intermittent training (HIIT) on physiological demands and tennis stroke performance in young tennis players. Eleven competitive male players (13.4 ± 1.3 years) completed both a playing and nonplaying HIIT session of equal distance, in random order. During each HIIT session, heart rate (HR), blood lactate, and ratings of perceived exertion (RPE) were monitored. Before and after each HIIT session, the velocity and accuracy of the serve, and forehand and backhand strokes were evaluated. The results demonstrated that both HIIT sessions achieved an average HR greater than 90% HRmax. The physiological demands (average HR) were greater during the playing session compared to the nonplaying session, despite similar lactate concentrations and a lower RPE. The results also indicate a reduction in shot velocity after both HIIT sessions; however, the playing HIIT session had a more deleterious effect on stroke accuracy. These findings suggest that 1) both HIIT sessions may be sufficient to develop maximal aerobic power, 2) playing HIIT sessions provide a greater physiological demand with a lower RPE, and 3) playing HIIT has a greater deleterious effect on stroke performance, and in particular on the accuracy component of the ground stroke performance, and should be incorporated appropriately into a periodization program in young male tennis players.
Variation and Likeness in Ambient Artistic Portraiture.
Hayes, Susan; Rheinberger, Nick; Powley, Meagan; Rawnsley, Tricia; Brown, Linda; Brown, Malcolm; Butler, Karen; Clarke, Ann; Crichton, Stephen; Henderson, Maggie; McCosker, Helen; Musgrave, Ann; Wilcock, Joyce; Williams, Darren; Yeaman, Karin; Zaracostas, T S; Taylor, Adam C; Wallace, Gordon
2018-06-01
An artist-led exploration of portrait accuracy and likeness involved 12 Artists producing 12 portraits referencing a life-size 3D print of the same Sitter. The works were assessed during a public exhibition, and the resulting likeness assessments were compared to portrait accuracy as measured using geometric morphometrics (statistical shape analysis). Our results show that, independently of the assessors' prior familiarity with the Sitter's face, likeness judgements tended to be higher for less morphologically accurate portraits. The two highest rated were the portrait that most exaggerated the Sitter's distinctive features, and a portrait that was a more accurate (but not the most accurate) depiction. In keeping with research showing that photograph likeness assessments involve recognition, we found familiar assessors rated the two highest-ranked portraits even higher than those with some or no familiarity. In contrast, those lacking prior familiarity with the Sitter's face showed greater favour for the portrait with the highest morphological accuracy, and therefore most likely engaged in face-matching with the exhibited 3D print. Furthermore, our research indicates that abstraction in portraiture may not enhance likeness, and we found that when our 12 highly diverse portraits were statistically averaged, the result was more morphologically accurate than any of the individual artworks comprising the average.
NASA Astrophysics Data System (ADS)
Chen, Ting-Yu
2012-06-01
This article presents a useful method for relating anchor dependency and accuracy functions to multiple attribute decision-making (MADM) problems in the context of Atanassov intuitionistic fuzzy sets (A-IFSs). Considering anchored judgement with displaced ideals and solution precision with minimal hesitation, several auxiliary optimisation models are proposed to obtain the optimal weights of the attributes and to acquire the corresponding TOPSIS (the technique for order preference by similarity to the ideal solution) index for alternative rankings. Aside from the TOPSIS index, as a decision-maker's personal characteristics and own perception of self may also influence the direction of choice, the evaluation of alternatives is conducted based on the distances of each alternative from the positive and negative ideal alternatives, respectively. This article originates from Li's [Li, D.-F. (2005), 'Multiattribute Decision Making Models and Methods Using Intuitionistic Fuzzy Sets', Journal of Computer and System Sciences, 70, 73-85] work, which is a seminal study of intuitionistic fuzzy decision analysis using deduced auxiliary programming models, and deems it a benchmark method for comparative studies on anchor dependency and accuracy functions. The feasibility and effectiveness of the proposed methods are illustrated by a numerical example. Finally, a comparative analysis is illustrated with computational experiments on averaging accuracy functions, TOPSIS indices, separation measures from positive and negative ideal alternatives, consistency rates of ranking orders, contradiction rates of the top alternative, and average Spearman correlation coefficients.
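For orientation, the crisp TOPSIS skeleton that the intuitionistic-fuzzy machinery generalizes is sketched below; the A-IFS membership/hesitation handling of the article is not reproduced.

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Closeness coefficient of each alternative to the ideal solution.

    decision : (m, n) matrix of m alternatives rated on n attributes
    weights  : (n,) attribute weights summing to 1
    benefit  : (n,) True for benefit attributes, False for cost attributes
    """
    norm = decision / np.linalg.norm(decision, axis=0)   # vector-normalize
    v = norm * weights                                   # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # rank alternatives by this index
```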
Predicting online ratings based on the opinion spreading process
NASA Astrophysics Data System (ADS)
He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo
2015-10-01
Predicting users' online ratings is a challenging issue that has drawn much attention. In this paper, we present a rating prediction method that combines the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithms using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, on the Movielens and Netflix data sets, the corresponding algorithmic accuracies (MAE and RMSE) are improved by 11.26% and 8.84%, and by 13.49% and 10.52%, respectively, compared to the item average method.
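A sketch of a diffusion-style user similarity on a binary adoption matrix, used to weight neighbors' ratings, is given below. The normalization shown and the omission of the tunable λ are simplifications of my own, not the paper's exact formulation.

```python
import numpy as np

def predict_ratings(R):
    """User-based CF with a diffusion-style similarity.

    R : (n_users, n_items) rating matrix, 0 where unrated
    """
    A = (R > 0).astype(float)          # user-item adoption matrix
    k_user = np.maximum(A.sum(axis=1), 1)
    k_item = np.maximum(A.sum(axis=0), 1)

    # opinion user i receives from user j through their co-rated items
    W = A @ (A / k_item).T / k_user[None, :]
    np.fill_diagonal(W, 0)

    num = W @ R                        # similarity-weighted rating sums
    den = W @ A                        # matching weight sums
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```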
Initial attitude determination for the hipparcos satellite
NASA Astrophysics Data System (ADS)
Van der Ha, Jozef C.
The present paper describes the strategy and algorithms used during the initial on-ground three-axis attitude determination of ESA's astrometry satellite HIPPARCOS. The estimation is performed using calculated crossing times of identified stars over the Star Mapper's vertical and inclined slit systems, as well as outputs from a set of rate-integrating gyros. Valid star transits in either of the two fields of view are expected to occur on average about every 30 s, whereas the gyros are sampled at about 1 Hz. The state vector to be estimated consists of the three angles, three rates, and three gyro drift rate components. Simulations have shown that convergence of the estimator is established within about 10 min and that the accuracies achieved are on the order of a few arcsec for the angles and a few milliarcsec per second for the rates. These stringent accuracies are required for the initialisation of the subsequent autonomous on-board real-time attitude determination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arai, K; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N
Purpose: The aim of this study was to confirm On-Board Imager cone-beam computed tomography (CBCT) using a histogram-matching algorithm as a useful method for proton dose calculation in head and neck radiotherapy. Methods: We studied one head and neck phantom and ten patients with head and neck cancer treated using intensity-modulated radiation therapy (IMRT) and proton beam therapy. We modified Hounsfield unit (HU) values of CBCT (mCBCT) using a histogram-matching algorithm. In order to evaluate the accuracy of the proton dose calculation, we compared dose differences in dosimetric parameters (Dmean) for clinical target volume (CTV), planning target volume (PTV) and left parotid and proton ranges (PR) between the planning CT (reference) and CBCT or mCBCT, and gamma passing rates of CBCT and mCBCT. To minimize the effect of organ deformation, we also performed image registration. Results: For patients, the average differences in Dmean for CTV, PTV, and left parotid between planning CT and CBCT were 1.63 ± 2.34%, 3.30 ± 1.02%, and 5.42 ± 3.06%, respectively. Similarly, the average differences between planning CT and mCBCT were 0.20 ± 0.19%, 0.58 ± 0.43%, and 3.53 ± 2.40%, respectively. The average differences in PR between planning CT and CBCT or mCBCT of a 50° beam for ten patients were 2.1 ± 2.1 mm and 0.3 ± 0.5 mm, respectively. Similarly, the average differences in PR of a 120° beam were 2.9 ± 2.6 mm and 1.1 ± 0.9 mm, respectively. The average dose and PR differences of mCBCT were smaller than those of CBCT. Additionally, the average gamma passing rates of mCBCT were larger than those of CBCT. Conclusion: We evaluated the accuracy of the proton dose calculation in CBCT and mCBCT with image registration for ten patients. Our results showed that HU modification using a histogram-matching algorithm could improve the accuracy of the proton dose calculation.
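Histogram matching of HU values is classic quantile mapping; a minimal sketch follows, assuming the CBCT and planning CT are already registered so that the two volumes cover the same anatomy.

```python
import numpy as np

def match_histogram(cbct, plan_ct):
    """Replace each CBCT value with the planning-CT HU at the same
    cumulative-distribution position (quantile mapping)."""
    flat = cbct.ravel()
    ranks = np.argsort(np.argsort(flat))          # 0 .. n-1 rank per voxel
    quantiles = (ranks + 0.5) / flat.size
    ref = np.sort(plan_ct.ravel())
    ref_q = (np.arange(ref.size) + 0.5) / ref.size
    return np.interp(quantiles, ref_q, ref).reshape(cbct.shape)
```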
Optimum spectral resolution for computing atmospheric heating and photodissociation rates
NASA Astrophysics Data System (ADS)
Stamnes, K.; Tsay, S.-C.
1990-06-01
Rapid, reliable and accurate computations of atmospheric heating rates are needed in climate models aimed at predicting the impact of greenhouse gases on the surface temperature. Photolysis rates play a major role in photochemical models used to assess potential changes in atmospheric ozone abundance due to man's release of chlorofluorocarbons. Both rates depend directly on the amount of solar radiation available at any level in the atmosphere. We present a very efficient method of computing these rates in which integration over the solar spectrum is reduced to a minimum number of monochromatic (or pseudogray) problems by appealing to the continuum features of the ozone absorption cross-sections. To explore the resolutions needed to obtain adequate results we have divided the spectral range between 175 and 700 nm into four regions. Within each of these regions we may vary the resolution as we wish. Accurate results are obtained for very coarse spectral resolution provided all cross-sections are averaged by weighting them with the solar flux across any bin. By using this procedure we find that heating rate errors are less than 20% for all altitudes when only four spectral bands across the entire wavelength region from 175 to 700 nm are used to compute the heating rate profile. Similarly, we find that the error in the photodissociation of ozone is less than a few percent when 10 nm resolution is used in the Hartley and Huggins bands (below 330 nm), while an average over the entire wavelength region from 400 to 700 nm yields similar accuracy for the Chappuis band. For integrated u.v. dose estimates a resolution slightly better than 10 nm is required in the u.v.B region (290-315 nm) to yield an accuracy better than 10%, but we may treat the u.v.A region (315-400 nm) as a single band and yet have an accuracy better than 2%.
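The key trick, flux-weighted averaging of cross-sections over coarse spectral bands, takes only a few lines. A sketch follows, with band edges echoing the paper's four-region division; the fine-resolution wavelength grid, cross-sections, and solar flux are assumed inputs.

```python
import numpy as np
from scipy.integrate import trapezoid

def band_averaged_sigma(wl, sigma, solar_flux, edges=(175, 330, 400, 700)):
    """Solar-flux-weighted absorption cross-section for each spectral band.

    wl, sigma, solar_flux : fine-resolution wavelength grid (nm) and spectra
    edges                 : band boundaries in nm
    """
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (wl >= lo) & (wl < hi)
        out.append(trapezoid(sigma[m] * solar_flux[m], wl[m]) /
                   trapezoid(solar_flux[m], wl[m]))
    return np.array(out)   # one effective cross-section per pseudogray band
```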
Predicting Atomic Decay Rates Using an Informational-Entropic Approach
NASA Astrophysics Data System (ADS)
Gleiser, Marcelo; Jiang, Nan
2018-06-01
We show that a newly proposed Shannon-like entropic measure of shape complexity applicable to spatially-localized or periodic mathematical functions known as configurational entropy (CE) can be used as a predictor of spontaneous decay rates for one-electron atoms. The CE is constructed from the Fourier transform of the atomic probability density. For the hydrogen atom with degenerate states labeled with the principal quantum number n, we obtain a scaling law relating the n-averaged decay rates to the respective CE. The scaling law allows us to predict the n-averaged decay rate without relying on the traditional computation of dipole matrix elements. We tested the predictive power of our approach up to n = 20, obtaining an accuracy better than 3.7% within our numerical precision, as compared to spontaneous decay tables listed in the literature.
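In outline, the CE is a Shannon-type entropy over the normalized Fourier power of the density; a sketch follows, with the caveat that the paper's exact normalization of the modal fraction may differ from the plain form used here.

```python
import numpy as np

def configurational_entropy(density):
    """Shannon-like entropy of the normalized Fourier power spectrum of a
    spatially localized probability density (any dimensionality)."""
    power = np.abs(np.fft.fftn(density)) ** 2
    f = power / power.sum()          # modal fraction
    f = f[f > 0]                     # avoid log(0)
    return float(-np.sum(f * np.log(f)))
```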
Rajasekaran, S; Bhushan, Manindra; Aiyer, Siddharth; Kanna, Rishi; Shetty, Ajoy Prasad
2018-01-09
To develop a classification based on the technical complexity encountered during pedicle screw insertion and to evaluate the performance of the AIRO® CT navigation system based on this classification, in the clinical scenario of complex spinal deformity. 31 complex spinal deformity correction surgeries were prospectively analyzed for performance of the AIRO® mobile CT-based navigation system. Pedicles were classified according to the complexity of insertion into five types. Analysis was performed to estimate the accuracy of screw placement and the time for screw insertion. A breach greater than 2 mm was considered for analysis. 452 pedicle screws were inserted (T1-T6: 116; T7-T12: 171; L1-S1: 165). The average Cobb angle was 68.3° (range 60°-104°). We had 242 grade 2 pedicles, 133 grade 3, and 77 grade 4, and 44 pedicles were unfit for pedicle screw insertion. We noted 27 pedicle screw breaches (medial: 10; lateral: 16; anterior: 1). Among the lateral breaches (n = 16), ten screws were planned for in-out-in pedicle screw insertion. Average screw insertion time was 1.76 ± 0.89 min. After accounting for planned breaches, the effective breach rate was 3.8%, resulting in 96.2% accuracy for pedicle screw placement. This classification helps compare the accuracy of screw insertion across a range of conditions by considering the complexity of screw insertion. Considering the clinical scenario of complex pedicle anatomy in spinal deformity, AIRO® navigation showed an excellent accuracy rate of 96.2%.
Evaluation of factors affecting CGMS calibration.
Buckingham, Bruce A; Kollman, Craig; Beck, Roy; Kalajian, Andrea; Fiallo-Scharer, Rosanna; Tansey, Michael J; Fox, Larry A; Wilson, Darrell M; Weinzimer, Stuart A; Ruedy, Katrina J; Tamborlane, William V
2006-06-01
The optimal number/timing of calibrations entered into the CGMS (Medtronic MiniMed, Northridge, CA) continuous glucose monitoring system have not been previously described. Fifty subjects with Type 1 diabetes mellitus (10-18 years old) were hospitalized in a clinical research center for approximately 24 h on two separate days. CGMS and OneTouch Ultra meter (LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13%, and 13% when using three, four, five, and seven calibration values, respectively (P < 0.001). Corresponding percentages of CGMS-reference pairs meeting the International Organisation for Standardisation criteria were 66%, 67%, 71%, and 72% (P < 0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9 p.m. and 6 a.m. (median difference, -2 vs. -9 mg/dL, P < 0.001; median RAD, 12% vs. 15%, P = 0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5 to <1.0, 1.0 to <1.5, and >or=1.5 mg/dL/min, median RAD values were 13% versus 14% versus 17% versus 19%, respectively (P = 0.05). Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy.
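The headline accuracy statistic is simple to state in code; a sketch for paired sensor/reference glucose values follows.

```python
import numpy as np

def median_rad_percent(cgms, reference):
    """Median relative absolute deviation of CGMS vs. laboratory glucose."""
    rad = np.abs(cgms - reference) / reference
    return 100.0 * np.median(rad)
```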
On determining dose rate constants spectroscopically
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, M.; Rogers, D. W. O.
2013-01-15
Purpose: To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of ¹²⁵I and ¹⁰³Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated ¹²⁵I and ¹⁰³Pd sources. Methods: Spectra generated by 14 ¹²⁵I and 6 ¹⁰³Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council on Radiation Protection and Measurements) Report 58 for the ¹²⁵I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for ¹⁰³Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. Results: The ratio of the intensity of the 31 keV line relative to that of the main peak in ¹²⁵I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with the TG-43U1 initial spectrum. The ¹⁰³Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra, with average discrepancies of 0.9% and 1.7% for the ¹²⁵I and ¹⁰³Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2% in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Conclusions: Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
Percutaneous spinal fixation simulation with virtual reality and haptics.
Luciano, Cristian J; Banerjee, P Pat; Sorenson, Jeffery M; Foley, Kevin T; Ansari, Sameer A; Rizzi, Silvio; Germanwala, Anand V; Kranzler, Leonard; Chittiboina, Prashant; Roitberg, Ben Z
2013-01-01
In this study, we evaluated the use of a part-task simulator with 3-dimensional and haptic feedback as a training tool for percutaneous spinal needle placement. To evaluate the learning effectiveness in terms of entry point/target point accuracy of percutaneous spinal needle placement on a high-performance augmented-reality and haptic technology workstation with the ability to control the duration of computer-simulated fluoroscopic exposure, thereby simulating an actual situation. Sixty-three fellows and residents performed needle placement on the simulator. A virtual needle was percutaneously inserted into a virtual patient's thoracic spine derived from an actual patient computed tomography data set. Ten of 126 needle placement attempts by 63 participants ended in failure for a failure rate of 7.93%. From all 126 needle insertions, the average error (15.69 vs 13.91), average fluoroscopy exposure (4.6 vs 3.92), and average individual performance score (32.39 vs 30.71) improved from the first to the second attempt. Performance accuracy yielded P = .04 from a 2-sample t test in which the rejected null hypothesis assumes no improvement in performance accuracy from the first to second attempt in the test session. The experiments showed evidence (P = .04) of performance accuracy improvement from the first to the second percutaneous needle placement attempt. This result, combined with previous learning retention and/or face validity results of using the simulator for open thoracic pedicle screw placement and ventriculostomy catheter placement, supports the efficacy of augmented reality and haptics simulation as a learning tool.
Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms
Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan
2017-01-01
Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance. PMID:28874909
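A sketch of a regularized CSP in the spirit of R-CSP is given below, shrinking the class covariances toward the identity; the paper's generic-learning regularization, which borrows covariance from other subjects, is only indicated by the gamma term.

```python
import numpy as np
from scipy.linalg import eigh

def rcsp_filters(trials_a, trials_b, gamma=0.1, n_filters=3):
    """Regularized CSP spatial filters for two motor-imagery classes.

    trials_x : (n_trials, n_channels, n_samples) band-passed EEG epochs
    gamma    : shrinkage toward the identity (stand-in for the paper's
               generic-learning regularization)
    """
    def reg_cov(trials):
        c = np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
        n = c.shape[0]
        return (1 - gamma) * c + gamma * (np.trace(c) / n) * np.eye(n)

    Ca, Cb = reg_cov(trials_a), reg_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)           # generalized eigenproblem
    order = np.argsort(vals)
    pick = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, pick].T                   # (2*n_filters, n_channels)

# Features are then log-variances of the spatially filtered epochs,
# fed to the combined KNN/SVM classifier.
```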
Accuracy of vertical radial plume mapping technique in measuring lagoon gas emissions.
Viguria, Maialen; Ro, Kyoung S; Stone, Kenneth C; Johnson, Melvin H
2015-04-01
Recently, the U.S. Environmental Protection Agency (EPA) posted a ground-based optical remote sensing method on its Web site called Other Test Method (OTM) 10 for measuring fugitive gas emission flux from area sources such as closed landfills. The OTM 10 utilizes the vertical radial plume mapping (VRPM) technique to calculate fugitive gas emission mass rates based on measured wind speed profiles and path-integrated gas concentrations (PICs). This study evaluates the accuracy of the VRPM technique in measuring gas emission from animal waste treatment lagoons. A field trial was designed to evaluate the accuracy of the VRPM technique. Control releases of methane (CH₄) were made from a 45 m × 45 m floating perforated pipe network located on an irrigation pond that resembled typical treatment lagoon environments. The accuracy of the VRPM technique was expressed by the ratio of the calculated emission rates (Q_VRPM) to the actual emission rates (Q). Under an ideal condition of having mean wind directions mostly normal to a downwind vertical plane, the average VRPM accuracy was 0.77 ± 0.32. However, when the mean wind direction was mostly not normal to the downwind vertical plane, the emission plume was not adequately captured, resulting in lower accuracies. The accuracies under these nonideal wind conditions could be significantly improved if we relaxed the VRPM wind direction criteria and combined the emission rates determined from two adjacent downwind vertical planes surrounding the lagoon. With this modification, the VRPM accuracy improved to 0.97 ± 0.44, whereas the number of valid data sets also increased from 113 to 186. The need for developing accurate and feasible measuring techniques for fugitive gas emission from animal waste lagoons is vital for livestock gas inventories and implementation of mitigation strategies. This field lagoon gas emission study demonstrated that the EPA's vertical radial plume mapping (VRPM) technique can be used to accurately measure lagoon gas emission with two downwind vertical concentration planes surrounding the lagoon.
Blizzard, Daniel J; Thomas, J Alex
2018-03-15
Retrospective review of prospectively collected data of the first 72 consecutive patients treated with single-position one- or two-level lateral (LLIF) or oblique lateral interbody fusion (OLLIF) with bilateral percutaneous pedicle screw and rod fixation by a single spine surgeon. To evaluate the clinical feasibility, accuracy, and efficiency of a single-position technique for LLIF and OLLIF with bilateral pedicle screw and rod fixation. Minimally invasive lateral interbody approaches are performed in the lateral decubitus position. Subsequent repositioning prone for bilateral pedicle screw and rod fixation requires significant time and resources and does not facilitate increased lumbar lordosis. The first 72 consecutive patients (300 screws) treated with single-position LLIF or OLLIF and bilateral pedicle screws by a single surgeon between December 2013 and August 2016 were included in the study. Screw accuracy and fusion were graded using computed tomography, and several timing parameters were recorded including retractor, fluoroscopy, and screw placement time. Complications including reoperation, infection, and postoperative radicular pain and weakness were recorded. Average screw placement time was 5.9 min/screw (standard deviation, SD: 1.5 min; range: 3-9.5 min). Average total operative time (interbody cage and pedicle screw placement) was 87.9 minutes (SD: 25.1 min; range: 49-195 min). Average fluoroscopy time was 15.0 s/screw (SD: 4.7 s; range: 6-25 s). The pedicle screw breach rate was 5.1%, with 10/13 breaches measured as < 2 mm in magnitude. The fusion rate at 6 months postoperative was 87.5%. Two (2.8%) patients underwent reoperation for malpositioned pedicle screws with subsequent resolution of symptoms. The single-position, all-lateral technique was found to be feasible, with accuracy, fluoroscopy usage, and complication rates comparable with the published literature. This technique eliminates the time and staffing associated with intraoperative repositioning and may lead to significant improvements in operative efficiency and cost savings. Level of Evidence: 4.
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Krichevsky, Vladimir; Gebo, Norman
1992-01-01
Five years of rain rate and modeled slant path attenuation distributions at 20 GHz and 30 GHz, derived from a network of 10 tipping bucket rain gages, were examined. The rain gage network is located within a grid 70 km north-south and 47 km east-west on the Mid-Atlantic coast of the United States in the vicinity of Wallops Island, Virginia. Distributions were derived from the variable integration time data and from one-minute averages. It was demonstrated that, for realistic fade margins, the variable integration time results are adequate to estimate slant path attenuations at frequencies above 20 GHz using models which require one-minute averages. An accurate empirical formula was developed to convert the variable integration time rain rates to one-minute averages. Fade distributions at 20 GHz and 30 GHz were derived employing Crane's Global model because it was demonstrated to exhibit excellent accuracy against measured COMSTAR fades at 28.56 GHz.
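Such conversions are commonly fit as a power law between equiprobable rain rates of the two distributions. An illustrative sketch follows; the paired values and fitted coefficients below are placeholders, not the paper's actual formula.

```python
import numpy as np

# Hypothetical rain rates (mm/h) exceeded at equal probabilities in the
# variable-integration-time and one-minute distributions.
r_var = np.array([5., 10., 20., 40., 80.])
r_1min = np.array([6., 12., 25., 52., 110.])

# Fit a power law R_1min = a * R_var**b on log-log axes.
b, log_a = np.polyfit(np.log(r_var), np.log(r_1min), 1)
a = np.exp(log_a)

def to_one_minute(r):
    """Convert a variable-integration-time rain rate to a one-minute rate."""
    return a * r ** b
```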
Day, J D; Weaver, L D; Franti, C E
1995-01-01
The objective of this prospective cohort study was to determine the sensitivity, specificity, accuracy, and predictive value of twin pregnancy diagnosis by rectal palpation and to examine fetal survival, culling rates, and gestational lengths of cows diagnosed with twins. In this prospective study, 5309 cows on 14 farms in California were followed from pregnancy diagnosis to subsequent abortion or calving. The average sensitivity, specificity, accuracy, and predictive value of twin pregnancy diagnosis were 49.3%, 99.4%, 96.0%, and 86.1%, respectively. The abortion rate for single pregnancies of 12.0% differed significantly from those for bicornual twin pregnancies and unicornual twin pregnancies of 26.2% and 32.4%, respectively (P < 0.05). Early calf mortality rates for cows calving with singles (3.2%) and twins (15.7%) were significantly different (P < 0.005). The difference in fetal survival between single pregnancies and all twin pregnancies resulted in 0.42 and 0.29 viable heifers per pregnancy, respectively. The average gestation for single, bicornual, and unicornual pregnancies that did not abort before drying-off was 278, 272, and 270 days, respectively. Results of this study show that there is increased fetal wastage associated with twin pregnancies and suggest a need for further research exploring management strategies for cows carrying twins. PMID:7728734
Myers, Michael J; Yancy, Haile F; Araneta, Michael; Armour, Jennifer; Derr, Janice; Hoostelaere, Lawrence A D; Farmer, Doris; Jackson, Falana; Kiessling, William M; Koch, Henry; Lin, Huahua; Liu, Yan; Mowlds, Gabrielle; Pinero, David; Riter, Ken L; Sedwick, John; Shen, Yuelian; Wetherington, June; Younkins, Ronsha
2006-01-01
A method trial was initiated to validate the use of a commercial DNA forensic kit to extract DNA from animal feed as part of a PCR-based method. Four different PCR primer pairs (one bovine, one porcine, one ovine, and one multispecies pair) were also evaluated. Each laboratory was required to analyze a total of 120 dairy feed samples either not fortified (control, true negative) or fortified with bovine meat and bone meal, porcine meat and bone meal (PMBM), or lamb meal. Feeds were fortified with the animal meals at a concentration of 0.1% (wt/wt). Ten laboratories participated in this trial, and each laboratory was required to evaluate two different primer pairs, i.e., each PCR primer pair was evaluated by five different laboratories. The method was considered to be validated for a given animal source when three or more laboratories achieved at least 97% accuracy (29 correct of 30 samples for 96.7% accuracy, rounded up to 97%) in detecting the fortified samples for that source. Using this criterion, the method was validated for the bovine primer because three laboratories met the criterion, with an average accuracy of 98.9%. The average false-positive rate was 3.0% in these laboratories. A fourth laboratory was 80% accurate in identifying the samples fortified with bovine meat and bone meal. A fifth laboratory was not able to consistently extract the DNA from the feed samples and did not achieve the criterion for accuracy for either the bovine or multispecies PCR primers. For the porcine primers, the method was validated, with four laboratories meeting the criterion for accuracy with an average accuracy of 99.2%. The fifth laboratory had a 93.3% accuracy outcome for the porcine primer. Collectively, these five laboratories had a 1.3% false-positive rate for the porcine primer. No laboratory was able to meet the criterion for accuracy with the ovine primers, most likely because of problems with the synthesis of the primer pair; none of the positive control DNA samples could be detected with the ovine primers. The multispecies primer pair was validated in three laboratories for use with bovine meat and bone meal and lamb meal but not with PMBM. The three laboratories had an average accuracy of 98.9% for bovine meat and bone meal, 97.8% for lamb meal, and 63.3% for PMBM. When examined on an individual laboratory basis, one laboratory could not identify a single feed sample containing PMBM using the multispecies primer, whereas another identified only one PMBM-fortified sample, suggesting that the limit of detection for PMBM with this primer pair is around 0.1% (wt/wt). The results of this study demonstrated that the DNA forensic kit can be used to extract DNA from animal feed, which can then be used for PCR analysis to detect animal-derived protein present in the feed sample.
NASA Astrophysics Data System (ADS)
O'Neil, Gina L.; Goodall, Jonathan L.; Watson, Layne T.
2018-04-01
Wetlands are important ecosystems that provide many ecological benefits, and their quality and presence are protected by federal regulations. These regulations require wetland delineations, which can be costly and time-consuming to perform. Computer models can assist in this process, but lack the accuracy necessary for environmental planning-scale wetland identification. This study evaluates the potential to improve wetland identification models by modifying digital elevation model (DEM) derivatives, derived from high-resolution and increasingly available light detection and ranging (LiDAR) data, at a scale necessary for small-scale wetland delineations. A novel flow convergence modelling approach is presented in which the Topographic Wetness Index (TWI), curvature, and Cartographic Depth-to-Water index (DTW) are modified to better distinguish wetland from upland areas, combined with ancillary soil data, and used in a Random Forest classification. This approach is applied to four study sites in Virginia, implemented as an ArcGIS model. The model resulted in a significant improvement in average wetland accuracy compared to the commonly used National Wetland Inventory (84.9% vs. 32.1%), at the expense of a moderately lower average non-wetland accuracy (85.6% vs. 98.0%) and average overall accuracy (85.6% vs. 92.0%). From this, we concluded that modifying TWI, curvature, and DTW provides more robust wetland and non-wetland signatures to the models, improving accuracy rates compared to classifications using the original indices. The resulting ArcGIS model is a general tool able to modify these local LiDAR DEM derivatives based on site characteristics to identify wetlands at high resolution.
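A conceptual sketch of the core idea (terrain indices as features for a Random Forest) is given below; it is not the authors' ArcGIS model, and the crude TWI proxy, synthetic DEM, and toy labels are assumptions for illustration only:

```python
# Conceptual sketch (not the authors' ArcGIS model): compute a simple
# Topographic Wetness Index from a DEM and classify wetland pixels with
# a Random Forest. The flow-accumulation stand-in is a toy approximation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def twi(dem, cell=1.0):
    """TWI = ln(a / tan(beta)); 'a' is crudely proxied by inverted
    elevation, tan(beta) by the local gradient magnitude."""
    gy, gx = np.gradient(dem, cell)
    slope = np.hypot(gx, gy) + 1e-6               # avoid division by zero
    area = dem.max() - dem + 1.0                  # toy stand-in for flow accumulation
    return np.log(area / slope)

dem = np.random.rand(50, 50).cumsum(axis=0)       # synthetic sloping terrain
features = np.column_stack([twi(dem).ravel(), dem.ravel()])
labels = (twi(dem).ravel() > np.median(twi(dem))).astype(int)  # toy labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, labels)
print(clf.score(features, labels))
```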
NASA Technical Reports Server (NTRS)
Green, S.; Cochrane, D. L.; Truhlar, D. G.
1986-01-01
The utility of the energy-corrected sudden (ECS) scaling method is evaluated on the basis of how accurately it predicts the entire matrix of state-to-state rate constants when the fundamental rate constants are independently known. It is shown for the case of Ar-CO collisions at 500 K that, when the critical impact parameter is about 1.75-2.0 Å, the ECS method yields excellent excited-state rates on average and has an rms error of less than 20 percent.
Short-term forecasts gain in accuracy. [Regression technique using "Box-Jenkins" analysis]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer short-term forecast accuracy that compares with that of large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of autocorrelations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used, according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes forecasts of load demand at intervals as short as hourly possible. With accuracy up to two years out, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design.
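A minimal sketch of the Box-Jenkins workflow on an hourly load series, using the statsmodels ARIMA implementation; the synthetic series and the (2, 0, 1) order are assumptions (in practice the order is identified from autocorrelation diagnostics, as the abstract notes):

```python
# Sketch: Box-Jenkins forecasting of hourly load with an ARIMA model.
# The synthetic series and the (2, 0, 1) order are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)                       # two weeks of hourly data
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

model = ARIMA(load, order=(2, 0, 1)).fit()
forecast = model.forecast(steps=24)              # next day, hour by hour
print(forecast[:6])
```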
McGarraugh, Geoffrey
2010-01-01
Continuous glucose monitoring (CGM) devices available in the United States are approved for use as adjuncts to self-monitoring of blood glucose (SMBG); all CGM alarms require SMBG confirmation before treatment. In this report, an analysis method is proposed to determine the CGM threshold alarm accuracy required to eliminate SMBG confirmation. The proposed method builds on the Clinical and Laboratory Standards Institute (CLSI) guideline for evaluating CGM threshold alarms using data from an in-clinic study of subjects with type 1 diabetes. The CLSI method proposes a maximum time limit of ±30 minutes for the detection of hypo- and hyperglycemic events but does not include limits for glucose measurement accuracy. The International Standards Organization (ISO) standard for SMBG glucose measurement accuracy (ISO 15197) is ±15 mg/dl for glucose <75 mg/dl and ±20% for glucose ≥75 mg/dl. This standard was combined with the CLSI method to more completely characterize the accuracy of CGM alarms. Incorporating the ISO 15197 accuracy margins, FreeStyle Navigator CGM system alarms detected 70 mg/dl hypoglycemia within 30 minutes at a rate of 70.3%, with a false alarm rate of 11.4%. The device detected high glucose in the range of 140-300 mg/dl within 30 minutes at an average rate of 99.2%, with a false alarm rate of 2.1%. Self-monitoring of blood glucose confirmation is necessary for detecting and treating hypoglycemia with the FreeStyle Navigator CGM system, but at high glucose levels, SMBG confirmation adds little incremental value to CGM alarms. © 2010 Diabetes Technology Society.
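The combined ISO 15197 accuracy margin described above reduces to a simple rule; a sketch:

```python
# Sketch: ISO 15197 agreement test for a CGM reading against an SMBG
# reference: within +/-15 mg/dl below 75 mg/dl, within +/-20% at or above.
def iso15197_agrees(cgm_mg_dl, ref_mg_dl):
    if ref_mg_dl < 75:
        return abs(cgm_mg_dl - ref_mg_dl) <= 15
    return abs(cgm_mg_dl - ref_mg_dl) <= 0.20 * ref_mg_dl

print(iso15197_agrees(68, 60))   # True: within 15 mg/dl
print(iso15197_agrees(130, 100)) # False: off by 30%
```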
Erickson, Jon A; Jalaie, Mehran; Robertson, Daniel H; Lewis, Richard A; Vieth, Michal
2004-01-01
The key to success for computational tools used in structure-based drug design is the ability to accurately place or "dock" a ligand in the binding pocket of the target of interest. In this report we examine the effect of several factors on docking accuracy, including ligand and protein flexibility. To examine ligand flexibility in an unbiased fashion, a test set of 41 ligand-protein cocomplex X-ray structures was assembled that represents a diversity of size, flexibility, and polarity with respect to the ligands. Four docking algorithms, DOCK, FlexX, GOLD, and CDOCKER, were applied to the test set, and the results were examined in terms of the ability to reproduce X-ray ligand positions within 2.0 Å heavy-atom root-mean-square deviation. Overall, each method performed well (>50% accuracy), but for all methods it was found that docking accuracy decreased substantially for ligands with eight or more rotatable bonds. Only CDOCKER was able to accurately dock most of those ligands with eight or more rotatable bonds (71% accuracy rate). A second test set of structures was gathered to examine how protein flexibility influences docking accuracy. CDOCKER was applied to X-ray structures of trypsin, thrombin, and HIV-1 protease, using protein structures bound to several ligands and also the unbound (apo) form. Docking experiments of each ligand to one "average" structure and to the apo form were carried out, and the results were compared to docking each ligand back to its originating structure. The results show that docking accuracy falls off dramatically if one uses an average or apo structure. In fact, it is shown that the drop in docking accuracy mirrors the degree to which the protein moves upon ligand binding.
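The 2.0 Å success criterion rests on the heavy-atom root-mean-square deviation between the docked and crystallographic poses; a minimal sketch, assuming the coordinates are already in the same reference frame:

```python
# Sketch: heavy-atom root-mean-square deviation between a docked pose and
# the X-ray pose. Both arrays are N x 3 coordinates in the same frame.
import numpy as np

def rmsd(docked, xray):
    return np.sqrt(np.mean(np.sum((docked - xray) ** 2, axis=1)))

docked = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
xray = np.array([[0.2, 0.1, 0.0], [1.4, 0.3, 0.0]])
print(rmsd(docked, xray) <= 2.0)  # True -> counted as a docking success
```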
Bhat, Somanath; Polanowski, Andrea M; Double, Mike C; Jarman, Simon N; Emslie, Kerry R
2012-01-01
Recent advances in nanofluidic technologies have enabled the use of Integrated Fluidic Circuits (IFCs) for high-throughput Single Nucleotide Polymorphism (SNP) genotyping (GT). In this study, we implemented and validated a relatively low cost nanofluidic system for SNP-GT with and without Specific Target Amplification (STA). As proof of principle, we first validated the effect of input DNA copy number on genotype call rate using well characterised, digital PCR (dPCR) quantified human genomic DNA samples and then implemented the validated method to genotype 45 SNPs in the humpback whale, Megaptera novaeangliae, nuclear genome. When STA was not incorporated, for a homozygous human DNA sample, reaction chambers containing, on average 9 to 97 copies, showed 100% call rate and accuracy. Below 9 copies, the call rate decreased, and at one copy it was 40%. For a heterozygous human DNA sample, the call rate decreased from 100% to 21% when predicted copies per reaction chamber decreased from 38 copies to one copy. The tightness of genotype clusters on a scatter plot also decreased. In contrast, when the same samples were subjected to STA prior to genotyping a call rate and a call accuracy of 100% were achieved. Our results demonstrate that low input DNA copy number affects the quality of data generated, in particular for a heterozygous sample. Similar to human genomic DNA, a call rate and a call accuracy of 100% was achieved with whale genomic DNA samples following multiplex STA using either 15 or 45 SNP-GT assays. These calls were 100% concordant with their true genotypes determined by an independent method, suggesting that the nanofluidic system is a reliable platform for executing call rates with high accuracy and concordance in genomic sequences derived from biological tissue.
Perception and analysis of Spanish accents in English speech
NASA Astrophysics Data System (ADS)
Chism, Cori; Lass, Norman
2002-05-01
The purpose of the present study was to determine what relates most closely to the degree of perceived foreign accent in the English speech of native Spanish speakers: intonation, vowel length, stress, voice onset time (VOT), or segmental accuracy. Nineteen native English-speaking listeners rated speech samples from 7 native English speakers and 15 native Spanish speakers for comprehensibility and degree of foreign accent. The speech samples were analyzed spectrographically and perceptually to obtain numerical values for each variable. Correlation coefficients were computed to determine the relationship between these values and the average foreign accent scores. Results showed that the average foreign accent scores were statistically significantly correlated with three variables: the length of stressed vowels (r = -0.48, p = 0.05), voice onset time (r = -0.62, p = 0.01), and segmental accuracy (r = 0.92, p = 0.001). Implications of these findings and suggestions for future research are discussed.
Accurate measurement of camera-based imaging photoplethysmographic signals using a weighted average
NASA Astrophysics Data System (ADS)
Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji
2018-01-01
Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of human subjects from video recordings. With advantages such as non-contact measurement, low cost and easy operation, IPPG has become a research hot spot in the field of biomedicine. However, noise from non-microarterial areas cannot be removed because of the uneven distribution of micro-arteries and the differing signal strength of each region, which results in a low signal-to-noise ratio (SNR) of IPPG signals and low accuracy of heart rate estimation. In this paper, we propose a method for improving the SNR of camera-based IPPG signals by combining the signals of facial sub-regions using a weighted average. Firstly, we obtain the region of interest (ROI) of the subject's face from the camera. Secondly, each region of interest is tracked and feature-matched in each frame of the video, and each tracked facial region is divided into 60 × 60-pixel blocks. Thirdly, the weight of the PPG signal of each sub-region is calculated from that sub-region's signal-to-noise ratio. Finally, we combine the IPPG signals from all the tracked ROIs using the weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the SNR of the camera-based PPG estimate and in the accuracy of heart rate measurement.
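A conceptual sketch of SNR-weighted combination of sub-region traces follows; the band-power SNR estimate, frame rate, and synthetic signals are assumptions rather than the paper's exact processing chain:

```python
# Conceptual sketch: combine per-subregion IPPG traces with weights
# proportional to each subregion's estimated SNR. The band-power SNR
# estimate and synthetic signals are assumptions.
import numpy as np

def band_snr(signal, fs, lo=0.7, hi=3.0):
    """Ratio of spectral power in the cardiac band to power outside it."""
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    in_band = (freqs >= lo) & (freqs <= hi)
    return power[in_band].sum() / (power[~in_band].sum() + 1e-12)

def weighted_ippg(subregion_traces, fs):
    snrs = np.array([band_snr(t, fs) for t in subregion_traces])
    weights = snrs / snrs.sum()
    return weights @ np.asarray(subregion_traces)   # weighted-average trace

fs = 30.0                                           # camera frame rate (assumed)
t = np.arange(0, 10, 1 / fs)
traces = [np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, s, t.size)
          for s in (0.2, 0.5, 1.0)]                 # three synthetic subregions
print(weighted_ippg(traces, fs).shape)
```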
Chai, Rifai; Naik, Ganesh R; Ling, Sai Ho; Nguyen, Hung T
2017-01-07
One of the key challenges of the biomedical cyber-physical system is to combine cognitive neuroscience with the integration of physical systems to assist people with disabilities. Electroencephalography (EEG) has been explored as a non-invasive method of providing assistive technology by using brain electrical signals. This paper presents a unique prototype of a hybrid brain-computer interface (BCI) which senses a combination classification of mental task, steady-state visual evoked potential (SSVEP) and eyes-closed detection using only two EEG channels. In addition, a microcontroller-based head-mounted battery-operated wireless EEG sensor combined with a separate embedded system is used to enhance portability, convenience and cost effectiveness. This experiment was conducted with five healthy participants and five patients with tetraplegia. Generally, the results show comparable classification accuracies between healthy subjects and patients with tetraplegia. For the offline artificial neural network classification for the target group of patients with tetraplegia, the hybrid BCI system combines three mental tasks, three SSVEP frequencies and eyes closed, with an average classification accuracy of 74% and an average system information transfer rate (ITR) of 27 bits/min. For the real-time testing of the intentional signal on patients with tetraplegia, the average success rate of detection is 70% and the speed of detection varies from 2 to 4 s.
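The reported ITR is consistent with the standard Wolpaw formula; a sketch in which the 7 commands and 74% accuracy echo the text, while the 2.9 s selection time is an assumption chosen to show how roughly 27 bits/min can arise:

```python
# Sketch: Wolpaw information transfer rate for an N-class BCI.
# N = 7 choices and P = 0.74 echo the text; the 2.9 s selection time is
# an assumption, not a figure from the study.
from math import log2

def itr_bits_per_min(n, p, t_select_s):
    bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / t_select_s

print(round(itr_bits_per_min(7, 0.74, 2.9), 1))  # ~27 bits/min
```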
Evaluation of the accuracy of estimated baseline serum creatinine for acute kidney injury diagnosis.
Hatakeyama, Yutaka; Horino, Taro; Nagata, Keitaro; Kataoka, Hiromi; Matsumoto, Tatsuki; Terada, Yoshio; Okuhara, Yoshiyasu
2018-04-01
Modern epidemiologic studies of acute kidney injury (AKI) have been facilitated by the increasing availability of electronic medical records. However, pre-morbid reference serum creatinine (SCr) data are often unavailable in such records. Investigators substitute estimated baseline SCr using the eGFR 75 approach instead of using an actually measured baseline SCr. Here, we evaluated the accuracy of estimated baseline SCr for AKI diagnosis in the Japanese population. Inpatients and outpatients aged 18-80 years were retrospectively enrolled. AKI was diagnosed according to the Kidney Disease Improving Global Outcomes (KDIGO) criteria, using SCr levels. The non-AKI and AKI groups were selected using the following criteria: an increase to ≥1.5 times baseline SCr ("baseline SCr" criterion) or an increase of ≥0.3 mg/dL above baseline SCr within 48 h ("increase in 48 h" criterion). The accuracy of AKI diagnosis defined by an estimated reference SCr, either the average SCr value of the non-AKI population (eb-GFR-A approach) or the SCr back-calculated from a fixed eGFR = 75 mL/min/1.73 m2 (the eGFR 75 approach, termed the eb-GFR-B approach in this study), was evaluated. We analyzed data from 131,358 Japanese patients. The numbers of patients with a reference baseline SCr in the non-AKI and AKI groups were 29,834 and 8,952, respectively. For AKI patients diagnosed using the "baseline SCr" criterion, the AKI diagnostic accuracy rates as defined by eb-GFR-A and eb-GFR-B were 63.5 and 57.7%, respectively, while for AKI diagnosed using the "increase in 48 h" criterion, the AKI diagnostic accuracy rates as defined by eb-GFR-A and eb-GFR-B were 78.7 and 75.1%, respectively. In non-AKI patients, the false-positive rates of AKI misdiagnosed via eb-GFR-A and eb-GFR-B were 7.4 and 6.8%, respectively. AKI diagnosis using the average SCr value of the general population may yield more accurate results than diagnosis using the eGFR 75 approach when the reference SCr is unavailable.
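As a hedged illustration of the eGFR 75 back-calculation, the sketch below inverts the IDMS-traceable MDRD study equation for a fixed eGFR of 75 mL/min/1.73 m2; the study's exact equation for the Japanese population is not specified here, so the equation choice is an assumption:

```python
# Sketch: back-calculate a baseline SCr from a fixed eGFR of
# 75 mL/min/1.73 m^2 ("eGFR 75 approach"). The MDRD study equation is
# used purely for illustration and may differ from the study's equation.
def baseline_scr_egfr75(age, female):
    sex = 0.742 if female else 1.0
    # MDRD: eGFR = 175 * SCr^-1.154 * age^-0.203 * sex  =>  solve for SCr
    return (175.0 * age ** -0.203 * sex / 75.0) ** (1.0 / 1.154)

print(round(baseline_scr_egfr75(60, female=False), 2))  # ~1.01 mg/dL
```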
NASA Astrophysics Data System (ADS)
Moghimi, Saba; Kushki, Azadeh; Power, Sarah; Guerguerian, Anne Marie; Chau, Tom
2012-04-01
Emotional responses can be induced by external sensory stimuli. For severely disabled nonverbal individuals who have no means of communication, the decoding of emotion may offer insight into an individual’s state of mind and his/her response to events taking place in the surrounding environment. Near-infrared spectroscopy (NIRS) provides an opportunity for bedside monitoring of emotions via measurement of hemodynamic activity in the prefrontal cortex, a brain region known to be involved in emotion processing. In this paper, the prefrontal cortex activity of ten able-bodied participants was monitored using NIRS as they listened to 78 music excerpts with different emotional content and a control acoustic stimulus consisting of Brown noise. The participants rated their emotional state after listening to each excerpt along the dimensions of valence (positive versus negative) and arousal (intense versus neutral). These ratings were used to label the NIRS trial data. Using a linear discriminant analysis-based classifier and a two-dimensional time-domain feature set, trials with positive and negative emotions were discriminated with an average accuracy of 71.94% ± 8.19%. Trials with audible Brown noise, representing a neutral response, were differentiated from high-arousal trials with an average accuracy of 71.93% ± 9.09% using a two-dimensional feature set. In nine out of the ten participants, the response to the neutral Brown noise was differentiated from high-arousal trials with accuracies exceeding chance level, and positive versus negative emotional differentiation accuracies exceeded chance level in seven out of the ten participants. These results illustrate that NIRS recordings of the prefrontal cortex during presentation of music with emotional content can be automatically decoded in terms of both valence and arousal, encouraging future investigation of NIRS-based emotion detection in individuals with severe disabilities.
Knott, Jayne Fifield; Olimpio, Julio C.
1986-01-01
Estimation of the average annual rate of ground-water recharge to sand and gravel aquifers using elevated tritium concentrations in ground water is an alternative to traditional steady-state and water-balance recharge-rate methods. The concept of the tritium tracer method is that the average annual rate of ground-water recharge over a period of time can be calculated from the depth of the peak tritium concentration in the aquifer. Assuming that ground-water flow is vertically downward and that aquifer properties are reasonably homogeneous, and knowing the date of maximum tritium concentration in precipitation and the current depth to the tritium peak from the water table, the average recharge rate can be calculated. The method, which is a direct-measurement technique, was applied at two sites on Nantucket Island, Massachusetts. At site 1, the average annual recharge rate between 1964 and 1983 was 26.1 inches per year, or 68 percent of the average annual precipitation, and the estimated uncertainty is ±15 percent. At site 2, the multilevel water samplers were not constructed deep enough to determine the peak concentration of tritium in ground water. The tritium profile at site 2 resembles the upper part of the tritium profile at site 1 and indicates that the average recharge rate was at least 16.7 inches per year, or at least 44 percent of the average annual precipitation. The Nantucket tritium recharge rates clearly are higher than rates determined elsewhere in southeastern Massachusetts using the tritium, water-table-fluctuation, and water-balance (Thornthwaite) methods, regardless of the method or the area. Because the recharge potential on Nantucket is so high (runoff is only 2 percent of the total water balance), the tritium recharge rates probably represent the effective upper limit for ground-water recharge in this region. The recharge-rate values used by Guswa and LeBlanc (1985) and LeBlanc (1984) in their ground-water-flow computer models of Cape Cod are 20 to 30 percent lower than this upper limit. The accuracy of the tritium method is dependent on two key factors: the accuracy of the effective-porosity data, and the sampling interval used at the site. For some sites, the need for recharge-rate data may require a determination as statistically accurate as that which can be provided by the tritium method. However, the tritium method is more costly and more time-consuming than the other methods because numerous wells must be drilled and installed and because many water samples must be analyzed for tritium, to a very small level of analytical detection. For many sites, a less accurate, less expensive, and faster method of recharge-rate determination might be more satisfactory. The factor that most seriously limits the usefulness of the tritium tracer method is the current depth of the tritium peak. Water with peak concentrations of tritium entered the ground more than 20 years ago, and, according to the Nantucket data, that water now is more than 100 feet below the land surface. This suggests that the tracer method will work only in sand and gravel aquifers that are exceedingly thick by New England standards. Conversely, the results suggest that the method may work in areas where saturated thicknesses are less than 100 feet and the rate of vertical ground-water movement is relatively slow, such as in till and in silt- and clay-rich sand and gravel deposits.
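In its simplest form, the tracer calculation divides the water stored above the tritium peak by the elapsed time since the precipitation tritium maximum; a sketch with illustrative values, not the Nantucket data:

```python
# Sketch: tritium-peak recharge estimate. Recharge = (depth of the 1963-64
# tritium peak below the water table) x (effective porosity) / (elapsed
# years). The numbers below are illustrative assumptions.
def recharge_in_per_yr(peak_depth_ft, effective_porosity, years_elapsed):
    water_column_in = peak_depth_ft * 12.0 * effective_porosity
    return water_column_in / years_elapsed

# e.g. peak 130 ft below the water table, porosity 0.30, 1964 -> 1983
print(round(recharge_in_per_yr(130.0, 0.30, 19.0), 1))  # ~24.6 in/yr
```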
NASA Astrophysics Data System (ADS)
Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Adler, Dorit D.; Blane, Caroline E.; Joynt, Lynn K.; Paramagul, Chintana; Roubidoux, Marilyn A.; Wilson, Todd E.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
1999-05-01
A receiver operating characteristic (ROC) experiment was conducted to evaluate the effects of pixel size on the characterization of mammographic microcalcifications. Digital mammograms were obtained by digitizing screen-film mammograms with a laser film scanner. One hundred twelve two-view mammograms with biopsy-proven microcalcifications were digitized at a pixel size of 35 micrometers × 35 micrometers. A region of interest (ROI) containing the microcalcifications was extracted from each image. ROI images with pixel sizes of 70 micrometers, 105 micrometers, and 140 micrometers were derived from the ROI of 35-micrometer pixel size by averaging 2 × 2, 3 × 3, and 4 × 4 neighboring pixels, respectively. The ROI images were printed on film with a laser imager. Seven MQSA-approved radiologists participated as observers. The likelihood of malignancy of the microcalcifications was rated on a 10-point confidence rating scale and analyzed with ROC methodology. The classification accuracy was quantified by the area, Az, under the ROC curve. The statistical significance of the differences in the Az values for different pixel sizes was estimated with the Dorfman-Berbaum-Metz (DBM) method for multi-reader, multi-case ROC data. It was found that five of the seven radiologists demonstrated a higher classification accuracy with the 70-micrometer or 105-micrometer images. The average Az also showed a higher classification accuracy in the range of 70- to 105-micrometer pixel size. However, the differences in Az between different pixel sizes did not achieve statistical significance. The low specificity of image features of microcalcifications and the large interobserver and intraobserver variabilities may have contributed to the relatively weak dependence of classification accuracy on pixel size.
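The coarser pixel sizes were synthesized by block averaging; a minimal sketch of the 2 × 2 case (the reshape trick assumes image dimensions divisible by the block size):

```python
# Sketch: derive a 70-micrometer image from a 35-micrometer image by
# averaging 2 x 2 neighboring pixels (dimensions assumed divisible by k).
import numpy as np

def block_average(img, k):
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

roi_35um = np.arange(16, dtype=float).reshape(4, 4)
roi_70um = block_average(roi_35um, 2)   # use k = 3, 4 for 105, 140 micrometers
print(roi_70um)
```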
The devil is in the details: maximizing revenue for daily trauma care.
Barnes, Stephen L; Robinson, Bryce R H; Richards, J Taliesin; Zimmerman, Cindy E; Pritts, Tim A; Tsuei, Betty J; Butler, Karyn L; Muskat, Peter C; Davis, Kenneth; Johannigman, Jay A
2008-10-01
Falling reimbursement rates for trauma care demand a concerted effort of charge capture for the fiscal survival of trauma surgeons. We compared Current Procedural Terminology (CPT) code distribution and billing patterns for Subsequent Hospital Care (SHC) before and after the institution of standardized documentation. Standardized SHC progress notes were created. The note was formulated with an emphasis on efficiency and accuracy. Documentation was completed by residents in conjunction with attendings following standard guidelines of linkage. Year-to-year patient volume, length of stay (LOS), injury severity, bills submitted, coding of service, work relative value units (wRVUs), revenue stream, and collection rate were compared with and without standardized documentation. A 394% average revenue increase was observed with the standardization of SHC documentation. Submitted charges more than doubled in the first year despite a 14% reduction in admissions and no change in length of stay. Significant increases in level II and level III billing and billing volume (P < .05) were sustainable year to year and resulted in an average per-patient-admission SHC income increase from $91.85 to $362.31. Use of a standardized daily progress note dramatically increases the accuracy of coding and associated billing of subsequent hospital care for trauma services.
Approximated mutual information training for speech recognition using myoelectric signals.
Guo, Hua J; Chan, A D C
2006-01-01
A new training algorithm called approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.
The verification of lightning location accuracy in Finland deduced from lightning strikes to trees
NASA Astrophysics Data System (ADS)
Mäkelä, Antti; Mäkelä, Jakke; Haapalainen, Jussi; Porjo, Niko
2016-05-01
We present a new method to determine the ground truth and accuracy of lightning location systems (LLS), using natural lightning strikes to trees. Observations of strikes to trees are being collected with a Web-based survey tool at the Finnish Meteorological Institute. Since Finnish thunderstorms tend to have a low average flash rate, it is often possible to unambiguously identify from the LLS data the stroke that caused damage to a given tree. The coordinates of the tree are then the ground truth for that stroke. The technique has clear advantages over other methods used to determine the ground truth. Instrumented towers and rocket launches measure upward-propagating lightning. Video and audio records, even with triangulation, are rarely capable of high accuracy. We present data for 36 quality-controlled tree strikes in the years 2007-2008. We show that the average inaccuracy of the lightning location network for that period was 600 m. In addition, we show that the 50% confidence ellipse calculated by the lightning location network and used operationally for describing the location accuracy is physically meaningful: half of all the strikes were located within the uncertainty ellipse of the nearest recorded stroke. Using tree strike data thus allows not only the accuracy of the LLS to be estimated but also the reliability of the uncertainty ellipse. To our knowledge, this method has not been attempted before for natural lightning.
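The per-strike location error reduces to a great-circle distance between the LLS stroke fix and the surveyed tree; a minimal sketch with illustrative coordinates:

```python
# Sketch: great-circle (haversine) distance between an LLS stroke fix and
# the surveyed tree, in metres. Coordinates are illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    R = 6371000.0  # mean Earth radius, m
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

print(round(haversine_m(60.1699, 24.9384, 60.1750, 24.9300)))  # ~730 m
```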
Steeden, Jennifer A; Muthurangu, Vivek
2015-04-01
1) To validate an R-R interval averaged golden angle spiral phase contrast magnetic resonance (RAGS PCMR) sequence against conventional cine PCMR for assessment of renal blood flow (RBF) in normal volunteers; and 2) To investigate the effects of motion and heart rate on the accuracy of flow measurements using an in silico simulation. In 20 healthy volunteers RAGS (∼6 sec breath-hold) and respiratory-navigated cine (∼5 min) PCMR were performed in both renal arteries to assess RBF. A simulation of RAGS PCMR was used to assess the effect of heart rate (30-105 bpm), vessel expandability (0-150%) and translational motion (x1.0-4.0) on the accuracy of RBF measurements. There was good agreement between RAGS and cine PCMR in the volunteer study (bias: 0.01 L/min, limits of agreement: -0.04 to +0.06 L/min, P = 0.0001). The simulation demonstrated a positive linear relationship between heart rate and error (r = 0.9894, P < 0.0001), a negative linear relationship between vessel expansion and error (r = -0.9484, P < 0.0001), and a nonlinear, heart rate-dependent relationship between vessel translation and error. We have demonstrated that RAGS PCMR accurately measures RBF in vivo. However, the simulation reveals limitations in this technique at extreme heart rates (<40 bpm, >100 bpm), or when there is significant motion (vessel expandability: >80%, vessel translation: >x2.2). © 2014 Wiley Periodicals, Inc.
An automatic tooth preparation technique: A preliminary study
NASA Astrophysics Data System (ADS)
Yuan, Fusong; Wang, Yong; Zhang, Yaopeng; Sun, Yuchun; Wang, Dangxiao; Lyu, Peijun
2016-04-01
The aim of this study is to validate the feasibility and accuracy of a new automatic tooth preparation technique in dental healthcare. An automatic tooth preparation robotic device with three-dimensional motion planning software was developed, which controlled an ultra-short pulse laser (USPL) beam (wavelength 1,064 nm, pulse width 15 ps, output power 30 W, and repetition rate 100 kHz) to complete the tooth preparation process. A total of 15 freshly extracted human intact first molars were collected and fixed into a phantom head, and the target preparation shapes of these molars were designed using customised computer-aided design (CAD) software. The accuracy of tooth preparation was evaluated using the Geomagic Studio and Imageware software, and the preparation time of each tooth was recorded. Compared with the target preparation shape, the average shape error of the 15 prepared molars was 0.05-0.17 mm, the preparation depth error of the occlusal surface was approximately 0.097 mm, and the error of the convergence angle was approximately 1.0°. The average preparation time was 17 minutes. These results validated the accuracy and feasibility of the automatic tooth preparation technique.
Global Geopotential Modelling from Satellite-to-Satellite Tracking,
1981-10-01
... measured range-rate sampled at regular intervals. The expansion of the potential has been truncated at degree n = 331, because little information on ... The averaging interval is 4 s, and sampling takes place every 4 s; if residual data are used, with respect to a reference model of specified accuracy, complete ... (Remainder of record is table-of-contents residue: appendix listings for the LEGFDN, MODEL, and NVAR programs, sample output, and degree-by-degree detailed listings.)
Pahlavian, Soroush Heidari; Bunck, Alexander C.; Thyagaraj, Suraj; Giese, Daniel; Loth, Francis; Hedderich, Dennis M.; Kröger, Jan Robert; Martin, Bryn A.
2016-01-01
Abnormal alterations in cerebrospinal fluid (CSF) flow are thought to play an important role in pathophysiology of various craniospinal disorders such as hydrocephalus and Chiari malformation. Three directional phase contrast MRI (4D Flow) has been proposed as one method for quantification of the CSF dynamics in healthy and disease states, but prior to further implementation of this technique, its accuracy in measuring CSF velocity magnitude and distribution must be evaluated. In this study, an MR-compatible experimental platform was developed based on an anatomically detailed 3D printed model of the cervical subarachnoid space and subject specific flow boundary conditions. Accuracy of 4D Flow measurements was assessed by comparison of CSF velocities obtained within the in vitro model with the numerically predicted velocities calculated from a spatially averaged computational fluid dynamics (CFD) model based on the same geometry and flow boundary conditions. Good agreement was observed between CFD and 4D Flow in terms of spatial distribution and peak magnitude of through-plane velocities with an average difference of 7.5% and 10.6% for peak systolic and diastolic velocities, respectively. Regression analysis showed lower accuracy of 4D Flow measurement at the timeframes corresponding to low CSF flow rate and poor correlation between CFD and 4D Flow in-plane velocities. PMID:27043214
Effect of Variations in IRU Integration Time Interval On Accuracy of Aqua Attitude Estimation
NASA Technical Reports Server (NTRS)
Natanson, G. A.; Tracewell, Dave
2003-01-01
During Aqua launch support, attitude analysts noticed several anomalies in Onboard Computer (OBC) rates and in rates computed by the ground Attitude Determination System (ADS). These included: 1) periodic jumps in the OBC pitch rate every 2 minutes; 2) spikes in the ADS pitch rate every 4 minutes; 3) close agreement between pitch rates computed by the ADS and those derived from telemetered OBC quaternions (in contrast to the step-wise pattern observed for telemetered OBC rates); 4) spikes of ±10 milliseconds in telemetered IRU integration time every 4 minutes (despite the fact that telemetered time tags of any two sequential IRU measurements were always 1 second apart from each other). An analysis presented in the paper explains this anomalous behavior by a small average offset of about 0.5 ± 0.05 microseconds in the time interval between two sequential accumulated angle measurements. It is shown that the error in the estimated pitch angle due to neglecting the aforementioned variations in the integration time interval by the OBC is within ±2 arcseconds. Ground attitude solutions are found to be accurate enough to see the effect of the variations on the accuracy of the estimated pitch angle.
Dynamic sample size detection in learning command line sequence for continuous authentication.
Traore, Issa; Woungang, Isaac; Nakkabi, Youssef; Obaidat, Mohammad S; Ahmed, Ahmed Awad E; Khalilian, Bijan
2012-10-01
Continuous authentication (CA) consists of authenticating the user repetitively throughout a session with the goal of detecting and protecting against session hijacking attacks. While the accuracy of the detector is central to the success of CA, the detection delay or length of an individual authentication period is important as well since it is a measure of the window of vulnerability of the system. However, high accuracy and small detection delay are conflicting requirements that need to be balanced for optimum detection. In this paper, we propose the use of sequential sampling technique to achieve optimum detection by trading off adequately between detection delay and accuracy in the CA process. We illustrate our approach through CA based on user command line sequence and naïve Bayes classification scheme. Experimental evaluation using the Greenberg data set yields encouraging results consisting of a false acceptance rate (FAR) of 11.78% and a false rejection rate (FRR) of 1.33%, with an average command sequence length (i.e., detection delay) of 37 commands. When using the Schonlau (SEA) data set, we obtain FAR = 4.28% and FRR = 12%.
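Sequential sampling for continuous authentication is commonly realized as a Wald-style sequential probability ratio test; the sketch below is a generic illustration, not the paper's trained detector, and the per-command scores and target error rates are assumptions:

```python
# Sketch: Wald-style sequential sampling for continuous authentication.
# Per-command log-likelihood ratios (genuine vs. impostor model) accumulate
# until a threshold set by the target error rates is crossed.
from math import log

def sprt(llr_stream, target_far=0.05, target_frr=0.05):
    upper = log((1 - target_frr) / target_far)   # accept as genuine
    lower = log(target_frr / (1 - target_far))   # reject as impostor
    s, n = 0.0, 0
    for llr in llr_stream:
        s += llr
        n += 1
        if s >= upper:
            return "genuine", n                  # n = detection delay in commands
        if s <= lower:
            return "impostor", n
    return "undecided", n

print(sprt([0.4, 0.2, 0.5, 0.6, 0.3, 0.9, 0.4]))  # ('genuine', 7)
```

The balance the abstract describes appears directly here: looser target error rates shrink the thresholds and hence the detection delay, at the cost of accuracy.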
Dynamic thresholds and a summary ROC curve: Assessing prognostic accuracy of longitudinal markers.
Saha-Chaudhuri, P; Heagerty, P J
2018-04-19
Cancer patients, chronic kidney disease patients, and subjects infected with HIV are routinely monitored over time using biomarkers that represent key health status indicators. Furthermore, biomarkers are frequently used to guide initiation of new treatments or to inform changes in intervention strategies. Since key medical decisions can be made on the basis of a longitudinal biomarker, it is important to evaluate the potential accuracy associated with longitudinal monitoring. To characterize the overall accuracy of a time-dependent marker, we introduce a summary ROC curve that displays the overall sensitivity associated with a time-dependent threshold that controls time-varying specificity. The proposed statistical methods are similar to concepts considered in disease screening, yet our methods are novel in choosing a potentially time-dependent threshold to define a positive test, and our methods allow time-specific control of the false-positive rate. The proposed summary ROC curve is a natural averaging of time-dependent incident/dynamic ROC curves and therefore provides a single summary of net error rates that can be achieved in the longitudinal setting. Copyright © 2018 John Wiley & Sons, Ltd.
Fu, Xi; Qiao, Jia; Girod, Sabine; Niu, Feng; Liu, Jian Feng; Lee, Gordon K; Gui, Lai
2017-09-01
Mandible contour surgery, including reduction gonioplasty and genioplasty, has become increasingly popular in East Asia. However, it is technically challenging and hence leads to a long learning curve and high complication rates, and it often needs secondary revisions. The increasing use of 3-dimensional (3D) technology makes accurate single-stage mandible contour surgery with minimum complication rates possible with a virtual surgical plan (VSP) and 3-D surgical templates. The aim of this study is to establish a standardized protocol for VSP and 3-D surgical template-assisted mandible contour surgery and to evaluate the accuracy of the protocol. In this study, we enrolled 20 patients for mandible contour surgery. Our protocol was to perform VSP based on 3-D computed tomography data and then to design and 3-D print surgical templates based on the preoperative VSP. The accuracy of the method was analyzed by 3-D comparison of the VSP and the postoperative results using detailed computer analysis. All patients had symmetric, natural osteotomy lines and satisfactory facial ratios in a single-stage operation. The average relative error between the VSP and the postoperative result over the entire skull was 0.41 ± 0.13 mm. The average new left gonial error was 0.43 ± 0.77 mm. The average new right gonial error was 0.45 ± 0.69 mm. The average pogonion error was 0.79 ± 1.21 mm. Patients were very satisfied with the aesthetic results. Surgeons were very satisfied with the performance of the surgical templates in facilitating the operation. Our standardized protocol of VSP and 3-D printed surgical template-assisted single-stage mandible contour surgery results in an accurate, safe, and predictable outcome in a single stage.
[Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].
Zhou, Jinzhi; Tang, Xiaofang
2015-08-01
In order to improve classification accuracy with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method that automatically selects the characteristic parameters based on correlation coefficient analysis. Using the five sample datasets of dataset IVa from the 2005 BCI Competition, we applied short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram data, then introduced feature extraction based on common spatial patterns (CSP) and classified by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy was improved by using the correlation coefficient feature selection method compared with not using this algorithm. Compared with a support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis can select better parameters and improve the accuracy of classification.
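A conceptual sketch of correlation-based feature selection followed by LDA is given below; synthetic data stand in for the competition dataset, and the CSP step is omitted for brevity:

```python
# Conceptual sketch: rank EEG features by absolute correlation with the
# class label, keep the strongest, and classify with LDA. Synthetic data
# are assumptions standing in for the BCI Competition dataset.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))                  # 200 trials x 64 features
y = rng.integers(0, 2, size=200)
X[:, :5] += y[:, None] * 0.8                    # make 5 features informative

corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
keep = np.argsort(corr)[-10:]                   # top-10 correlated features

lda = LinearDiscriminantAnalysis().fit(X[:, keep], y)
print(lda.score(X[:, keep], y))
```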
Indirect Validation of Probe Speed Data on Arterial Corridors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eshragh, Sepideh; Young, Stanley E.; Sharifi, Elham
This study aimed to estimate the accuracy of probe speed data on arterial corridors on the basis of roadway geometric attributes and functional classification. It was assumed that functional class (medium and low) along with other road characteristics (such as the weighted average of the annual average daily traffic, average signal density, average access point density, and average speed) were available as correlation factors to estimate the accuracy of probe traffic data. This study tested these factors as predictors of the fidelity of probe traffic data by using the results of an extensive validation exercise, and showed strong correlations between these geometric attributes and the accuracy of probe data when assessed using average absolute speed error. Linear models were regressed to existing data to estimate appropriate models for medium- and low-type arterial corridors. The proposed models for medium- and low-type arterials were validated further on the basis of the results of a slowdown analysis. These models can be used to predict the accuracy of probe data indirectly in medium and low types of arterial corridors.
NASA Astrophysics Data System (ADS)
Joyce, C. J.; Schwadron, N. A.; Townsend, L. W.; deWet, W. C.; Wilson, J. K.; Spence, H. E.; Tobiska, W. K.; Shelton-Mur, K.; Yarborough, A.; Harvey, J.; Herbst, A.; Koske-Phillips, A.; Molina, F.; Omondi, S.; Reid, C.; Reid, D.; Shultz, J.; Stephenson, B.; McDevitt, M.; Phillips, T.
2016-09-01
We provide an analysis of the galactic cosmic ray radiation environment of Earth's atmosphere using measurements from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) aboard the Lunar Reconnaissance Orbiter (LRO), together with the Badhwar-O'Neill model and dose lookup tables generated by the Earth-Moon-Mars Radiation Environment Module (EMMREM). This study demonstrates an updated atmospheric radiation model that uses new dose tables to improve the accuracy of the modeled dose rates. Additionally, a method for computing geomagnetic cutoffs is incorporated into the model in order to account for location-dependent effects of the magnetosphere. Newly available measurements of atmospheric dose rates from instruments aboard commercial aircraft and high-altitude balloons enable us to evaluate the accuracy of the model in computing atmospheric dose rates. When compared to the available observations, the model seems to be reasonably accurate in modeling atmospheric radiation levels, overestimating airline dose rates by an average of 20%, which falls within the uncertainty limit recommended by the International Commission on Radiation Units and Measurements (ICRU). Additionally, measurements made aboard high-altitude balloons during simultaneous launches from New Hampshire and California provide an additional comparison to the model. We also find that the newly incorporated geomagnetic cutoff method enables the model to represent radiation variability as a function of location with sufficient accuracy.
Markers of data quality in computer audit: the Manchester Orthopaedic Database.
Ricketts, D; Newey, M; Patterson, M; Hitchin, D; Fowler, S
1993-11-01
This study investigates the efficiency of the Manchester Orthopaedic Database (MOD), a computer software package for record collection and audit. Data is entered into the system in the form of diagnostic, operative and complication keywords. We have calculated the completeness, accuracy and quality (completeness × accuracy) of keyword data in the MOD in two departments of orthopaedics (Departments A and B). In each department, 100 sets of inpatient notes were reviewed. Department B obtained results which were significantly better than those in A at the 5% level. We attribute this to the presence of a systems coordinator to motivate and organise the team for audit. Senior and junior staff did not differ significantly with respect to completeness, accuracy and quality measures, but locum junior staff recorded data with a quality of 0%. Statistically, the biggest difference between the departments was the quality of operation keywords. Sample sizes were too small to permit effective statistical comparisons between the quality of complication keywords. In both departments, however, the poorest quality data was seen in complication keywords. The low complication keyword completeness contributed to this; on average, the true complication rate (39%) was twice the recorded complication rate (17%). In the recent Royal College of Surgeons of England Confidential Comparative Audit, the recorded complication rate was 4.7%. In the light of the above findings, we suggest that the true complication rate of the RCS CCA should approach 9%.
Rapid recipe formulation for plasma etching of new materials
NASA Astrophysics Data System (ADS)
Chopra, Meghali; Zhang, Zizhuo; Ekerdt, John; Bonnecaze, Roger T.
2016-03-01
A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single-component Metropolis-Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach compares favorably with factorial models generated from JMP, a software package frequently employed for recipe creation and optimization.
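A minimal single-component Metropolis-Hastings sketch is shown below, fitting one kinetic parameter of a toy Langmuir-type etch-rate model to noisy data; the model form, noise level, and flat prior are assumptions, not the paper's plasma model:

```python
# Sketch: single-component Metropolis-Hastings fit of a toy Langmuir-type
# etch-rate model, rate = r_max * K * x / (1 + K * x), to noisy data.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.1, 2.0, 10)                     # e.g. ion flux (arb. units)
true_k = 1.5
data = 3.0 * true_k * x / (1 + true_k * x) + rng.normal(0, 0.05, x.size)

def log_like(k):
    model = 3.0 * k * x / (1 + k * x)
    return -0.5 * np.sum((data - model) ** 2) / 0.05 ** 2

k, samples = 1.0, []
for _ in range(5000):
    prop = k + rng.normal(0, 0.1)                 # update one component at a time
    if prop > 0 and np.log(rng.random()) < log_like(prop) - log_like(k):
        k = prop                                  # accept the proposal
    samples.append(k)

print(np.mean(samples[1000:]))                    # posterior mean near 1.5
```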
ERIC Educational Resources Information Center
Scott, Katelyn C.; Skinner, Christopher H.; Moore, Tara C.; McCurdy, Merilee; Ciancio, Dennis; Cihak, David F.
2017-01-01
An adapted alternating treatments design was used to evaluate and compare the effects of two group contingency interventions on mathematics assignment accuracy in an intact first-grade classroom. Both an interdependent contingency with class-average criteria (16 students) and a dependent contingency with criteria based on the average of a smaller,…
Ma, Liheng; Bernelli-Zazzera, Franco; Jiang, Guangwen; Wang, Xingshu; Huang, Zongsheng; Qin, Shiqiao
2016-06-10
Under dynamic conditions, the centroiding accuracy of the motion-blurred star image decreases and the number of identified stars reduces, which leads to the degradation of the attitude accuracy of the star sensor. To improve the attitude accuracy, a region-confined restoration method, which concentrates on the noise removal and signal to noise ratio (SNR) improvement of the motion-blurred star images, is proposed for the star sensor under dynamic conditions. A multi-seed-region growing technique with the kinematic recursive model for star image motion is given to find the star image regions and to remove the noise. Subsequently, a restoration strategy is employed in the extracted regions, taking the time consumption and SNR improvement into consideration simultaneously. Simulation results indicate that the region-confined restoration method is effective in removing noise and improving the centroiding accuracy. The identification rate and the average number of identified stars in the experiments verify the advantages of the region-confined restoration method.
A Real-Time Wireless Sweat Rate Measurement System for Physical Activity Monitoring.
Brueck, Andrew; Iftekhar, Tashfin; Stannard, Alicja B; Yelamarthi, Kumar; Kaya, Tolga
2018-02-10
There has been significant research on the physiology of sweat in the past decade, with one of the main interests being the development of a real-time hydration monitor that utilizes sweat. The contents of sweat have been known for decades; sweat provides significant information on the physiological condition of the human body. However, it is important to know the sweat rate as well, as sweat rate alters the concentration of the sweat constituents and ultimately affects the accuracy of hydration detection. Towards this goal, a calorimetry-based flow-rate detection system was built and tested to determine sweat rate in real time. The proposed sweat rate monitoring system has been validated through both controlled lab experiments (syringe pump) and human trials. An Internet of Things (IoT) platform was embedded, with the sensor using a Simblee board and a Raspberry Pi. The overall prototype is capable of sending sweat rate information in real time to either a smartphone or directly to the cloud. Based on a proven theoretical concept, our overall system implementation features a pioneering device that can truly measure the rate of sweat in real time, which was tested and validated on human subjects. Our realization of the real-time sweat rate watch is capable of detecting sweat rates as low as 0.15 µL/min/cm², with an average error in accuracy of 18% compared to manual sweat rate readings.
Estimation of Temporal Gait Parameters Using a Wearable Microphone-Sensor-Based System
Wang, Cheng; Wang, Xiangdong; Long, Zhou; Yuan, Jing; Qian, Yueliang; Li, Jintao
2016-01-01
Most existing wearable gait analysis methods focus on the analysis of data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless and wearable gait analysis system which uses microphone sensors to collect footstep sound signals during walking. To our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating the temporal parameters of gait is presented. The algorithm fully uses the fusion of the footstep sound signals from both feet and includes three stages: footstep detection, heel-strike event and toe-on event detection, and calculation of gait temporal parameters. Experimental results show that with a total of 240 data sequences and 1732 steps collected using three different gait data collection strategies from 15 healthy subjects, the proposed system achieves an average 0.955 F1-measure for footstep detection, an average 94.52% accuracy rate for heel-strike detection and a 94.25% accuracy rate for toe-on detection. Using these detection results, nine temporal-related gait parameters are calculated, and these parameters are consistent with their corresponding normal gait temporal parameters and with the labeled-data calculation results. The results verify the effectiveness of our proposed system and algorithm for temporal gait parameter estimation. PMID:27999321
Ji, Eun Sook; Park, Kyu-Hyun
2012-12-01
This study was conducted to evaluate methane (CH4) and nitrous oxide (N2O) emissions from livestock agriculture in 16 local administrative districts of Korea from 1990 to 2030. The National Inventory Report used a 3-yr averaged livestock population, but this study used 1-yr livestock populations to capture yearly emission fluctuations. Extrapolation of the livestock population from 1990 to 2009 was used to forecast the future livestock population from 2010 to 2030. Past (yr 1990 to 2009) and forecasted (yr 2010 to 2030) averaged enteric CH4 emissions, and CH4 and N2O emissions from manure treatment, were estimated. For enteric fermentation, forecasted average CH4 emissions from the 16 local administrative districts were estimated to increase by 4%-114% compared to the past average, except for Daejeon (-63%), Seoul (-36%) and Gyeonggi (-7%). As for manure treatment, forecasted average CH4 emissions from the 16 local administrative districts were estimated to increase by 3%-124% compared to the past average, except for Daejeon (-77%), Busan (-60%), Gwangju (-48%) and Seoul (-8%). For manure treatment, forecasted average N2O emissions from the 16 local administrative districts were estimated to increase by 10%-153% compared to the past average, except for Daejeon (-60%), Seoul (-4.0%), and Gwangju (-0.2%). With the carbon dioxide equivalent emissions (CO2-Eq), forecasted average CO2-Eq from the 16 local administrative districts was estimated to increase by 31%-120% compared to the past average, except for Daejeon (-65%), Seoul (-24%), Busan (-18%), Gwangju (-8%) and Gyeonggi (-1%). The decrease in CO2-Eq from the 5 local administrative districts was only 34 kt, which was negligibly small compared to the increase of 2,809 kt from the other 11 local administrative districts. Annual growth rates of enteric CH4 emissions and of CH4 and N2O emissions from manure management in Korea from 1990 to 2009 were 1.7%, 2.6%, and 3.2%, respectively. The annual growth rate of total CO2-Eq was 2.2%. Efforts by the local administrative offices to improve the accuracy of activity data are essential to improve GHG inventories. Direct measurements of GHG emissions from enteric fermentation and manure treatment systems will further enhance the accuracy of the GHG data. (Key Words: Greenhouse Gas, Methane, Nitrous Oxide, Carbon Dioxide Equivalent Emission, Climate Change).
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Joshua, E-mail: joshua.james@louisville.edu; Dunlap, Neal E.; Nguyen, Vi Nhan
Purpose: Tracking soft-tissue targets has recently been cleared as a new application of Calypso, an electromagnetic wireless transponder tracking system, allowing for gated treatment of the liver based on the motion of the target volume itself. The purpose of this study is to describe the details of validating the Calypso system for wireless transponder tracking of the liver and to present the clinical workflow for using it to deliver gated stereotactic ablative radiotherapy (SABR). Methods: A commercial 3D diode array motion system was used to evaluate the dynamic tracking accuracy of Calypso when tracking continuous large amplitude motion. It was then used to perform end-to-end tests to evaluate the dosimetric accuracy of gated beam delivery for liver SABR. In addition, gating limits were investigated to determine how large the gating window can be while still maintaining dosimetric accuracy. The gating latency of the Calypso system was also measured using a customized motion phantom. Results: The average absolute difference between the measured and expected positional offset was 0.3 mm. The 2%/2 mm gamma pass rates for the gated treatment delivery were greater than 97%. When increasing the gating limits beyond the known extent of planned motion, the gamma pass rates decreased as expected. The 2%/2 mm gamma pass rate for a 1, 2, and 3 mm increase in gating limits was measured to be 97.8%, 82.9%, and 61.4%, respectively. The average gating latency was measured to be 63.8 ms for beam-hold and 195.8 ms for beam-on. Four liver patients with 17 total fractions have been successfully treated at our institution. Conclusions: Wireless transponder tracking was validated as a dosimetrically accurate way to provide gated SABR of the liver. The dynamic tracking accuracy of the Calypso system met the manufacturer's specification, even for continuous large amplitude motion that can be encountered when tracking liver tumors close to the diaphragm. The measured beam-hold gating latency was appropriate for targets that traverse the gating limit each respiratory cycle, causing the beam to be interrupted constantly throughout treatment delivery.
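The 2%/2 mm gamma pass rates above follow the standard gamma-index comparison of measured and planned dose. As a reference, here is a minimal brute-force 1D global gamma computation; clinical systems work on interpolated 2D/3D grids, so this sketch only illustrates the metric.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, dx_mm, dd=0.02, dta_mm=2.0):
    """1D global gamma analysis; dd is a fraction of the max reference dose.

    dose_ref and dose_eval are dose profiles on a common grid with spacing
    dx_mm. Returns the fraction of reference points with gamma <= 1.
    """
    dose_ref = np.asarray(dose_ref, float)
    dose_eval = np.asarray(dose_eval, float)
    x = np.arange(len(dose_ref)) * dx_mm
    dmax = dose_ref.max()
    gammas = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2          # distance-to-agreement term
        dose2 = ((dose_eval - di) / (dd * dmax)) ** 2  # dose-difference term
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return np.mean(gammas <= 1.0)
```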
Fast Deep Tracking via Semi-Online Domain Adaptation
NASA Astrophysics Data System (ADS)
Li, Xiaoping; Luo, Wenbing; Zhu, Yi; Li, Hanxi; Wang, Mingwen
2018-04-01
Deep trackers have demonstrated overwhelming superiority over shallow methods. Unfortunately, they also suffer from low frame rates. To alleviate this problem, a number of real-time deep trackers have been proposed that remove the online updating procedure on the CNN model. However, the absence of online updating leads to a significant drop in tracking accuracy. In this work, we propose to perform domain adaptation for visual tracking in two stages, transferring information from the visual tracking domain and the instance domain respectively. In this way, the proposed visual tracker achieves tracking accuracy comparable to state-of-the-art trackers and runs in real time on an average consumer GPU.
The wire-mesh sensor as a two-phase flow meter
NASA Astrophysics Data System (ADS)
Shaban, H.; Tavoularis, S.
2015-01-01
A novel gas and liquid flow rate measurement method is proposed for use in vertical upward and downward gas-liquid pipe flows. This method is based on the analysis of the time history of area-averaged void fraction that is measured using a conductivity wire-mesh sensor (WMS). WMS measurements were collected in vertical upward and downward air-water flows in a pipe with an internal diameter of 32.5 mm at nearly atmospheric pressure. The relative frequencies and the power spectral density of area-averaged void fraction were calculated and used as representative properties. Independent features, extracted from these properties using Principal Component Analysis and Independent Component Analysis, were used as inputs to artificial neural networks, which were trained to give the gas and liquid flow rates as outputs. The present method was shown to be accurate for all four encountered flow regimes and for a wide range of flow conditions. Besides providing accurate predictions for steady flows, the method was also tested successfully in three flows with transient liquid flow rates. The method was augmented by the use of the cross-correlation function of area-averaged void fraction determined from the output of a dual WMS unit as an additional representative property, which was found to improve the accuracy of flow rate prediction.
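A minimal sketch of the feature-extraction-plus-ANN pipeline described above, assuming scikit-learn stand-ins: PCA for feature extraction (the paper also uses ICA) feeding a small multilayer perceptron that regresses both flow rates. Feature dimensions, component count and network size are illustrative, not the paper's configuration.

```python
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_flow_meter(X, y, n_components=10):
    """X: one row of representative properties per flow condition, e.g.
    concatenated void-fraction histogram and PSD bins; y: columns of gas
    and liquid flow rates. Returns a fitted pipeline."""
    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),          # feature extraction stage
        MLPRegressor(hidden_layer_sizes=(20,),   # small ANN, two outputs
                     max_iter=5000, random_state=0),
    )
    return model.fit(X, y)
```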
An estimate of the prevalence of developmental phonagnosia.
Shilowich, Bryan E; Biederman, Irving
2016-08-01
A web-based survey estimated the distribution of voice recognition abilities, with a focus on determining the prevalence of developmental phonagnosia, the inability to identify a familiar person by voice. Participants matched clips of 50 celebrity voices to 1-4 named headshots of celebrities whose voices they had previously rated for familiarity. Given a strong correlation between rated familiarity and recognition performance, a residual was calculated based on the average familiarity rating on each trial, which thus constituted each respondent's voice recognition ability that could not be accounted for by familiarity. Of the respondents, 3.2% (23 of 730 participants) had residual recognition scores 2.28 SDs below the mean (whereas 8, or 1.1%, would have been expected from a normal distribution). Respondents also judged whether they could imagine the voices of five familiar celebrities. Individuals who had difficulty imagining voices were also generally below average in recognition accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
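The expected count of 8 (1.1%) under normality can be checked directly from the normal CDF; a quick verification sketch:

```python
from scipy.stats import norm

n = 730
expected_fraction = norm.cdf(-2.28)  # ~0.011, i.e. ~1.1%
print(n * expected_fraction)         # ~8.3 respondents expected under normality
print(23 / n)                        # 0.032 observed (3.2%)
```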
NASA Astrophysics Data System (ADS)
Richter, J.; Mayer, J.; Weigand, B.
2018-02-01
Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied with regard to misalignment of the interrogation beam and the frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
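A minimal sketch of the DFT-based beat-frequency estimation described above, for a sampled, exponentially damped oscillatory LITA signal; the windowing and zero-padding choices are illustrative assumptions, not the authors' exact post-processing.

```python
import numpy as np

def beat_frequency(signal, fs, pad_factor=8):
    """Estimate the dominant beat frequency of a sampled LITA signal.

    A Hann window reduces leakage from the finite, damped oscillation, and
    zero-padding refines the location of the spectral peak.
    """
    n = len(signal)
    windowed = signal * np.hanning(n)
    spec = np.abs(np.fft.rfft(windowed, n=pad_factor * n))
    freqs = np.fft.rfftfreq(pad_factor * n, d=1.0 / fs)
    return freqs[np.argmax(spec)]
```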
Jenke, Christoph; Pallejà Rubio, Jaume; Kibler, Sebastian; Häfner, Johannes; Richter, Martin; Kutter, Christoph
2017-01-01
With the combination of micropumps and flow sensors, highly accurate and secure closed-loop controlled micro dosing systems for liquids are possible. Implementing a single-stroke-based control mode with piezoelectrically driven micro diaphragm pumps can provide a solution for dosing volumes down to nanoliters or variable average flow rates in the range of nL/min to μL/min. However, sensor technologies feature a yet undetermined accuracy for measuring highly pulsatile micropump flow. Two miniaturizable in-line sensor types providing electrical readout, differential pressure based flow sensors and thermal calorimetric flow sensors, are evaluated for their suitability for combination with micropumps. Single-stroke-based calibration of the sensors was carried out with a new method comparing displacement volumes and sensor flow volumes. Limitations of accuracy and performance for single-stroke-based flow control are described. Results showed that, besides particle robustness of the sensors, controlling resistive and capacitive damping are key aspects for setting up reproducible and reliable liquid dosing systems. Depending on the required average flow or defined volume, dosing systems with an accuracy of better than 5% for the differential pressure based sensor and better than 6.5% for the thermal calorimeter were achieved. PMID:28368344
An embedded implementation based on adaptive filter bank for brain-computer interface systems.
Belwafi, Kais; Romain, Olivier; Gannouni, Sofien; Ghaffari, Fakhreddine; Djemal, Ridha; Ouni, Bouraoui
2018-07-15
Brain-computer interface (BCI) is a new communication pathway for users with neurological deficiencies. The implementation of a BCI system requires complex electroencephalography (EEG) signal processing including filtering, feature extraction and classification algorithms. Most current BCI systems are implemented on personal computers. Therefore, there is great interest in implementing BCIs on embedded platforms to meet system specifications in terms of time response, cost effectiveness, power consumption, and accuracy. This article presents an embedded BCI (EBCI) system based on a Stratix-IV field programmable gate array. The proposed system relies on the weighted overlap-add (WOLA) algorithm to perform dynamic filtering of EEG signals by analyzing the event-related desynchronization/synchronization (ERD/ERS). The EEG signals are classified based on their spatial features using the linear discriminant analysis algorithm. The proposed system performs fast classification within a time delay of 0.430 s/trial, achieving an average accuracy of 76.80% on an offline dataset and 80.25% using our own recordings. The estimated power consumption of the prototype is approximately 0.7 W. Results show that the proposed EBCI system reduces the overall classification error rate for the three datasets of the BCI competition by 5% compared to other similar implementations. Moreover, experiments show that the proposed system maintains a high accuracy rate with a short processing time, low power consumption, and low cost. Performing dynamic filtering of EEG signals using WOLA increases the recognition rate of ERD/ERS patterns of motor imagery brain activity. This approach allows the development of a complete EBCI prototype that achieves excellent accuracy rates. Copyright © 2018 Elsevier B.V. All rights reserved.
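The WOLA filter bank and FPGA pipeline are not reproduced here, but the classification stage maps naturally to an off-the-shelf LDA; a minimal sketch with assumed inputs (per-channel band-power features after ERD/ERS filtering):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_and_score(X_train, y_train, X_test, y_test):
    """X_*: trials x spatial feature vectors (e.g. per-channel band power
    after ERD/ERS filtering); y_*: motor-imagery class labels."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    return lda.score(X_test, y_test)  # classification accuracy
```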
Fomekong, Edward; Pierrard, Julien; Raftopoulos, Christian
2018-03-01
The major limitation of computer-based three-dimensional fluoroscopy is increased radiation exposure of patients and operating room staff. Combining spine navigation with intraoperative three-dimensional fluoroscopy (io3DF) can likely overcome this shortcoming, while increasing the pedicle screw accuracy rate. We compared data from a cohort of patients undergoing lumbar percutaneous pedicle screw placement using io3DF alone or in combination with spine navigation. This study consisted of 168 patients who underwent percutaneous pedicle screw implantation between 2009 and 2016. The primary endpoint was to compare pedicle screw accuracy between the 2 groups. Secondary endpoints were to compare radiation exposure of patients and operating room staff, duration of surgery, and postoperative complications. In group 1, 438 screws were placed without navigation guidance; in group 2, 276 screws were placed with spine navigation. Mean patient age in both groups was 58.6 ± 14.1 years. The final pedicle accuracy rate was 97.9% in group 1 and 99.6% in group 2. Average radiation dose per patient was significantly larger in group 1 (571.9 mGy·m²) than in group 2 (365.6 mGy·m²) (P = 0.000088). Surgery duration and complication rate were not significantly different between the 2 groups (P > 0.05). io3DF with spine navigation minimized radiation exposure of patients and operating room staff and provided an excellent percutaneous pedicle screw accuracy rate with no permanent complications compared with io3DF alone. This setup is recommended, especially for patients with a complex degenerative spine condition. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blake, S; Thwaites, D; Hansen, C
2015-06-15
Purpose: This study evaluated the plan quality and dose delivery accuracy of stereotactic body radiotherapy (SBRT) helical Tomotherapy (HT) treatments for lung cancer. Results were compared with those previously reported by our group for flattening filter (FF) and flattening filter free (FFF) VMAT treatments. This work forms part of an ongoing multicentre and multisystem planning and dosimetry audit on FFF beams for lung SBRT. Methods: CT datasets and DICOM RT structures delineating the target volume and organs at risk for 6 lung cancer patients were selected. Treatment plans were generated using the HT treatment planning system. Tumour locations were classified as near rib, near bronchial tree or in free lung, with prescribed doses of 48 Gy/4 fr, 50 Gy/5 fr and 54 Gy/3 fr respectively. Dose constraints were specified by a modified RTOG 0915 protocol used for an Australian SBRT phase II trial. Plan quality was evaluated using mean PTV dose, PTV volume receiving 100% of the prescribed dose (V100%), target conformity (CI = V_D100%/V_PTV) and low dose spillage (LDS = V_D50%/V_PTV). Planned dose distributions were compared to those measured using an ArcCheck phantom. Delivery accuracy was evaluated using a gamma-index pass rate of 95% with 3% (of max dose) and 3 mm criteria. Results: Treatment plans for all patients were clinically acceptable in terms of quality and accuracy of dose delivery. The following DVH metrics are reported as averages (SD) of all plans investigated: mean PTV dose was 115.3 (2.4)% of prescription, V100% was 98.8 (0.9)%, CI was 1.14 (0.03) and LDS was 5.02 (0.37). The plans had an average gamma-index passing rate of 99.3 (1.3)%. Conclusion: The results reported in this study for HT agree within 1 SD with those previously published by our group for VMAT FF and FFF lung SBRT treatments. This suggests that HT delivers lung SBRT treatments of comparable quality and delivery accuracy to VMAT using both FF and FFF beams.
The Validity of Peer Review in a General Medicine Journal
Jackson, Jeffrey L.; Srinivasan, Malathi; Rea, Joanna; Fletcher, Kathlyn E.; Kravitz, Richard L.
2011-01-01
All the opinions in this article are those of the authors and should not be construed to reflect, in any way, those of the Department of Veterans Affairs. Background: Our study purpose was to assess the predictive validity of reviewer quality ratings and editorial decisions in a general medicine journal. Methods: Submissions to the Journal of General Internal Medicine (JGIM) between July 2004 and June 2005 were included. We abstracted JGIM peer review quality ratings, verified the publication status of all articles and calculated an impact factor for published articles (Rw) by dividing the 3-year citation rate by the average for this group of papers; an Rw>1 indicates a greater than average impact. Results: Of 507 submissions, 128 (25%) were published in JGIM, 331 were rejected (128 with review) and 48 were either not resubmitted after revision was requested or were withdrawn by the author. Of the 331 rejections, 243 were published elsewhere. Articles published in JGIM had a higher citation rate than those published elsewhere (Rw: 1.6 vs. 1.1, p = 0.002). Reviewer ratings of article quality had good internal consistency, and reviewer recommendations markedly influenced publication decisions. There was no quality rating cutpoint that accurately distinguished high from low impact articles. There was a stepwise increase in Rw for articles rejected without review, rejected after review or accepted by JGIM (Rw 0.60 vs. 0.87 vs. 1.56, p<0.0005). However, there was low agreement between reviewers on quality ratings and publication recommendations. The editorial publication decision accurately discriminated high and low impact articles in 68% of submissions. We found evidence of better accuracy with a greater number of reviewers. Conclusions: The peer review process largely succeeds in selecting high impact articles and dispatching lower impact ones, but the process is far from perfect. While the inter-rater reliability between individual reviewers is low, the accuracy of sorting is improved with a greater number of reviewers. PMID:21799867
Catchment-scale groundwater recharge and vegetation water use efficiency
NASA Astrophysics Data System (ADS)
Troch, P. A. A.; Dwivedi, R.; Liu, T.; Meira, A.; Roy, T.; Valdés-Pineda, R.; Durcik, M.; Arciniega, S.; Brena-Naranjo, J. A.
2017-12-01
Precipitation undergoes a two-step partitioning when it falls on the land surface. At the land surface and in the shallow subsurface, rainfall or snowmelt can either run off as infiltration/saturation excess or quick subsurface flow; the rest is stored temporarily in the root zone. From the root zone, water can leave the catchment as evapotranspiration or percolate further and recharge deep storage (e.g. a fractured bedrock aquifer). Quantifying the average amount of water that recharges deep storage and sustains low flows is extremely challenging, as we lack reliable methods to quantify this flux at the catchment scale. It was recently shown, however, that for semi-arid catchments in Mexico, an index of vegetation water use efficiency, the Horton index (HI), could predict deep storage dynamics. Here we test this finding using 247 MOPEX catchments across the conterminous US, including energy-limited catchments. Our results show that the observed HI is indeed a reliable predictor of deep storage dynamics in space and time. We further investigate whether the HI can also predict average recharge rates across the conterminous US. We find that the HI can reliably predict the average recharge rate, estimated from the 50th percentile flow of the flow duration curve. Our results compare favorably with estimates of average recharge rates from the US Geological Survey. Previous research has shown that the HI can be reliably estimated from the aridity index, mean slope and mean elevation of a catchment (Voepel et al., 2011). We recalibrated Voepel's model and used it to predict the HI for our 247 catchments. We then used these predicted values of the HI to estimate average recharge rates for our catchments, and compared them with those estimated from the observed HI. We find that the accuracies of our predictions based on observed and predicted HI are similar. This provides an estimation method for catchment-scale average recharge rates based on easily derived catchment characteristics, such as climate and topography, and free of discharge measurements.
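A sketch of the two quantities the abstract builds on, under assumed definitions: recharge taken as the 50th-percentile flow of the flow duration curve (as stated), and the Horton index taken as vaporization over wetting, the common definition in this literature.

```python
import numpy as np

def q50_recharge(daily_flow_mm):
    """Average recharge proxy: 50th-percentile flow of the flow duration
    curve, per the abstract."""
    return np.percentile(daily_flow_mm, 50)

def horton_index(precip_mm, quickflow_mm, et_mm):
    """Horton index HI = vaporization / wetting, with wetting assumed to be
    precipitation minus quickflow and vaporization assumed to be ET."""
    return et_mm / (precip_mm - quickflow_mm)
```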
Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming
2013-01-01
In the first part of this paper, we derived a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distributions. This source model consists of rate-quantization (R-Q), distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
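The paper's own R-Q/D-Q/D-R forms are derived for generalized Gaussian sources and are not reproduced here; as a generic stand-in, a classic quadratic rate-quantization model can be fitted by least squares:

```python
import numpy as np

def fit_rq_model(qsteps, rates):
    """Least-squares fit of the classic quadratic model R(Q) = a/Q + b/Q^2
    (an illustrative stand-in, not the paper's source model)."""
    qsteps = np.asarray(qsteps, float)
    rates = np.asarray(rates, float)
    A = np.column_stack([1.0 / qsteps, 1.0 / qsteps ** 2])
    coeffs, *_ = np.linalg.lstsq(A, rates, rcond=None)
    return coeffs  # (a, b)
```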
A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate.
Puntel, Laila A; Sawyer, John E; Barker, Daniel W; Thorburn, Peter J; Castellano, Michael J; Moore, Kenneth J; VanLoocke, Andrew; Heaton, Emily A; Archontoulis, Sotirios V
2018-01-01
Historically crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest when it is too late for the farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16-years in continuous corn and 15-years in soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years based on precipitation and temperature patterns as opposed to using a 35-year dataset could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time (R² = 0.77) using 35-years of historical weather was close to the observed and predicted yield at maturity (R² = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below or at average site-mean EONR) in 62% of the cases examined (n = 31) with an average error range of ±38 kg N ha⁻¹ (22% of the average N rate). Across all forecast times, prediction error of EONR was about three times higher than yield predictions. The use of the 35-year weather record was better than using selected historical weather years to forecast (RRMSE was on average 3% lower). Overall, the proposed approach of using the crop model as a forecasting tool could improve year-to-year predictability of corn yields and optimum N rates. Further improvements in modeling and set-up protocols are needed toward more accurate forecast, especially for extreme weather years with the most significant economic and environmental cost.
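The RRMSE figures above normalize the RMSE; a small sketch under the usual assumption that the normalizer is the observed mean:

```python
import numpy as np

def rrmse(observed, predicted):
    """Relative RMSE as a percentage of the observed mean (assumed
    normalization; the paper reports 25-45% for EONR predictions)."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    return 100.0 * np.sqrt(np.mean((predicted - observed) ** 2)) / observed.mean()
```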
Not looking yourself: The cost of self-selecting photographs for identity verification.
White, David; Burton, Amy L; Kemp, Richard I
2016-05-01
Photo-identification is based on the premise that photographs are representative of facial appearance. However, previous studies show that ratings of likeness vary across different photographs of the same face, suggesting that some images capture identity better than others. Two experiments were designed to examine the relationship between likeness judgments and face matching accuracy. In Experiment 1, we compared unfamiliar face matching accuracy for self-selected and other-selected high-likeness images. Surprisingly, images selected by previously unfamiliar viewers, after very limited exposure to a target face, were more accurately matched than self-selected images chosen by the target identity themselves. Results also revealed extremely low inter-rater agreement in ratings of likeness across participants, suggesting that perceptions of image resemblance are inherently unstable. In Experiment 2, we tested whether the cost of self-selection can be explained by this general disagreement in likeness judgments between individual raters. We find that averaging across rankings by multiple raters produces image selections that provide superior identification accuracy. However, the benefit of other-selection persisted for single raters, suggesting that inaccurate representations of self interfere with our ability to judge which images faithfully represent our current appearance. © 2015 The British Psychological Society.
Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning
2012-01-01
In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point’s position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate. PMID:22368464
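The localization stage lends itself to a compact sketch. For two sensors on a pipeline, the leak position follows from the arrival-time difference and the acoustic wave speed; the weighting below defaults to uniform because the paper does not specify its weighting scheme.

```python
import numpy as np

def leak_position(sensor_pairs, weights=None):
    """Weighted-average leak localization from time differences of arrival.

    Each entry of sensor_pairs is (x1, x2, dt, v): sensor positions along
    the pipeline (m), arrival-time difference t1 - t2 (s), and acoustic
    wave speed (m/s). For a leak at x, t1 - t2 = (2x - x1 - x2) / v, so
    each pair yields x = (x1 + x2 + v * dt) / 2.
    """
    estimates = [0.5 * (x1 + x2 + v * dt) for x1, x2, dt, v in sensor_pairs]
    return np.average(estimates, weights=weights)
```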
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nierman, William C.
At TIGR, human Bacterial Artificial Chromosome (BAC) end sequencing and trimming achieved an overall sequencing success rate of 65%. CalTech human BAC libraries A, B, C and D as well as Roswell Park Cancer Institute's library RPCI-11 were used. To date, we have generated >300,000 end sequences from >186,000 human BAC clones with an average read length of ~460 bp, for a total of 141 Mb covering ~4.7% of the genome. Over sixty percent of the clones have BAC end sequences (BESs) from both ends, representing over five-fold coverage of the genome by the paired-end clones. The average phred Q20 length is ~400 bp. This high accuracy makes our BESs match the human finished sequences with an average identity of 99%, a match length of 450 bp, and a frequency of one match per 12.8 kb of contig sequence. Our sample tracking has ensured a clone tracking accuracy of >90%, which gives researchers high confidence in (1) retrieving the right clone from the BAC libraries based on the sequence matches; and (2) building a minimum tiling path of sequence-ready clones across the genome and genome assembly scaffolds.
Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin
2012-01-01
The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.
An investigation of interface transferring mechanism of surface-bonded fiber Bragg grating sensors
NASA Astrophysics Data System (ADS)
Wu, Rujun; Fu, Kunkun; Chen, Tian
2017-08-01
Surface-bonded fiber Bragg grating sensors have been widely used to measure strain in materials. The presence of the fiber Bragg grating sensor affects the strain distribution of the host material, which may decrease strain measurement accuracy. To improve the measurement accuracy, a theoretical model of strain transfer from the host material to the optical fiber was developed, incorporating the influence of the fiber Bragg grating sensor. Theoretical predictions were then validated by comparison with data from finite element analysis and an existing experiment [F. Ansari and Y. Libo, J. Eng. Mech. 124(4), 385-394 (1998)]. Finally, the effect of the parameters of fiber Bragg grating sensors on the average strain transfer rate was discussed.
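The paper's exact expressions are not reproduced here, but a commonly cited shear-lag result (following the Ansari and Libo line of analysis) gives the average strain transfer rate over a bonded half-length L in terms of a lag parameter k; treat this as an assumed model form, not the paper's derivation.

```python
import numpy as np

def average_strain_transfer(k, L):
    """Average strain transfer rate for shear-lag parameter k (1/m) and
    bonded half-length L (m), in the commonly cited form
    alpha_bar = 1 - sinh(kL) / (kL * cosh(kL)). Assumed model form."""
    kl = k * L
    return 1.0 - np.sinh(kl) / (kl * np.cosh(kl))
```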
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garnon, J., E-mail: juliengarnon@gmail.com; Ramamurthy, N., E-mail: nitin-ramamurthy@hotmail.com; Caudrelier J, J., E-mail: caudjean@yahoo.fr
2016-05-15
Objective: To evaluate the diagnostic accuracy and safety of magnetic resonance imaging (MRI)-guided percutaneous biopsy of mediastinal masses performed using a wide-bore high-field scanner. Materials and Methods: This is a retrospective study of 16 consecutive patients (8 male, 8 female; mean age 74 years) who underwent MRI-guided core needle biopsy of a mediastinal mass between February 2010 and January 2014. Size and location of lesion, approach taken, time for needle placement, overall duration of procedure, and post-procedural complications were evaluated. Technical success rates and correlation with surgical pathology (where available) were assessed. Results: Target lesions were located in the anterior (n = 13), middle (n = 2), and posterior (n = 1) mediastinum. Mean size was 7.2 cm (range 3.6-11 cm). Average time for needle placement was 9.4 min (range 3-18 min); average duration of the entire procedure was 42 min (range 27-62 min). Between 2 and 5 core samples were obtained from each lesion (mean 2.6). The technical success rate was 100%, with specimens successfully obtained in all 16 patients. There were no immediate complications. Histopathology revealed malignancy in 12 cases (4 of which were surgically confirmed), benign lesions in 3 cases (1 of which was a false negative following surgical resection), and one inconclusive specimen (treated as inaccurate since repeat CT-guided biopsy demonstrated thymic hyperplasia). Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy in our study were 92.3, 100, 100, 66.7, and 87.5%, respectively. Conclusion: MRI-guided mediastinal biopsy is a safe procedure with high diagnostic accuracy, which may offer a non-ionizing alternative to CT guidance.
Palucci Vieira, Luiz H; de Andrade, Vitor L; Aquino, Rodrigo L; Moraes, Renato; Barbieri, Fabio A; Cunha, Sérgio A; Bedo, Bruno L; Santiago, Paulo R
2017-12-01
The main aim of this study was to verify the relationship between coaches' classifications and actual performance in field tests that measure kicking performance in young soccer players, using the k-means clustering technique. Twenty-three U-14 players performed 8 tests to measure their kicking performance. Four experienced coaches rated each player as follows: 1: poor; 2: below average; 3: average; 4: very good; 5: excellent, on three parameters (i.e. accuracy, power and ability to put spin on the ball). The score intervals established from the k-means clustering were useful for forming five performance-level groups, as ANOVA revealed significant differences between the generated clusters (P<0.01). Accuracy seems to be moderately predicted by the penalty kick, free kick, kicking a rolling ball and the Wall Volley Test (0.44≤r≤0.56), while the ability to put spin on the ball can be measured by the free kick and the corner kick tests (0.52≤r≤0.61). Body measurements, age and PHV did not systematically influence performance. The Wall Volley Test seems to be a good predictor of other tests. Five tests showed reasonable construct validity and can be used to predict accuracy (penalty kick, free kick, kicking a rolling ball and Wall Volley Test) and the ability to put spin on the ball (free kick and corner kick tests) when kicking in soccer. In contrast, the goal kick, kicking the ball when airborne and the vertical kick tests exhibited low power of discrimination, and their use should be viewed with caution.
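A minimal sketch of the clustering step, assuming a scikit-learn k-means over the players' test scores; five clusters mirror the coaches' five rating levels. Shapes and parameters are illustrative.

```python
from sklearn.cluster import KMeans

def performance_clusters(scores, n_clusters=5):
    """scores: players x test-score matrix. Returns cluster labels and
    centers, from which score intervals per performance level follow."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(scores)
    return labels, km.cluster_centers_
```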
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakjevskii, V; Knill, C; Rakowski, J
2014-06-01
Purpose: To develop a comprehensive end-to-end test for Varian's TrueBeam linear accelerator for head and neck (H&N) IMRT using a custom phantom designed to utilize multiple dosimetry devices. Methods: The initial end-to-end test and custom H&N phantom were designed to yield maximum information in anatomical regions significant to H&N plans with respect to: i) geometric accuracy, ii) dosimetric accuracy, and iii) treatment reproducibility. The phantom was designed in collaboration with Integrated Medical Technologies. A CT image was taken with a 1 mm slice thickness. The CT was imported into Varian's Eclipse treatment planning system, where OARs and the PTV were contoured. A clinical template was used to create an eight-field static gantry angle IMRT plan. After optimization, dose was calculated using the Analytic Anisotropic Algorithm with inhomogeneity correction. Plans were delivered with a TrueBeam equipped with a high definition MLC. Preliminary end-to-end results were measured using film and ion chambers. Ion chamber dose measurements were compared to the TPS. Films were analyzed with FilmQAPro using the composite gamma index. Results: Film analysis for the initial end-to-end plan with a geometrically simple PTV showed average gamma pass rates >99% with a passing criterion of 3%/3 mm. Film analysis of a plan with a more realistic, i.e. complex, PTV yielded pass rates >99% in clinically important regions containing the PTV, spinal cord and parotid glands. Ion chamber measurements were on average within 1.21% of calculated dose for both plans. Conclusion: Trials have demonstrated that our end-to-end testing methods provide baseline values for the dosimetric and geometric accuracy of Varian's TrueBeam system.
Towards Photoplethysmography-Based Estimation of Instantaneous Heart Rate During Physical Activity.
Jarchi, Delaram; Casson, Alexander J
2017-09-01
Recently, numerous methods have been proposed for estimating average heart rate using photoplethysmography (PPG) during physical activity, overcoming the significant interference that motion causes in PPG traces. We propose a new algorithm framework for extracting instantaneous heart rate from wearable PPG and electrocardiogram (ECG) signals to provide an estimate of heart rate variability during exercise. For ECG signals, we propose a new spectral masking approach which modifies a particle filter tracking algorithm; for PPG signals, we constrain the instantaneous frequency obtained from the Hilbert transform to a region of interest around a candidate heart rate measure. Performance is verified using accelerometry and wearable ECG and PPG data from subjects while biking and running on a treadmill. Instantaneous heart rate provides more information than average heart rate alone. The instantaneous heart rate can be extracted during motion to an accuracy of 1.75 beats per minute (bpm) from PPG signals and 0.27 bpm from ECG signals. Estimates of instantaneous heart rate can now be generated from PPG signals during motion. These estimates can provide more information on the human body during exercise. Instantaneous heart rate provides a direct measure of vagal nerve and sympathetic nervous system activity and is of substantial use in a number of analyses and applications. Previously it has not been possible to estimate instantaneous heart rate from wrist-wearable PPG signals.
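A minimal sketch of the PPG branch, assuming a band-limited PPG segment: instantaneous frequency from the Hilbert transform, constrained to a physiological window. The fixed 40-220 bpm clip is a simplification of the paper's adaptive region of interest around a candidate heart rate.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_hr(ppg, fs, hr_lo=40.0, hr_hi=220.0):
    """Instantaneous heart rate (bpm) from a band-limited PPG segment."""
    phase = np.unwrap(np.angle(hilbert(ppg)))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # Hz
    return np.clip(inst_freq * 60.0, hr_lo, hr_hi)   # bpm
```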
A Real-Time Wireless Sweat Rate Measurement System for Physical Activity Monitoring
Brueck, Andrew; Iftekhar, Tashfin; Stannard, Alicja B.; Kaya, Tolga
2018-01-01
There has been significant research on the physiology of sweat in the past decade, with one of the main interests being the development of a real-time hydration monitor that utilizes sweat. The contents of sweat have been known for decades; sweat provides significant information on the physiological condition of the human body. However, it is important to know the sweat rate as well, as sweat rate alters the concentration of the sweat constituents and ultimately affects the accuracy of hydration detection. Towards this goal, a calorimetry-based flow-rate detection system was built and tested to determine sweat rate in real time. The proposed sweat rate monitoring system has been validated through both controlled lab experiments (syringe pump) and human trials. An Internet of Things (IoT) platform was embedded with the sensor, using a Simblee board and a Raspberry Pi. The overall prototype is capable of sending sweat rate information in real time to either a smartphone or directly to the cloud. Based on a proven theoretical concept, our overall system implementation features a pioneering device that can truly measure the rate of sweat in real time, which was tested and validated on human subjects. Our realization of the real-time sweat rate watch is capable of detecting sweat rates as low as 0.15 µL/min/cm2, with an average error in accuracy of 18% compared to manual sweat rate readings. PMID:29439398
Accuracy assessment of high-rate GPS measurements for seismology
NASA Astrophysics Data System (ADS)
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
Rogers, Katherine H; Biesanz, Jeremy C
2015-12-01
There are strong differences between individuals in the tendency to view the personality of others as similar to the average person. That is, some people tend to form more normatively accurate impressions than do others. However, the process behind the formation of normatively accurate first impressions is not yet fully understood. Given that the average individual's personality is highly socially desirable (Borkenau & Zaltauskas, 2009; Wood, Gosling & Potter, 2007), individuals may achieve high normative accuracy by viewing others as similar to the average person or by viewing them in an overly socially desirable manner. The average self-reported personality profile and social desirability, despite being strongly correlated, independently and strongly predict first impressions. Further, some individuals have a more accurate understanding of the average individual's personality than do others. Perceivers with more accurate knowledge about the average individual's personality rated the personality of specific others more normatively accurately (more similar to the average person), suggesting that individual differences in normative judgments include a component of accurate knowledge regarding the average personality. In contrast, perceivers who explicitly evaluated others more positively formed more socially desirable impressions, but not more normatively accurate impressions. (c) 2015 APA, all rights reserved.
Training improves interobserver reliability for the diagnosis of scaphoid fracture displacement.
Buijze, Geert A; Guitton, Thierry G; van Dijk, C Niek; Ring, David
2012-07-01
The diagnosis of displacement in scaphoid fractures is notorious for poor interobserver reliability. We tested whether training can improve interobserver reliability and the sensitivity, specificity, and accuracy of the diagnosis of scaphoid fracture displacement on radiographs and CT scans. Sixty-four orthopaedic surgeons rated a set of radiographs and CT scans of 10 displaced and 10 nondisplaced scaphoid fractures for the presence of displacement, using a web-based rating application. Before rating, observers were randomized to a training group (34 observers) and a nontraining group (30 observers). The training group received an online training module before the rating session; the nontraining group did not. Interobserver reliability for training and nontraining was assessed by Siegel's multirater kappa, and the Z-test was used to test for significance. There was a small but significant difference in the interobserver reliability of displacement ratings in favor of the training group compared with the nontraining group. Ratings of radiographs and CT scans combined resulted in moderate agreement for both groups. The average sensitivity, specificity, and accuracy of diagnosing displacement of scaphoid fractures were, respectively, 83%, 85%, and 84% for the nontraining group and 87%, 86%, and 87% for the training group. Assuming a 5% prevalence of fracture displacement, the positive predictive value was 0.23 in the nontraining group and 0.25 in the training group. The negative predictive value was 0.99 in both groups. Our results suggest training can improve interobserver reliability and the sensitivity, specificity and accuracy of the diagnosis of scaphoid fracture displacement, but the improvements are slight. These findings are encouraging for future research regarding interobserver variation and how to reduce it further.
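The predictive values above follow from Bayes' rule given sensitivity, specificity and the assumed 5% prevalence; a quick check reproduces the reported figures:

```python
def ppv_npv(sens, spec, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    tn = spec * (1 - prevalence)
    fn = (1 - sens) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

print(ppv_npv(0.87, 0.86, 0.05))  # training group: ~ (0.25, 0.99)
print(ppv_npv(0.83, 0.85, 0.05))  # nontraining group: ~ (0.23, 0.99)
```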
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul
2011-07-01
In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric and dosimetric accuracy of the moving average algorithm was between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
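A minimal sketch of the smoothing at the heart of moving average tracking: a causal moving average over the sampled target-position trace. The window length is a free parameter; the study's choice is not reproduced here.

```python
import numpy as np

def moving_average(positions, window):
    """Causal moving average of a target-position trace (mm).

    Only past samples are used; the first window-1 outputs are averaged
    with implicit zeros (startup transient).
    """
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="full")[: len(positions)]
```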
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L; M Yang, Y; Nelson, B
Purpose: A novel end-to-end test system using a CCD camera and a scintillator-based phantom (XRV-124, Logos Systems Int'l), capable of measuring the beam-by-beam delivery accuracy of robotic radiosurgery (CyberKnife), was developed and reported in our previous work. This work investigates its application in assessing the motion tracking (Synchrony) accuracy for CyberKnife. Methods: A QA plan with anterior and lateral beams (with 4 different collimator sizes) was created (Multiplan v5.3) for the XRV-124 phantom. The phantom was placed on a motion platform (superior and inferior movement), and the plans were delivered on the CyberKnife M6 system using four motion patterns: static, sine wave, sine wave with a 15° phase shift, and a patient breathing pattern composed of 2 cm maximum motion with a 4 second breathing cycle. Under integral recording mode, the time-averaged beam vectors (X, Y, Z) were measured by the phantom and compared with static delivery. In dynamic recording mode, the beam spots were recorded at a rate of 10 frames/second. The beam vector deviation from the average position was evaluated against the various breathing patterns. Results: The average beam positions of the six deliveries with no motion and three deliveries with Synchrony tracking on ideal motion (sine wave without phase shift) all agree within -0.03±0.00 mm, 0.10±0.04 mm, and 0.04±0.03 mm in the X, Y, and Z directions. Radiation beam width (FWHM) variations are within ±0.03 mm. Dynamic video recording showed submillimeter tracking stability for both regular and irregular breathing patterns; however, a tracking error of up to 3.5 mm was observed when a 15 degree phase shift was introduced. Conclusion: The XRV-124 system is able to provide 3D and 4D targeting accuracy for CyberKnife delivery with Synchrony. The experimental results showed sub-millimeter delivery accuracy in phantom, with excellent correlation between target and breathing motion. The accuracy was degraded when irregular motion and phase shift were introduced.
Development and implementation of a human accuracy program in patient foodservice.
Eden, S H; Wood, S M; Ptak, K M
1987-04-01
For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
Weed Growth Stage Estimator Using Deep Convolutional Neural Networks.
Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl; Mathiassen, Solvejg Kopp; Somerville, Gayle J; Jørgensen, Rasmus Nyholm
2018-05-16
This study outlines a new method of automatically estimating the species and growth stages (from cotyledon until eight leaves are visible) of weeds from in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil type, resolution and light settings. Then, 9649 of these images were used for training the network, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy rate in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.
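As a deliberately small stand-in for the paper's deep network (the published architecture is not reproduced), a nine-class CNN classifier in PyTorch; layer sizes and input resolution are illustrative.

```python
import torch
import torch.nn as nn

class GrowthStageNet(nn.Module):
    """Toy CNN mapping an RGB weed image to one of nine growth classes."""
    def __init__(self, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = GrowthStageNet()(torch.randn(4, 3, 128, 128))  # batch of 4 images
```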
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCowan, P. M., E-mail: pmccowan@cancercare.mb.ca; McCurdy, B. M. C.; Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9
Purpose: The in vivo 3D dose delivered to a patient during volumetric modulated arc therapy (VMAT) delivery can be calculated using electronic portal imaging device (EPID) images. These images must be acquired in cine-mode (i.e., "movie" mode) in order to capture the time-dependent delivery information. The angle subtended by each cine-mode EPID image during an arc can be changed via the frame averaging number selected within the image acquisition software. A large frame average number will decrease the EPID's angular resolution and will result in a decrease in the accuracy of the dose information contained within each image. Alternatively, fewer EPID images acquired per delivery will decrease the overall 3D patient dose calculation time, which is appealing for large-scale clinical implementation. Therefore, the purpose of this study was to determine the optimal frame average value per EPID image, defined as the highest frame averaging that can be used without an appreciable loss in 3D dose reconstruction accuracy for VMAT treatments. Methods: Six different VMAT plans and six different SBRT-VMAT plans were delivered to an anthropomorphic phantom. Delivery was carried out on a Varian 2300ix model linear accelerator (Linac) equipped with an aS1000 EPID running at a frame acquisition rate of 7.5 Hz. An additional PC was set up at the Linac console area, equipped with specialized frame-grabber hardware and software packages allowing continuous acquisition of all EPID frames during delivery. Frames were averaged into "frame-averaged" EPID images using MATLAB. Each frame-averaged data set was used to calculate the in vivo dose to the patient and then compared to the single EPID frame in vivo dose calculation (the single frame calculation represents the highest possible angular resolution per EPID image). A mean percentage dose difference of the low dose (<20% prescription dose) and high dose regions (>80% prescription dose) was calculated for each frame-averaged scenario for each plan. The authors defined the acceptable loss of accuracy as no more than a ±1% mean dose difference in the high dose region. Optimal frame average numbers were then determined as a function of the Linac's average gantry speed and the dose per fraction. Results: The authors found that 9 and 11 frame averages were suitable for all VMAT and SBRT-VMAT treatments, respectively. This resulted in no more than a 1% loss to any of the dose regions' mean percentage difference when compared to the single frame reconstruction. The optimized number was dependent on the treatment's dose per fraction and was determined to be as high as 14 for 12 Gy/fraction (fx), 15 for 8 Gy/fx, 11 for 6 Gy/fx, and 9 for 2 Gy/fx. Conclusions: The authors have determined an optimal EPID frame averaging number for multiple VMAT-type treatments. These are given as a function of the dose per fraction and average gantry speed. These optimized values are now used in the authors' clinical 3D in vivo patient dosimetry program. This provides a reduction in calculation time while maintaining the authors' required level of accuracy in the dose reconstruction.
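The angular-resolution trade-off is simple to quantify: at the stated 7.5 Hz frame rate, the arc subtended per frame-averaged image scales linearly with the frame average number. The 6 deg/s gantry speed below is illustrative, not a value from the study.

```python
def arc_per_image(avg_gantry_speed_deg_s, frames_per_image, frame_rate_hz=7.5):
    """Gantry arc (degrees) subtended by one frame-averaged EPID image."""
    return avg_gantry_speed_deg_s * frames_per_image / frame_rate_hz

print(arc_per_image(6.0, 9))  # 9-frame averaging at 6 deg/s -> 7.2 degrees
```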
Guest, J F; Vowden, K; Vowden, P
2017-06-02
To estimate the patterns of care and related resource use attributable to managing acute and chronic wounds among a catchment population of a typical clinical commissioning group (CCG)/health board and the corresponding National Health Service (NHS) costs in the UK. This was a sub-analysis of a retrospective cohort analysis of the records of 2000 patients in The Health Improvement Network (THIN) database. Patients' characteristics, wound-related health outcomes and health-care resource use were quantified for an average CCG/health board with a catchment population of 250,000 adults ≥18 years of age, and the corresponding NHS cost of patient management was estimated at 2013/2014 prices. An average CCG/health board was estimated to be managing 11,200 wounds in 2012/2013. Of these, 40% were considered to be acute wounds, 48% chronic and 12% lacking any specific diagnosis. The prevalence of acute, chronic and unspecified wounds was estimated to be growing at the rate of 9%, 12% and 13% per annum, respectively. Our analysis indicated that the current rate of wound healing must increase by an average of at least 1% per annum across all wound types in order to slow down the increasing prevalence. Otherwise, an average CCG/health board is predicted to manage ~23,200 wounds per annum by 2019/2020 (see the projection sketched below) and is predicted to spend a discounted £50 million (i.e., future payments expressed in present-value terms) on managing these wounds and associated comorbidities. Real-world evidence highlights the substantial burden that acute and chronic wounds impose on an average CCG/health board. Strategies are required to improve the accuracy of diagnosis and healing rates.
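The ~23,200 figure follows from simple compound growth of the quoted 2012/2013 caseload; a minimal sketch (the seven-year horizon is inferred from the dates above):

```python
# Compound-growth projection from the figures quoted in the abstract:
# 11,200 wounds split 40% acute, 48% chronic, 12% unspecified, growing
# at 9%, 12% and 13% per annum over seven years (2012/2013 -> 2019/2020).
counts = {"acute": 0.40 * 11200, "chronic": 0.48 * 11200, "unspecified": 0.12 * 11200}
growth = {"acute": 0.09, "chronic": 0.12, "unspecified": 0.13}
years = 7

projected = {k: counts[k] * (1 + growth[k]) ** years for k in counts}
print({k: round(v) for k, v in projected.items()})
print(round(sum(projected.values())))  # ~23,200 wounds per annum
```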
Accuracy, reliability, and timing of visual evaluations of decay in fresh-cut lettuce
Hayes, Ryan J.
2018-01-01
Visual assessments are used for evaluating the quality of food products, such as fresh-cut lettuce packaged in bags with modified atmosphere. We compared the accuracy and the reliability of visual evaluations of decay on fresh-cut lettuce performed by experienced and inexperienced raters. In addition, we analyzed decay data from over 4500 bags to determine the optimum timing for evaluations to detect differences among accessions. Lin's concordance coefficient (ρc), which takes into consideration both the closeness of the data and their conformance to the identity line, showed high repeatability (intra-rater reliability, ρc = 0.97), reproducibility (inter-rater reliability, ρc = 0.92), and accuracy (ρc = 0.96) for experienced raters. Inexperienced raters did not perform as well: their ratings showed decreased repeatability (ρc = 0.93), but an even larger reduction in reproducibility (ρc = 0.80) and accuracy (ρc = 0.90). We detected that 5.3% of ratings were outside the 95% limits of agreement. These under- or overestimates were predominantly found for bags with intermediate levels of decay, which correspond to the middle of the rating scale. This occurs because intermediate amounts of decay are more difficult to discriminate than extremes. The frequencies of aberrant ratings for experienced raters ranged from 0.6% to 4.4% (mean = 2.1%); for inexperienced raters the frequencies were substantially higher, ranging from 6.1% to 15.6% (mean = 9.4%). Therefore, we recommend that new raters receive training that includes practical examples in this range of decay, use of standard area diagrams, and continuing interaction with experienced raters (consultation during actual rating). The very high agreement among experienced raters indicates that visual ratings can be successfully used for evaluations of decay until a more objective, rapid, and affordable method is developed. We recommend evaluating samples at multiple time points until 42 days after processing (about 80% decay on average) and then combining these individual ratings into the area under the decay progress stairs (AUDePS) score. Applying this approach, experienced evaluators can accurately detect differences among lettuce accessions and identify lettuce cultivars with reduced decay. PMID:29664945
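Lin's ρc has a simple closed form; a minimal sketch of its computation (the paired ratings below are illustrative, not the study's data):

```python
# Lin's concordance correlation coefficient:
# rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    sx2, sy2 = x.var(), y.var()          # population variances
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

rater_a = [10, 20, 35, 50, 80]   # illustrative decay percentages
rater_b = [12, 18, 40, 48, 78]
print(lins_ccc(rater_a, rater_b))
```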
Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.
Li, Mengfan; Li, Wei; Zhou, Huihui
2016-02-01
Achieving recognizable visual event-related potentials plays an important role in improving the success rate of telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research was to investigate ways of inducing N200 potentials with clearly recognizable features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies showed that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses support the conclusion that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the meanings of the visual stimuli and help them concentrate more effectively on their mental activities.
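For readers unfamiliar with how accuracy converts to bits/min, a minimal sketch of the standard Wolpaw information transfer rate; the target count and selection rate below are illustrative assumptions, since the abstract does not state these parameters:

```python
# Wolpaw ITR: bits per selection times selections per minute.
from math import log2

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    p = accuracy
    bits = log2(n_targets)
    if 0 < p < 1:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n_targets - 1))
    return bits * selections_per_min

# Hypothetical parameters (number of targets and pace are assumptions):
print(wolpaw_itr(n_targets=4, accuracy=0.9325, selections_per_min=40))
```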
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1993-01-01
A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes, EPISODE and LSODE, developed for arbitrary systems of ordinary differential equations, and three specialized codes, CHEMEQ, CREK1D, and GCKP4, developed specifically to solve chemical kinetic rate equations. The accuracy study is made by applying these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes, the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, which measures the average error incurred in solving the complete problem, is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
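As a present-day illustration of the kind of stiff-kinetics integration these codes perform, a sketch using SciPy's BDF integrator (the same family of backward differentiation formula methods used by LSODE) on the classic Robertson problem; this is an illustrative stand-in, not one of the study's test problems:

```python
# Stiff chemical kinetics (Robertson problem) with a BDF integrator.
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
             0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
             3e7 * y2 ** 2]

sol = solve_ivp(robertson, (0, 1e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])  # near-equilibrium composition
```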
Wang, Dean; Jayakar, Rohit G; Leong, Natalie L; Leathers, Michael P; Williams, Riley J; Jones, Kristofer J
2017-04-01
Objective Patients commonly use the Internet to obtain their health-related information. The purpose of this study was to investigate the quality, accuracy, and readability of online patient resources for the management of articular cartilage defects. Design Three search terms ("cartilage defect," "cartilage damage," "cartilage injury") were entered into 3 Internet search engines (Google, Bing, Yahoo). The first 25 websites from each search were collected and reviewed. The quality and accuracy of online information were independently evaluated by 3 reviewers using predetermined scoring criteria. The readability was evaluated using the Flesch-Kincaid (FK) grade score. Results Fifty-three unique websites were evaluated. Quality ratings were significantly higher in websites with a FK score >11 compared to those with a score of ≤11 (P = 0.021). Only 10 websites (19%) differentiated between focal cartilage defects and diffuse osteoarthritis. Of these, 7 (70%) were elicited using the search term "cartilage defect" (P = 0.038). The average accuracy of the websites was high (11.7 out of maximum 12), and the average FK grade level (13.4) was several grades higher than the recommended level for readable patient education material (eighth grade level). Conclusions The quality and readability of online patient resources for articular cartilage defects favor those with a higher level of education. Additionally, the majority of these websites do not distinguish between focal chondral defects and diffuse osteoarthritis, which can fail to provide appropriate patient education and guidance for available treatment. Clinicians should help guide patients toward high-quality, accurate, and readable online patient education material.
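A minimal sketch of the Flesch-Kincaid grade computation referenced above, using a naive vowel-group syllable counter (real readability tools use more careful syllable estimation):

```python
# FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(fk_grade("Cartilage defects differ from diffuse osteoarthritis. "
               "Appropriate treatment depends on an accurate diagnosis."))
```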
Njeh, Christopher F; Salmon, Howard W; Schiller, Claire
2017-01-01
Intensity-modulated radiation therapy (IMRT) delivery using the "step-and-shoot" technique on a Varian C-Series linear accelerator (linac) is influenced by the communication frequency between the multileaf collimator and linac controllers. Hence, the dose delivery accuracy is affected by the dose rate. Our aim was to quantify the impact of using two dose rates on plan quality assurance (QA). Twenty IMRT patients were selected for this study. The plan QA was measured at two different dose rates. A gamma analysis was performed, and the effect of the degree of plan modulation on the QA pass rate was also evaluated in terms of average monitor units per segment (MU/segment) and the total number of segments. Mean percentage gamma pass rates of 94.9% and 93.5% were observed for the 300 MU/min and 600 MU/min dose rates, respectively. There was a significant (P = 0.001) decrease in percentage gamma pass rate when the dose rate was increased from 300 MU/min to 600 MU/min. There was a weak but significant association between the percentage pass rate at both dose rates and the total number of segments. The total number of MU was significantly correlated with the total number of segments (r = 0.59). We found a positive correlation between the percentage pass rate and mean MU/segment, r = 0.52 and r = 0.57 for 300 MU/min and 600 MU/min, respectively. IMRT delivery using the step-and-shoot technique on the Varian 2300CD is impacted by the dose rate and the total number of segments.
NASA Astrophysics Data System (ADS)
Molotch, N. P.; Painter, T. H.; Bales, R. C.; Dozier, J.
2003-04-01
In this study, an accumulated net radiation / accumulated degree-day index snowmelt model was coupled with remotely sensed snow covered area (SCA) data to simulate snow cover depletion and reconstruct maximum snow water equivalent (SWE) in the 19.1-km² Tokopah Basin of the Sierra Nevada, California. Simple net radiation snowmelt models are attractive for operational snowmelt runoff forecasts as they are computationally inexpensive and have low input requirements relative to physically based energy balance models. The objective of this research was to assess the accuracy of a simple net radiation snowmelt model in a topographically heterogeneous alpine environment. Previous applications of net radiation / temperature index snowmelt models have not been evaluated in alpine terrain with intensive field observations of SWE. Solar radiation data from two meteorological stations were distributed using the topographic radiation model TOPORAD. Relative humidity and temperature data were distributed based on the lapse rate calculated between three meteorological stations within the basin. Fractional SCA data from the Landsat Enhanced Thematic Mapper (5 acquisitions) and the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) (2 acquisitions) were used to derive daily SCA using a linear regression between acquisition dates. Grain size data from AVIRIS (4 acquisitions) were used to infer snow surface albedo and interpolated linearly with time to derive daily albedo values. Modeled daily snowmelt rates for each 30-m pixel were scaled by the SCA and integrated over the snowmelt season to obtain estimates of maximum SWE accumulation. Snow surveys, consisting of an average of 335 depth measurements and 53 density measurements during April, May, and June 1997, were interpolated using a regression tree / co-kriging model with independent variables of average incoming solar radiation, elevation, slope, and maximum upwind slope. The basin was clustered into 7 elevation / average-solar-radiation zones for SWE accuracy assessment. Model simulations performed poorly in estimating the spatial distribution of SWE. Basin clusters where the solar radiative flux dominated the melt flux were simulated more accurately than those dominated by the turbulent fluxes or the longwave radiative flux.
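The melt-scaling-and-integration step can be summarized compactly. A conceptual sketch, with hypothetical restricted coefficients `a_r` and `a_t` and placeholder forcing (none of the study's parameter values are reproduced):

```python
# Conceptual SWE reconstruction: daily melt = radiation term + degree-day
# term, scaled by fractional SCA and summed over the depletion period.
import numpy as np

def reconstruct_max_swe(net_radiation, temperature, sca, a_r=1e-4, a_t=1.5):
    """Hypothetical coefficients: a_r (mm per J m-2 day-1), a_t (mm per degC-day)."""
    degree_days = np.maximum(temperature, 0.0)
    melt = a_r * net_radiation + a_t * degree_days
    return np.sum(melt * sca)

days = 90
swe = reconstruct_max_swe(net_radiation=np.full(days, 150e3),      # placeholder forcing
                          temperature=np.random.uniform(-2, 8, days),
                          sca=np.linspace(1.0, 0.0, days))          # depleting snow cover
print(swe, "mm of reconstructed SWE (illustrative)")
```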
Arapiraca, A F C; Jonsson, Dan; Mohallem, J R
2011-12-28
We report an upgrade of the Dalton code to include post-Born-Oppenheimer nuclear mass corrections in the calculations of (ro-)vibrational averages of molecular properties. These corrections are necessary to achieve an accuracy of 10⁻⁴ debye in the calculations of isotopic dipole moments. Calculations at the self-consistent-field level attain this accuracy, while numerical instabilities compromise correlated calculations. Applications to HD, ethane, and ethylene isotopologues are implemented, all of them approaching the experimental values.
Umut, İlhan; Çentik, Güven
2016-01-01
The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. This study aimed to detect periodic leg movement (PLM) in sleep using the channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of the PSG records. The software utilizes machine learning algorithms, statistical methods, and DSP methods. In order to classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error (RMSE = 0.3705). The results showed that PLM can be classified with high accuracy (91.87%) without a leg EMG record being present. PMID:27213008
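A hedged sketch of this kind of classifier comparison with scikit-learn; the feature matrix and labels below are placeholders, not the study's PSG-derived data or tuning:

```python
# Comparing KNN and MLP classifiers by cross-validated accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                 # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder labels

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("MLP", MLPClassifier(max_iter=1000, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(name, scores.mean())
```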
Using rainfall radar data to improve interpolated maps of dose rate in the Netherlands.
Hiemstra, Paul H; Pebesma, Edzer J; Heuvelink, Gerard B M; Twenhöfel, Chris J W
2010-12-01
The radiation monitoring network in the Netherlands is designed to detect and track increased radiation levels, dose rate more specifically, in 10-minute intervals. The network consists of 153 monitoring stations. Washout of radon progeny by rainfall is the most important cause of natural variations in dose rate. The increase in dose rate at a given time is a function of the amount of progeny decaying, which in turn is a balance between deposition of progeny by rainfall and radioactive decay. The increase in progeny is closely related to the average rainfall intensity over the last 2.5 h. We included decay of progeny by using a weighted average rainfall intensity, where the weight decreases back in time; the decrease in weight is related to the half-life of radon progeny. In this paper we show, for a rainstorm on the 20th of July 2007, that the weighted average rainfall intensity estimated from rainfall radar images, collected every 5 min, performs much better as a predictor of increases in dose rate than the non-averaged rainfall intensity. In addition, we show through cross-validation that including the weighted average rainfall intensity in an interpolated map using universal kriging (UK) does not necessarily lead to a more accurate map. This might be attributed to the high density of monitoring stations in comparison to the spatial extent of a typical rain event. Reducing the network density improved the accuracy of the map when universal kriging was used instead of ordinary kriging (no trend). Consequently, in a less dense network the positive influence of including a trend is likely to increase. Furthermore, we suspect that UK better reproduces the sharp boundaries present in rainfall maps, but that the lack of short-distance monitoring station pairs prevents cross-validation from revealing this effect. Copyright © 2010 Elsevier B.V. All rights reserved.
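A minimal sketch of such a decay-weighted average; the ~40 min effective half-life used below is an illustrative assumption, not a value taken from the paper:

```python
# Decay-weighted rainfall average: 5-min radar intensities over the
# preceding 2.5 h, weighted by exp(-lambda * age) with lambda set from
# an assumed effective radon-progeny half-life.
import numpy as np

def weighted_rain_intensity(intensity_5min, half_life_min=40.0):
    """intensity_5min: most recent 30 samples (2.5 h), oldest first."""
    ages = np.arange(len(intensity_5min))[::-1] * 5.0   # minutes before now
    weights = np.exp(-np.log(2) * ages / half_life_min)
    return np.sum(weights * intensity_5min) / np.sum(weights)

rain = np.random.gamma(2.0, 1.0, size=30)   # placeholder radar intensities
print(weighted_rain_intensity(rain))
```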
Hirose, Tomohiro; Nitta, Norihisa; Shiraishi, Junji; Nagatani, Yukihiro; Takahashi, Masashi; Murata, Kiyoshi
2008-12-01
The aim of this study was to evaluate the usefulness of computer-aided diagnosis (CAD) software for the detection of lung nodules on multidetector-row computed tomography (MDCT) in terms of improvement in radiologists' diagnostic accuracy in detecting lung nodules, using jackknife free-response receiver operating characteristic (JAFROC) analysis. Twenty-one patients (6 without and 15 with lung nodules) were selected randomly from 120 consecutive thoracic computed tomographic examinations. The gold standard for the presence or absence of nodules in the observer study was determined by consensus of two radiologists. Six expert radiologists participated in a free-response receiver operating characteristic study for the detection of lung nodules on MDCT, in which cases were interpreted first without and then with the output of the CAD software. Radiologists were asked to indicate the locations of lung nodule candidates on the monitor with their confidence ratings for the presence of lung nodules. The standalone performance of the CAD software indicated that the sensitivity in detecting lung nodules was 71.4%, with 0.95 false-positive results per case. When radiologists used the CAD software, the average sensitivity improved from 39.5% to 81.0%, with an increase in the average number of false-positive results from 0.14 to 0.89 per case. The average figure-of-merit values for the six radiologists were 0.390 without and 0.845 with the output of the CAD software, and there was a statistically significant difference (P < .0001) using the JAFROC analysis. The CAD software for the detection of lung nodules on MDCT has the potential to assist radiologists by increasing their accuracy.
Measurement of the magnetotail reconnection rate
NASA Astrophysics Data System (ADS)
Blanchard, G. T.; Lyons, L. R.; de la Beaujardière, O.; Doe, R. A.; Mendillo, M.
1996-07-01
A technique to measure the magnetotail reconnection rate from the ground is described and applied to 71 hours of measurements from 20 nights. The reconnection rate is obtained from the ionospheric flow across the polar cap boundary in the frame of reference of the boundary, measured by the Sondrestrom incoherent scatter radar. For our measurements, the polar cap boundary is located using 6300 Å auroral emissions and E region electron density. The average experimental uncertainty of the reconnection rate measurement is 11.6 mV m⁻¹ in the ionospheric electric field. By using a large data set, we obtain the dependence of the reconnection rate on magnetic local time, the interplanetary magnetic field, and substorm activity with much higher accuracy. We find that two thirds of the average polar cap potential drop occurs over the 4-hour segment of the separatrix centered on 2330 MLT; that the linear correlation between the reconnection electric field and the half-wave rectified dawn-dusk solar wind electric field VBs peaks between 1.0 and 1.5 hours, with a maximum linear correlation coefficient of 0.46 at 70 min; and that, following substorm expansion phase onset, the reconnection electric field becomes larger than the experimental uncertainty, with an average delay of 23 min. The 70-min delay of the reconnection rate with respect to VBs is a typical convection time for a flux tube across the polar cap. This result indicates that reconnection in the magnetotail is influenced by the solar wind electric field VBs on the field line being reconnected.
An approach to emotion recognition in single-channel EEG signals: a mother child interaction
NASA Astrophysics Data System (ADS)
Gómez, A.; Quintero, L.; López, N.; Castro, J.
2016-04-01
In this work, we present a first approach to emotion recognition from single-channel EEG signals recorded in a developmental psychology experiment with four (4) mother-child dyads. The single-channel EEG signals are analyzed and processed using several window sizes by performing a statistical analysis over features in the time and frequency domains. Finally, a neural network obtained an average classification accuracy of 99% for two emotional states, happiness and sadness.
Mugler, Emily M; Ruf, Carolin A; Halder, Sebastian; Bensch, Michael; Kubler, Andrea
2010-12-01
An electroencephalographic (EEG) brain-computer interface (BCI) internet browser was designed and evaluated with 10 healthy volunteers and three individuals with advanced amyotrophic lateral sclerosis (ALS), all of whom were given tasks to execute on the internet using the browser. Participants with ALS achieved an average accuracy of 73% and a corresponding information transfer rate (ITR) of 8.6 bits/min; healthy participants with no prior BCI experience achieved over 90% accuracy and an ITR of 14.4 bits/min. We define additional criteria for unrestricted internet access for the evaluation of the presented and future internet browsers, and we provide a review of the existing browsers in the literature. The P300-based browser provides unrestricted access and enables free web surfing for individuals with paralysis.
Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi
2016-11-01
Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients, measured using polysomnography, correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.
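A hedged sketch of fitting such a multivariable linear model by least squares; the synthetic data and coefficients below are placeholders, not the paper's model:

```python
# Least-squares fit of CO against lung-to-finger circulation time (LFCT),
# age and overnight heart rate.
import numpy as np

def fit_co_model(lfct, age, heart_rate, co_invasive):
    X = np.column_stack([np.ones_like(lfct), lfct, age, heart_rate])
    beta, *_ = np.linalg.lstsq(X, co_invasive, rcond=None)
    return beta  # intercept plus weights for LFCT, age, heart rate

rng = np.random.default_rng(1)
lfct = rng.uniform(10, 40, 25)
age = rng.uniform(40, 85, 25)
hr = rng.uniform(50, 90, 25)
co = 8.0 - 0.1 * lfct - 0.02 * age + 0.01 * hr + rng.normal(0, 0.3, 25)  # synthetic
print(fit_co_model(lfct, age, hr, co))
```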
Precise Point Positioning with Partial Ambiguity Fixing.
Li, Pan; Zhang, Xiaohong
2015-06-10
Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which elevation and standard deviation criteria are first used to remove the low-precision ambiguity estimates from AR. Subsequently, the success rate and ratio-test are used simultaneously in an iterative process to increase the possibility of finding a subset of decorrelated ambiguities which can be fixed with high confidence. One can apply the proposed PAR method to try to achieve an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate that the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. Moreover, using the PAR method, the average fixing rate can be increased from 83.5% to 98.2% for static PPP and from 80.1% to 95.2% for kinematic PPP. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to the higher fixing rate.
Ship navigation using Navstar GPS - An application study
NASA Technical Reports Server (NTRS)
Mohan, S. N.
1982-01-01
Ocean current measurement applications in physical oceanography require knowledge of inertial ship velocity to a precision of 1-2 cm/sec over a typical five-minute averaging interval. The navigation accuracy must be commensurate with the data precision obtainable from shipborne acoustic profilers used in sensing ocean currents. The Navstar Global Positioning System is viewed as a step toward simpler user equipment, extended coverage availability, and enhanced performance accuracy and reliability relative to the existing systems, namely, Loran-C, Transit, and Omega. Error analyses have shown the possibility of attaining the 1-2 cm/sec accuracy during active GPS coverage at a data rate of four position fixes per minute under varying sea states. This paper presents the results of data validation exercises leading to the design of an experiment at sea for deployment of both a GPS y-set and a direct Doppler measurement system as the autonomous navigation system used in conjunction with an acoustic Doppler as the sensor for ocean current measurement.
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
Background All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. Results The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 AM and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. Conclusions This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study are limited, and are consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general. PMID:20144194
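A minimal sketch of leave-one-out evaluation of a naive Bayes classifier in scikit-learn; the composition-style features and labels are placeholders, not the study's sequence encoding:

```python
# Leave-one-out cross validation of a naive Bayes classifier.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X = rng.random((143, 20))        # placeholder composition features
y = rng.integers(0, 2, 143)      # amyloidogenic vs non-amyloidogenic (placeholder)

scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print("LOO accuracy:", scores.mean())
```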
Lindsay, Grace M; Niven, Kate A; Brodie, Eric E; Gaw, Allan; Belcher, Philip R
2009-02-01
The accuracy with which patients recall their cardiac symptoms prior to aorta-coronary artery bypass grafting is assessed approximately one year after surgery, together with patient-related factors potentially influencing accuracy of recall. This is a novel investigation of patients' ratings of preoperative symptom severity before and approximately one year following aorta-coronary artery bypass grafting. Patients undergoing aorta-coronary artery bypass grafting (n = 208) were recruited preoperatively, and 177 of these were successfully followed up at 16.4 (SD 2.1) months after surgery and asked to describe current and recalled preoperative symptoms using a 15-point numerical scale. Accuracy of recall was measured and correlated (Pearson's correlation) with current and past symptoms, health-related quality of life and coronary artery disease risk factors. Hypothesis tests used Student's t-test and the chi-squared test. Angina and breathlessness scores were recalled accurately by 16.9% and 14.1% of patients respectively, while 59% and 58% were inaccurate by more than one point. Although the mean preoperative and recalled scores for severity of both angina and breathlessness were not statistically different, patients who recalled their preoperative scores most accurately had, on average, significantly higher preoperative scores than those with less accurate recall. Patients whose angina and breathlessness symptoms were relieved by operation had significantly better accuracy of recall than patients with greater levels of symptoms postoperatively. Patients' ratings of preoperative symptom severity before and one year following aorta-coronary artery bypass grafting were completely accurate in approximately one sixth of patients, with similar proportions of the remaining patients overestimating and underestimating symptoms. The extent to which angina and breathlessness were relieved by operation was a significant factor in improving accuracy of recall. Factors associated with accuracy of recall of symptoms provide useful insights for clinicians when interpreting patients' views of the effectiveness of aorta-coronary artery bypass grafting for the relief of symptoms associated with coronary heart disease.
Automatic system for radar echoes filtering based on textural features and artificial intelligence
NASA Astrophysics Data System (ADS)
Hedir, Mehdia; Haddad, Boualem
2017-10-01
Among the most popular Artificial Intelligence (AI) techniques, Artificial Neural Networks (ANN) and Support Vector Machines (SVM) were selected to process Ground Echoes (GE) in meteorological radar images taken from Setif (Algeria) and Bordeaux (France), sites with different climates and topographies. To achieve this task, the AI techniques were combined with textural approaches. We used the Gray Level Co-occurrence Matrix (GLCM) and Completed Local Binary Pattern (CLBP); both methods are widely used in image analysis. The obtained results show the efficiency of texture in preserving precipitation echoes on both sites, with an accuracy of 98% for Bordeaux and 95% for Setif regardless of the AI technique used. 98% of GE are suppressed with SVM, a rate that outperforms the ANN. The CLBP approach associated with SVM eliminates 98% of GE and preserves precipitation echoes better on the Bordeaux site than on Setif's, while it exhibits lower accuracy with ANN. The SVM classifier is well suited to the proposed application, since the average filtering rate is 95-98% with texture and 92-93% with CLBP. These approaches also remove Anomalous Propagations (APs), with a best accuracy of 97.15% using texture and SVM. In fact, textural features associated with AI techniques are an efficient tool for incoherent radars to suppress spurious echoes.
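A hedged sketch of GLCM texture-feature extraction of the kind described above, using scikit-image (`graycomatrix`/`graycoprops` in scikit-image ≥ 0.19); the window size, offsets, and radar data are assumptions:

```python
# GLCM texture features per image window for a downstream classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window):
    glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

window = np.random.randint(0, 256, (32, 32), dtype=np.uint8)  # placeholder patch
print(glcm_features(window))
```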
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of ∼3.
Gas-liquid Phase Distribution and Void Fraction Measurements Using the MRI
NASA Technical Reports Server (NTRS)
Daidzic, N. E.; Schmidt, E.; Hasan, M. M.; Altobelli, S.
2004-01-01
We used a permanent-magnet MRI system to estimate the integral and spatially and/or temporally resolved void-fraction distributions and flow patterns in gas-liquid two-phase flows. Air was introduced at the bottom of a stagnant liquid column using an accurate and programmable syringe pump. Air flow rates were varied between 1 and 200 ml/min. The cylindrical non-conducting test tube in which the two-phase flow was measured was placed in a 2.67 kGauss MRI with MRT spectrometer/imager. A roughly linear relationship was obtained between the integral void fraction, computed by volume-averaging the spatially resolved signals, and the air flow rate in the upward direction. The time-averaged, spatially resolved void fraction has also been obtained for the quasi-steady flow of air in a stagnant liquid column. No great accuracy is claimed, as this was an exploratory proof-of-concept type of experiment. Preliminary results show that MRI, a non-invasive and non-intrusive experimental technique, can indeed provide a wealth of different qualitative and quantitative data and is especially well suited for averaged transport processes in adiabatic and diabatic multi-phase and/or multi-component flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lü, X.; Schrottke, L.; Grahn, H. T.
We present scattering rates for electrons at longitudinal optical phonons within a model completely formulated in the Fourier domain. The total intersubband scattering rates are obtained by averaging over the intrasubband electron distributions. The rates consist of the Fourier components of the electron wave functions and a contribution depending only on the intersubband energies and the intrasubband carrier distributions. The energy-dependent part can be reproduced by a rational function, which allows for the separation of the scattering rates into a dipole-like contribution, an overlap-like contribution, and a contribution which can be neglected for low and intermediate carrier densities of the initial subband. For a balance between accuracy and computation time, the number of Fourier components can be adjusted. This approach facilitates an efficient design of complex heterostructures with realistic, temperature- and carrier-density-dependent rates.
Visual Perception Based Rate Control Algorithm for HEVC
NASA Astrophysics Data System (ADS)
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology to alleviate the contradiction between video quality and the limited encoding resources during video communication. However, the rate control benchmark algorithm of HEVC ignores subjective visual perception. For key focus regions, the bit allocation at the largest coding unit (LCU) level is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then, λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bit rate accuracy, compared with the HEVC reference (HM15.0). The proposed algorithm is devoted to improving subjective video quality under various video applications.
Heart rate estimation from FBG sensors using cepstrum analysis and sensor fusion.
Zhu, Yongwei; Fook, Victor Foo Siang; Jianzhong, Emily Hao; Maniyeri, Jayachandran; Guan, Cuntai; Zhang, Haihong; Jiliang, Eugene Phua; Biswas, Jit
2014-01-01
This paper presents a method of estimating heart rate from arrays of fiber Bragg grating (FBG) sensors embedded in a mat. A cepstral-domain signal analysis technique is proposed to characterize ballistocardiogram (BCG) signals. With this technique, the average heart beat interval can be estimated by detecting the dominant peak in the cepstrum, and the signals of multiple sensors can be fused together to obtain a higher signal-to-noise ratio than any individual sensor. Experiments were conducted with 10 human subjects lying in 2 different postures on a bed. The heart rate estimated from the BCG was compared with ground-truth heart rate from ECG, and the mean estimation error obtained is below 1 beat per minute (BPM). The results show that the proposed fusion method can achieve promising heart rate measurement accuracy and robustness against various sensor contact conditions.
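A minimal sketch of the cepstral beat-interval idea; the sampling rate, the 40-120 bpm search band, and the synthetic BCG-like signal are illustrative assumptions:

```python
# Real cepstrum of a BCG segment, searched for its dominant peak within
# a plausible quefrency range corresponding to heart beat intervals.
import numpy as np

def heart_rate_cepstrum(signal, fs):
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
    lo, hi = int(fs * 60 / 120), int(fs * 60 / 40)   # 0.5 s to 1.5 s periods
    peak = lo + np.argmax(cepstrum[lo:hi])
    return 60.0 * fs / peak                          # beats per minute

fs = 100.0
t = np.arange(0, 30, 1 / fs)
# Synthetic BCG-like signal at 1.2 Hz (~72 bpm) with harmonics plus noise:
bcg = sum(a * np.sin(2 * np.pi * 1.2 * k * t)
          for k, a in [(1, 1.0), (2, 0.6), (3, 0.3)])
bcg += 0.2 * np.random.randn(t.size)
print(heart_rate_cepstrum(bcg, fs))
```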
Kassab, Safa; Pietrzak, William S
2014-01-01
Traditional manual instruments for total knee arthroplasty are associated with a malalignment rate of nearly 30%. Patient-specific positioning guides, developed to help address alignment, may also influence other intraoperative factors. This study compared a consecutive series of 270 Vanguard total knee replacements performed with Signature patient-specific positioning guides (study group) to a consecutive series of 595 similar knee replacements performed with manual instrumentation (control group). The study group averaged 16.7 fewer minutes in the operating room (p < .001), utilized tibial inserts that averaged 0.4 mm thinner with a smaller proportion of "thick" tibial inserts (14-18 mm) (p < .001), and required fewer transfusions (p = .022). The Signature-derived surgical plan accurately predicted correct femoral and tibial component sizes in 86.3% and 70.3% of the cases, respectively. These rates increased to 99.3% and 99.2%, respectively, for accuracy to within one size of the surgical plan, similar to published values for manual instrumentation.
Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.
Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J
2018-06-12
Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
Application of a single-flicker online SSVEP BCI for spatial navigation.
Chen, Jingjing; Zhang, Dan; Engel, Andreas K; Gong, Qin; Maye, Alexander
2017-01-01
A promising approach for brain-computer interfaces (BCIs) employs the steady-state visual evoked potential (SSVEP) for extracting control information. The main advantages of these SSVEP BCIs are a simple and low-cost setup, little effort to adjust the system parameters to the user, and comparatively high information transfer rates (ITR). However, traditional frequency-coded SSVEP BCIs require the user to gaze directly at the selected flicker stimulus, which is liable to cause fatigue or even photic epileptic seizures. The spatially coded SSVEP BCI we present in this article addresses this issue. It uses a single flicker stimulus that always appears in the extrafoveal field of view, yet it provides the user with four control channels. We demonstrate the embedding of this novel SSVEP stimulation paradigm in the user interface of an online BCI for navigating a 2-dimensional computer game. Offline analysis of the training data reveals an average classification accuracy of 96.9±1.64%, corresponding to an information transfer rate of 30.1±1.8 bits/min. In online mode, the average classification accuracy reached 87.9±11.4%, which resulted in an ITR of 23.8±6.75 bits/min. We did not observe a strong relation between a subject's offline and online performance. Analysis of the online performance over time shows that users can reliably control the new BCI paradigm with stable performance over at least 30 minutes of continuous operation.
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
A method for mapping corn using the US Geological Survey 1992 National Land Cover Dataset
Maxwell, S.K.; Nuckols, J.R.; Ward, M.H.
2006-01-01
Long-term exposure to elevated nitrate levels in community drinking water supplies has been associated with an elevated risk of several cancers, including non-Hodgkin's lymphoma, colon cancer, and bladder cancer. To estimate human exposure to nitrate, specific crop type information is needed, as fertilizer application rates vary widely by crop type. Corn requires the highest application of nitrogen fertilizer among crops grown in the Midwest US. We developed a method to refine the US Geological Survey National Land Cover Dataset (NLCD) (including the map and original Landsat images) to distinguish corn from other crops. Overall average agreement between the resulting corn/other-row-crops class and ground reference data was a kappa coefficient of 0.79, with individual Landsat images ranging from 0.46 to 0.93. The highest accuracies occurred in regions where corn was the single dominant crop (greater than 80.0%) and where crop vegetation conditions at the time of image acquisition were optimal for separating corn from all other crops. Factors that resulted in lower accuracies included the accuracy of the NLCD map, the accuracy of corn areal estimates, crop mixture, crop condition at the time of Landsat overpass, and Landsat scene anomalies.
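The kappa coefficient quoted above corrects raw agreement for chance agreement; a minimal sketch (illustrative labels only):

```python
# Cohen's kappa between mapped classes and ground reference labels.
from sklearn.metrics import cohen_kappa_score

mapped    = ["corn", "corn", "other", "corn", "other", "other", "corn"]
reference = ["corn", "other", "other", "corn", "other", "corn", "corn"]
print(cohen_kappa_score(mapped, reference))
```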
Käthner, Ivo; Halder, Sebastian; Hintermüller, Christoph; Espinosa, Arnau; Guger, Christoph; Miralles, Felip; Vargiu, Eloisa; Dauwalder, Stefan; Rafael-Palou, Xavier; Solà, Marc; Daly, Jean M.; Armstrong, Elaine; Martin, Suzanne; Kübler, Andrea
2017-01-01
Current brain-computer interface (BCI) software is often tailored to the needs of scientists and technicians and is therefore complex, in order to allow for versatile use. To facilitate home use of BCIs, a multifunctional P300 BCI with a graphical user interface intended for non-expert set-up and control was designed and implemented. The system includes applications for spelling, web access, entertainment, artistic expression and environmental control. In addition to new software, it also includes new hardware for the recording of electroencephalogram (EEG) signals. The EEG system consists of a small and wireless amplifier attached to a cap that can be equipped with gel-based or dry contact electrodes. The system was systematically evaluated with a healthy sample, and targeted end users of BCI technology, i.e., people with a varying degree of motor impairment, tested the BCI in a series of individual case studies. Usability was assessed in terms of effectiveness, efficiency and satisfaction. Feedback from users was gathered with structured questionnaires. Two groups of healthy participants completed an experimental protocol with the gel-based and the dry contact electrodes (N = 10 each). The results demonstrated that all healthy participants gained control over the system and achieved satisfactory to high accuracies with both gel-based and dry electrodes (average error rates of 6 and 13%). Average satisfaction ratings were high, but certain aspects of the system, such as the wearing comfort of the dry electrodes, the design of the cap, and speed (in both groups), were criticized by some participants. Six potential end users tested the system during supervised sessions. The achieved accuracies varied greatly, from no control to high control with accuracies comparable to those of healthy volunteers. Satisfaction ratings of the two end users who gained control of the system were lower than those of healthy participants. The advantages and disadvantages of the BCI and its applications are discussed, and suggestions are presented for improvements to pave the way for user-friendly BCIs intended to be used as assistive technology by persons with severe paralysis. PMID:28588442
Monitoring Chewing and Eating in Free-Living Using Smart Eyeglasses.
Zhang, Rui; Amft, Oliver
2018-01-01
We propose to 3-D-print personally fitted, regular-looking smart eyeglasses frames equipped with bilateral electromyography (EMG) recording to monitor temporalis muscle activity for automatic dietary monitoring. Personal fitting ensures electrode-skin contact at the temple ear-bend and temple-end positions. We evaluated the smart monitoring eyeglasses in in-lab and free-living studies of food chewing and eating event detection with ten participants. The in-lab study was designed to explore three natural food hardness levels and to determine the parameters of an energy-based chewing cycle detection. Our free-living study investigated whether chewing monitoring and eating event detection using smart eyeglasses are feasible in free-living conditions. An eating event detection algorithm was developed to determine intake activities based on the estimated chewing rate. Results showed an average food hardness classification accuracy of 94% and chewing cycle detection precision and recall above 90% for the in-lab study and above 77% for the free-living study, covering 122 hours of recordings. Eating detection identified the 44 eating events with an average accuracy above 95%. We conclude that smart eyeglasses are suitable for monitoring chewing and eating events in free-living conditions and could even provide further insights into the wearer's natural chewing patterns.
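A minimal sketch of an energy-based chewing cycle detector of the kind the abstract describes, in Python; the 100-ms window, the threshold, and the function name are illustrative assumptions rather than the authors' implementation:

    import numpy as np

    def chewing_cycle_onsets(emg, fs, threshold):
        # Short-time energy of the EMG signal over a sliding 100-ms window
        # (window length and threshold are hypothetical parameters).
        win = max(1, int(0.1 * fs))
        energy = np.convolve(emg ** 2, np.ones(win) / win, mode="same")
        above = energy > threshold
        # Rising edges of the thresholded energy mark candidate chew-cycle onsets.
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1

The estimated chewing rate, and hence eating event detection, could then be derived from the spacing of the returned onset indices.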
NASA Astrophysics Data System (ADS)
Kale, Mandar; Mukhopadhyay, Sudipta; Dash, Jatindra K.; Garg, Mandeep; Khandelwal, Niranjan
2016-03-01
Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High-resolution computed tomography (HRCT) is considered the best imaging technique for the analysis of different pulmonary disorders. HRCT findings can be categorised into several patterns, viz. consolidation, emphysema, ground-glass opacity, nodular, normal, etc., based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature. In such a scenario, a computer-aided diagnosis system could help clinicians identify the patterns. Several approaches have been proposed for the classification of ILD patterns, including the computation of textural features and the training/testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by a performance evaluation of ANN and SVM classifiers in terms of average accuracy. It is found that the average classification accuracy of the SVM is greater than that of the ANN when trained and tested on the same database. The investigation continued further to test the variation in classifier accuracy when training and testing are performed on alternate databases, and when the classifiers are trained and tested on a database formed by merging samples from the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing, respectively. There is a significant improvement in average accuracy when the classifiers are trained and tested on the merged database. This implies a dependency of classification accuracy on the training data. It is observed that the SVM outperforms the ANN when the same database is used for training and testing.
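As a rough illustration of the pipeline the abstract evaluates (wavelet texture features followed by SVM or ANN classification), the sketch below uses PyWavelets and scikit-learn; the patch data, wavelet family, and decomposition level are assumptions, not the paper's settings:

    import numpy as np
    import pywt  # PyWavelets
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def wavelet_energy_features(patch, wavelet="db4", level=2):
        # 2-D discrete wavelet decomposition of an HRCT region of interest;
        # the mean energy of each sub-band serves as one texture feature.
        coeffs = pywt.wavedec2(patch, wavelet=wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]          # approximation band
        for details in coeffs[1:]:                 # (cH, cV, cD) per level
            feats.extend(np.mean(d ** 2) for d in details)
        return np.asarray(feats)

    # Hypothetical usage: patches is a list of 2-D ROI arrays, labels their classes.
    # X = np.stack([wavelet_energy_features(p) for p in patches])
    # print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())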
A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.
Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang
2009-01-01
This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
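The accuracy metric quoted (3-D reconstruction error against a known ground truth) reduces to a mean point-to-point distance; a minimal sketch, assuming paired (N, 3) coordinate arrays in millimeters:

    import numpy as np

    def mean_reconstruction_error(p_recon, p_truth):
        # Mean Euclidean distance between reconstructed and ground-truth
        # 3-D point positions, in the units of the inputs (here mm).
        diffs = np.asarray(p_recon) - np.asarray(p_truth)
        return np.linalg.norm(diffs, axis=1).mean()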
Dai, Jiewen; Wu, Jinyang; Wang, Xudong; Yang, Xudong; Wu, Yunong; Xu, Bing; Shi, Jun; Yu, Hongbo; Cai, Min; Zhang, Wenbin; Zhang, Lei; Sun, Hao; Shen, Guofang; Zhang, Shilei
2016-01-01
Numerous problems regarding craniomaxillofacial navigation surgery are not well understood. In this study, we performed a double-center clinical study to quantitatively evaluate the characteristics of our navigation system and our experience in craniomaxillofacial navigation surgery. Fifty-six patients with craniomaxillofacial disease were included and randomly divided into experimental (using our AccuNavi-A system) and control (using the Stryker system) groups to compare the surgical effects. The results revealed that the average pre-operative planning time was 32.32 mins vs 29.74 mins in the experimental and control groups, respectively (p > 0.05). The average operative time was 295.61 mins vs 233.56 mins (p > 0.05). The point registration orientation accuracy was 0.83 mm vs 0.92 mm. The maximal average preoperative navigation orientation accuracy was 1.03 mm vs 1.17 mm. The maximal average persistent navigation orientation accuracy was 1.15 mm vs 0.09 mm. The maximal average navigation orientation accuracy after registration recovery was 1.15 mm vs 1.39 mm in the experimental and control groups, respectively. All patients healed, and their function and profile improved. These findings demonstrate that although surgeons should consider patients' time and monetary costs, our qualified navigation surgery system and experience can offer an accurate guide during a variety of craniomaxillofacial surgeries. PMID:27305855
In-use activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks.
Sandhu, Gurdas S; Frey, H Christopher; Bartelt-Hunt, Shannon; Jones, Elizabeth
2015-03-01
The objectives of this study were to quantify real-world activity, fuel use, and emissions for heavy-duty diesel roll-off refuse trucks; evaluate the contribution of duty cycles and emissions controls to variability in cycle-average fuel use and emission rates; quantify the effect of vehicle weight on fuel use and emission rates; and compare empirical cycle-average emission rates with the U.S. Environmental Protection Agency's MOVES emission factor model predictions. Measurements were made at 1 Hz on six trucks of model years 2005 to 2012, using onboard systems. The trucks traveled 870 miles, had an average speed of 16 mph, and collected 165 tons of trash. The average fuel economy was 4.4 mpg, which is approximately twice previously reported values for residential trash collection trucks. On average, 50% of time is spent idling and about 58% of emissions occur in urban areas. Newer trucks with selective catalytic reduction and a diesel particulate filter had NOx and PM cycle-average emission rates that were 80% and 95% lower, respectively, than older trucks without these controls. On average, the combined can and trash weight was about 55% of chassis weight. The marginal effect of vehicle weight on fuel use and emissions is highest at low loads and decreases as load increases. Among 36 cycle-average rates (6 trucks × 6 cycles), MOVES-predicted values and estimates based on real-world data show similar relative trends. MOVES-predicted CO2 emissions are similar to real-world values, while NOx and PM emissions are, on average, 43% lower and 300% higher, respectively. The real-world data presented here can be used to estimate the benefits of replacing old trucks with new trucks. Further, the data can be used to improve emission inventories and model predictions. In-use measurements of the real-world activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks can be used to improve the accuracy of predictive models, such as MOVES, and of emissions inventories. Further, the activity data from this study can be used to generate more representative duty cycles for more accurate chassis dynamometer testing. Comparisons of old and new model year diesel trucks are useful in analyzing the effect of fleet turnover. The analysis of the effect of haul weight on fuel use can be used by fleet managers to optimize operations and reduce fuel cost.
2016-01-01
Time domain cyclic-selective mapping (TDC-SLM) reduces the peak-to-average power ratio (PAPR) in OFDM systems, but knowledge of the amounts of the cyclic shifts is required at the receiver to recover the transmitted signal. One of the critical issues of the SLM scheme is the transmission of side information (SI), which reduces throughput in wireless OFDM systems. The proposed scheme implements delayed correlation and matched filtering (DC-MF) to estimate the amounts of the cyclic shifts at the receiver. In the proposed scheme, the DC-MF is placed after the frequency-domain equalization (FDE) to improve the accuracy of the cyclic shift estimation. The accuracy rate of the proposed scheme reaches 100% at Eb/N0 = 5 dB, and the bit error rate (BER) improves by 0.2 dB compared with the conventional TDC-SLM. The BER performance of the proposed scheme is also better than that of the conventional TDC-SLM even when a nonlinear high-power amplifier is assumed. PMID:27752539
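One standard way to estimate a cyclic shift, which the abstract's DC-MF stage refines, is to locate the peak of a circular cross-correlation; this sketch is a generic formulation, not the paper's exact receiver:

    import numpy as np

    def estimate_cyclic_shift(rx, ref):
        # Circular cross-correlation computed via the FFT; the lag of the
        # correlation peak is taken as the cyclic shift estimate.
        corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref)))
        return int(np.argmax(np.abs(corr)))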
Validation of Contact-Free Sleep Monitoring Device with Comparison to Polysomnography.
Tal, Asher; Shinar, Zvika; Shaki, David; Codish, Shlomi; Goldbart, Aviv
2017-03-15
To validate a contact-free system designed to achieve maximal comfort during long-term sleep monitoring, together with high monitoring accuracy. We used a contact-free monitoring system (EarlySense, Ltd., Israel), comprising an under-the-mattress piezoelectric sensor and a smartphone application, to collect vital signs and analyze sleep. Heart rate (HR), respiratory rate (RR), body movement, and calculated sleep-related parameters from the EarlySense (ES) sensor were compared to data simultaneously generated by the gold standard, polysomnography (PSG). Subjects in the sleep laboratory underwent overnight technician-attended full PSG, whereas subjects at home were recorded for 1 to 3 nights with portable partial PSG devices. Data were compared epoch by epoch. A total of 63 subjects (85 nights) were recorded under a variety of sleep conditions. Compared to PSG, the contact-free system showed similar values for average total sleep time (TST), % wake, % rapid eye movement, and % non-rapid eye movement sleep, with 96.1% and 93.3% accuracy of continuous measurement of HR and RR, respectively. We found a linear correlation between TST measured by the sensor and TST determined by PSG, with a coefficient of 0.98 (R = 0.87). Epoch-by-epoch comparison with PSG in the sleep laboratory setting revealed that the system showed sleep detection sensitivity, specificity, and accuracy of 92.5%, 80.4%, and 90.5%, respectively. TST estimates with the contact-free sleep monitoring system were closely correlated with the gold-standard reference. This system shows good sleep staging capability with improved performance over accelerometer-based apps, and collects additional physiological information on heart rate and respiratory rate. © 2017 American Academy of Sleep Medicine
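The epoch-by-epoch comparison against PSG reduces to a 2×2 confusion table per night; a minimal sketch of the sensitivity/specificity/accuracy computation, assuming boolean sleep/wake labels per epoch:

    import numpy as np

    def epoch_metrics(pred_sleep, ref_sleep):
        # pred_sleep, ref_sleep: boolean arrays, one entry per scored epoch;
        # the reference labels come from PSG scoring.
        tp = np.sum(pred_sleep & ref_sleep)
        tn = np.sum(~pred_sleep & ~ref_sleep)
        fp = np.sum(pred_sleep & ~ref_sleep)
        fn = np.sum(~pred_sleep & ref_sleep)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / pred_sleep.size
        return sensitivity, specificity, accuracy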
Thieler, E. Robert; Danforth, William W.
1994-01-01
A new, state-of-the-art method for mapping historical shorelines from maps and aerial photographs, the Digital Shoreline Mapping System (DSMS), has been developed. The DSMS is a freely available, public domain software package that meets the cartographic and photogrammetric requirements of precise coastal mapping, and provides a means to quantify and analyze different sources of error in the mapping process. The DSMS is also capable of resolving imperfections in aerial photography that commonly are assumed to be nonexistent. The DSMS utilizes commonly available computer hardware and software, and permits the entire shoreline mapping process to be executed rapidly by a single person in a small lab. The DSMS generates output shoreline position data that are compatible with a variety of Geographic Information Systems (GIS). A second suite of programs, the Digital Shoreline Analysis System (DSAS) has been developed to calculate shoreline rates-of-change from a series of shoreline data residing in a GIS. Four rate-of-change statistics are calculated simultaneously (end-point rate, average of rates, linear regression and jackknife) at a user-specified interval along the shoreline using a measurement baseline approach. An example of DSMS and DSAS application using historical maps and air photos of Punta Uvero, Puerto Rico provides a basis for assessing the errors associated with the source materials as well as the accuracy of computed shoreline positions and erosion rates. The maps and photos used here represent a common situation in shoreline mapping: marginal-quality source materials. The maps and photos are near the usable upper limit of scale and accuracy, yet the shoreline positions are still accurate ±9.25 m when all sources of error are considered. This level of accuracy yields a resolution of ±0.51 m/yr for shoreline rates-of-change in this example, and is sufficient to identify the short-term trend (36 years) of shoreline change in the study area.
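Two of the four DSAS rate-of-change statistics are simple to state in code; a sketch, assuming shoreline positions (m) measured along one transect at known survey years:

    import numpy as np

    def end_point_rate(years, positions):
        # Uses only the earliest and latest shoreline positions.
        return (positions[-1] - positions[0]) / (years[-1] - years[0])

    def linear_regression_rate(years, positions):
        # Slope of the least-squares line through all shoreline positions.
        slope, _intercept = np.polyfit(years, positions, 1)
        return slope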
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the phantom and patient studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
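Obtaining low-frame-rate images by continuous frame averaging, as done here, can be emulated offline; a minimal sketch assuming an (N, rows, cols) image stack:

    import numpy as np

    def average_frames(frames, n):
        # Averages every n consecutive frames, emulating acquisition at
        # 1/n of the native 12.87 Hz frame rate.
        usable = (frames.shape[0] // n) * n
        return frames[:usable].reshape(-1, n, *frames.shape[1:]).mean(axis=1)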
Water quality modeling in the dead end sections of drinking water distribution networks.
Abokifa, Ahmed A; Yang, Y Jeffrey; Lo, Cynthia S; Biswas, Pratim
2016-02-01
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid depletion of disinfectant residuals, allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead-end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogeneous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variations in flow demands on the simulation accuracy. A set of three correction factors was analytically derived to adjust residence time, dispersion rate, and wall demand to overcome the simulation error caused by the spatial aggregation approximation. The current model results show better agreement with field-measured concentrations of conservative fluoride tracer and free chlorine disinfectant than the simulations of recent advection-dispersion-reaction models published in the literature. Accuracy of the simulated concentration profiles showed significant dependence on the spatial distribution of the flow demands compared to temporal variation. Copyright © 2015 Elsevier Ltd. All rights reserved.
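The stochastic demand generator rests on a non-homogeneous Poisson process; one common way to sample such pulse arrival times is Lewis thinning, sketched below with a hypothetical rate function (the paper's generator details may differ):

    import numpy as np

    rng = np.random.default_rng(0)

    def nhpp_event_times(rate_fn, t_end, rate_max):
        # Lewis thinning: draw candidate events from a homogeneous Poisson
        # process at rate_max, accept each with probability rate_fn(t)/rate_max.
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)
            if t > t_end:
                return np.asarray(events)
            if rng.random() < rate_fn(t) / rate_max:
                events.append(t)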
NASA Astrophysics Data System (ADS)
Liang, Sheng-Fu; Chen, Yi-Chun; Wang, Yu-Lin; Chen, Pin-Tzu; Yang, Chia-Hsiang; Chiueh, Herming
2013-08-01
Objective. Around 1% of the world's population is affected by epilepsy, and nearly 25% of patients cannot be treated effectively by available therapies. The presence of closed-loop seizure-triggered stimulation provides a promising solution for these patients. Realization of fast, accurate, and energy-efficient seizure detection is the key to such implants. In this study, we propose a two-stage on-line seizure detection algorithm with low-energy consumption for temporal lobe epilepsy (TLE). Approach. Multi-channel signals are processed through independent component analysis and the most representative independent component (IC) is automatically selected to eliminate artifacts. Seizure-like intracranial electroencephalogram (iEEG) segments are fast detected in the first stage of the proposed method and these seizures are confirmed in the second stage. The conditional activation of the second-stage signal processing reduces the computational effort, and hence energy, since most of the non-seizure events are filtered out in the first stage. Main results. Long-term iEEG recordings of 11 patients who suffered from TLE were analyzed via leave-one-out cross validation. The proposed method has a detection accuracy of 95.24%, a false alarm rate of 0.09/h, and an average detection delay time of 9.2 s. For the six patients with mesial TLE, a detection accuracy of 100.0%, a false alarm rate of 0.06/h, and an average detection delay time of 4.8 s can be achieved. The hierarchical approach provides a 90% energy reduction, yielding effective and energy-efficient implementation for real-time epileptic seizure detection. Significance. An on-line seizure detection method that can be applied to monitor continuous iEEG signals of patients who suffered from TLE was developed. An IC selection strategy to automatically determine the most seizure-related IC for seizure detection was also proposed. The system has advantages of (1) high detection accuracy, (2) low false alarm, (3) short detection latency, and (4) energy-efficient design for hardware implementation.
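The energy saving comes from the conditional second stage; the control flow can be summarized in a few lines (stage1 and stage2 stand in for the paper's detectors and are assumptions here):

    def two_stage_detect(segments, stage1, stage2):
        # Stage 1 is a cheap screen applied to every iEEG segment; the costlier
        # stage 2 runs only on stage-1 candidates. Because most non-seizure
        # segments are rejected early, computation (and energy) is saved.
        return [seg for seg in segments if stage1(seg) and stage2(seg)]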
Massillon-JL, Guerda; Cueva-Prócel, Diego; Díaz-Aguirre, Porfirio; Rodríguez-Ponce, Miguel; Herrera-Martínez, Flor
2013-01-01
This work investigated the suitability of passive dosimeters for reference dosimetry in small fields with acceptable accuracy. The absorbed dose to water rate was determined in nine small radiation fields with diameters between 4 and 35 mm in a Leksell Gamma Knife (LGK) and a modified linear accelerator (linac) for stereotactic radiosurgery treatments. Measurements were made using Gafchromic film (MD-V2-55), alanine, and thermoluminescent (TLD-100) dosimeters and compared with conventional dosimetry systems. Detectors were calibrated in terms of absorbed dose to water in 60Co gamma-ray and 6 MV x-ray reference (10×10 cm2) fields using an ionization chamber calibrated at a standards laboratory. The absorbed dose to water rate computed with MD-V2-55 was higher than that obtained with the other dosimeters, possibly due to a smaller volume averaging effect. The ratio between the dose rates determined with each dosimeter and those obtained with the film was evaluated for both treatment modalities. For the LGK, the ratio decreased as the dosimeter size increased and remained constant for collimator diameters larger than 8 mm. The same behaviour was observed for the linac, and the ratio increased with field size, independent of the dosimeter used. These behaviours could be explained as a volume averaging effect due to the dose gradient and the lack of electronic equilibrium. Evaluation of the output factors for the LGK collimators indicated that, even when agreement was observed between Monte Carlo simulation and measurements with different dosimeters, this does not guarantee that the absorbed dose to water rate in the field is properly known; thus, investigation of the reference dosimetry remains an important issue. These results indicated that the alanine dosimeter provides a high degree of accuracy but cannot be used in fields smaller than 20 mm in diameter. Gafchromic film can be considered a suitable methodology for reference dosimetry. TLD dosimeters are not appropriate in fields smaller than 10 mm in diameter. PMID:23671677
NASA Astrophysics Data System (ADS)
Barbetta, Silvia; Moramarco, Tommaso; Perumal, Muthiah
2017-11-01
Quite often the discharge at a site is estimated using the rating curve developed for that site, and its development requires river flow measurements, which are costly, tedious, and dangerous during severe floods. To circumvent the conventional rating curve development approach, Perumal et al. in 2007 and 2010 applied the Variable Parameter Muskingum Stage-hydrograph (VPMS) routing method for developing stage-discharge relationships, especially at ungauged river sites where stage measurements and details of section geometry are available but discharge measurements are not made. The VPMS method enables the estimation of rating curves at ungauged river sites with acceptable accuracy, but its application is subject to the limitation of negligible lateral flow within the routing reach. To overcome this limitation, this study proposes an extension of the VPMS method, henceforth known as the VPMS-Lin method, enabling streamflow assessment even when significant lateral inflow occurs along the river reach considered for routing. The lateral inflow is estimated through the continuity equation expressed in characteristic form, as advocated by Barbetta et al. in 2012. The VPMS-Lin method is tested on two rivers characterized by different geometric and hydraulic properties: (1) a 50-km reach of the Tiber River in central Italy and (2) a 73-km reach of the Godavari River in peninsular India. The study demonstrates that both the upstream and downstream discharge hydrographs are well reproduced, with a root mean square error equal on average to about 35 and 1700 m^3 s^-1 for the Tiber River and the Godavari River case studies, respectively. Moreover, simulation studies carried out on a stretch of the Tiber River using the one-dimensional hydraulic model MIKE11 and the VPMS-Lin model demonstrate the accuracy of the VPMS-Lin model, which, besides enabling the estimation of streamflow, also enables the estimation of reach-averaged optimal roughness coefficients for the considered routing events.
Shin, Jaeyoung; Kwon, Jinuk; Im, Chang-Hwan
2018-01-01
The performance of a brain-computer interface (BCI) can be enhanced by simultaneously using two or more modalities to record brain activity, which is generally referred to as a hybrid BCI. To date, many BCI researchers have tried to implement a hybrid BCI system by combining electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) to improve the overall accuracy of binary classification. However, since the hybrid EEG-NIRS BCI, which will be denoted by hBCI in this paper, has not been applied to ternary classification problems, paradigms and classification strategies appropriate for ternary classification using an hBCI are not well investigated. Here we propose the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim of elevating the information transfer rate (ITR) of the hBCI by increasing the number of classes while minimizing the loss of accuracy. EEG electrodes were placed over the prefrontal cortex and the central cortex, and NIRS optodes were placed only on the forehead. The ternary classification problem was decomposed into three binary classification problems using the "one-versus-one" (OVO) classification strategy to apply the filter-bank common spatial patterns filter to the EEG data. A 10 × 10-fold cross-validation was performed using shrinkage linear discriminant analysis (sLDA) to evaluate the average classification accuracies for the EEG-BCI, NIRS-BCI, and hBCI when the meta-classification method was adopted to enhance classification accuracy. The ternary classification accuracies for the EEG-BCI, NIRS-BCI, and hBCI were 76.1 ± 12.8, 64.1 ± 9.7, and 82.2 ± 10.2%, respectively. The classification accuracy of the proposed hBCI was thus significantly higher than those of the other BCIs (p < 0.005). The average ITR for the proposed hBCI was calculated to be 4.70 ± 1.92 bits/minute, which was 34.3% higher than that reported for a previous binary hBCI study.
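The reported bits/minute figure is consistent with the standard Wolpaw definition of ITR; a sketch (the study's exact formula and selection rate are not stated in the abstract, so the 6.4 selections/min below is a back-calculated assumption):

    import math

    def wolpaw_itr(n_classes, accuracy, selections_per_min):
        # Wolpaw ITR: bits per selection times selections per minute.
        bits_per_selection = (math.log2(n_classes)
                              + accuracy * math.log2(accuracy)
                              + (1 - accuracy)
                              * math.log2((1 - accuracy) / (n_classes - 1)))
        return bits_per_selection * selections_per_min

    # With N = 3 and the reported 82.2% accuracy, each selection carries about
    # 0.73 bits, so roughly 6.4 selections/min would yield ~4.7 bits/min.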
A Pulse Rate Detection Method for Mouse Application Based on Multi-PPG Sensors
Chen, Wei-Hao
2017-01-01
Heart rate is an important physiological parameter for healthcare. Among measurement methods, photoplethysmography (PPG) is an easy and convenient method for pulse rate detection. However, because the PPG signal is challenged by motion artifacts and constrained by the measurement position chosen, the purpose of this paper is to implement a comfortable and easy-to-use multi-PPG sensor module, combined with a stable and accurate real-time pulse rate detection method, on a computer mouse. A weighted average method for the multi-PPG sensors is used to adjust the weight of each signal channel in order to raise the accuracy and stability of the detected signal, thereby effectively and efficiently reducing noise disturbance during movement. According to the experimental results, the proposed method can increase the usability and the probability of successful PPG signal detection on palms. PMID:28708112
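A weighted average across PPG channels, of the kind described, can be written compactly; the per-channel quality weights are assumed inputs here, however they are estimated in practice:

    import numpy as np

    def fuse_ppg(channels, quality):
        # channels: (n_sensors, n_samples) PPG array; quality: one weight per
        # channel. Noisier channels get smaller weights, stabilizing the
        # fused waveform used for pulse rate detection.
        w = np.asarray(quality, dtype=float)
        return (w / w.sum()) @ np.asarray(channels)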
Murphy, Jennifer; Millgate, Edward; Geary, Hayley; Ichijo, Eri; Coll, Michel-Pierre; Brewer, Rebecca; Catmur, Caroline; Bird, Geoffrey
2018-03-01
Evidence suggests that intelligence is positively associated with performance on the heartbeat counting task (HCT). The HCT is often employed as a measure of interoception (the ability to perceive the internal state of one's body); however, its use remains controversial because performance on the HCT is strongly influenced by knowledge of resting heart rate. This raises the possibility that heart rate knowledge may mediate the previously observed association between intelligence and HCT performance. Study One demonstrates an association between intelligence and HCT performance (N = 94), and Study Two demonstrates that this relationship is mediated by knowledge of the average resting heart rate (N = 134). These data underscore the need to account for the influence of prior knowledge and beliefs when examining individual differences in cardiac interoceptive accuracy using the HCT. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
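HCT performance is commonly scored with the Schandry formula; a sketch, with the caveat that the study's exact scoring convention is not specified in the abstract:

    def hct_accuracy(recorded, counted):
        # recorded: actual heartbeats per trial; counted: participant's counts.
        errors = [abs(r - c) / r for r, c in zip(recorded, counted)]
        return 1 - sum(errors) / len(errors)

    # e.g. hct_accuracy([70, 65], [60, 64]) ≈ 0.92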
Use of temperature to improve West Nile virus forecasts
Schneider, Zachary D.; Caillouet, Kevin A.; Campbell, Scott R.; Damian, Dan; Irwin, Patrick; Jones, Herff M. P.; Townsend, John
2018-01-01
Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether inclusion of temperature forcing in a model depicting WNV transmission improves WNV forecast accuracy relative to a baseline model depicting WNV transmission without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that on average increased absolute forecast accuracy 5%, 10%, 12%, and 6%, respectively, over the non-temperature forced baseline model. These results indicate that use of temperature forcing improves WNV forecast accuracy and provide further evidence that temperature influences rates of WNV transmission. The findings provide a foundation for implementation of a statistically rigorous system for real-time forecast of seasonal WNV outbreaks and their use as a quantitative decision support tool for public health officials and mosquito control programs. PMID:29522514
Skill in Precipitation Forecasting in the National Weather Service.
NASA Astrophysics Data System (ADS)
Charba, Jerome P.; Klein, William H.
1980-12-01
All known long-term records of forecasting performance for different types of precipitation forecasts in the National Weather Service were examined for relative skill and secular trends in skill. The largest upward trends were achieved by local probability of precipitation (PoP) forecasts for the periods 24-36 h and 36-48 h after 0000 and 1200 GMT. Over the last 13 years, the skill of these forecasts has improved at an average rate of 7.2% per 10-year interval. Over the same period, improvement has been smaller in local PoP skill in the 12-24 h range (2.0% per 10 years) and in the accuracy of "Yes/No" forecasts of measurable precipitation. The overall trend in accuracy of centralized quantitative precipitation forecasts of 0.5 in and 1.0 in has been slightly upward at the 0-24 h range and strongly upward at the 24-48 h range. Most of the improvement in these forecasts has been achieved from the early 1970s to the present. Strong upward accuracy trends in all types of precipitation forecasts within the past eight years are attributed primarily to improvements in numerical and statistical centralized guidance forecasts. The skill and accuracy of both measurable and quantitative precipitation forecasts are 35-55% greater during the cool season than during the warm season. Also, the secular rate of improvement of the cool season precipitation forecasts is 50-110% greater than that of the warm season. This seasonal difference in performance reflects the relative difficulty of forecasting predominantly stratiform precipitation of the cool season and convective precipitation of the warm season.
Social Eavesdropping: Can You Hear the Emotionality in a "Hello" That Is Not Meant for You?
Karthikeyan, Sethu; Ramachandra, Vijayachandra
2017-01-01
The study examined third-party listeners' ability to detect the emotion in Hellos spoken to prevalidated happy, neutral, and sad facial expressions. The average detection accuracies from the happy and sad (HS), happy and neutral (HN), and sad and neutral (SN) listening tests followed the average vocal pitch differences between the two sets of Hellos in each of the tests; HS and HN detection accuracies were above chance, reflecting the significant pitch differences between the respective Hellos. The SN detection accuracy was at chance, reflecting the lack of pitch difference between sad and neutral Hellos. As expected, the SN detection accuracy positively correlated with theory of mind; participating in these tests has been likened to the act of eavesdropping, which has been discussed from an evolutionary perspective. An unexpected negative correlation between the HS detection accuracy and the empathy quotient is discussed with respect to autism research on empathy and pitch discrimination.
Sequenced subjective accents for brain-computer interfaces
NASA Astrophysics Data System (ADS)
Vlek, R. J.; Schaefer, R. S.; Gielen, C. C. A. M.; Farquhar, J. D. R.; Desain, P.
2011-06-01
Subjective accenting is a cognitive process in which identical auditory pulses at an isochronous rate turn into the percept of an accenting pattern. This process can be voluntarily controlled, making it a candidate for communication from human user to machine in a brain-computer interface (BCI) system. In this study we investigated whether subjective accenting is a feasible paradigm for BCI and how its time-structured nature can be exploited for optimal decoding from non-invasive EEG data. Ten subjects perceived and imagined different metric patterns (two-, three- and four-beat) superimposed on a steady metronome. With an offline classification paradigm, we classified imagined accented from non-accented beats at the single-trial (0.5 s) level with an average accuracy of 60.4% over all subjects. We show that decoding of imagined accents is also possible with a classifier trained on perception data. Cyclic patterns of accents and non-accents were successfully decoded with a sequence classification algorithm. Classification performances were compared by means of bit rate. Performance in the best scenario translates into an average bit rate of 4.4 bits min^-1 over subjects, which makes subjective accenting a promising paradigm for an online auditory BCI.
A NEW INSAR DERIVED DEM OF BLACK RAPIDS GLACIER
NASA Astrophysics Data System (ADS)
Shugar, D. H.; Rabus, B.; Clague, J. J.
2009-12-01
We have constructed a new digital elevation model representing the 1995 surface of surge-type Black Rapids Glacier and the surrounding central Alaska Range, using ERS-1/2 repeat-pass interferometry. First, we isolated the topographic phase from three interferograms with contrasting perpendicular baselines. Next, we attempted to automatically unwrap this topographic phase but encountered numerous errors due to the terrain containing areas of poor coherence from fringe aliasing, radar layover or shadow. We then consistently corrected these persistent phase-unwrapping errors in all three interferograms using an iterative semi-automated approach that capitalizes on the multi-baseline nature of the data set. Over the surface of Black Rapids Glacier, the accuracy of the new DEM is estimated at better than ±12 m. Ground-surveyed spot elevations from 1995 corroborate this accuracy estimate. Comparison of the new DEM with a 1951 U.S. Geological Survey topographic map, and with ground survey data from other years, shows the gradual return of Black Rapids Glacier to pre-surge conditions. In the 44-year period between 1951 and 1995, the observed average steepening of the longitudinal profile is ~0.6°. The maximum elevation changes in the ablation and accumulation zones are -256 m and +75 m, respectively, suggesting corresponding average rates of elevation change of about -5.8 m/yr and +1.7 m/yr. These rates are 1.5-2 times higher than those indicated by the ground survey spot elevation measurements over the period 1975 to 2005. Considering the significant overlap of the two periods of measurement, the inferred average rates for 1951-1975 would have to be very large (-7.5 m/yr and +2.3 m/yr, respectively) for these two findings to be consistent. A second comparison with the recently released ASTER G-DEM (data from 2001) led to no glaciologically usable results due to major artifacts in the ASTER G-DEM. We therefore conclude that the 1951 U.S. Geological Survey map and the ASTER G-DEM both appear biased over the Black Rapids Glacier surface, and caution is advised when using either for quantitative estimates of elevation change over the glacier surface.
Chae, Jin Seok; Park, Jin; So, Wi-Young
2017-07-28
The purpose of this study was to suggest a ranking prediction model using the competition records of Ladies Professional Golf Association (LPGA) players. The top 100 players on the tour money list from the 2013-2016 US Open were analyzed in this model. Stepwise regression analysis was conducted to examine the effect of the independent performance variables (i.e., driving accuracy, green in regulation, putts per round, driving distance, percentage of sand saves, par-3 average, par-4 average, par-5 average, birdies average, and eagle average) on the dependent variables (i.e., scoring average, official money, top-10 finishes, winning percentage, and 60-strokes average). The following prediction models were suggested:

Y (Scoring average) = 55.871 - 0.947 (Birdies average) + 4.576 (Par-4 average) - 0.028 (Green in regulation) - 0.012 (Percentage of sand saves) + 2.088 (Par-3 average) - 0.026 (Driving accuracy) - 0.017 (Driving distance) + 0.085 (Putts per round)

Y (Official money) = 6628736.723 + 528557.907 (Birdies average) - 1831800.821 (Par-4 average) + 11681.739 (Green in regulation) + 6476.344 (Percentage of sand saves) - 688115.074 (Par-3 average) + 7375.971 (Driving accuracy)

Y (Top-10 finish%) = 204.462 + 12.562 (Birdies average) - 47.745 (Par-4 average) + 1.633 (Green in regulation) - 5.151 (Putts per round) + 0.132 (Percentage of sand saves)

Y (Winning percentage) = 49.949 + 3.191 (Birdies average) - 15.023 (Par-4 average) + 0.043 (Percentage of sand saves)

Y (60-strokes average) = 217.649 + 13.978 (Birdies average) - 44.855 (Par-4 average) - 22.433 (Par-3 average) + 0.16 (Green in regulation)

Applying these five prediction models to forecast golf rankings for the 2016 Women's Olympic golf competition in Rio revealed a significant correlation between the predicted and actual rankings (r = 0.689, p < 0.001) and between the predicted and actual average scores (r = 0.653, p < 0.001). Our ranking prediction model using LPGA data may help coaches and players identify which players are likely to participate in Olympic and World competitions, based on their performance.
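The fitted scoring-average model can be applied directly; a sketch using the paper's coefficients (argument names are illustrative):

    def predicted_scoring_average(birdies, par4, gir, sand_saves,
                                  par3, drive_acc, drive_dist, putts):
        # Plugs a player's season statistics into the fitted regression
        # for scoring average given in the abstract.
        return (55.871 - 0.947 * birdies + 4.576 * par4 - 0.028 * gir
                - 0.012 * sand_saves + 2.088 * par3 - 0.026 * drive_acc
                - 0.017 * drive_dist + 0.085 * putts)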
Mapping Shoreline Change Using Digital Orthophotogrammetry on Maui, Hawaii
Fletcher, C.; Rooney, J.; Barbee, M.; Lim, S.-C.; Richmond, B.
2003-01-01
Digital, aerial orthophotomosaics with 0.5-3.0 m horizontal accuracy, used with NOAA topographic maps (T-sheets), document past shoreline positions on Maui Island, Hawaii. Outliers in the shoreline position database are determined using a least median of squares regression. Least squares linear regression of the reweighted data (outliers excluded) is used to determine a shoreline trend termed the reweighted linear squares (RLS). To determine the annual erosion hazard rate (AEHR) for use by shoreline managers, the RLS data are smoothed in the longshore direction using a weighted moving average five transects wide, with the smoothed rate applied to the center transect. Weightings within each five-transect group are 1, 3, 5, 3, 1. AEHRs (smoothed RLS values) are plotted on a 1:3000 map series for use by shoreline managers and planners. These maps are displayed on the web for public reference at http://www.co.maui.hi.us/departments/Planning/erosion.htm. An end-point rate of change is also calculated using the earliest T-sheet and the latest collected shoreline (1997 or 2002). The resulting database consists of 3565 separate erosion rates spaced every 20 m along 90 km of sandy shoreline. Three regions are analyzed: the Kihei, West Maui, and North Shore coasts. The Kihei Coast has an average AEHR of about 0.3 m/yr, an end-point rate (EPR) of 0.2 m/yr, 2.8 km of beach loss, and 19 percent beach narrowing in the period 1949-1997. Over the same period, the West Maui coast has an average AEHR of about 0.2 m/yr, an average EPR of about 0.2 m/yr, about 4.5 km of beach loss, and 25 percent beach narrowing. The North Shore has an average AEHR of about 0.4 m/yr, an average EPR of about 0.3 m/yr, 0.8 km of beach loss, and 15 percent beach narrowing. The mean island-wide EPR of eroding shorelines is 0.24 m/yr, and the average AEHR of eroding shorelines is about 0.3 m/yr. The overall shoreline change rate, erosion and accretion included, as measured using the unsmoothed RLS technique, is 0.21 m/yr. Island-wide changes in beach width show a 19 percent decrease over the period 1949/1950 to 1997/2002. Island-wide, about 8 km of dry beach has been lost since 1949 (i.e., high water against hard engineering structures and natural rock substrate).
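The 1,3,5,3,1 longshore smoothing is a short convolution; a sketch, assuming a 1-D array of unsmoothed RLS rates ordered alongshore:

    import numpy as np

    def smooth_rates(rls_rates):
        # Weighted moving average over five adjacent transects with weights
        # 1,3,5,3,1, assigned to the center transect; the two transects at
        # each end are dropped here rather than padded.
        w = np.array([1, 3, 5, 3, 1], dtype=float)
        return np.convolve(rls_rates, w / w.sum(), mode="valid")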
Field Assessment of Energy Audit Tools for Retrofit Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, J.; Bohac, D.; Nelson, C.
2013-07-01
This project focused on the use of home energy ratings as a tool to promote energy retrofits in existing homes. A home energy rating provides a quantitative appraisal of a home's energy performance, usually compared to a benchmark such as the average energy use of similar homes in the same region. Rating systems based on energy performance models, the focus of this report, can establish a home's achievable energy efficiency potential and provide a quantitative assessment of energy savings after retrofits are completed, although their accuracy needs to be verified by actual measurement or billing data. Ratings can also show homeowners where they stand compared to their neighbors, thus creating social pressure to conform to or surpass others. This project field-tested three different building performance models of varying complexity, in order to assess their value as rating systems in the context of a residential retrofit program: Home Energy Score, SIMPLE, and REM/Rate.
The Accuracy of Talking Pedometers when Used during Free-Living: A Comparison of Four Devices
ERIC Educational Resources Information Center
Albright, Carolyn; Jerome, Gerald J.
2011-01-01
The purpose of this study was to determine the accuracy of four commercially available talking pedometers in measuring the accumulated daily steps of adult participants while they moved independently. Ten young sighted adults (with an average age of 24.1 ± 4.6 years), 10 older sighted adults (with an average age of 73 ± 5.5…
Age accuracy and resolution of Quaternary corals used as proxies for sea level
NASA Astrophysics Data System (ADS)
Edinger, E. N.; Burr, G. S.; Pandolfi, J. M.; Ortiz, J. C.
2007-01-01
The accuracy of global eustatic sea level curves measured from raised Quaternary reefs, using radiometric ages of corals at known heights, may be limited by time-averaging, which affects the variation in coral age at a given height. Time-averaging was assessed in uplifted Holocene reef sequences from the Huon Peninsula, Papua New Guinea, using radiocarbon dating of coral skeletons in both horizontal transects and vertical sequences. Calibrated 2σ age ranges varied from 800 to 1060 years along horizontal transects, but weighted mean ages calculated from 15-18 dates per horizon were accurate to a resolution within 154-214 yr. Approximately 40% of the variability in age estimate resulted from internal variability inherent to 14C estimates, and 60% was due to time-averaging. The accuracy of age estimates of sea level change in studies using single dated corals as proxies for sea level is probably within 1000 yr of actual age, but can be resolved to ≤ 250 yr if supported by dates from analysis of a statistical population of corals at each stratigraphic interval. The range of time-averaging among reef corals was much less than that for shelly benthos. Ecological time-averaging dominated over sedimentological time averaging for reef corals, opposite to patterns reported from shelly benthos in siliciclastic environments.
Chipps, S.R.; Einfalt, L.M.; Wahl, David H.
2000-01-01
We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with the feeding and growth rates of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations; this was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.
Testing the accuracy of a 1-D volcanic plume model in estimating mass eruption rate
Mastin, Larry G.
2014-01-01
During volcanic eruptions, empirical relationships are used to estimate mass eruption rate from plume height. Although simple, such relationships can be inaccurate and can underestimate rates in windy conditions. One-dimensional plume models can incorporate atmospheric conditions and give potentially more accurate estimates. Here I present a 1-D model for plumes in crosswind and simulate 25 historical eruptions where plume height Hobs was well observed and mass eruption rate Mobs could be calculated from mapped deposit mass and observed duration. The simulations considered wind, temperature, and phase changes of water. Atmospheric conditions were obtained from the National Center for Atmospheric Research Reanalysis 2.5° model. Simulations calculate the minimum, maximum, and average values (Mmin, Mmax, and Mavg) that fit the plume height. Eruption rates were also estimated from the empirical formula Mempir = 140 Hobs^4.14 (Mempir in kilograms per second, Hobs in kilometers). For these eruptions, the standard error of the residual in log space is about 0.53 for Mavg and 0.50 for Mempir. Thus, for this data set, the model is slightly less accurate at predicting Mobs than the empirical curve. The inability of this model to improve eruption rate estimates may lie in the limited accuracy of even well-observed plume heights, inaccurate model formulation, or the fact that most eruptions examined were not highly influenced by wind. For the low, wind-blown plume of 14–18 April 2010 at Eyjafjallajökull, where an accurate plume height time series is available, modeled rates do agree better with Mobs than Mempir.
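The empirical curve is a one-liner; a sketch of the quoted relation:

    def mastin_rate(h_obs_km):
        # Empirical relation quoted in the abstract: mass eruption rate (kg/s)
        # from observed plume height (km).
        return 140.0 * h_obs_km ** 4.14

    # e.g. a 10-km plume gives mastin_rate(10) ≈ 1.9e6 kg/s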
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John
2012-01-01
A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide tracking capability inside the International Space Station (ISS) modules for various applications. One application is to locate and report the places where crew members experienced possibly high levels of carbon dioxide and felt upset. In order to accurately locate those places in a multipath-intensive environment like the ISS modules, a robust real-time location system (RTLS) is required that can provide the necessary accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and the performance evaluation based on different tracking baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100-picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the "Twisted Rectangle" configuration, while an average tracking error of 0.9183 feet (about 28 centimeters) can be achieved for the "Slightly-Twisted Top Rectangle" configuration. The tracking accuracy can be further improved with improvement in the STD of the TOA estimates. With a 10-picosecond STD of TOA estimates, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the "Twisted Rectangle" configuration.
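Given TOA-derived ranges from several receivers, a position can be solved by linearized least squares; this is a standard textbook multilateration sketch, not necessarily the JSC system's actual solver:

    import numpy as np

    def toa_position(anchors, ranges):
        # anchors: (n, 3) receiver positions; ranges: (n,) TOA-derived
        # distances. Subtracting the first range equation from the others
        # linearizes the problem into A x = b.
        a0, r0 = anchors[0], ranges[0]
        A = 2.0 * (anchors[1:] - a0)
        b = (r0 ** 2 - ranges[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos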
Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana
Stewart, J.A.; Miller, R.L.; Butch, G.K.
1986-01-01
Analysis of the stream-gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data need and the funding source for each of the 173 surface-water stations in Indiana. The second phase used alternate methods to produce streamflow records at selected sites. Statistical models were used to generate streamflow data for three gaging stations. In addition, flow-routing models were used at two of the sites. Daily discharges produced from the models did not meet the established accuracy criteria and, therefore, these methods should not replace stream-gaging procedures at those gaging stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and the frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream-gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous-record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream-gaging stations were visited according to the phase III results. A minimum budget of $790,000 is required to operate the gaging network. At this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis, and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)
AutoSyP: A Low-Cost, Low-Power Syringe Pump for Use in Low-Resource Settings.
Juarez, Alexa; Maynard, Kelley; Skerrett, Erica; Molyneux, Elizabeth; Richards-Kortum, Rebecca; Dube, Queen; Oden, Z Maria
2016-10-05
This article describes the design and evaluation of AutoSyP, a low-cost, low-power syringe pump intended to deliver intravenous (IV) infusions in low-resource hospitals. A constant-force spring within the device provides mechanical energy to depress the syringe plunger. As a result, the device can run on rechargeable battery power for 66 hours, a critical feature for low-resource settings where the power grid may be unreliable. The device is designed to be used with 5- to 60-mL syringes and can deliver fluids at flow rates ranging from 3 to 60 mL/hour. The cost of goods to build one AutoSyP device is approximately $500. AutoSyP was tested in a laboratory setting and in a pilot clinical study. Laboratory accuracy was within 4% of the programmed flow rate. The device was used to deliver fluid to 10 healthy adult volunteers and 30 infants requiring IV fluid therapy at Queen Elizabeth Central Hospital in Blantyre, Malawi. The device delivered fluid with an average mean flow rate error of -2.3% ± 1.9% for flow rates ranging from 3 to 60 mL/hour. AutoSyP has the potential to improve the accuracy and safety of IV fluid delivery in low-resource settings. © The American Society of Tropical Medicine and Hygiene.
Bolormaa, S; Pryce, J E; Kemper, K; Savin, K; Hayes, B J; Barendse, W; Zhang, Y; Reich, C M; Mason, B A; Bunch, R J; Harrison, B E; Reverter, A; Herd, R M; Tier, B; Graser, H-U; Goddard, M E
2013-07-01
The aim of this study was to assess the accuracy of genomic predictions for 19 traits including feed efficiency, growth, and carcass and meat quality traits in beef cattle. The 10,181 cattle in our study had real or imputed genotypes for 729,068 SNP, although not all cattle were measured for all traits. Animals included Bos taurus, Brahman, composite, and crossbred animals. Genomic EBV (GEBV) were calculated using 2 methods of genomic prediction [BayesR and genomic BLUP (GBLUP)], either using a common training dataset for all breeds or using a training dataset comprising only animals of the same breed. Accuracies of GEBV were assessed using 5-fold cross-validation. The accuracy of genomic prediction varied by trait and by method. Traits with a large number of recorded and genotyped animals and with high heritability gave the greatest accuracy of GEBV. Using GBLUP, the average accuracy was 0.27 across traits and breeds, but the accuracies varied widely between breeds and between traits. When the training population was restricted to animals from the same breed as the validation population, GBLUP accuracies declined by an average of 0.04. The greatest decline in accuracy was found for the 4 composite breeds. The BayesR accuracies were greater by an average of 0.03 than the GBLUP accuracies, particularly for traits with known mutations of moderate to large effect segregating. The accuracies of 0.43 to 0.48 for IGF-I traits were among the greatest in the study. Although accuracies are low compared with those observed in dairy cattle, genomic selection would still be beneficial for traits that are hard to improve by conventional selection, such as tenderness and residual feed intake. BayesR identified many of the same quantitative trait loci as a genomewide association study but appeared to map them more precisely. All traits appear to be highly polygenic, with thousands of SNP independently associated with each trait.
Accuracy of S2 Alar-Iliac Screw Placement Under Robotic Guidance.
Laratta, Joseph L; Shillingford, Jamal N; Lombardi, Joseph M; Alrabaa, Rami G; Benkli, Barlas; Fischer, Charla; Lenke, Lawrence G; Lehman, Ronald A
Case series. To determine the safety and feasibility of S2 alar-iliac (S2AI) screw placement under robotic guidance. Similar to standard iliac fixation, S2AI screws aid in achieving fixation across the sacropelvic junction and decreasing S1 screw strain. Additionally, the S2AI technique minimizes prominent instrumentation and the need for offset connectors to the fusion construct. Herein, we present an analysis of the largest series of robotic-guided S2AI screws in the literature without any significant author conflicts of interest with the robotics industry. Twenty-three consecutive patients who underwent spinopelvic fixation with 46 S2AI screws under robotic guidance were analyzed from 2015 to 2016. Screws were placed by two senior spine surgeons, along with various fellow or resident surgical assistants, using a proprietary robotic guidance system (Renaissance; Mazor Robotics Ltd., Caesarea, Israel). Screw position and accuracy were assessed on intraoperative CT O-arm scans and analyzed using three-dimensional interactive viewing and manipulation of the images. The average caudal angle in the sagittal plane was 31.0° ± 10.0°. The average horizontal angle in the axial plane using the posterior superior iliac spine as a reference was 42.8° ± 6.6°. The average S1 screw to S2AI screw angle was 11.3° ± 9.9°. Two violations of the iliac cortex were noted, with an average breach distance of 7.9 ± 4.8 mm. One breach was posterior (2.2%) and one was anterior (2.2%). The overall robotic S2AI screw accuracy rate was 95.7%. There were no intraoperative neurologic, vascular, or visceral complications related to the placement of the S2AI screws. Spinopelvic fixation achieved using a bone-mounted miniature robotic-guided S2AI screw insertion technique is safe and reliable. Despite two breaches, no complications related to the placement of the S2AI screws occurred in this series. Level IV, therapeutic. Copyright © 2017 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.
Cost effectiveness of stream-gaging program in Michigan
Holtschlag, D.J.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the stage-discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent. Owing to continual changes in the composition of the network and the changes in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. The cost of these updates needs to be considered in decisions concerning the feasibility of flexible servicing schedules.
Accuracy Rates of Ancestry Estimation by Forensic Anthropologists Using Identified Forensic Cases.
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2017-07-01
A common task in forensic anthropology involves the estimation of the ancestry of a decedent by comparing their skeletal morphology and measurements to skeletons of individuals from known geographic groups. However, the accuracy rates of ancestry estimation methods in actual forensic casework have rarely been studied. This article uses 99 forensic cases with identified skeletal remains to develop accuracy rates for ancestry estimations conducted by forensic anthropologists. The overall rate of correct ancestry estimation from these cases is 90.9%, which is comparable to most research-derived rates and those reported by individual practitioners. Statistical tests showed no significant difference in accuracy rates depending on examiner education level or on the estimated or identified ancestry. More recent cases showed a significantly higher accuracy rate. The incorporation of metric analyses into the ancestry estimate in these cases led to a higher accuracy rate. © 2017 American Academy of Forensic Sciences.
Gorny, Alexander Wilhelm; Liew, Seaw Jia; Tan, Chuen Seng; Müller-Riemenschneider, Falk
2017-10-20
Many modern smart watches and activity trackers feature an optical sensor that estimates the wearer's heart rate. Recent studies have evaluated the performance of these consumer devices in the laboratory. The objective of our study was to examine the accuracy and sensitivity of a common wrist-worn tracker device in measuring heart rates and detecting 1-min bouts of moderate to vigorous physical activity (MVPA) under free-living conditions. Ten healthy volunteers were recruited from a large university in Singapore to participate in a limited field test, followed by a month of continuous data collection. During the field test, each participant wore one Fitbit Charge HR activity tracker and one Polar H6 heart rate monitor. Fitbit measures were accessed at 1-min intervals, while Polar readings were available for 10-s intervals. We derived intraclass correlation coefficients (ICCs) for individual participants comparing heart rate estimates. We applied Centers for Disease Control and Prevention heart rate zone cut-offs to ascertain the sensitivity and specificity of Fitbit in identifying 1-min epochs falling into the MVPA heart rate zone. We collected paired heart rate data for 2509 1-min epochs in 10 individuals under free-living conditions over sessions of 3 to 6 hours. The overall ICC comparing 1-min Fitbit measures with average 10-s Polar H6 measures for the same epoch was .83 (95% CI .63-.91). On average, the Fitbit tracker underestimated heart rate measures by -5.96 bpm (standard error, SE=0.18). At the low intensity heart rate zone, the underestimate was smaller at -4.22 bpm (SE=0.15). This underestimate grew to -16.2 bpm (SE=0.74) in the MVPA heart rate zone. Fitbit devices detected 52.9% (192/363) of MVPA heart rate zone epochs correctly. Positive and negative predictive values were 86.1% (192/223) and 92.52% (2115/2286), respectively. During the subsequent month of continuous data collection (270 person-days), only 3.9% of 1-min epochs could be categorized as MVPA according to heart rate zones. This measure was affected by decreasing wear time and adherence over the period of follow-up. Under free-living conditions, Fitbit trackers are affected by significant systematic errors. Improvements in tracker accuracy and sensitivity when measuring MVPA are required before they can be considered for use in the context of exercise prescription to promote better health. ©Alexander Wilhelm Gorny, Seaw Jia Liew, Chuen Seng Tan, Falk Müller-Riemenschneider. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 20.10.2017.
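As an illustration of the epoch-level agreement statistics reported above, the following sketch tabulates sensitivity, specificity, and predictive values from paired 1-min heart-rate epochs. The fixed 120 bpm MVPA cutoff and the input arrays are placeholders; the study applied CDC heart-rate zone cut-offs rather than a single threshold.

```python
import numpy as np

def mvpa_confusion_stats(fitbit_hr, polar_hr, mvpa_cutoff_bpm=120.0):
    """Classify paired 1-min epochs as MVPA / non-MVPA by a heart-rate
    cutoff and tabulate tracker agreement with the chest-strap reference.
    The 120 bpm cutoff is a placeholder for CDC heart-rate zones."""
    fitbit_hr = np.asarray(fitbit_hr, dtype=float)
    polar_hr = np.asarray(polar_hr, dtype=float)

    pred = fitbit_hr >= mvpa_cutoff_bpm   # tracker says MVPA
    truth = polar_hr >= mvpa_cutoff_bpm   # reference says MVPA

    tp = np.sum(pred & truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    tn = np.sum(~pred & ~truth)

    return {
        "sensitivity": tp / (tp + fn),    # share of true MVPA epochs detected
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),            # positive predictive value
        "npv": tn / (tn + fn),
        "mean_bias_bpm": float(np.mean(fitbit_hr - polar_hr)),
    }
```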
Heat transfer about a vertical permeable membrane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaviany, M.
1988-05-01
The natural convection heat transfer about both sides of vertical walls without any seepage has been studied, and the effects of the wall thickness and thermal conductivity on the local and average heat transfer rates have been determined. Viskanta and Lankford concluded that, in predicting the heat transfer rate through low-thermal-conductivity walls, the a priori unknown wall surface temperatures can be estimated as the arithmetic average of the reservoir temperatures without loss of accuracy (for most practical situations). Sparrow and Prakash treated the surface temperature as variable but used the local temperature along with the available isothermal boundary-layer analysis for determination of the local heat transfer rate and found this to be reasonable at relatively low Grashof numbers. In this study the heat transfer rate between two reservoirs of different temperature connected in part through a permeable membrane is analyzed. Rather than solving the complete problem numerically for the three domains (fluid-wall-fluid), the available results on the effects of suction and blowing on the natural convection boundary layer are used in an analysis of membranes with low thermal conductivity and small seepage velocities, which are characteristic of the membranes considered. This leads to rather simple expressions for the determination of the heat transfer rate.
Mahajan, Ruhi; Viangteeravat, Teeradache; Akbilgic, Oguz
2017-12-01
A timely diagnosis of congestive heart failure (CHF) is crucial to avert a life-threatening event. This paper presents a novel probabilistic symbol pattern recognition (PSPR) approach to detect CHF in subjects from their cardiac interbeat (R-R) intervals. PSPR discretizes each continuous R-R interval time series by mapping it onto an eight-symbol alphabet and then models the pattern transition behavior in the symbolic representation of the series. The PSPR-based analysis of the discretized series from 107 subjects (69 normal and 38 CHF subjects) yielded discernible features to distinguish normal subjects and subjects with CHF. In addition to PSPR features, we also extracted features using time-domain heart rate variability measures such as the average and standard deviation of R-R intervals. An ensemble of bagged decision trees was used to classify the two groups, resulting in a five-fold cross-validation accuracy, specificity, and sensitivity of 98.1%, 100%, and 94.7%, respectively. A 20% holdout validation yielded an accuracy, specificity, and sensitivity of 99.5%, 100%, and 98.57%, respectively. Results from this study suggest that features obtained with the combination of PSPR and long-term heart rate variability measures can be used in developing automated CHF diagnosis tools. Copyright © 2017 Elsevier B.V. All rights reserved.
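A minimal sketch of the symbolization step described above, assuming quantile binning onto an eight-symbol alphabet and a flattened transition-probability matrix as the feature vector; the paper's exact discretization rule and feature set may differ.

```python
import numpy as np

def pspr_features(rr_intervals, n_symbols=8):
    """Map an R-R interval series onto an eight-symbol alphabet and
    return the symbol-transition probability matrix, flattened as a
    feature vector. Quantile binning is an assumption."""
    rr = np.asarray(rr_intervals, dtype=float)

    # Quantile-based bin edges give each symbol roughly equal occupancy.
    edges = np.quantile(rr, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(rr, edges)          # values in 0..n_symbols-1

    # Count transitions symbol[t] -> symbol[t+1].
    counts = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1

    # Row-normalize to transition probabilities (guard empty rows).
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts),
                      where=row_sums > 0)
    return probs.ravel()
```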
Sun-Direction Estimation Using a Partially Underdetermined Set of Coarse Sun Sensors
NASA Astrophysics Data System (ADS)
O'Keefe, Stephen A.; Schaub, Hanspeter
2015-09-01
A comparison of different methods to estimate the sun-direction vector using a partially underdetermined set of cosine-type coarse sun sensors (CSS), while simultaneously controlling the attitude towards a power-positive orientation, is presented. CSS are commonly used in performing power-positive sun-pointing and are attractive due to their relative inexpensiveness, small size, and reduced power consumption. For this study only CSS and rate gyro measurements are available, and the sensor configuration does not provide the global triple coverage required for a unique sun-direction calculation. The methods investigated include a vector average method, a combination of least squares and minimum norm criteria, and an extended Kalman filter approach. All cases are formulated such that precise ground calibration of the CSS is not required. Despite significant biases in the state dynamics and measurement models, Monte Carlo simulations show that an extended Kalman filter approach, despite the underdetermined sensor coverage, can provide degree-level accuracy of the sun-direction vector both with and without a control algorithm running simultaneously. If no rate gyro measurements are available, and rates are partially estimated from CSS, the EKF performance degrades as expected, but is still able to achieve better than 10° accuracy using only CSS measurements.
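The combined least-squares/minimum-norm estimate mentioned above can be sketched as follows; the sensor normals, readings, and activation floor are illustrative. np.linalg.pinv yields the least-squares solution when the active sensors overdetermine the sun vector and the minimum-norm solution when they do not, mirroring the combined criterion.

```python
import numpy as np

def sun_direction_lstsq(normals, measurements, activation_floor=0.0):
    """Estimate the unit sun-direction vector s from cosine-type CSS
    outputs d_i ≈ max(0, n_i · s). Only currently illuminated sensors
    carry cosine information, so shadowed sensors are excluded."""
    normals = np.asarray(normals, dtype=float)    # shape (m, 3), unit normals
    d = np.asarray(measurements, dtype=float)     # shape (m,)

    active = d > activation_floor                 # drop shadowed sensors
    s = np.linalg.pinv(normals[active]) @ d[active]

    norm = np.linalg.norm(s)
    return s / norm if norm > 0 else s            # direction only
```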
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the phantom and patient studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi
2015-05-01
Accuracy, compared to the corresponding actual face, is the most important factor supporting the reliability of forensic facial reconstruction (FFR). A number of methods have been employed to evaluate the objective accuracy of FFR. Recently, attempts have been made to measure the degree of resemblance between a computer-generated FFR and the actual face by geometric surface comparison. In this study, three FFRs were produced employing live adult Korean subjects and three-dimensional computerized modeling software. The deviations of the facial surfaces between each FFR and the head scan CT of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study which applied the same methodology as this study except for the average facial soft tissue depth dataset. The three FFRs of this study that applied the updated dataset demonstrated smaller deviation errors between the facial surfaces of the FFR and the corresponding subject than those from the previous study. The results suggest that appropriate average tissue depth data are important to increase the quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.
Tropical rain mapping radar on the Space Station
NASA Technical Reports Server (NTRS)
Im, Eastwood; Li, Fuk
1989-01-01
The conceptual design for a tropical rain mapping radar for flight on the manned Space Station is discussed. In this design the radar utilizes a narrow, dual-frequency (9.7 GHz and 24.1 GHz) beam, electronically scanned antenna to achieve high spatial (4 km) and vertical (250 m) resolutions and a relatively large (800 km) cross-track swath. An adaptive scan strategy will be used for better utilization of radar energy and dwell time. Such a system can detect precipitation at rates of up to 100 mm/hr with accuracies of roughly 15 percent. With the proposed space-time sampling strategy, the monthly averaged rainfall rate can be estimated to within 8 percent, which is essential for many climatological studies.
Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S
2015-01-01
The technology for evaluating patient-provider interactions in psychotherapy (observational coding) has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings was evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.
In vivo study of flow-rate accuracy of the MedStream Programmable Infusion System.
Venugopalan, Ramakrishna; Ginggen, Alec; Bork, Toralf; Anderson, William; Buffen, Elaine
2011-01-01
Flow-rate accuracy and precision are important parameters for optimizing the efficacy of programmable intrathecal (IT) infusion pump delivery systems. Current programmable IT pumps are accurate within ±14.5% of their programmed infusion rate when assessed under ideal environmental conditions and specific flow-rate settings in vitro. We assessed the flow-rate accuracy of a novel programmable pump system across its entire flow-rate range under typical conditions in sheep (in vivo) and nominal conditions in vitro. The flow-rate accuracy of the MedStream Programmable Pump was assessed in both the in vivo and in vitro settings. In vivo flow-rate accuracy was assessed in 16 sheep at various flow rates (producing 90 flow intervals) over 90 ± 3 days. Pumps were then explanted and re-sterilized, and in vitro flow-rate accuracy was assessed at 37°C and 1013 mBar (80 flow intervals). In vivo (sheep body temperatures 38.1°C-39.8°C), the mean ± SD flow-rate error was 9.32% ± 9.27% and the mean ± SD leak-rate was 0.028 ± 0.08 mL/day. Following explantation, the mean in vitro flow-rate error and leak-rate were -1.05% ± 2.55% and 0.003 ± 0.004 mL/day (37°C, 1013 mBar), respectively. The MedStream Programmable Pump demonstrated high flow-rate accuracy when tested in vivo and in vitro at normal body temperature and environmental pressure, as well as when tested in vivo at variable sheep body temperature. The flow-rate accuracy of the MedStream Programmable Pump across its flow-rate range compares favorably to the accuracy of current clinically utilized programmable IT infusion pumps reported at specific flow-rate settings and conditions. © 2011 International Neuromodulation Society.
Heart-rate monitoring by air pressure and causal analysis
NASA Astrophysics Data System (ADS)
Tsuchiya, Naoki; Nakajima, Hiroshi; Hata, Yutaka
2011-06-01
Among the many vital signs, heart rate (HR) is an important index for diagnosing a person's health condition. For instance, HR provides early indications of cardiac disease, autonomic nerve behavior, and so forth. Currently, however, HR is measured only in medical checkups and clinical diagnosis in the resting state, using an electrocardiograph (ECG). Thus, some serious cardiac events in daily life could be missed, and continuous HR monitoring over 24 hours is desired. Considering use in daily life, the monitoring should be noninvasive and unobtrusive. In this paper, HR monitoring during sleep using air pressure sensors is proposed. The HR monitoring is realized by employing causal analysis between air pressure and HR, with the causality described using fuzzy logic. In an experiment on 7 males aged 22-25 (23 on average), the correlation coefficient against ECG was 0.73-0.97 (0.85 on average). In addition, the cause-effect structure for HR monitoring was rearranged by causal decomposition, and the rearranged causality was applied to HR monitoring in a sitting posture. In an additional experiment on 6 males, the correlation coefficient was 0.66-0.86 (0.76 on average). The proposed method is therefore suggested to have sufficient accuracy and robustness for some daily use cases.
Control of humanoid robot via motion-onset visual evoked potentials
Li, Wei; Li, Mengfan; Zhao, Jing
2015-01-01
This paper investigates controlling humanoid robot behavior via motion-onset specific N200 potentials. In this study, N200 potentials are induced by moving a blue bar through robot images intuitively representing robot behaviors to be controlled with the mind. We present the individual impact of each subject on N200 potentials and discuss how to deal with individuality to obtain a high accuracy. The study results document an off-line average accuracy of 93% for hitting targets across five subjects, so we use this major component of the motion-onset visual evoked potential (mVEP) to code people's mental activities and to perform two types of on-line operation tasks: navigating a humanoid robot in an office environment with an obstacle and picking up an object. We discuss the factors that affect the on-line control success rate and the total time for completing an on-line operation task. PMID:25620918
Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2016-01-01
Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians.
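A toy sketch of rule-based attribute extraction in the spirit of the abstract; the field names and regular expressions below are invented stand-ins for the rules developed on the 140 ClinicalTrials.gov records, not the paper's actual rule set.

```python
import re

def extract_trial_attributes(record_text):
    """Rule-based extraction of a few trial attributes from a
    ClinicalTrials.gov-style free-text record (illustrative only)."""
    rules = {
        "enrollment": re.compile(r"Enrollment:\s*(\d+)", re.I),
        "phase": re.compile(r"Phase:\s*(Phase [1-4](?:/Phase [1-4])?)", re.I),
        "status": re.compile(r"Overall Status:\s*([A-Za-z ,]+)", re.I),
    }
    out = {}
    for field, pattern in rules.items():
        m = pattern.search(record_text)
        out[field] = m.group(1).strip() if m else None
    return out
```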
Automated thematic mapping and change detection of ERTS-A images
NASA Technical Reports Server (NTRS)
Gramenopoulos, N. (Principal Investigator)
1975-01-01
The author has identified the following significant results. In the first part of the investigation, spatial and spectral features were developed and employed to automatically recognize terrain features through a clustering algorithm. In this part of the investigation, the size of the cell, that is, the number of digital picture elements used for computing the spatial and spectral features, was varied. It was determined that the accuracy of terrain recognition decreases slowly as the cell size is reduced, coinciding with increased cluster diffuseness. It was also shown that a cell size of 17 x 17 pixels, when used with the clustering algorithm, results in high recognition rates for major terrain classes. ERTS-1 data from five diverse geographic regions of the United States were processed through the clustering algorithm with 17 x 17 pixel cells. Simple land use maps were produced, and the average terrain recognition accuracy was 82 percent.
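A minimal sketch of the cell-based clustering idea, assuming one spectral feature (block mean) and one crude spatial feature (block standard deviation) per 17 x 17 cell, with k-means as a stand-in for the study's clustering algorithm; the feature set and class count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_terrain(image, cell=17, n_classes=5):
    """Tile a single-band image into cell x cell blocks, compute two
    simple features per block, and cluster the blocks into terrain
    classes. Returns one label per cell as a simple land-use map."""
    h, w = image.shape
    rows, cols = h // cell, w // cell
    feats = []
    for r in range(rows):
        for c in range(cols):
            block = image[r*cell:(r+1)*cell, c*cell:(c+1)*cell]
            feats.append([block.mean(), block.std()])  # spectral, spatial
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(np.array(feats))
    return labels.reshape(rows, cols)
```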
Electromyogram whitening for improved classification accuracy in upper limb prosthesis control.
Liu, Lukai; Liu, Pu; Clancy, Edward A; Scheme, Erik; Englehart
2013-09-01
Time and frequency domain features of the surface electromyogram (EMG) signal acquired from multiple channels have frequently been investigated for use in controlling upper-limb prostheses. A common control method is EMG-based motion classification. We propose the use of EMG signal whitening as a preprocessing step in EMG-based motion classification. Whitening decorrelates the EMG signal and has been shown to be advantageous in other EMG applications including EMG amplitude estimation and EMG-force processing. In a study of ten intact subjects and five amputees with up to 11 motion classes and ten electrode channels, we found that the coefficient of variation of time domain features (mean absolute value, average signal length and normalized zero crossing rate) was significantly reduced due to whitening. When using these features along with autoregressive power spectrum coefficients, whitening added approximately five percentage points to classification accuracy when small window lengths were considered.
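A minimal single-channel whitening sketch using an autoregressive prediction-error filter, one common way to temporally decorrelate EMG; the AR order is an assumption, and the study's whitening filters may be calibrated per subject and channel.

```python
import numpy as np

def whiten_emg(x, order=4):
    """Temporally whiten one EMG channel by fitting an AR model via
    least squares and returning the prediction-error (residual) signal,
    which is approximately decorrelated."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()

    # Least-squares fit: x[t] ≈ sum_k a[k] * x[t-k-1].
    rows = np.column_stack([x[order - k - 1: len(x) - k - 1]
                            for k in range(order)])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)

    # Prediction error e[t] = x[t] - sum_k a[k] * x[t-k-1].
    return x[order:] - rows @ a
```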
Finger vein recognition based on finger crease location
NASA Astrophysics Data System (ADS)
Lu, Zhiying; Ding, Shumeng; Yin, Jing
2016-07-01
Finger vein recognition technology has significant advantages over other methods in terms of accuracy, uniqueness, and stability, and it has wide promising applications in the field of biometric recognition. We propose using finger creases to locate and extract an object region. Then we use linear fitting to overcome the problem of finger rotation in the plane. The method of modular adaptive histogram equalization (MAHE) is presented to enhance image contrast and reduce computational cost. To extract the finger vein features, we use a fusion method, which can obtain clear and distinguishable vein patterns under different conditions. We used the Hausdorff average distance algorithm to examine the recognition performance of the system. The experimental results demonstrate that MAHE can better balance the recognition accuracy and the expenditure of time compared with three other methods. Our resulting equal error rate throughout the total procedure was 3.268% in a database of 153 finger vein images.
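The Hausdorff average distance used for matching can be sketched as follows, assuming the common modified (average) Hausdorff definition over two feature point sets; the paper's exact variant is not specified. A probe image would be accepted when its distance to an enrolled template falls below a decision threshold chosen to balance false accepts and false rejects.

```python
import numpy as np

def average_hausdorff(A, B):
    """Modified (average) Hausdorff distance between two 2-D point
    sets, e.g., extracted vein feature points."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)

    # Pairwise Euclidean distances, shape (len(A), len(B)).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

    # Mean nearest-neighbor distance in each direction, then the max.
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```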
Inui, Hiroshi; Taketomi, Shuji; Nakamura, Kensuke; Sanada, Takaki; Tanaka, Sakae; Nakagawa, Takumi
2013-05-01
Few studies have demonstrated improvement in accuracy of rotational alignment using image-free navigation systems mainly due to the inconsistent registration of anatomical landmarks. We have used an image-free navigation for total knee arthroplasty, which adopts the average algorithm between two reference axes (transepicondylar axis and axis perpendicular to the Whiteside axis) for femoral component rotation control. We hypothesized that addition of another axis (condylar twisting axis measured on a preoperative radiograph) would improve the accuracy. One group using the average algorithm (double-axis group) was compared with the other group using another axis to confirm the accuracy of the average algorithm (triple-axis group). Femoral components were more accurately implanted for rotational alignment in the triple-axis group (ideal: triple-axis group 100%, double-axis group 82%, P<0.05). Copyright © 2013 Elsevier Inc. All rights reserved.
Larmer, S G; Sargolzaei, M; Schenkel, F S
2014-05-01
Genomic selection requires a large reference population to accurately estimate single nucleotide polymorphism (SNP) effects. In some Canadian dairy breeds, the available reference populations are not large enough for accurate estimation of SNP effects for traits of interest. If marker phase is highly consistent across multiple breeds, it is theoretically possible to increase the accuracy of genomic prediction for one or all breeds by pooling several breeds into a common reference population. This study investigated the extent of linkage disequilibrium (LD) in 5 major dairy breeds using a 50,000 (50K) SNP panel and 3 of the same breeds using the 777,000 (777K) SNP panel. Correlation of pair-wise SNP phase was also investigated on both panels. The level of LD was measured using the squared correlation of alleles at 2 loci (r²), and the consistency of SNP gametic phases was correlated using the signed square root of these values. Because of the high cost of the 777K panel, the accuracy of imputation from lower density marker panels [6,000 (6K) or 50K] was examined both within breed and using a multi-breed reference population in Holstein, Ayrshire, and Guernsey. Imputation was carried out using FImpute V2.2 and Beagle 3.3.2 software. Imputation accuracies were then calculated as both the proportion of correct SNP filled in (concordance rate) and allelic R². Computation time was also explored to determine the efficiency of the different algorithms for imputation. Analysis showed that LD values >0.2 were found in all breeds at distances at or shorter than the average adjacent pair-wise distance between SNP on the 50K panel. Correlations of r-values, however, did not reach high levels (<0.9) at these distances. High correlation values of SNP phase between breeds were observed (>0.94) when the average pair-wise distances using the 777K SNP panel were examined. High concordance rate (0.968-0.995) and allelic R² (0.946-0.991) were found for all breeds when imputation was carried out with FImpute from 50K to 777K. Imputation accuracy for Guernsey and Ayrshire was slightly lower when using the imputation method in Beagle. Computing time was significantly greater when using Beagle software, with all comparable procedures being 9 to 13 times less efficient, in terms of time, compared with FImpute. These findings suggest that use of a multi-breed reference population might increase prediction accuracy using the 777K SNP panel and that 777K genotypes can be efficiently and effectively imputed using the lower density 50K SNP panel. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
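A small sketch of the two reported imputation metrics, assuming 0/1/2 allele-count coding and taking allelic R² as the squared Pearson correlation between true and imputed allele dosages, one common reading of the term.

```python
import numpy as np

def imputation_metrics(true_geno, imputed_geno):
    """Concordance rate and allelic R² for imputed genotypes coded as
    0/1/2 allele counts (dosages may be fractional before rounding)."""
    t = np.asarray(true_geno, dtype=float).ravel()
    i = np.asarray(imputed_geno, dtype=float).ravel()

    concordance = np.mean(np.round(i) == t)   # share of correctly filled genotypes
    r = np.corrcoef(t, i)[0, 1]               # dosage correlation
    return concordance, r ** 2
```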
PubChem3D: Conformer generation
2011-01-01
Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigated the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and force field variant) upon the accuracy of theoretical conformer models, and determined optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value to a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value to use that is based on the relative accuracy to reproduce bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
Automatically rating trainee skill at a pediatric laparoscopic suturing task.
Oquendo, Yousi A; Riddle, Elijah W; Hiller, Dennis; Blinman, Thane A; Kuchenbecker, Katherine J
2018-04-01
Minimally invasive surgeons must acquire complex technical skills while minimizing patient risk, a challenge that is magnified in pediatric surgery. Trainees need realistic practice with frequent detailed feedback, but human grading is tedious and subjective. We aim to validate a novel motion-tracking system and algorithms that automatically evaluate trainee performance of a pediatric laparoscopic suturing task. Subjects (n = 32) ranging from medical students to fellows performed two trials of intracorporeal suturing in a custom pediatric laparoscopic box trainer after watching a video of ideal performance. The motions of the tools and endoscope were recorded over time using a magnetic sensing system, and both tool grip angles were recorded using handle-mounted flex sensors. An expert rated the 63 trial videos on five domains from the Objective Structured Assessment of Technical Skill (OSATS), yielding summed scores from 5 to 20. Motion data from each trial were processed to calculate 280 features. We used regularized least squares regression to identify the most predictive features from different subsets of the motion data and then built six regression tree models that predict summed OSATS score. Model accuracy was evaluated via leave-one-subject-out cross-validation. The model that used all sensor data streams performed best, achieving 71% accuracy at predicting summed scores within 2 points, 89% accuracy within 4, and a correlation of 0.85 with human ratings. 59% of the rounded average OSATS score predictions were perfect, and 100% were within 1 point. This model employed 87 features, including none based on completion time, 77 from tool tip motion, 3 from tool tip visibility, and 7 from grip angle. Our novel hardware and software automatically rated previously unseen trials with summed OSATS scores that closely match human expert ratings. Such a system facilitates more feedback-intensive surgical training and may yield insights into the fundamental components of surgical skill.
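A compact sketch of regularized least squares with leave-one-subject-out cross-validation, as described above; the ridge penalty, score clipping, and feature matrix are illustrative, and the paper additionally selects among the 280 candidate motion features.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut

def loso_osats_predictions(X, y, subject_ids, alpha=1.0):
    """Predict summed OSATS scores (5-20) from motion features with
    ridge regression, holding out one subject at a time so a trainee's
    own trials never inform their predicted score."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    preds = np.empty_like(y)
    for train, test in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        model = Ridge(alpha=alpha).fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    return np.clip(preds, 5, 20)   # scores are bounded by the rating scale
```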
Domínguez, Rocio Berenice; Moreno-Barón, Laura; Muñoz, Roberto; Gutiérrez, Juan Manuel
2014-01-01
This paper describes a new method based on a voltammetric electronic tongue (ET) for the recognition of distinctive features in coffee samples. An ET was directly applied to different samples from the main Mexican coffee regions without any pretreatment before the analysis. The resulting electrochemical information was modeled with two different mathematical tools, namely Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Growing conditions (i.e., organic or non-organic practices and altitude of crops) were considered for a first classification. LDA results showed an average discrimination rate of 88% ± 6.53% while SVM successfully accomplished an overall accuracy of 96.4% ± 3.50% for the same task. A second classification based on geographical origin of samples was carried out. Results showed an overall accuracy of 87.5% ± 7.79% for LDA and a superior performance of 97.5% ± 3.22% for SVM. Given the complexity of coffee samples, the high accuracy percentages achieved by ET coupled with SVM in both classification problems suggested a potential applicability of ET in the assessment of selected coffee features with a simpler and faster methodology along with a null sample pretreatment. In addition, the proposed method can be applied to authentication assessment while improving cost, time and accuracy of the general procedure. PMID:25254303
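The LDA-versus-SVM comparison can be sketched as below; feature scaling, the RBF kernel, and the fold count are assumptions not stated in the abstract.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_lda_svm(voltammograms, labels, folds=5):
    """Compare cross-validated accuracy of LDA and SVM classifiers on
    electronic-tongue voltammetric features (illustrative pipeline)."""
    lda = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return {
        name: cross_val_score(clf, voltammograms, labels, cv=folds).mean()
        for name, clf in [("LDA", lda), ("SVM", svm)]
    }
```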
40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.
Code of Federal Regulations, 2010 CFR
2010-07-01
Section 53.53, Protection of Environment (Environmental Protection Agency). (a) Overview. This test procedure is designed to evaluate a candidate method's flow rate accuracy and regulation, flow rate measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function.
Towards 24/7 continuous heart rate monitoring.
Tarniceriu, Adrian; Parak, Jakub; Renevey, Philippe; Nurmi, Marko; Bertschi, Mattia; Delgado-Gonzalo, Ricard; Korhonen, Ilkka
2016-08-01
Heart rate (HR) and HR variability (HRV) carry rich information about physical activity, mental and physical load, physiological status, and health of an individual. When combined with activity monitoring and personalized physiological modelling, HR/HRV monitoring may be used for monitoring of complex behaviors and the impact of behaviors and external factors on the current physiological status of an individual. Optical HR monitoring (OHR) from the wrist provides a comfortable and unobtrusive method for HR/HRV monitoring and is better adhered to by users than traditional ECG electrodes or chest straps. However, OHR power consumption is significantly higher than that of ECG-based methods due to the measurement principle based on optical illumination of the tissue. We developed an algorithmic approach to reduce power consumption of OHR in 24/7 HR trending. We use continuous activity monitoring and a fast-converging frequency domain algorithm to derive a reliable HR estimate in 7.1s (on average, during outdoor sports) to 10.0s (during daily life). The method allows >80% reduction in power consumption in 24/7 OHR monitoring when average HR monitoring is targeted, without significant reduction in tracking accuracy.
Soares, André E R; Schrago, Carlos G
2015-01-07
Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on the age estimates based on a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third scenario permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario (Case III) based on rate variation distributed among the branches. Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxonomic sampling affected divergence time inference but that its impact was significant only if the rates deviated from those derived for the strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, but the impact on precision is more relevant. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Franklin, J.; Duncan, J.
1992-01-01
The Li-Strahler discrete-object canopy reflectance model was tested in two sites, a shrub grass savanna and a degraded shrub savanna on bare soil, in the proposed HAPEX (Hydrologic Atmospheric Pilot Experiment) II/Sahel study area in Niger, West Africa. Average site reflectance was predicted for each site from the reflectances and cover proportions of four components: shrub canopy, background (soil or grass and soil), shaded canopy, and shaded background. Component reflectances were sampled in the SPOT wavebands using a hand-held radiometer. Predicted reflectance was compared to average site reflectance measured using the same radiometer mounted on a backpack with measurements recorded every 5 m along two 1-km transects, also in the SPOT (Systeme Probatoire d'Observation de la Terre) bands. Measurements and predictions were made for each of the three days during the summer growing season, approximately two weeks apart. Red, near infrared reflectance, and the NDVI (normalized difference vegetation index) were all predicted with a high degree of accuracy for the shrub/grass site and with reasonable accuracy for the degraded shrub site.
Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy
NASA Technical Reports Server (NTRS)
Mcnicholl, P.; Alejandro, S.
1992-01-01
The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal to noise ratio (SNR), the noise being determined by the LO power in the shot noise limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration reflecting the random phase rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration). Apart from limiting the high SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a practical impact on lidar design parameters. In high SNR operation, for example, a lidar's efficiency in obtaining mean wind measurements is determined by its repetition rate and not pulse energy or average power. In addition, this variance puts a practical limit on the shot-to-shot hard target performance required of a lidar.
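As a rough order-of-magnitude restatement of this limit (our illustration, not a formula from the abstract): a frequency estimate from a pulse of duration τ scatters by roughly 1/τ from shot to shot, which maps through the Doppler relation to a single-shot velocity spread.

```latex
% Order-of-magnitude sketch; numerical factors depend on pulse shape
% and the frequency estimator used.
\sigma_f \sim \frac{1}{\tau}, \qquad
f_D = \frac{2v}{\lambda} \;\Rightarrow\;
\sigma_v \sim \frac{\lambda}{2\tau}
% e.g., \lambda = 10.6\ \mu\mathrm{m},\ \tau = 1\ \mu\mathrm{s}
% gives \sigma_v on the order of 5 m/s for a single shot.
```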
Schrank, Elisa S; Hitch, Lester; Wallace, Kevin; Moore, Richard; Stanhope, Steven J
2013-10-01
Passive-dynamic ankle-foot orthosis (PD-AFO) bending stiffness is a key functional characteristic for achieving enhanced gait function. However, current orthosis customization methods inhibit objective premanufacture tuning of the PD-AFO bending stiffness, making optimization of orthosis function challenging. We have developed a novel virtual functional prototyping (VFP) process, which harnesses the strengths of computer aided design (CAD) model parameterization and finite element analysis, to quantitatively tune and predict the functional characteristics of a PD-AFO, which is rapidly manufactured via fused deposition modeling (FDM). The purpose of this study was to assess the VFP process for PD-AFO bending stiffness. A PD-AFO CAD model was customized for a healthy subject and tuned to four bending stiffness values via VFP. Two sets of each tuned model were fabricated via FDM using medical-grade polycarbonate (PC-ISO). Dimensional accuracy of the fabricated orthoses was excellent (average 0.51 ± 0.39 mm). Manufacturing precision ranged from 0.0 to 0.74 Nm/deg (average 0.30 ± 0.36 Nm/deg). Bending stiffness prediction accuracy was within 1 Nm/deg using the manufacturer provided PC-ISO elastic modulus (average 0.48 ± 0.35 Nm/deg). Using an experimentally derived PC-ISO elastic modulus improved the optimized bending stiffness prediction accuracy (average 0.29 ± 0.57 Nm/deg). Robustness of the derived modulus was tested by carrying out the VFP process for a disparate subject, tuning the PD-AFO model to five bending stiffness values. For this disparate subject, bending stiffness prediction accuracy was strong (average 0.20 ± 0.14 Nm/deg). Overall, the VFP process had excellent dimensional accuracy, good manufacturing precision, and strong prediction accuracy with the derived modulus. Implementing VFP as part of our PD-AFO customization and manufacturing framework, which also includes fit customization, provides a novel and powerful method to predictably tune and precisely manufacture orthoses with objectively customized fit and functional characteristics.
Tian, Chao; Wang, Lixin; Novick, Kimberly A
2016-10-15
High-precision analysis of atmospheric water vapor isotope compositions, especially δ¹⁷O values, can be used to improve our understanding of multiple hydrological and meteorological processes (e.g., differentiate equilibrium or kinetic fractionation). This study focused on assessing, for the first time, how the accuracy and precision of vapor δ¹⁷O laser spectroscopy measurements depend on vapor concentration, delta range, and averaging-time. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ²H, δ¹⁸O and δ¹⁷O measurements. The sensitivity of accuracy and precision to water vapor concentration was evaluated using two international standards (GISP and SLAP2). The sensitivity of precision to delta value was evaluated using four working standards spanning a large delta range. The sensitivity of precision to averaging-time was assessed by measuring one standard continuously for 24 hours. Overall, the accuracy and precision of the δ²H, δ¹⁸O and δ¹⁷O measurements were high. Across all vapor concentrations, the accuracy of δ²H, δ¹⁸O and δ¹⁷O observations ranged from 0.10‰ to 1.84‰, 0.08‰ to 0.86‰ and 0.06‰ to 0.62‰, respectively, and the precision ranged from 0.099‰ to 0.430‰, 0.009‰ to 0.080‰ and 0.022‰ to 0.054‰, respectively. The accuracy and precision of all isotope measurements were sensitive to concentration, with the higher accuracy and precision generally observed under moderate vapor concentrations (i.e., 10000-15000 ppm) for all isotopes. The precision was also sensitive to the range of delta values, although the effect was not as large compared with the sensitivity to concentration. The precision was much less sensitive to averaging-time than the concentration and delta range effects. The accuracy and precision performance of the T-WVIA depend on concentration but depend less on the delta value and averaging-time. The instrument can simultaneously and continuously measure δ²H, δ¹⁸O and δ¹⁷O values in water vapor, opening a new window to better understand ecological, hydrological and meteorological processes. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1×10⁻²⁰ moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to a previously developed one-step time-averaged method, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water to fuel mass ratio, and pressure.
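The switching logic described above reduces to a one-line branch; the two correlation callables below are placeholders for the paper's fitted functions of fuel/air ratio, water loading, pressure, and temperature.

```python
def chemical_kinetic_time(water_conc_mol_cc, time_averaged_fn, instantaneous_fn,
                          switch_conc=1.0e-20):
    """Two-step evaluation of the chemical kinetic time: a time-averaged
    correlation below the water-concentration switch point and an
    instantaneous correlation above it. The callables are hypothetical
    stand-ins for the paper's fitted correlation functions."""
    if water_conc_mol_cc < switch_conc:
        return time_averaged_fn()      # step one: initial, time-averaged value
    return instantaneous_fn()          # step two: instantaneous value
```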
Pires, Gabriel; Nunes, Urbano; Castelo-Branco, Miguel
2012-06-01
Non-invasive brain-computer interface (BCI) based on electroencephalography (EEG) offers a new communication channel for people suffering from severe motor disorders. This paper presents a novel P300-based speller called lateral single-character (LSC). The LSC performance is compared to that of the standard row-column (RC) speller. We developed LSC, a single-character paradigm comprising all letters of the alphabet following an event strategy that significantly reduces the time for symbol selection, and explores the intrinsic hemispheric asymmetries in visual perception to improve the performance of the BCI. RC and LSC paradigms were tested by 10 able-bodied participants, seven participants with amyotrophic lateral sclerosis (ALS), five participants with cerebral palsy (CP), one participant with Duchenne muscular dystrophy (DMD), and one participant with spinal cord injury (SCI). The averaged results, taking into account all participants who were able to control the BCI online, were significantly higher for LSC, 26.11 bit/min and 89.90% accuracy, than for RC, 21.91 bit/min and 88.36% accuracy. The two paradigms produced different waveforms and the signal-to-noise ratio was significantly higher for LSC. Finally, the novel LSC also showed new discriminative features. The results suggest that LSC is an effective alternative to RC, and that LSC still has a margin for potential improvement in bit rate and accuracy. The high bit rates and accuracy of LSC are a step forward for the effective use of BCI in clinical applications. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Beat-to-beat heart rate estimation fusing multimodal video and sensor data
Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen
2015-01-01
Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference. PMID:26309754
Kamalian, Shervin; Atkinson, Wendy L; Florin, Lauren A; Pomerantz, Stuart R; Lev, Michael H; Romero, Javier M
2014-06-01
Evaluation of the posterior fossa (PF) on 5-mm-thick helical CT images (the current default) has improved diagnostic accuracy compared to 5-mm sequential CT images; however, 5-mm-thick images may not be ideal for PF pathology due to volume averaging of rapid changes in anatomy in the Z-direction. Therefore, we sought to determine whether routine review of 1.25-mm-thin helical CT images has superior accuracy in screening for nontraumatic PF pathology. MRI proof of diagnosis was obtained within 6 h of helical CT acquisition for 90 consecutive ED patients with, and 88 without, posterior fossa lesions. Helical CT images were post-processed at 1.25- and 5-mm axial slice thickness. Two neuroradiologists blinded to the clinical/MRI findings reviewed both image sets. Interobserver agreement and accuracy were rated using Kappa statistics and ROC analysis, respectively. Of the 90/178 (51%) patients who were MR positive, 60/90 (66%) had stroke and 30/90 (33%) had other etiologies. There was excellent interobserver agreement (κ > 0.97) for both thick- and thin-slice assessments. The accuracy, sensitivity, and specificity for 1.25-mm images were 65%, 44%, and 84%, respectively, and for 5-mm images were 67%, 45%, and 85%, respectively. The diagnostic accuracy was not significantly different (p > 0.5). In this cohort of patients with nontraumatic neurological symptoms referred to the posterior fossa, 1.25-mm-thin-slice CT reformatted images do not have superior accuracy compared to 5-mm-thick images. This information has implications for optimizing resource utilization and efficiency in a busy emergency room. Review of 1.25-mm-thin images may help diagnostic accuracy only when review of the default 5-mm-thick images is inconclusive.
Dietrich, Ariana B; Hu, Xiaoqing; Rosenfeld, J Peter
2014-03-01
In the first of two experiments, we compared the accuracy of the P300 concealed information test protocol as a function of the number of trials experienced by subjects and the ERP averages analyzed by investigators. Contrary to Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), we found no evidence that averages based on 100 trials are more accurate than averages based on 66 or 33 trials (all numbers led to accuracies of 84-94%). There was actually a trend favoring the lowest trial numbers. The second study compared the numbers of irrelevant stimuli recalled and recognized in the 3-stimulus protocol versus the complex trial protocol (Rosenfeld in Memory detection: theory and application of the concealed information test, Cambridge University Press, New York, pp 63-89, 2011). Again, in contrast to expectations from Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), there were no differences between protocols, although more irrelevant stimuli were recognized than recalled, and irrelevant 4-digit number group stimuli were neither recalled nor recognized as well as irrelevant city name stimuli. We therefore conclude that stimulus processing in the P300-based complex trial protocol, with no more than 33 sweep averages, is adequate to allow accurate detection of concealed information.
Zhang, Chu; Liu, Fei; He, Yong
2018-02-01
Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted by a median filter (MF). Support vector machine (SVM) models using full sample-average spectra, pixel-wise spectra, and the optimal wavelengths selected by second-derivative spectra all achieved classification accuracy over 80%. First, the SVM models trained on pixel-wise spectra were used to predict the sample-average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models trained on sample-average spectra were used to predict pixel-wise spectra, but achieved classification accuracy below 50%. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set, and resulted in good prediction results for both pixel-wise and sample-average spectra. The overall results indicated the effectiveness of the spectral preprocessing and of adopting pixel-wise spectra, and provide an alternative way of processing data for applications of hyperspectral imaging in the food industry.
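A minimal sketch of the simplest preprocessing step named above, moving average (MA) smoothing of a single spectrum; the window length and the synthetic spectrum are arbitrary assumptions.

```python
import numpy as np

def moving_average(spectrum, window=5):
    """Smooth a 1-D spectrum with a centered moving average (MA)."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

# Example: smooth a noisy synthetic spectrum sampled at 200 wavelengths
wavelengths = np.linspace(900, 1700, 200)
spectrum = np.exp(-((wavelengths - 1300) / 150) ** 2) \
    + 0.05 * np.random.randn(200)
smoothed = moving_average(spectrum, window=7)
```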
Elgendi, Mohamed; Norton, Ian; Brearley, Matt; Abbott, Derek; Schuurmans, Dale
2013-01-01
Photoplethysmogram (PPG) monitoring is not only essential for critically ill patients in hospitals or at home, but also for those undergoing exercise testing. However, processing PPG signals measured after exercise is challenging, especially if the environment is hot and humid. In this paper, we propose a novel algorithm that can detect systolic peaks under challenging conditions, as in the case of emergency responders in tropical conditions. Accurate systolic-peak detection is an important first step for the analysis of heart rate variability. Algorithms based on local maxima-minima, first-derivative, and slope sum are evaluated, and a new algorithm is introduced to improve the detection rate. With 40 healthy subjects, the new algorithm demonstrates the highest overall detection accuracy (99.84% sensitivity, 99.89% positive predictivity). Existing algorithms, such as Billauer's, Li's and Zong's, have comparable although lower accuracy. However, the proposed algorithm presents an advantage for real-time applications by avoiding human intervention in threshold determination. For best performance, we show that a combination of two event-related moving averages with an offset threshold has an advantage in detecting systolic peaks, even in heat-stressed PPG signals.
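The sketch below illustrates the general idea of two event-related moving averages with an offset threshold; the window lengths and offset factor are illustrative assumptions, not the exact values of the proposed algorithm.

```python
import numpy as np

def moving_avg(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def detect_systolic_peaks(ppg, fs):
    """Two event-related moving averages plus an offset threshold."""
    y = np.clip(ppg, 0, None) ** 2            # keep positive part, square
    ma_peak = moving_avg(y, int(0.111 * fs))  # ~ systolic peak duration
    ma_beat = moving_avg(y, int(0.667 * fs))  # ~ one beat duration
    thresh = ma_beat + 0.02 * np.mean(y)      # offset threshold
    blocks = ma_peak > thresh                 # blocks of interest
    peaks, i = [], 0
    while i < len(blocks):
        if blocks[i]:
            j = i
            while j < len(blocks) and blocks[j]:
                j += 1
            if j - i >= int(0.111 * fs):      # reject too-short blocks
                peaks.append(i + int(np.argmax(ppg[i:j])))
            i = j
        else:
            i += 1
    return np.asarray(peaks)

# Example on a synthetic 1.2 Hz pulse wave
fs = 100
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(len(t))
print(detect_systolic_peaks(ppg, fs)[:5])
```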
Pressure Monitoring Using Hybrid fs/ps Rotational CARS
NASA Technical Reports Server (NTRS)
Kearney, Sean P.; Danehy, Paul M.
2015-01-01
We investigate the feasibility of gas-phase pressure measurements at kHz rates using fs/ps rotational CARS. Femtosecond pump and Stokes pulses impulsively prepare a rotational Raman coherence, which is then probed by a high-energy 6-ps pulse introduced at a time delay from the Raman preparation. Rotational CARS spectra were recorded in N2 contained in a room-temperature gas cell for pressures from 0.1 to 3 atm and probe delays ranging from 10 to 330 ps. Using published self-broadened collisional linewidth data for N2, both the spectrally integrated coherence decay rate and the spectrally resolved decay were investigated as means for detecting pressure. Shot-averaged and single-laser-shot spectra were interrogated for pressure, and the accuracy and precision as a function of probe delay and cell pressure are discussed. Single-shot measurement accuracies were within 0.1 to 6.5% when compared to transducer values, while the precision was generally between 1% and 6% of measured pressure for probe delays of 200 ps or more, and better than 2% as the delay approached 300 ps. A byproduct of the pressure measurement is an independent but simultaneous measurement of the gas temperature.
NASA Astrophysics Data System (ADS)
Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair
2018-04-01
The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables trained on different SVMs. 50 youth archers with an average age of 17.0 years (standard deviation .056), gathered from various archery programmes, completed a one-end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to the shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. linear, quadratic and cubic kernel functions, were trained on the aforementioned variables. The k-means analysis clustered the archers into high potential archers (HPA) and low potential archers (LPA), respectively. The linear SVM exhibited good accuracy, with a classification accuracy of 94%, in comparison with the other tested models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected bio-physiological variables examined.
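A toy sketch of the described pipeline, k-means labelling followed by SVMs with linear, quadratic and cubic kernels, using synthetic data in place of the archers' bio-physiological measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 50 archers, 5 bio-physiological variables,
# a latent shooting score that drives the HPA/LPA clustering.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
scores = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=50)

# Cluster archers into two groups (HPA vs. LPA) from their scores
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    scores.reshape(-1, 1))

# Train SVMs with linear, quadratic and cubic kernels on the variables
for kernel, degree in [("linear", 3), ("poly", 2), ("poly", 3)]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, degree=degree))
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(kernel, degree, f"{acc:.2f}")
```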
Bryan's effect and anisotropic nonlinear damping
NASA Astrophysics Data System (ADS)
Joubert, Stephan V.; Shatalov, Michael Y.; Fay, Temple H.; Manzhirov, Alexander V.
2018-03-01
In 1890, G. H. Bryan discovered the following: "The vibration pattern of a revolving cylinder or bell revolves at a rate proportional to the inertial rotation rate of the cylinder or bell." We call this phenomenon Bryan's law or Bryan's effect. It is well known that any imperfections in a vibratory gyroscope (VG) affect Bryan's law, and this affects the accuracy of the VG. Consequently, in this paper, we assume that all such imperfections are either minimised or eliminated by some known control method and that only damping is present within the VG. If the damping is isotropic (linear or nonlinear), then it has recently been demonstrated in this journal, using symbolic analysis, that Bryan's law remains invariant. However, it is known that linear anisotropic damping does affect Bryan's law. In this paper, we generalise Rayleigh's dissipation function so that anisotropic nonlinear damping may be introduced into the equations of motion. Using a mixture of numeric and symbolic analysis on the ODEs of motion of the VG, for anisotropic light nonlinear damping, we demonstrate (up to an approximate average) that Bryan's law is affected by any form of such damping, causing pattern drift and compromising the accuracy of the VG.
Azami, Hamed; Escudero, Javier
2015-08-01
Breast cancer is one of the most common types of cancer in women all over the world. Early diagnosis of this kind of cancer can significantly increase the chances of long-term survival. Since diagnosis of breast cancer is a complex problem, neural network (NN) approaches have been used as a promising solution. Considering the low speed of the back-propagation (BP) algorithm to train a feed-forward NN, we consider a number of improved NN trainings for the Wisconsin breast cancer dataset: BP with momentum, BP with adaptive learning rate, BP with adaptive learning rate and momentum, Polak-Ribière conjugate gradient algorithm (CGA), Fletcher-Reeves CGA, Powell-Beale CGA, scaled CGA, resilient BP (RBP), one-step secant and quasi-Newton methods. An NN ensemble, which is a learning paradigm to combine a number of NN outputs, is used to improve the accuracy of the classification task. Results demonstrate that NN ensemble-based classification methods have better performance than NN-based algorithms. The highest overall average accuracy, 97.68%, is obtained by an NN ensemble trained with RBP under the 50%-50% training-test evaluation method.
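A minimal sketch of the ensemble idea, averaging the class probabilities of several independently initialized feed-forward networks on the scikit-learn copy of the Wisconsin breast cancer dataset; the solver, architecture and 50%-50% split are illustrative choices, not the paper's exact training algorithms.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

# Five independently seeded feed-forward networks form the ensemble
members = [make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(10,),
                                       max_iter=2000,
                                       random_state=seed)).fit(Xtr, ytr)
           for seed in range(5)]

# Combine by averaging predicted class probabilities
proba = np.mean([m.predict_proba(Xte) for m in members], axis=0)
accuracy = np.mean(proba.argmax(axis=1) == yte)
print(f"ensemble accuracy: {accuracy:.4f}")
```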
Adaptive P300 based control system
Jin, Jing; Allison, Brendan Z.; Sellers, Eric W.; Brunner, Clemens; Horki, Petar; Wang, Xingyu; Neuper, Christa
2015-01-01
An adaptive P300 brain-computer interface (BCI) using a 12 × 7 matrix explored new paradigms to improve bit rate and accuracy. During online use, the system adaptively selects the number of flashes to average. Five different flash patterns were tested. The 19-flash paradigm represents the typical row/column presentation (i.e., 12 columns and 7 rows). The 9- and 14-flash A & B paradigms present all items of the 12 × 7 matrix three times using either nine or 14 flashes (instead of 19), decreasing the amount of time to present stimuli. Compared to 9-flash A, 9-flash B decreased the likelihood that neighboring items would flash when the target was not flashing, thereby reducing interference from items adjacent to targets. 14-flash A also reduced adjacent item interference and 14-flash B additionally eliminated successive (double) flashes of the same item. Results showed that accuracy and bit rate of the adaptive system were higher than the non-adaptive system. In addition, 9- and 14-flash B produced significantly higher performance than their respective A conditions. The results also show the trend that the 14-flash B paradigm was better than the 19-flash pattern for naïve users. PMID:21474877
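The paper's adaptive selection rule is not given in the abstract, so the sketch below shows one plausible form of adaptive flash averaging: accumulate per-item classifier scores and stop once the leading item's mean score beats the runner-up by a margin. The margin, score model and stopping cap are assumptions.

```python
import numpy as np

def adaptive_select(score_stream, n_items, margin=1.0, max_flashes=20):
    """Accumulate per-item scores; stop when the leader's mean score
    exceeds the runner-up's by `margin` (hypothetical stopping rule)."""
    sums = np.zeros(n_items)
    counts = np.zeros(n_items)
    means = np.zeros(n_items)
    flash = 0
    for flash, (item, score) in enumerate(score_stream, start=1):
        sums[item] += score
        counts[item] += 1
        means = sums / np.maximum(counts, 1)
        best, second = np.sort(means)[::-1][:2]
        if flash >= n_items and best - second >= margin:
            break
        if flash >= max_flashes * n_items:
            break
    return int(np.argmax(means)), flash

# Simulated use: item 3 of 7 is the attended target
rng = np.random.default_rng(0)
stream = ((i % 7, (1.5 if i % 7 == 3 else 0.0) + rng.normal())
          for i in range(1000))
print(adaptive_select(stream, n_items=7))  # -> (3, flashes used)
```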
Calvo-Ortega, Juan-Francisco; Hermida-López, Marcelino; Moragues-Femenía, Sandra; Pozo-Massó, Miquel; Casals-Farran, Joan
2017-03-01
To evaluate the spatial accuracy of frameless cone-beam computed tomography (CBCT)-guided cranial radiosurgery (SRS) using an end-to-end (E2E) phantom test methodology. Five clinical SRS plans were mapped to an acrylic phantom containing a radiochromic film. The resulting phantom-based plans (E2E plans) were delivered four times. The phantom was set up on the treatment table with intentional misalignments, and CBCT imaging was used to align it prior to E2E plan delivery. Comparisons (global gamma analysis) of the planned and delivered dose to the film were performed using commercial triple-channel film dosimetry software. The distance-to-agreement necessary to achieve a 95% gamma passing rate for a fixed 3% dose difference (DTA95) provided an estimate of the spatial accuracy of CBCT-guided SRS. Systematic (Σ) and random (σ) error components, as well as 95% confidence levels, were derived for the DTA95 metric. The overall systematic spatial accuracy averaged over all tests was 1.4 mm (SD: 0.2 mm), with a corresponding 95% confidence level of 1.8 mm. The systematic (Σ) and random (σ) spatial components of the accuracy derived from the E2E tests were 0.2 mm and 0.8 mm, respectively. The E2E methodology used in this study allowed an estimation of the spatial accuracy of our CBCT-guided SRS procedure. Subsequently, a PTV margin of 2.0 mm is currently used in our department. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Combining Passive Microwave Rain Rate Retrieval with Visible and Infrared Cloud Classification.
NASA Astrophysics Data System (ADS)
Miller, Shawn William
The relation between cloud type and rain rate has been investigated here from different approaches. Previous studies and intercomparisons have indicated that no single passive microwave rain rate algorithm is an optimal choice for all types of precipitating systems. Motivated by the upcoming Tropical Rainfall Measuring Mission (TRMM), an algorithm which combines visible and infrared cloud classification with passive microwave rain rate estimation was developed and analyzed in a preliminary manner using data from the Tropical Ocean Global Atmosphere-Coupled Ocean Atmosphere Response Experiment (TOGA-COARE). Overall correlation with radar rain rate measurements across five case studies showed substantial improvement in the combined algorithm approach when compared to the use of any single microwave algorithm. An automated neural network cloud classifier for use over both land and ocean was independently developed and tested on Advanced Very High Resolution Radiometer (AVHRR) data. The global classifier achieved strict accuracy for 82% of the test samples, while a more localized version achieved strict accuracy for 89% of its own test set. These numbers provide hope for the eventual development of a global automated cloud classifier for use throughout the tropics and the temperate zones. The localized classifier was used in conjunction with gridded 15-minute averaged radar rain rates at 8km resolution produced from the current operational network of National Weather Service (NWS) radars, to investigate the relation between cloud type and rain rate over three regions of the continental United States and adjacent waters. The results indicate a substantially lower amount of available moisture in the Front Range of the Rocky Mountains than in the Midwest or in the eastern Gulf of Mexico.
Moore, D F; Harwood, V J; Ferguson, D M; Lukasik, J; Hannah, P; Getrich, M; Brownell, M
2005-01-01
The accuracy of ribotyping and antibiotic resistance analysis (ARA) for prediction of sources of faecal bacterial pollution in an urban southern California watershed was determined using blinded proficiency samples. Antibiotic resistance patterns and HindIII ribotypes of Escherichia coli (n = 997), and antibiotic resistance patterns of Enterococcus spp. (n = 3657) were used to construct libraries from sewage samples and from faeces of seagulls, dogs, cats, horses and humans within the watershed. The three libraries were analysed to determine the accuracy of host source prediction. The internal accuracy of the libraries (average rate of correct classification, ARCC) with six source categories was 44% for E. coli ARA, 69% for E. coli ribotyping and 48% for Enterococcus ARA. Each library's predictive ability towards isolates that were not part of the library was determined using a blinded proficiency panel of 97 E. coli and 99 Enterococcus isolates. Twenty-eight per cent (by ARA) and 27% (by ribotyping) of the E. coli proficiency isolates were assigned to the correct source category. Sixteen per cent were assigned to the same source category by both methods, and 6% were assigned to the correct category. Addition of 2480 E. coli isolates to the ARA library did not improve the ARCC or proficiency accuracy. In contrast, 45% of Enterococcus proficiency isolates were correctly identified by ARA. None of the methods performed well enough on the proficiency panel to be judged ready for application to environmental samples. Most microbial source tracking (MST) studies published have demonstrated library accuracy solely by the internal ARCC measurement. Low rates of correct classification for E. coli proficiency isolates compared with the ARCCs of the libraries indicate that testing of bacteria from samples that are not represented in the library, such as blinded proficiency samples, is necessary to accurately measure predictive ability. The library-based MST methods used in this study may not be suited for determination of the source(s) of faecal pollution in large, urban watersheds.
Wang, Qingyu; Canton, Gador; Guo, Jian; Guo, Xiaoya; Hatsukami, Thomas S.; Billiar, Kristen L.; Yuan, Chun; Wu, Zheyang
2017-01-01
Background Image-based computational models are widely used to determine atherosclerotic plaque stress/strain conditions and investigate their association with plaque progression and rupture. However, patient-specific vessel material properties are generally lacking in those models, limiting the accuracy of their stress/strain calculations. A noninvasive approach combining in vivo 3D multi-contrast and Cine magnetic resonance imaging (MRI) with computational modeling was introduced to quantify patient-specific carotid plaque material properties for potential plaque model improvements. Vessel material property variation across patients, along the vessel segment, and between baseline and follow-up was investigated. Methods In vivo 3D multi-contrast and Cine MRI carotid plaque data were acquired from 8 patients with follow-up (18 months), with written informed consent obtained. 3D thin-layer models and an established iterative procedure were used to determine parameter values of the Mooney-Rivlin models for the 81 slices from 16 plaque samples. Effective Young's Modulus (YM) values were calculated for comparison and analysis. Results The average effective Young's Modulus (YM) and circumferential shrinkage rate (C-Shrink) values of the 81 slices were 411 kPa and 5.62%, respectively. Slice YM value varied from 70 kPa (softest) to 1284 kPa (stiffest), a 1734% difference. Average slice YM values by vessel varied from 109 kPa (softest) to 922 kPa (stiffest), a 746% difference. Location-wise, the maximum slice YM variation rate within a vessel was 311% (149 kPa vs. 613 kPa). The average slice YM variation rate for the 16 vessels was 134%. The average variation of YM values for all patients from baseline to follow-up was 61.0%, with a range of [-28.4%, 215%]. For the plaque progression study, YM at follow-up showed negative correlation with plaque progression measured by wall thickness increase (WTI) (r = -0.7764, p = 0.0235). Wall thickness at baseline correlated negatively with WTI (r = -0.5253, p = 0.1813). Plaque burden at baseline correlated with YM change between baseline and follow-up (r = 0.5939, p = 0.1205). Conclusion In vivo carotid vessel material properties vary greatly from patient to patient, along the diseased segment within a patient, and with time. The use of patient-specific, location-specific and time-specific material properties in plaque models could potentially improve the accuracy of model stress/strain calculations. PMID:28715441
[Determination of wine original regions using information fusion of NIR and MIR spectroscopy].
Xiang, Ling-Li; Li, Meng-Hua; Li, Jing-Mingz; Li, Jun-Hui; Zhang, Lu-Da; Zhao, Long-Lian
2014-10-01
Geographical origins of wine grapes are significant factors affecting wine quality and wine prices. Tasters' evaluation is a good method but has some limitations, so it is important to discriminate different wine original regions quickly and accurately. The present paper proposes a method to determine wine original regions based on Bayesian information fusion of near-infrared (NIR) transmission spectra and mid-infrared (MIR) ATR spectra of wines. This method improves the determination results by expanding the sources of analysis information. NIR spectra and MIR spectra of 153 wine samples from four different grape-growing regions were collected by near-infrared and mid-infrared Fourier transform spectrometers separately. These four regions, Huailai, Yantai, Gansu and Changli, are all typical geographical origins for Chinese wines. NIR and MIR discriminant models for wine regions were established using partial least squares discriminant analysis (PLS-DA) based on the NIR spectra and MIR spectra separately. In PLS-DA, the regions of wine samples are represented as groups of binary codes; with four wine regions in this paper, four output nodes stand for the categorical variables. The output node values for each sample in the NIR and MIR models were first normalized. These values represent the probabilities of each sample belonging to each category, and served as a priori probabilities input to the Bayesian discriminant formula. The probabilities were substituted into the Bayesian formula to obtain posterior probabilities, by which the class of each sample was judged. Considering the stability of the PLS-DA models, all wine samples were randomly divided into calibration and validation sets ten times. The results of the NIR and MIR discriminant models for the four wine regions were as follows: the average accuracy rates of the calibration sets were 78.21% (NIR) and 82.57% (MIR), and the average accuracy rates of the validation sets were 82.50% (NIR) and 81.98% (MIR). After using the method proposed in this paper, the accuracy rates of calibration and validation rose to 87.11% and 90.87%, respectively, better determination results than either individual spectroscopy. These results suggest that Bayesian information fusion of NIR and MIR spectra is feasible for fast identification of wine original regions.
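A minimal sketch of the fusion step, treating the normalized NIR and MIR PLS-DA outputs as independent class probabilities and combining them with Bayes' rule under a uniform prior; the probability vectors are made up for illustration.

```python
import numpy as np

def bayes_fuse(p_nir, p_mir):
    """Fuse two per-class probability vectors into a posterior
    (independence and uniform-prior assumptions)."""
    post = np.asarray(p_nir) * np.asarray(p_mir)
    return post / post.sum()

p_nir = [0.50, 0.30, 0.15, 0.05]   # Huailai, Yantai, Gansu, Changli
p_mir = [0.40, 0.45, 0.10, 0.05]
print(bayes_fuse(p_nir, p_mir))    # region with highest posterior wins
```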
Meta-analysis of stratus OCT glaucoma diagnostic accuracy.
Chen, Hsin-Yi; Chang, Yue-Cune
2014-09-01
To evaluate the diagnostic accuracy of glaucoma in different stages, different types of glaucoma, and different ethnic groups using Stratus optical coherence tomography (OCT). We searched MEDLINE to identify available articles on diagnostic accuracy of glaucoma published between January 2004 and December 2011. A PubMed (National Center for Biotechnology Information) search using medical subject headings and keywords was executed using the following terms: "diagnostic accuracy" or "receiver operator characteristic" or "area under curve" or "AUC" and "Stratus OCT" and "glaucoma." The search was subsequently limited to publications in English. The area under a receiver operator characteristic curve (AUC) was used to measure the diagnostic performance. A random-effects model was used to estimate the pooled AUC value of the 17 parameters (average retinal nerve fiber layer thickness, temporal quadrant, superior quadrant, nasal quadrant, inferior quadrant, and 1 to 12 o'clock). Meta-regression analysis was used to check the significance of some important factors: (1) glaucoma severity (five stages), (2) glaucoma types (four types), and (3) ethnicity (four categories). The order of accuracy among these parameters was as follows: average > inferior > superior > 7 o'clock > 6 o'clock > 11 o'clock > 12 o'clock > 1 o'clock > 5 o'clock > nasal > temporal > 2 o'clock > 10 o'clock > 8 o'clock > 9 o'clock > 4 o'clock > 3 o'clock. After adjusting for the effects of age, glaucoma severity, glaucoma types, and ethnicity, the average retinal nerve fiber layer thickness provided the highest accuracy of all the OCT parameters. The diagnostic accuracy in Asian populations was significantly lower than that in whites and the other two ethnic groups. Stratus OCT demonstrated good diagnostic capability in differentiating glaucomatous from normal eyes. However, we should be more cautious in applying this instrument to Asian groups in glaucoma management.
Accuracy of Carbohydrate Counting in Adults
Rushton, Wanda E.
2016-01-01
In Brief This study investigates carbohydrate counting accuracy in patients using insulin through a multiple daily injection regimen or continuous subcutaneous insulin infusion. The average accuracy test score for all patients was 59%. The carbohydrate test in this study can be used to emphasize the importance of carbohydrate counting to patients and to provide ongoing education. PMID:27621531
Testing the accuracy of growth and yield models for southern hardwood forests
H. Michael Rauscher; Michael J. Young; Charles D. Webb; Daniel J. Robison
2000-01-01
The accuracy of ten growth and yield models for Southern Appalachian upland hardwood forests and southern bottomland forests was evaluated. In technical applications, accuracy is the composite of both bias (average error) and precision. Results indicate that GHAT, NATPIS, and a locally calibrated version of NETWIGS may be regarded as being operationally valid...
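One standard way to make the bias/precision composite concrete, assuming a squared-error accuracy criterion (an assumption, since the study does not state its exact loss), is the mean-squared-error decomposition:

\[ \mathrm{MSE} = \mathbb{E}\big[(\hat{y}-y)^2\big] = \underbrace{\big(\mathbb{E}[\hat{y}]-y\big)^2}_{\text{bias}^2} + \underbrace{\operatorname{Var}(\hat{y})}_{\text{imprecision}} \]

Under this reading, a growth model can be nearly unbiased on average yet operationally invalid if its predictions scatter widely, and vice versa.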
A statistical model of false negative and false positive detection of phase singularities.
Jacquemet, Vincent
2017-10-01
The complexity of cardiac fibrillation dynamics can be assessed by analyzing the distribution of phase singularities (PSs) observed using mapping systems. Interelectrode distance, however, limits the accuracy of PS detection. To investigate in a theoretical framework the PS false negative and false positive rates in relation to the characteristics of the mapping system and fibrillation dynamics, we propose a statistical model of phase maps with controllable number and locations of PSs. In this model, phase maps are generated from randomly distributed PSs with physiologically-plausible directions of rotation. Noise and distortion of the phase are added. PSs are detected using topological charge contour integrals on regular grids of varying resolutions. Over 100 × 10⁶ realizations of the random field process are used to estimate average false negative and false positive rates using a Monte-Carlo approach. The false detection rates are shown to depend on the average distance between neighboring PSs expressed in units of interelectrode distance, following approximately a power law with exponents in the range of 1.14 to 2 for false negatives and around 2.8 for false positives. In the presence of noise or distortion of phase, false detection rates at high resolution tend to a non-zero noise-dependent lower bound. This model provides an easy-to-implement tool for benchmarking PS detection algorithms over a broad range of configurations with multiple PSs.
NASA Astrophysics Data System (ADS)
Liang, Zhang; Yanqing, Hou; Jie, Wu
2016-12-01
The multi-antenna synchronized receiver (using a common clock) is widely applied in GNSS-based attitude determination (AD), terrain deformation monitoring, and many other applications, since the high-accuracy single-differenced carrier phase can be used to improve the positioning or AD accuracy. The line bias (LB) parameter (fractional bias isolating) must therefore be calibrated in the single-differenced phase equations. In the past decades, researchers estimated the LB as a constant parameter in advance and compensated for it in real time. However, the constant-LB assumption is inappropriate in practical applications because of physical length and permittivity changes of the cables, caused by environmental temperature variation and the instability of the receiver's inner circuit transmitting delay. Considering the LB drift (or colored LB) in practical circumstances, this paper introduces a real-time estimator using an autoregressive moving average (ARMA)-based prediction/whitening filter model or a moving average (MA)-based constant calibration model. In the ARMA-based filter model, four cases, namely AR(1), ARMA(1, 1), AR(2) and ARMA(2, 1), are applied for LB prediction. The real-time relative positioning model using the ARMA-predicted LB is derived, and it is theoretically proved that its positioning accuracy is better than that of the traditional double-differenced carrier phase (DDCP) model. The drifting LB is defined by an integral of the phase temperature changing rate, which is a random walk process if the phase temperature changing rate is white noise; this is validated by analysis of the AR model coefficient. The autocovariance function shows that the LB is indeed time-varying and that estimating it as a constant is not safe, which is also demonstrated by analysis of the LB variation of each visible satellite during a zero- and short-baseline BDS/GPS experiment. Compared to the DDCP approach, in the zero-baseline experiment the LB constant calibration (LBCC) and MA approaches improved the positioning accuracy of the vertical component while slightly degrading the accuracy of the horizontal components. The ARMA(1, 0) model, however, improved the positioning accuracy of all three components, with 40% and 50% improvement of the vertical component for BDS and GPS, respectively. In the short-baseline experiment, compared to the DDCP approach, the LBCC approach yielded poor positioning solutions and degraded the AD accuracy; both the MA and ARMA-based filter approaches improved the AD accuracy. Moreover, the ARMA(1, 0) and ARMA(1, 1) models performed relatively better, improving the elevation angle accuracy by 55% with the ARMA(1, 1) model and 48% with the MA model for GPS, respectively. Furthermore, the drifting LB variation is found to be continuous and slowly cumulative; the variation magnitudes in units of length are almost identical on different frequency carrier phases, so the LB variation does not show obvious correlation between frequencies. Consequently, the wide-lane LB in units of cycles is very stable, while the narrow-lane LB varies largely in time. This reasoning probably also explains the phenomenon that the wide-lane LB originating in the satellites is stable while the narrow-lane LB varies. The results of the ARMA-based filters are better than those of the MA model, which probably implies that modeling the drifting LB can further improve precise point positioning accuracy.
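A minimal sketch of one-step-ahead LB prediction with an ARMA(1, 1) model via statsmodels (ARIMA with d = 0); the simulated series is a toy random-walk-plus-noise stand-in for a real single-differenced line-bias series.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy drifting line-bias series: slow random walk plus measurement noise
rng = np.random.default_rng(1)
lb = np.cumsum(0.001 * rng.standard_normal(500)) \
    + 0.002 * rng.standard_normal(500)

fit = ARIMA(lb, order=(1, 0, 1)).fit()   # AR(1) + MA(1), no differencing
next_lb = fit.forecast(steps=1)          # predicted LB for the next epoch
print(float(next_lb[0]))
```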
Use of Temperature to Improve West Nile Virus Forecasts
NASA Astrophysics Data System (ADS)
Shaman, J. L.; DeFelice, N.; Schneider, Z.; Little, E.; Barker, C.; Caillouet, K.; Campbell, S.; Damian, D.; Irwin, P.; Jones, H.; Townsend, J.
2017-12-01
Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether the inclusion of temperature forcing in a model depicting WNV transmission improves WNV forecast accuracy relative to a baseline model depicting WNV transmission without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that were on average 5%, 10%, 12%, and 6% more accurate, respectively, than the baseline model. These results indicate that use of temperature forcing improves WNV forecast accuracy and provide further evidence that temperatures influence rates of WNV transmission. The findings help build a foundation for implementation of a statistically rigorous system for real-time forecast of seasonal WNV outbreaks and their use as a quantitative decision support tool for public health officials and mosquito control programs.
Effects of diphenhydramine on human eye movements.
Hopfenbeck, J R; Cowley, D S; Radant, A; Greenblatt, D J; Roy-Byrne, P P
1995-04-01
Peak saccadic eye movement velocity (SEV) and average smooth pursuit gain (SP) are reduced in a dose-dependent manner by diazepam and provide reliable, quantitative measures of benzodiazepine agonist effects. To evaluate the specificity of these eye movement effects for agents acting at the central GABA-benzodiazepine receptor complex and the role of sedation in benzodiazepine effects, we studied eye movement effects of diphenhydramine, a sedating drug which does not act at the GABA-benzodiazepine receptor complex. Ten healthy males, aged 19-28 years, with no history of axis I psychiatric disorders or substance abuse, received 50 mg/70 kg intravenous diphenhydramine or a similar volume of saline on separate days 1 week apart. SEV, saccade latency and accuracy, SP, self-rated sedation, and short-term memory were assessed at baseline and at 5, 15, 30, 45, 60, 90 and 120 min after drug administration. Compared with placebo, diphenhydramine produced significant SEV slowing, and increases in saccade latency and self-rated sedation. There was no significant effect of diphenhydramine on smooth pursuit gain, saccade accuracy, or short-term memory. These results suggest that, like diazepam, diphenhydramine causes sedation, SEV slowing, and an increase in saccade latency. Since the degree of diphenhydramine-induced sedation was not correlated with changes in SEV or saccade latency, slowing of saccadic eye movements is unlikely to be attributable to sedation alone. Unlike diazepam, diphenhydramine does not impair smooth pursuit gain, saccadic accuracy, or memory. Different neurotransmitter systems may influence the neural pathways involved in SEV and smooth pursuit gain.
Design, implementation and accuracy of a prototype for medical augmented reality.
Pandya, Abhilash; Siadat, Mohammad-Reza; Auner, Greg
2005-01-01
This paper is focused on prototype development and accuracy evaluation of a medical Augmented Reality (AR) system. The accuracy of such a system is of critical importance for medical use, and is hence considered in detail. We analyze the individual error contributions and the system accuracy of the prototype. A passive articulated arm is used to track a calibrated end-effector-mounted video camera. The live video view is superimposed in real time with the synchronized graphical view of CT-derived segmented object(s) of interest within a phantom skull. The AR accuracy mostly depends on the accuracy of the tracking technology, the registration procedure, the camera calibration, and the image scanning device (e.g., a CT or MRI scanner). The accuracy of the Microscribe arm was measured to be 0.87 mm. After mounting the camera on the tracking device, the AR accuracy was measured to be 2.74 mm on average (standard deviation = 0.81 mm). After using data from a 2-mm-thick CT scan, the AR error remained essentially the same at an average of 2.75 mm (standard deviation = 1.19 mm). For neurosurgery, the acceptable error is approximately 2-3 mm, and our prototype approaches these accuracy requirements. The accuracy could be increased with a higher-fidelity tracking system and improved calibration and object registration. The design and methods of this prototype device can be extrapolated to current medical robotics (due to the kinematic similarity) and neuronavigation systems.
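If the listed error sources are treated as independent, a common (assumed, not stated in the abstract) way to roll them up is addition in quadrature:

\[ e_{\mathrm{AR}} \approx \sqrt{e_{\mathrm{track}}^2 + e_{\mathrm{reg}}^2 + e_{\mathrm{cal}}^2 + e_{\mathrm{scan}}^2} \]

Under this reading, the measured 0.87 mm arm error accounts for only a small part of the observed 2.74 mm overall error, consistent with the authors' suggestion that improved calibration and registration would raise the accuracy.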
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with less than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 and more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
Wang, Liang; Li, Zishen; Zhao, Jiaojiao; Zhou, Kai; Wang, Zhiyu; Yuan, Hong
2016-12-21
Using mobile smart devices to provide urban location-based services (LBS) with sub-meter-level accuracy (around 0.5 m) is a major application field for future global navigation satellite system (GNSS) development. Real-time kinematic (RTK) positioning, which is a widely used GNSS-based positioning approach, can improve the accuracy from about 10-20 m (achieved by the standard positioning services) to about 3-5 cm based on the geodetic receivers. In using the smart devices to achieve positioning with sub-meter-level accuracy, a feasible solution of combining the low-cost GNSS module and the smart device is proposed in this work and a user-side GNSS RTK positioning software was developed from scratch based on the Android platform. Its real-time positioning performance was validated by BeiDou Navigation Satellite System/Global Positioning System (BDS/GPS) combined RTK positioning under the conditions of a static and kinematic (the velocity of the rover was 50-80 km/h) mode in a real urban environment with a SAMSUNG Galaxy A7 smartphone. The results show that the fixed-rates of ambiguity resolution (the proportion of epochs of ambiguities fixed) for BDS/GPS combined RTK in the static and kinematic tests were about 97% and 90%, respectively, and the average positioning accuracies (RMS) were better than 0.15 m (horizontal) and 0.25 m (vertical) for the static test, and 0.30 m (horizontal) and 0.45 m (vertical) for the kinematic test.
Kotani, Yoshihisa; Abumi, Kuniyoshi; Ito, Manabu; Takahata, Masahiko; Sudo, Hideki; Ohshima, Shigeki; Minami, Akio
2007-06-15
The accuracy of pedicle screw placement was evaluated in posterior scoliosis surgeries with or without the use of computer-assisted surgical techniques. In this retrospective cohort study, the pedicle screw placement accuracy in posterior scoliosis surgery was compared between conventional fluoroscopic and computer-assisted surgical techniques. No previous study has systematically analyzed the perforation pattern and comparative accuracy of pedicle screw placement in posterior scoliosis surgery. The 45 patients who received posterior correction surgeries were divided into 2 groups: Group C, manual control (25 patients); and Group N, navigation surgery (20 patients). The average Cobb angles before surgery were 73.7 degrees and 73.1 degrees in Group C and Group N, respectively. Using CT images, vertebral rotation, pedicle axes as measured to the anteroposterior sacral axis and vertebral axis, and insertion angle error were measured. In perforation cases, the angular tendency, insertion point, and length abnormality were evaluated. Perforation was observed in 11% of Group C and 1.8% in Group N. In Group C, medial perforations of left screws were demonstrated in 8 of 9 perforated screws, and 55% were distributed in either L1 or T12. The perforation consistently occurred in pedicles whose axes approached the anteroposterior sacral axis within 5 degrees. The average insertion errors were 8.4 degrees and 5.0 degrees in Group C and Group N, respectively, a significant difference (P < 0.02). The medial perforation in Group C occurred around L1, especially when the pedicle axis approached the anteroposterior sacral axis. This consistent tendency was considered a limitation of fluoroscopic screw insertion, in which the horizontal vertebral image is not visible. The use of a surgical navigation system successfully reduced the perforation rate and insertion angle errors, demonstrating a clear advantage in safe and accurate pedicle screw placement in scoliosis surgery.
Segmentation of Retinal Blood Vessels Based on Cake Filter
Bao, Xi-Rong; Ge, Xin; She, Li-Huang; Zhang, Shi
2015-01-01
Segmentation of retinal blood vessels is significant for the diagnosis and evaluation of ocular diseases like glaucoma and systemic diseases such as diabetes and hypertension. The segmentation of small and low-contrast vessels is still a challenging problem. To solve this problem, a new method based on a cake filter is proposed. Firstly, a quadrature filter bank called the cake filter bank is constructed in the Fourier domain. Then, real-component fusion is used to separate the blood vessels from the background. Finally, the blood vessel network is obtained by an adaptive threshold. Experiments on the STARE database indicate that the new method outperforms traditional ones on small vessel extraction, average accuracy rate, and true and false positive rates. PMID:26636095
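An illustrative sketch of the pipeline's shape: band-pass filtering in the Fourier domain, retention of the real component, and an adaptive threshold. The annular pass band and the global-statistics threshold are crude stand-ins for the actual cake filter bank and self-adaptive threshold.

```python
import numpy as np

def bandpass_real(img, r_lo=0.05, r_hi=0.25):
    """Filter in the Fourier domain and keep the real component."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.hypot(yy / h, xx / w)
    mask = (r >= r_lo) & (r < r_hi)          # annular pass band
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def adaptive_threshold(resp, k=1.0):
    """Simple global-statistics threshold on the filter response."""
    return resp > resp.mean() + k * resp.std()

img = np.random.rand(128, 128)               # placeholder fundus image
vessels = adaptive_threshold(bandpass_real(img))
```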
A re-examination of the effects of biased lineup instructions in eyewitness identification.
Clark, Steven E
2005-10-01
A meta-analytic review of research comparing biased and unbiased instructions in eyewitness identification experiments showed an asymmetry; specifically, that biased instructions led to a large and consistent decrease in accuracy in target-absent lineups, but produced inconsistent results for target-present lineups, with an average effect size near zero (Steblay, 1997). The results for target-present lineups are surprising, and are inconsistent with statistical decision theories (i.e., Green & Swets, 1966). A re-examination of the relevant studies and the meta-analysis of those studies shows clear evidence that correct identification rates do increase with biased lineup instructions, and that biased witnesses make correct identifications at a rate considerably above chance. Implications for theory, as well as police procedure and policy, are discussed.
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
Ducrot, Christian; Gautret, Marjolaine; Pineau, Thierry; Jestin, André
2016-03-14
The objectives of this bibliometric analysis of the scientific literature were to describe the research subjects and the international collaborations in the field of research on infectious diseases in livestock animals, including fishes and honeybees. It was based on articles published worldwide from 2006 through 2013. The source of data was the Web of Science Core Collection®, and only papers fully written in English were considered. Queries were built that combined 130 descriptors related to animal species and 1213 descriptors related to diseases and pathogens. To refine and assess the accuracy of the extracted database, supplementary filters were applied to discard non-specific terms and neighbouring topics, and numerous tests were carried out on samples. For pathogens, annotation was done using a thematic terminology established to link each disease with its corresponding pathogen, which was in turn classified according to its family. A total of 62,754 articles were published in this field during this 8-year period. The average annual growth rate of the number of papers was 5%. This represents the reference data to which we compared the average annual growth rate of articles produced in each of the sub-categories that we defined. Thirty-seven percent of the papers were dedicated to ruminant diseases. Poultry, pigs and fishes were covered by respectively 21, 13 and 14% of the total. Thirty-seven percent of papers concerned bacteria, 33% viruses, 19% parasites, 2% prions, the remaining being multi-pathogens. Research on virology, especially on pigs and poultry, is increasing faster than the average. There also is increasing interest in monogastric species, fish and bees. The average annual growth rate for Asia was 10%, which is high compared to 3% for Europe and 2% for the Americas, indicating that Asia is currently playing a leading role in this field. There is a well-established network of international collaborations. For 75% of the papers, the co-authors were from the same country; for 10%, they were from different countries on the same continent; and for 15%, they were from different continents. The annual growth rate of papers representing international collaborations generally is increasing more quickly than the overall average.
Position and volume estimation of atmospheric nuclear detonations from video reconstruction
NASA Astrophysics Data System (ADS)
Schmitt, Daniel T.
Recent work in digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis on these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means to perform three-dimensional analysis of the blasts for several points in time. The accomplishment of this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision and a 0.015% false positive rate. Detected hotspots are matched across 57-109 degree viewpoints with 76.63% average correct matching by defining their location relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. When 3D reconstruction is applied to the hotspot matching it completes an automated process that has been used to create 168 3D point clouds with 31.6 points per reconstruction with each point having an accuracy of 0.62 meters with 0.35, 0.24, and 0.34 meters of accuracy in the x-, y- and z-direction respectively. As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models and in some cases improve on the level of uncertainty in the yield calculation.
Feasibility study of volumetric modulated arc therapy with constant dose rate for endometrial cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Ruijie; Wang, Junjie, E-mail: junjiewang47@yahoo.com; Xu, Feng
2013-10-01
To investigate the feasibility, efficiency, and delivery accuracy of volumetric modulated arc therapy with constant dose rate (VMAT-CDR) for whole-pelvic radiotherapy (WPRT) of endometrial cancer. The nine-field intensity-modulated radiotherapy (IMRT), VMAT with variable dose rate (VMAT-VDR), and VMAT-CDR plans were created for 9 patients with endometrial cancer undergoing WPRT. The dose distributions of the planning target volume (PTV), organs at risk (OARs), and normal tissue (NT) were compared. The monitor units (MUs) and treatment delivery time were also evaluated. For each VMAT-CDR plan, a dry run was performed to assess the dosimetric accuracy with MatriXX from IBA. Compared with IMRT, the VMAT-CDR plans delivered a slightly greater V₂₀ of the bowel, bladder, pelvis bone, and NT, but significantly decreased the dose to the high-dose region of the rectum and pelvis bone. The MUs decreased from 1105 with IMRT to 628 with VMAT-CDR. The delivery time also decreased from 9.5 to 3.2 minutes. The average gamma pass rate was 95.6% at the 3%/3 mm criteria with MatriXX pretreatment verification for the 9 patients. VMAT-CDR can achieve comparable plan quality with significantly shorter delivery time and a smaller number of MUs compared with IMRT for patients with endometrial cancer undergoing WPRT. It can be accurately delivered and can be an alternative to IMRT on linear accelerators without VDR capability.
Cordeiro, Daniela Valença; Lima, Verônica Castro; Castro, Dinorah P; Castro, Leonardo C; Pacheco, Maria Angélica; Lee, Jae Min; Dimantas, Marcelo I; Prata, Tiago Santos
2011-01-01
To evaluate the influence of optic disc size on the diagnostic accuracy of macular ganglion cell complex (GCC) and conventional peripapillary retinal nerve fiber layer (pRNFL) analyses provided by spectral domain optical coherence tomography (SD-OCT) in glaucoma. Eighty-two glaucoma patients and 30 healthy subjects were included. All patients underwent GCC (7 × 7 mm macular grid, consisting of RNFL, ganglion cell and inner plexiform layers) and pRNFL thickness measurement (3.45 mm circular scan) by SD-OCT. One eye was randomly selected for analysis. Initially, receiver operating characteristic (ROC) curves were generated for different GCC and pRNFL parameters. The effect of disc area on the diagnostic accuracy of these parameters was evaluated using a logistic ROC regression model. Subsequently, 1.5, 2.0, and 2.5 mm(2) disc sizes were arbitrarily chosen (based on data distribution) and the predicted areas under the ROC curves (AUCs) and sensitivities were compared at fixed specificities for each. Average mean deviation index for glaucomatous eyes was -5.3 ± 5.2 dB. Similar AUCs were found for the best pRNFL (average thickness = 0.872) and GCC parameters (average thickness = 0.824; P = 0.19). The coefficient representing disc area in the ROC regression model was not statistically significant for average pRNFL thickness (-0.176) or average GCC thickness (0.088; P ≥ 0.56). AUCs for fixed disc areas (1.5, 2.0, and 2.5 mm(2)) were 0.904, 0.891, and 0.875 for average pRNFL thickness and 0.834, 0.842, and 0.851 for average GCC thickness, respectively. The highest sensitivities - at 80% specificity for average pRNFL (84.5%) and GCC thicknesses (74.5%) - were found with disc sizes fixed at 1.5 mm(2) and 2.5 mm(2). Diagnostic accuracy was similar between pRNFL and GCC thickness parameters. Although not statistically significant, there was a trend for a better diagnostic accuracy of pRNFL thickness measurement in cases of smaller discs. For GCC analysis, an inverse effect was observed.
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. Therefore, to overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results compared to the individual models. PMID:23766729
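A toy sketch of the hybrid idea, ARIMA for the linear part and SVR on lagged residuals for the nonlinear part; parameters are library defaults here, whereas the paper tunes them with particle swarm optimization.

```python
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

# Synthetic series standing in for an annual crime-rate indicator
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=120)) + np.sin(np.arange(120) / 6)

# Linear component: ARIMA
arima = ARIMA(y, order=(1, 1, 1)).fit()
resid = arima.resid                      # nonlinear structure left over

# Nonlinear component: SVR on lagged residuals
lags = 4
Xr = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
yr = resid[lags:]
svr = SVR(C=1.0, epsilon=0.05).fit(Xr, yr)

# Hybrid one-step-ahead forecast = ARIMA forecast + predicted residual
forecast = arima.forecast(steps=1)[0] \
    + svr.predict(resid[-lags:].reshape(1, -1))[0]
print(forecast)
```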
Effect of cabin ventilation rate on ultrafine particle exposure inside automobiles.
Knibbs, Luke D; de Dear, Richard J; Morawska, Lidia
2010-05-01
We alternately measured on-road and in-vehicle ultrafine (<100 nm) particle (UFP) concentrations for 5 passenger vehicles spanning an age range of 18 years. A range of cabin ventilation settings was assessed during 301 trips through a 4 km road tunnel in Sydney, Australia. Outdoor air flow (ventilation) rates under these settings were quantified on open roads using tracer gas techniques. Significant variability in tunnel trip average median in-cabin/on-road (I/O) UFP ratios was observed (0.08 to approximately 1.0). Based on data spanning all test automobiles and ventilation settings, a positive linear relationship was found between outdoor air flow rate and I/O ratio, with the former accounting for a substantial proportion of variation in the latter (R² = 0.81). UFP concentrations recorded in-cabin during tunnel travel were significantly higher than those reported by comparable studies performed on open roadways. A simple mathematical model afforded the ability to predict tunnel trip average in-cabin UFP concentrations with good accuracy. Our data indicate that under certain conditions, in-cabin UFP exposures incurred during tunnel travel may contribute significantly to daily exposure. The UFP exposure of automobile occupants appears strongly related to their choice of ventilation setting and vehicle.
Popovic, Gordana; Harhara, Thana; Pope, Ashley; Al-Awamer, Ahmed; Banerjee, Subrata; Bryson, John; Mak, Ernie; Lau, Jenny; Hannon, Breffni; Swami, Nadia; Le, Lisa W; Zimmermann, Camilla
2018-06-01
Performance status measures are increasingly completed by patients in outpatient cancer settings, but are not well validated for this use. We assessed performance of a patient-reported functional status measure (PRFS, based on the Eastern Cooperative Oncology Group [ECOG]), compared with the physician-completed ECOG, in terms of agreement in ratings and prediction of survival. Patients and physicians independently completed five-point PRFS (lay version of ECOG) and ECOG measures on first consultation at an oncology palliative care clinic. We assessed agreement between PRFS and ECOG using weighted Kappa statistics, and used linear regression to determine factors associated with the difference between PRFS and ECOG ratings. We used the Kaplan-Meier method to estimate the patients' median survival, categorized by PRFS and ECOG, and assessed the predictive accuracy of these measures using the C-statistic. For the 949 patients, there was moderate agreement between PRFS and ECOG (weighted Kappa 0.32; 95% CI: 0.28-0.36). On average, patients' ratings of performance status were worse by 0.31 points (95% CI: 0.25-0.37, P < 0.0001); this tendency was greater for younger patients (P = 0.002) and those with worse symptoms (P < 0.0001). Both PRFS and ECOG scores correlated well with overall survival; the C-statistic was higher for the average of PRFS and ECOG scores (0.619) than for either reported individually (0.596 and 0.604, respectively). Patients tend to rate their performance status worse than physicians do, particularly if they are younger or have a greater symptom burden. The prognostic ability of performance status could be improved by using the average of patient and physician scores. Copyright © 2018 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
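For readers unfamiliar with the agreement statistic used above, here is a minimal sketch of a linearly weighted kappa on five-point performance ratings; the values below are hypothetical, not the study's data:

```python
# Minimal sketch: agreement between patient-reported (PRFS) and
# physician-rated (ECOG) five-point scores via linearly weighted kappa.
import numpy as np
from sklearn.metrics import cohen_kappa_score

prfs = np.array([2, 3, 1, 4, 2, 3, 0, 2, 3, 1])  # patient ratings (0-4), made up
ecog = np.array([1, 3, 1, 3, 2, 2, 0, 1, 3, 1])  # physician ratings (0-4), made up

kappa = cohen_kappa_score(prfs, ecog, weights="linear")
mean_diff = (prfs - ecog).mean()  # > 0 means patients rate themselves worse
print(f"weighted kappa = {kappa:.2f}, mean PRFS-ECOG difference = {mean_diff:+.2f}")
```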
Air traffic control surveillance accuracy and update rate study
NASA Technical Reports Server (NTRS)
Craigie, J. H.; Morrison, D. D.; Zipper, I.
1973-01-01
The results of an air traffic control surveillance accuracy and update rate study are presented. The objective of the study was to establish quantitative relationships between the surveillance accuracies, update rates, and the communication load associated with the tactical control of aircraft for conflict resolution. The relationships are established for typical types of aircraft, phases of flight, and types of airspace. Specific cases are analyzed to determine the surveillance accuracies and update rates required to prevent two aircraft from approaching each other too closely.
Monitoring nocturnal heart rate with bed sensor.
Migliorini, M; Kortelainen, J M; Pärkkä, J; Tenhunen, M; Himanen, S L; Bianchi, A M
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". The aim of this study is to assess the reliability of nocturnal heart rate (HR) estimated from a bed sensor, compared with that obtained from standard electrocardiography (ECG). Twenty-eight sleep-deprived patients were each recorded for one night through a matrix of piezoelectric sensors integrated into the mattress, simultaneously with polysomnography (PSG). The two recording methods were compared in terms of signal quality and differences in heart beat detection. On average, coverage of 92.7% of the total sleep time was obtained for the bed sensor, attesting to the good quality of the recordings. The average beat-to-beat error of the inter-beat intervals was 1.06%. These results suggest good overall signal quality; however, for fast heart rates (HR > 100 bpm), performance was worse: the sensitivity of heart beat detection was 28.4% and the false positive rate was 3.8%, meaning that a large proportion of fast beats were not detected. Measurements made using the bed sensor had a failure rate of less than 10%, especially in periods with HR lower than 70 bpm. For fast heart beats the uncertainty increases, which can be explained by the change in morphology of the bed sensor signal at higher HR.
Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis G.; Atkins, David C.; Narayanan, Shrikanth S.
2015-01-01
The technology for evaluating patient-provider interactions in psychotherapy (observational coding) has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks, including ASR (1,200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally derived empathy ratings was evaluated against human ratings for each provider. Computationally derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and an F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracy, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies. PMID:26630392
NASA Astrophysics Data System (ADS)
Lenzen, Matthias; Merklein, Marion
2017-10-01
In the automotive sector, a major challenge is the deep drawing of modern lightweight sheet metals with limited formability. Conventional material models lack accuracy here because of the complex material behavior. A current field of research takes into account the evolution of the Lankford coefficient. Today, changes in anisotropy with increasing degree of deformation are not considered; only a consolidated average value of the Lankford coefficient is included in conventional material models. This leads to an increasing error in the prediction of the flow behavior and therefore to an inaccurate prognosis of the forming behavior. To increase prediction quality, the strain-dependent Lankford coefficient should be taken into account, because the R-value has a direct effect on the contour of the associated flow rule. Further, the investigated materials show a more or less distinct rate dependency of the yield stress. For this reason, this contribution focuses on the rate dependency of the Lankford coefficient during uniaxial tension. To quantify the influence of strain rate on the Lankford coefficient, tensile tests were performed for three commonly used materials, the aluminum alloy AA6016-T4, the advanced high-strength steel DP800, and the deep-drawing steel DC06, at three different strain rates. The strain measurement was carried out by an optical strain measurement system. An evolution of the Lankford coefficient was observed for all investigated materials. An influence of the deformation velocity on the anisotropy could also be detected.
Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation
NASA Astrophysics Data System (ADS)
Huang, Aiping; Tao, Linwei; Niu, Yilong
2018-04-01
In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optical combining, equal gain combining, and selection combining. A novel adaptive power allocation algorithm (PAA) is proposed to minimize the average BER of the SM-UOMIMO system. Our numerical results indicate an excellent match between the analytical results and numerical simulations, which confirms the accuracy of our derived expressions. Furthermore, the results show that the adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA obtains even better BER performance than the MIMO one, while reducing receiver complexity effectively.
NASA Astrophysics Data System (ADS)
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
Kim, Junetae; Lim, Sanghee; Min, Yul Ha; Shin, Yong-Wook; Lee, Byungtae; Sohn, Guiyun; Jung, Kyung Hae; Lee, Jae-Ho; Son, Byung Ho; Ahn, Sei Hyun; Shin, Soo-Yong; Lee, Jong Won
2016-08-04
Mobile mental-health trackers are mobile phone apps that gather self-reported mental-health ratings from users. They have received great attention from clinicians as tools to screen for depression in individual patients. While several apps that ask simple questions using face emoticons have been developed, there has been no study examining the validity of their screening performance. In this study, we (1) evaluate the potential of a mobile mental-health tracker that uses three daily mental-health ratings (sleep satisfaction, mood, and anxiety) as indicators for depression, (2) discuss three approaches to data processing (ratio, average, and frequency) for generating indicator variables, and (3) examine the impact of adherence on reporting using a mobile mental-health tracker and accuracy in depression screening. We analyzed 5792 sets of daily mental-health ratings collected from 78 breast cancer patients over a 48-week period. Using the Patient Health Questionnaire-9 (PHQ-9) as the measure of true depression status, we conducted a random-effect logistic panel regression and receiver operating characteristic (ROC) analysis to evaluate the screening performance of the mobile mental-health tracker. In addition, we classified patients into two subgroups based on their adherence level (higher adherence and lower adherence) using a k-means clustering algorithm and compared the screening accuracy between the two groups. With the ratio approach, the area under the ROC curve (AUC) is 0.8012, indicating that the performance of depression screening using daily mental-health ratings gathered via mobile mental-health trackers is comparable to the results of PHQ-9 tests. Also, the AUC is significantly higher (P=.002) for the higher adherence group (AUC=0.8524) than for the lower adherence group (AUC=0.7234). This result shows that adherence to self-reporting is associated with a higher accuracy of depression screening. Our results support the potential of a mobile mental-health tracker as a tool for screening for depression in practice. Also, this study provides clinicians with a guideline for generating indicator variables from daily mental-health ratings. Furthermore, our results provide empirical evidence for the critical role of adherence to self-reporting, which represents crucial information for both doctors and patients.
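A minimal sketch of the ROC analysis described above, assuming the "ratio" indicator is the fraction of below-cutoff days in the reporting window; the cutoff, prevalence, and scores are invented for illustration:

```python
# Sketch of the screening evaluation: build a "ratio" indicator from daily
# mental-health ratings and score it against PHQ-9-based depression status
# with an ROC analysis. All numbers below are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n_patients = 78
depressed = rng.random(n_patients) < 0.3  # toy PHQ-9 "true" status

# Assumed form of the ratio approach: fraction of days with a mood rating
# below a cutoff; depressed patients skew higher in this simulation.
ratio = np.clip(rng.normal(0.25 + 0.35 * depressed, 0.15), 0, 1)

auc = roc_auc_score(depressed, ratio)
fpr, tpr, thresholds = roc_curve(depressed, ratio)
print(f"AUC = {auc:.3f} (the paper reports 0.8012 for the ratio approach)")
```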
Information Filtering Based on Users' Negative Opinions
NASA Astrophysics Data System (ADS)
Guo, Qiang; Li, Yang; Liu, Jian-Guo
2013-05-01
The process of heat conduction (HC) has recently found application in information filtering [Zhang et al., Phys. Rev. Lett. 99, 154301 (2007)], which achieves high diversity but low accuracy. The classical HC model predicts users' potential objects of interest based on their collected objects, regardless of negative opinions. In terms of the users' rating scores, we present an improved user-based HC (UHC) information model that takes into account users' positive and negative opinions. Firstly, the objects rated by users are divided into positive and negative categories; then the predicted interesting and disliked object lists are generated by the UHC model. Finally, the recommendation lists are constructed by filtering out the disliked objects from the interesting lists. By implementing the new model based on nine similarity measures, the experimental results for the MovieLens and Netflix datasets show that considering negative opinions greatly enhances the accuracy, measured by the average ranking score, from 0.049 to 0.036 for Netflix and from 0.1025 to 0.0570 for MovieLens, reductions of 26.53% and 44.39%, respectively. Since users prefer to give positive ratings rather than negative ones, the negative opinions contain much more information than the positive ones; negative opinions are therefore very important for understanding users' online collective behaviors and improving the performance of the HC model.
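A toy sketch of the UHC idea, under two assumptions: ratings of 3 or more count as positive opinions, and heat conduction takes the standard item-to-user-to-item averaging form from the cited literature. The rating matrix is made up:

```python
# Toy sketch of the user-based heat-conduction (UHC) filter with negative
# opinions. Assumption: ratings >= 3 are "positive", ratings 1-2 "negative";
# HC propagation averages scores item -> user -> item on the bipartite graph.
import numpy as np

R = np.array([  # users x objects rating matrix (0 = unrated), hypothetical
    [5, 0, 3, 0, 1, 0],
    [4, 4, 0, 2, 0, 0],
    [0, 5, 4, 0, 0, 1],
    [2, 0, 0, 4, 5, 0],
])

def hc_scores(A, user):
    """Heat-conduction scores for one user from binary adjacency A."""
    k_obj = A.sum(axis=0)                        # object degrees
    k_usr = A.sum(axis=1)                        # user degrees
    f = A[user].astype(float)                    # initial "heat" on the user's objects
    h = (A @ f) / np.maximum(k_usr, 1)           # object -> user averaging
    return (A.T @ h) / np.maximum(k_obj, 1)      # user -> object averaging

user = 0
pos = (R >= 3).astype(int)                       # positively rated objects
neg = ((R > 0) & (R < 3)).astype(int)            # negatively rated objects

interest = hc_scores(pos, user)                  # predicted "interesting" scores
dislike = hc_scores(neg, user)                   # predicted "dislike" scores

unrated = R[user] == 0
candidates = np.where(unrated & (interest > dislike))[0]  # filter out dislikes
ranked = candidates[np.argsort(-interest[candidates])]
print("recommended objects for user 0:", ranked)
```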
Mangrove forest distributions and dynamics in Madagascar (1975-2005)
Giri, C.; Muhlhausen, J.
2008-01-01
Mangrove forests of Madagascar are declining, albeit at a much slower rate than the global average. The forests are declining due to conversion to other land uses and forest degradation. However, accurate and reliable information on their present distribution and the rates, causes, and consequences of change has not been available. Earlier studies used remotely sensed data to map and, in some cases, to monitor mangrove forests at a local scale. Nonetheless, a comprehensive national assessment and synthesis was lacking. We interpreted time-series satellite data of 1975, 1990, 2000, and 2005 using a hybrid supervised and unsupervised classification approach. Landsat data were geometrically corrected to an accuracy of ± one-half pixel, an accuracy necessary for change analysis. We used a postclassification change detection approach. Our results showed that Madagascar lost 7% of mangrove forests from 1975 to 2005, to a present extent of ~2,797 km². Deforestation rates and causes varied both spatially and temporally. The forests increased by 5.6% (212 km²) from 1975 to 1990, decreased by 14.3% (455 km²) from 1990 to 2000, and decreased by 2.6% (73 km²) from 2000 to 2005. Similarly, major changes occurred in Bombekota Bay, Mahajamba Bay, the coast of Ambanja, the Tsiribihina River, and Cap St Vincent. The main factors responsible for mangrove deforestation include conversion to agriculture (35%), logging (16%), conversion to aquaculture (3%), and urban development (1%). © 2008 by MDPI.
Thompson, Bradley F; Pingree, Matthew J; Qu, Wenchun; Murthy, Naveen S; Lachman, Nirusha; Hurdle, Mark Friedrich
2018-04-01
Ultrasound is rarely used for guiding lumbosacral epidural steroid injections due to its technical limitations. For example, sonographic imaging lacks the ability to confirm epidural spread and identify vascular uptake. The perceived risk that these limitations pose to human subjects has precluded any large-scale clinical trials to date. To compare the accuracy of ultrasound versus fluoroscopic guidance for first sacral transforaminal epidural injections. Cadaveric comparative study using dichotomous outcomes. A fluoroscopy suite and anatomy laboratory at an academic medical center. Four unembalmed adult human cadavers with no history of spinal surgery. Eight sites were injected twice by one interventionalist, using fluoroscopic and ultrasound guidance. In the fluoroscopy arm, contrast spread was assessed using computed tomography. In the ultrasound arm, latex spread was assessed using gross anatomic dissection. Any visible evidence of epidural spread constituted a positive result. Comparison of the success of obtaining epidural contrast flow was the primary outcome measure. Secondary outcome measures included average duration, rate of intravascular uptake, and quantity of intravascular uptake. All injections performed in both the ultrasound arm and the fluoroscopy arm had positive epidural spread. The average duration was 3.03 minutes with fluoroscopy and 4.76 minutes with ultrasound. The rate of intravascular uptake was 37.5% with fluoroscopy and 50% with ultrasound. Within the ultrasound arm, greater intravascular spread and duration variability were recorded. Although ultrasonography can provide reliable image guidance for cannulating the first sacral foramen in cadavers, it would have limited clinical utility due to its inability to visualize relevant neurovascular structures deep to the osseous roof and exclude intravascular uptake. IV. Copyright © 2018 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
The quality of information about sickle cell disease on the Internet for youth.
Breakey, Vicky R; Harris, Lauren; Davis, Omar; Agarwal, Arnav; Ouellette, Carley; Akinnawo, Elizabeth; Stinson, Jennifer
2017-04-01
Adolescence is a vulnerable time for teens with sickle cell disease (SCD). Although there is evidence to support the use of web-based education to promote self-management skills in patients with chronic illnesses, the quality of SCD-related information on the Internet has not been assessed. A website review was conducted to appraise the quality, content, accuracy, readability, and desirability of online information for the adolescents with SCD. Relevant keywords were searched on the most popular search engines. Websites meeting predetermined criteria were reviewed. The quality of information was appraised using the validated DISCERN tool. Two physicians independently rated website completeness and accuracy. Readability of the sites was documented using the simple measure of gobbledygook (SMOG) scores and the Flesch Reading Ease (FRE). The website features considered desirable by youth were tracked. Search results yielded >600 websites with 25 unique hits meeting criteria. The overall quality of the information was "fair" and the average DISCERN rating score was 50.1 (±9.3, range 31.0-67.5). Only 12 of 25 (48%) websites had scores >50. The average completeness score was 20 of 29 (±5, range 12-27). No errors were identified. The mean SMOG score was 13.04 (±2.80, range 10.21-22.85) and the mean FRE score was 46.05 (±11.47; range 17.50-66.10), suggesting that the material was written well beyond the acceptable reading level for patient education. The websites were text-heavy and lacked the features that appeal to youth (chat, games, videos, etc.). Given the paucity of high-quality health information available for the teens with SCD, it is essential that additional online resources be developed. © 2016 Wiley Periodicals, Inc.
Rate- and accuracy-disabled subtype profiles among adults with dyslexia in the Hebrew orthography.
Shany, Michal; Breznitz, Zvia
2011-01-01
This study examined a subtyping scheme rooted in the dissociation between reading rate and accuracy in an exceptionally large sample of adult readers with dyslexia, using a wide variety of behavioral and event-related potential (ERP) measures. Stage 1 was a behavioral study, in which basic reading skill, reading comprehension, and linguistic and cognitive tasks were administered to 661 university students: learning-disabled students (n = 382) and their non-learning-disabled peers (n = 279). Based on a word reading measure, accuracy-disabled and rate-disabled subgroups were identified, as was a subgroup with deficits in both rate and accuracy. The results support the persistence of a rate versus accuracy dissociation into adulthood. Accuracy disability was related to a broad range of deficits affecting phonological, orthographic, and morphological processing, verbal memory, attention, and reading comprehension. Rate disability appeared to be associated with slower processing of printed material, alongside largely intact functioning resembling that of skilled readers. In stage 2, electroencephalogram (EEG)-ERP measurements were obtained from 140 participants recruited from the larger sample. Activation in visual association cortex, indicated by the N170 amplitude, was found to be lower for accuracy-disabled than skilled readers, and comparable between rate-disabled and skilled readers. The lowest amplitude was found in the double-deficit subgroup. The findings support the existence of distinctive reading disability profiles, based on selective deficits in reading rate versus accuracy and associated with different basic reading, linguistic, and cognitive skills as well as electrophysiological responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gettings, M.B.
A blower-door-directed infiltration retrofit procedure was field tested on 18 homes in south central Wisconsin. The procedure, developed by the Wisconsin Energy Conservation Corporation, includes recommended retrofit techniques as well as criteria for estimating the amount of cost-effective work to be performed on a house. A recommended expenditure level and target air leakage reduction, in air changes per hour at 50 Pascal (ACH50), are determined from the initial leakage rate measured. The procedure produced an average 16% reduction in air leakage rate. For the 7 houses recommended for retrofit, 89% of the targeted reductions were accomplished with 76% of the recommended expenditures. The average cost of retrofits per house was reduced by a factor of four compared with previous programs. The average payback period for recommended retrofits was 4.4 years, based on predicted energy savings computed from achieved air leakage reductions. Although exceptions occurred, the procedure's 8 ACH50 minimum initial leakage rate for advising retrofits to be performed appeared a good choice, based on cost-effective air leakage reduction. Houses with initial rates of 7 ACH50 or below consistently required substantially higher costs to achieve significant air leakage reductions. No statistically significant average annual energy savings was detected as a result of the infiltration retrofits. Average measured savings were -27 therm per year, indicating an increase in energy use, with a 90% confidence interval of 36 therm. Measured savings for individual houses varied widely in both positive and negative directions, indicating that factors not considered affected the results. Large individual confidence intervals indicate a need to increase the accuracy of such measurements as well as understand the factors which may cause such disparity. Recommendations for the procedure include more extensive training of retrofit crews, checks for minimum air exchange rates to ensure air quality, and addition of the basic cost of determining the initial leakage rate to the recommended expenditure level. Recommendations for the field test of the procedure include increasing the number of houses in the sample, more timely examination of metered data to detect anomalies, and the monitoring of indoor air temperature. Though not appropriate in a field test of a procedure, further investigation into the effects of air leakage rate reductions on heating loads needs to be performed.
Validation of Contact-Free Sleep Monitoring Device with Comparison to Polysomnography
Tal, Asher; Shinar, Zvika; Shaki, David; Codish, Shlomi; Goldbart, Aviv
2017-01-01
Study Objectives: To validate a contact-free system designed to achieve maximal comfort during long-term sleep monitoring, together with high monitoring accuracy. Methods: We used a contact-free monitoring system (EarlySense, Ltd., Israel), comprising an under-the-mattress piezoelectric sensor and a smartphone application, to collect vital signs and analyze sleep. Heart rate (HR), respiratory rate (RR), body movement, and calculated sleep-related parameters from the EarlySense (ES) sensor were compared to data simultaneously generated by the gold standard, polysomnography (PSG). Subjects in the sleep laboratory underwent overnight technician-attended full PSG, whereas subjects at home were recorded for 1 to 3 nights with portable partial PSG devices. Data were compared epoch by epoch. Results: A total of 63 subjects (85 nights) were recorded under a variety of sleep conditions. Compared to PSG, the contact-free system showed similar values for average total sleep time (TST), % wake, % rapid eye movement, and % non-rapid eye movement sleep, with 96.1% and 93.3% accuracy of continuous measurement of HR and RR, respectively. We found a linear correlation between TST measured by the sensor and TST determined by PSG, with a coefficient of 0.98 (R = 0.87). Epoch-by-epoch comparison with PSG in the sleep laboratory setting revealed that the system showed sleep detection sensitivity, specificity, and accuracy of 92.5%, 80.4%, and 90.5%, respectively. Conclusions: TST estimates with the contact-free sleep monitoring system were closely correlated with the gold-standard reference. This system shows good sleep staging capability with improved performance over accelerometer-based apps, and collects additional physiological information on heart rate and respiratory rate. Citation: Tal A, Shinar Z, Shaki D, Codish S, Goldbart A. Validation of contact-free sleep monitoring device with comparison to polysomnography. J Clin Sleep Med. 2017;13(3):517–522. PMID:27998378
40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.
Code of Federal Regulations, 2013 CFR
2013-07-01
... pressures and temperatures used in the tests and shall be checked at zero and at least one flow rate within... [Equation 5, CFR graphic ER18jy97.067] (ii) To successfully pass the flow rate CV measurement accuracy test, the absolute...
40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.
Code of Federal Regulations, 2011 CFR
2011-07-01
... pressures and temperatures used in the tests and shall be checked at zero and at least one flow rate within... [Equation 5, CFR graphic ER18jy97.067] (ii) To successfully pass the flow rate CV measurement accuracy test, the absolute...
40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.
Code of Federal Regulations, 2014 CFR
2014-07-01
... pressures and temperatures used in the tests and shall be checked at zero and at least one flow rate within... [Equation 5, CFR graphic ER18jy97.067] (ii) To successfully pass the flow rate CV measurement accuracy test, the absolute...
40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.
Code of Federal Regulations, 2012 CFR
2012-07-01
... pressures and temperatures used in the tests and shall be checked at zero and at least one flow rate within... [Equation 5, CFR graphic ER18jy97.067] (ii) To successfully pass the flow rate CV measurement accuracy test, the absolute...
A Case Study to Improve Emergency Room Patient Flow at Womack Army Medical Center
2009-06-01
use just the previous month; a moving average 2-month period (MA2) uses the average from the previous two months; a moving average 3-month period (MA3)... (ED prior to discharge by provider). MA2/MA3/MA4 - moving averages of 2 to 4 months in length; MAD - mean absolute deviation (measure of accuracy for
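The glossary excerpt above describes simple moving-average forecasts scored by mean absolute deviation; a minimal sketch with invented monthly emergency department volumes:

```python
# Sketch of the moving-average forecasts (MA2-MA4) and the MAD accuracy
# measure named in the case study's glossary. Monthly counts are made up.
import numpy as np

visits = np.array([410, 395, 430, 420, 445, 460, 440, 455, 470, 465])

def ma_forecast(y, k):
    """Forecast each month as the mean of the previous k months."""
    return np.array([y[i - k:i].mean() for i in range(k, len(y))])

for k in (2, 3, 4):  # MA2, MA3, MA4
    fc = ma_forecast(visits, k)
    mad = np.abs(visits[k:] - fc).mean()  # mean absolute deviation
    print(f"MA{k}: MAD = {mad:.1f} visits/month")
```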
Lohsiriwat, Varut; Prapasrivorakul, Siriluck; Lohsiriwat, Darin
2009-01-01
The purposes of this study were to determine the clinical presentations and surgical outcomes of perforated peptic ulcer (PPU), and to evaluate the accuracy of the Boey scoring system in predicting mortality and morbidity. We carried out a retrospective study of patients undergoing emergency surgery for PPU between 2001 and 2006 in a university hospital. Clinical presentations and surgical outcomes were analyzed. The adjusted odds ratio (OR) for each Boey score, relative to a score of zero, was calculated for morbidity and mortality. Receiver operating characteristic curve analysis was used to compare the predictive ability of the Boey score, American Society of Anesthesiologists (ASA) classification, and Mannheim Peritonitis Index (MPI). The study included 152 patients with an average age of 52 years (range: 15-88 years), and 78% were male. The most common site of PPU was the prepyloric region (74%). Primary closure with an omental graft was the most common procedure performed. The overall mortality rate was 9% and the complication rate was 30%. The mortality rate increased progressively with increasing Boey score: 1%, 8% (OR=2.4), 33% (OR=3.5), and 38% (OR=7.7) for scores of 0, 1, 2, and 3, respectively (p<0.001). The morbidity rates for Boey scores of 0, 1, 2, and 3 were 11%, 47% (OR=2.9), 75% (OR=4.3), and 77% (OR=4.9), respectively (p<0.001). The Boey score and ASA classification appeared to be better than the MPI for predicting poor surgical outcomes. Perforated peptic ulcer is associated with high rates of mortality and morbidity. The Boey risk score serves as a simple and precise predictor of postoperative mortality and morbidity.
Parikh, Mili; Hynan, Linda S; Weiner, Myron F; Lacritz, Laura; Ringe, Wendy; Cullum, C Munro
2014-01-01
Alzheimer disease (AD) characteristically begins with episodic memory impairment followed by other cognitive deficits; however, the course of illness varies, with substantial differences in the rate of cognitive decline. For research and clinical purposes it would be useful to distinguish between persons who will progress slowly from persons who will progress at an average or faster rate. Our objective was to use neurocognitive performance features and disease-specific and health information to determine a predictive model for the rate of cognitive decline in participants with mild AD. We reviewed the records of a series of 96 consecutive participants with mild AD from 1995 to 2011 who had been administered selected neurocognitive tests and clinical measures. Based on Clinical Dementia Rating (CDR) of functional and cognitive decline over 2 years, participants were classified as Faster (n = 45) or Slower (n = 51) Progressors. Stepwise logistic regression analyses using neurocognitive performance features, disease-specific, health, and demographic variables were performed. Neuropsychological scores that distinguished Faster from Slower Progressors included Trail Making Test - A, Digit Symbol, and California Verbal Learning Test (CVLT) Total Learned and Primacy Recall. No disease-specific, health, or demographic variable predicted rate of progression; however, history of heart disease showed a trend. Among the neuropsychological variables, Trail Making Test - A best distinguished Faster from Slower Progressors, with an overall accuracy of 68%. In an omnibus model including neuropsychological, disease-specific, health, and demographic variables, only Trail Making Test - A distinguished between groups. Several neuropsychological performance features were associated with the rate of cognitive decline in mild AD, with baseline Trail Making Test - A performance best separating those who declined at an average or faster rate from those who showed slower progression.
Mooney, Robert; Quinlan, Leo R; Corley, Gavin; Godfrey, Alan; Osborough, Conor; ÓLaighin, Gearóid
2017-01-01
Aims: The study aims were to evaluate the validity of two commercially available swimming activity monitors for quantifying temporal and kinematic swimming variables. Methods: Ten national level swimmers (5 male, 5 female; 15.3±1.3 years; 164.8±12.9 cm; 62.4±11.1 kg; 425±66 FINA points) completed a set protocol comprising 1,500 m of swimming involving all four competitive swimming strokes. Swimmers wore the Finis Swimsense and the Garmin Swim activity monitors throughout. The devices automatically identified stroke type, swim distance, lap time, stroke count, stroke rate, stroke length and average speed. Video recordings were also obtained and used as a criterion measure to evaluate performance. Results: A significant positive correlation was found between the monitors and video for the identification of each of the four swim strokes (Garmin: χ²(3) = 31.292, p<0.05; Finis: χ²(3) = 33.004, p<0.05). No significant differences were found for swim distance measurements. Swimming laps performed in the middle of a swimming interval showed no significant difference from the criterion (Garmin: bias -0.065, 95% confidence interval -3.828 to 6.920; Finis: bias -0.02, 95% confidence interval -3.095 to 3.142). However, laps performed at the beginning and end of an interval were not as accurately timed. Additionally, a statistical difference was found for stroke count measurements in all but two occasions (p<0.05). These differences affect the accuracy of stroke rate, stroke length and average speed scores reported by the monitors, as all of these are derived from lap times and stroke counts. Conclusions: Both monitors were found to operate with a relatively similar performance level and appear suited for recreational use. However, issues with feature detection accuracy may be related to individual variances in stroke technique. It is reasonable to expect that this level of error would increase when the devices are used by recreational swimmers rather than elite swimmers. Further development to improve accuracy of feature detection algorithms, specifically for lap time and stroke count, would also increase their suitability within competitive settings. PMID:28178301
A lightweight QRS detector for single lead ECG signals using a max-min difference algorithm.
Pandit, Diptangshu; Zhang, Li; Liu, Chengyu; Chattopadhyay, Samiran; Aslam, Nauman; Lim, Chee Peng
2017-06-01
Detection of the R-peak pertaining to the QRS complex of an ECG signal plays an important role for the diagnosis of a patient's heart condition. To accurately identify the QRS locations from the acquired raw ECG signals, we need to handle a number of challenges, which include noise, baseline wander, varying peak amplitudes, and signal abnormality. This research aims to address these challenges by developing an efficient lightweight algorithm for QRS (i.e., R-peak) detection from raw ECG signals. A lightweight real-time sliding window-based Max-Min Difference (MMD) algorithm for QRS detection from Lead II ECG signals is proposed. Targeting the best trade-off between computational efficiency and detection accuracy, the proposed algorithm consists of five key steps for QRS detection, namely, baseline correction, MMD curve generation, dynamic threshold computation, R-peak detection, and error correction. Five annotated databases from Physionet are used for evaluating the proposed algorithm in R-peak detection. Integrated with a feature extraction technique and a neural network classifier, the proposed QRS detection algorithm has also been extended to undertake normal and abnormal heartbeat detection from ECG signals. The proposed algorithm exhibits a high degree of robustness in QRS detection and achieves an average sensitivity of 99.62% and an average positive predictivity of 99.67%. Its performance compares favorably with those from the existing state-of-the-art models reported in the literature. In regards to normal and abnormal heartbeat detection, the proposed QRS detection algorithm in combination with the feature extraction technique and neural network classifier achieves an overall accuracy rate of 93.44% based on an empirical evaluation using the MIT-BIH Arrhythmia data set with 10-fold cross validation. In comparison with other related studies, the proposed algorithm offers a lightweight adaptive alternative for R-peak detection with good computational efficiency. The empirical results indicate that it not only yields a high accuracy rate in QRS detection, but also exhibits efficient computational complexity at the order of O(n), where n is the length of an ECG signal. Copyright © 2017 Elsevier B.V. All rights reserved.
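A heavily simplified sketch of the five-step pipeline named above (baseline correction, MMD curve, dynamic threshold, peak detection, error correction). The window length, threshold fraction, refractory period, and synthetic signal are assumptions, not the published parameter choices:

```python
# Simplified sketch of sliding-window Max-Min Difference (MMD) R-peak
# detection. The published algorithm's baseline correction and error
# correction are only approximated here.
import numpy as np
from scipy.signal import find_peaks, medfilt

fs = 250  # Hz sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
ecg = np.zeros_like(t)
ecg[::int(0.8 * fs)] = 1.0                    # R-spikes every 0.8 s (~75 bpm)
ecg += 0.1 * np.sin(2 * np.pi * 0.3 * t)      # baseline wander
ecg += 0.05 * rng.normal(size=t.size)         # noise

# 1) Crude baseline correction: subtract a running median.
x = ecg - medfilt(ecg, kernel_size=151)

# 2) MMD curve: max minus min over a short sliding window (~80 ms).
win = int(0.08 * fs)
mmd = np.array([np.ptp(x[i:i + win]) for i in range(len(x) - win)])

# 3) Dynamic threshold (simplified here to a global fraction of the max)
#    and 4) R-peak picking with a 250 ms refractory period.
thr = 0.4 * mmd.max()
peaks, _ = find_peaks(mmd, height=thr, distance=int(0.25 * fs))

# 5) "Error correction" placeholder: drop physiologically implausible RR intervals.
rr = np.diff(peaks) / fs
rr = rr[(rr > 0.3) & (rr < 2.0)]
print(f"detected {peaks.size} beats, mean HR = {60 / rr.mean():.1f} bpm")
```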
Pignone, Michael; Rich, Melissa; Teutsch, Steven M; Berg, Alfred O; Lohr, Kathleen N
2002-07-16
To assess the effectiveness of different colorectal cancer screening tests for adults at average risk. Recent systematic reviews; Guide to Clinical Preventive Services, 2nd edition; and focused searches of MEDLINE from 1966 through September 2001. The authors also conducted hand searches, reviewed bibliographies, and consulted context experts to ensure completeness. When available, the most recent high-quality systematic review was used to identify relevant articles. This review was then supplemented with a MEDLINE search for more recent articles. One reviewer abstracted information from the final set of studies into evidence tables, and a second reviewer checked the tables for accuracy. Discrepancies were resolved by consensus. For average-risk adults older than 50 years of age, evidence from multiple well-conducted randomized trials supported the effectiveness of fecal occult blood testing in reducing colorectal cancer incidence and mortality rates compared with no screening. Data from well-conducted case-control studies supported the effectiveness of sigmoidoscopy and possibly colonoscopy in reducing colon cancer incidence and mortality rates. A nonrandomized, controlled trial examining colorectal cancer mortality rates and randomized trials examining diagnostic yield supported the use of fecal occult blood testing plus sigmoidoscopy. The effectiveness of barium enema is unclear. Data are insufficient to support a definitive determination of the most effective screening strategy. Colorectal cancer screening reduces death from colorectal cancer and can decrease the incidence of disease through removal of adenomatous polyps. Several available screening options seem to be effective, but the single best screening approach cannot be determined because data are insufficient.
The effect of developer age on the detection of approximal caries using three dental films.
Syriopoulos, K; Velders, X L; Sanderink, G C; van Ginkel, F C; van Amerongen, J P; van der Stelt, P F
1999-07-01
To compare the diagnostic accuracy of three dental X-ray films for the detection of approximal caries using fresh and aged processing chemicals. Fifty-six extracted unrestored premolars were radiographed under standardized conditions using the new Dentus M2 (Agfa-Gevaert, Mortsel, Belgium), Ektaspeed Plus and Ultra-speed (Kodak Eastman Co, Rochester, USA) dental films. The films were processed manually using Agfa chemicals (Heraeus Kulzer, Dormagen, Germany). The procedure was repeated once a week until the complete exhaustion of the chemicals (6 weeks). Three independent observers assessed 210 radiographs using the following rating scale: 0 = sound; 1 = enamel lesion; 2 = lesion reaching the ADJ; 3 = dentinal lesion. True caries depth was determined by histological examination (14 sound surfaces, 11 enamel lesions, eight lesions reaching the ADJ and 23 dentinal lesions). True caries depth was subtracted from the values given by the observers and an analysis of variance was performed. The null hypothesis was rejected when P < 0.05. No significant differences in diagnostic accuracy were found between the three films when using chemicals up to 3 weeks old (P = 0.056). After the third week, Ultra-speed was significantly better than the other two films (P = 0.012). On average, caries depth was underestimated. A similar level of diagnostic accuracy for approximal caries is achieved with all three films. Dentus M2 and Ektaspeed Plus are at present the fastest available films and should therefore be recommended for clinical practice. Agfa chemicals should be renewed every 3 weeks; a fifty per cent reduction in average gradient indicates that the processing chemicals need renewing.
Applying a weighted random forests method to extract karst sinkholes from LiDAR data
NASA Astrophysics Data System (ADS)
Zhu, Junfeng; Pierskalla, William P.
2016-02-01
Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface in high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and make it more tractable to map sinkholes using LiDAR data for large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
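A minimal sketch of a class-weighted random forest on imbalanced depression data. The two synthetic geometric features stand in for the paper's 11 predictors, and scikit-learn's class_weight="balanced" is one plausible weighting scheme, not necessarily the authors':

```python
# Sketch: weighted random forest for imbalanced sinkhole/non-sinkhole data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(3)
n = 2000
is_sinkhole = rng.random(n) < 0.1                          # ~10% sinkholes
depth = rng.gamma(2, 1.0, n) + 1.5 * is_sinkhole           # toy predictor 1
circularity = np.clip(rng.normal(0.5 + 0.2 * is_sinkhole, 0.15, n), 0, 1)
X = np.column_stack([depth, circularity])
y = is_sinkhole.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                            random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print(confusion_matrix(y_te, pred))  # rows: true class, cols: predicted class
```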
Liber, Alex C; Warner, Kenneth E
2018-01-01
According to survey data, the prevalence of Americans' self-reported cigarette smoking is dropping steadily. However, the accuracy of national surveys has been questioned because of declining response rates and the increasing stigmatization of smoking. We used data from 2 repeated, cross-sectional, nationally representative health surveys (National Survey on Drug Use and Health (NSDUH), 1979-2014; and National Health Interview Survey (NHIS), 1965-2015) to determine whether self-reported cigarette consumption has changed over time as a proportion of federally taxed cigarette sales. From each survey, we calculated national equivalents of annual cigarette consumption. From 1979 to 1997, the amount of cigarettes that NSDUH and NHIS respondents reported corresponded to an average of 59.5% (standard deviation (SD), 2.3%) and 65.6% (SD, 3.2%), respectively, of taxed cigarette sales. After 1997, respondents' reported smoking data corresponded to the equivalent of an average of 64.2% (SD, 5.9%) and 63.3% (SD, 2.5%), respectively, of taxed cigarette sales. NHIS figures remained steady throughout the latter period, with a decline during 2013-2015 from 65.9% to 61.1%. NSDUH figures increased steadily, exceeding those of the NHIS after 2002. Given the consistent underreporting of cigarette consumption over time, these surveys are likely not less accurate than they were previously. The recent decrease in NHIS accuracy, however, gives pause about the magnitude of the reported decline in smoking prevalence in 2014 and 2015. Improvement in the accuracy of NSDUH data is encouraging. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Qin, Nan; Botas, Pablo; Giantsoudi, Drosoula; Schuemann, Jan; Tian, Zhen; Jiang, Steve B.; Paganetti, Harald; Jia, Xun
2016-01-01
Monte Carlo (MC) simulation is commonly considered the most accurate dose calculation method for proton therapy. Aiming at achieving fast MC dose calculations for clinical applications, we have previously developed a GPU-based MC tool, gPMC. In this paper, we report our recent updates on gPMC in terms of its accuracy, portability, and functionality, as well as comprehensive tests on this tool. The new version, gPMC v2.0, was developed under the OpenCL environment to enable portability across different computational platforms. Physics models of nuclear interactions were refined to improve calculation accuracy. Scoring functions of gPMC were expanded to enable tallying particle fluence, dose deposited by different particle types, and dose-averaged linear energy transfer (LETd). A multiple-counter approach was employed to improve efficiency by reducing the frequency of memory writing conflicts at scoring. For dose calculation, accuracy improvements over gPMC v1.0 were observed in both water phantom cases and a patient case. For a prostate cancer case planned using high-energy proton beams, dose discrepancies in the beam entrance and target region seen in gPMC v1.0 with respect to the gold standard tool for proton Monte Carlo simulations (TOPAS) were substantially reduced, and the gamma test passing rate (1%/1 mm) was improved from 82.7% to 93.1%. The average relative difference in LETd between gPMC and TOPAS was 1.7%. Average relative differences in dose deposited by primary, secondary, and other heavier particles were within 2.3%, 0.4%, and 0.2%. Depending on source proton energy and phantom complexity, it took 8 to 17 seconds on an AMD Radeon R9 290x GPU to simulate 10⁷ source protons, achieving less than 1% average statistical uncertainty. As beam size was reduced from 10×10 cm² to 1×1 cm², time spent on scoring increased by only 4.8% with eight counters, in contrast to a 40% increase using only one counter. With the OpenCL environment, the portability of gPMC v2.0 was enhanced. It was successfully executed on different CPUs and GPUs, and its performance on different devices varied depending on processing power and hardware structure. PMID:27694712
The Effectiveness of a Rater Training Booklet in Increasing Accuracy of Performance Ratings
1988-04-01
subjects' ratings were compared for accuracy. The dependent measure was the absolute deviation score of each individual's rating from the "true score". Findings: The absolute deviation scores of each individual's ratings from the "true score" provided by subject matter experts were analyzed
Field Assessment of Energy Audit Tools for Retrofit Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, J.; Bohac, D.; Nelson, C.
2013-07-01
This project focused on the use of home energy ratings as a tool to promote energy retrofits in existing homes. A home energy rating provides a quantitative appraisal of a home's asset performance, usually compared to a benchmark such as the average energy use of similar homes in the same region. Home rating systems can help motivate homeowners in several ways. Ratings can clearly communicate a home's achievable energy efficiency potential, provide a quantitative assessment of energy savings after retrofits are completed, and show homeowners how they rate compared to their neighbors, thus creating an incentive to conform to a social standard. An important consideration is how rating tools for the retrofit market will integrate with existing home energy service programs. For residential programs that target energy savings only, home visits should be focused on key efficiency measures for that home. In order to gain wide adoption, a rating tool must be easily integrated into the field process, demonstrate consistency and reasonable accuracy to earn the trust of home energy technicians, and have a low monetary cost and time hurdle for homeowners. Along with the Home Energy Score, this project also evaluated the energy modeling performance of SIMPLE and REM/Rate.
Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish
2007-10-11
We created a web-based laboratory information system, e-Chasqui, to connect public laboratories to health centers to improve communication and analysis. After one year, we performed a pre- and post-assessment of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui.
Validation of Biofeedback Wearables for Photoplethysmographic Heart Rate Tracking
Jo, Edward; Lewis, Kiana; Directo, Dean; Kim, Michael J.; Dolezal, Brett A.
2016-01-01
The purpose of this study was to examine the validity of HR measurements by two commercial-use activity trackers in comparison to ECG. Twenty-four healthy participants underwent the same 77-minute protocol during a single visit. Each participant completed an initial rest period of 15 minutes followed by 5-minute periods of each of the following activities: 60 W and 120 W cycling, walking, jogging, running, resisted arm raises, resisted lunges, and isometric plank. In between each exercise task was a 5-minute rest period. Each subject wore a Basis Peak (BPk) on one wrist and a Fitbit Charge HR (FB) on the opposite wrist. Criterion measurement of HR was administered by 12-lead ECG. Time-synced data from each device and ECG were concurrently and electronically acquired throughout the entire 77-minute protocol. When examining data in aggregate, there was a strong correlation between BPk and ECG for HR (r = 0.92, p < 0.001) with a mean bias of -2.5 bpm (95% LoA 19.3, -24.4). The FB demonstrated a moderately strong correlation with ECG for HR (r = 0.83, p < 0.001) with an average mean bias of -8.8 bpm (95% LoA 24.2, -41.8). During physical efforts eliciting ECG HR > 116 bpm, the BPk demonstrated an r = 0.77 and mean bias = -4.9 bpm (95% LoA 21.3, -31.0) while the FB demonstrated an r = 0.58 and mean bias = -12.7 bpm (95% LoA 28.6, -54.0). The BPk satisfied validity criteria for HR monitors; however, it showed a marginal decline in accuracy with increasing physical effort (ECG HR > 116 bpm). The FB failed to satisfy validity criteria and demonstrated a substantial decrease in accuracy during higher exercise intensities. Key points: Modern-day wearable multi-sensor activity trackers incorporate reflective photoplethysmography (PPG) for heart rate detection and monitoring at the dorsal wrist. This study examined the validity of two PPG-based activity trackers, the Basis Peak and Fitbit Charge HR. The Basis Peak performed with accuracy compared with ECG, and the results substantiate validation of its heart rate measurements; there was a slight decrease in performance during higher levels of physical exertion. The Fitbit Charge HR performed with poor accuracy compared with ECG, especially during higher physical exertion and specific exercise tasks, and was not validated for heart rate monitoring, although better accuracy was observed during resting or recovery conditions. PMID:27803634
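The agreement statistics quoted here (mean bias with 95% limits of agreement) are the Bland-Altman quantities; a minimal sketch with made-up paired heart rates:

```python
# Minimal Bland-Altman sketch: mean bias and 95% limits of agreement (LoA)
# between a wrist device and ECG heart rate. Paired values are made up.
import numpy as np

ecg = np.array([72, 95, 110, 130, 145, 160, 88, 102, 121, 139], float)
device = np.array([70, 93, 104, 121, 138, 148, 87, 99, 114, 128], float)

diff = device - ecg
bias = diff.mean()                     # mean bias
loa = 1.96 * diff.std(ddof=1)          # half-width of 95% LoA
r = np.corrcoef(device, ecg)[0, 1]     # Pearson correlation
print(f"r = {r:.2f}, bias = {bias:+.1f} bpm, "
      f"95% LoA = ({bias - loa:.1f}, {bias + loa:.1f})")
```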
Bouwman, Aniek C; Veerkamp, Roel F
2014-10-03
The aim of this study was to determine the consequences of splitting sequencing effort over multiple breeds for imputation accuracy from a high-density SNP chip towards whole-genome sequence. Such information would assist, for instance, numerically smaller cattle breeds, but also pig and chicken breeders, who have to choose wisely how to spend their sequencing efforts over all the breeds or lines they evaluate. Sequence data from cattle breeds were used, because there are currently relatively many individuals from several breeds sequenced within the 1,000 Bull Genomes project. The advantage of whole-genome sequence data is that it carries the causal mutations, but the question is whether it is possible to impute the causal variants accurately. This study therefore focussed on imputation accuracy of variants with low minor allele frequency and breed-specific variants. Imputation accuracy was assessed for chromosomes 1 and 29 as the correlation between observed and imputed genotypes. For chromosome 1, the average imputation accuracy was 0.70 with a reference population of 20 Holstein, and increased to 0.83 when the reference population was enlarged by including 3 other dairy breeds with 20 animals each. When the same number of animals from the Holstein breed was added, the accuracy improved to 0.88, while adding the 3 other breeds to the reference population of 80 Holstein improved the average imputation accuracy marginally, to 0.89. For chromosome 29, the average imputation accuracy was lower. Some variants benefitted from the inclusion of other breeds in the reference population, initially determined by the MAF of the variant in each breed, but even Holstein-specific variants gained imputation accuracy from the multi-breed reference population. This study shows that splitting sequencing effort over multiple breeds and combining the reference populations is a good strategy for imputation from high-density SNP panels towards whole-genome sequence when reference populations are small and sequencing effort is limiting. When sequencing effort is limiting and interest lies in multiple breeds or lines, this provides imputation for each breed.
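A minimal sketch of the accuracy metric used in this study, the per-variant correlation between observed and imputed genotypes; the dosages below are simulated:

```python
# Sketch of the accuracy metric: per-variant Pearson correlation between
# observed and imputed genotype dosages (0/1/2). Data are toy values.
import numpy as np

rng = np.random.default_rng(4)
n_animals, n_variants = 50, 5
observed = rng.integers(0, 3, size=(n_animals, n_variants)).astype(float)
imputed = np.clip(observed + rng.normal(0, 0.4, size=observed.shape), 0, 2)

per_variant_r = [np.corrcoef(observed[:, j], imputed[:, j])[0, 1]
                 for j in range(n_variants)]
print("mean imputation accuracy:", np.mean(per_variant_r))
```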
Sterile Basics of Compounding: Relationship Between Syringe Size and Dosing Accuracy.
Kosinski, Tracy M; Brown, Michael C; Zavala, Pedro J
2018-01-01
The purpose of this study was to investigate the accuracy and reproducibility of a 2-mL volume injection using a 3-mL and 10-mL syringe with pharmacy student compounders. An exercise was designed to assess each student's accuracy in compounding a sterile preparation with the correct 4-mg strength using a 3-mL and 10-mL syringe. The average ondansetron dose when compounded with the 3-mL syringe was 4.03 mg (standard deviation ± 0.45 mg), which was not statistically significantly different from the intended 4-mg desired dose (P=0.497). The average ondansetron dose when compounded with the 10-mL syringe was 4.18 mg (standard deviation ± 0.68 mg), which was statistically significantly different from the intended 4-mg desired dose (P=0.002). Additionally, there was also a statistically significant difference between the average ondansetron dose compounded using a 3-mL syringe (4.03 mg) and a 10-mL syringe (4.18 mg) (P=0.027). The accuracy and reproducibility of the 2-mL desired dose volume decreased as the compounding syringe size increased from 3 mL to 10 mL. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
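The comparisons reported are one-sample and two-sample t-tests; a minimal sketch, with simulated doses whose means and SDs mimic the reported values (not the study's raw data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical dose measurements (mg); means/SDs mimic the reported values
dose_3ml = rng.normal(4.03, 0.45, 50)
dose_10ml = rng.normal(4.18, 0.68, 50)

# One-sample t-tests against the intended 4-mg dose
t3, p3 = stats.ttest_1samp(dose_3ml, 4.0)
t10, p10 = stats.ttest_1samp(dose_10ml, 4.0)
# Two-sample comparison between syringe sizes
t_both, p_both = stats.ttest_ind(dose_3ml, dose_10ml)
print(p3, p10, p_both)
```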
Du, Yang; Tan, Jian-guo; Chen, Li; Wang, Fang-ping; Tan, Yao; Zhou, Jian-feng
2012-08-18
To explore a gingival shade matching method and to evaluate the precision and accuracy of a dental spectrophotometer modified for use in gingival color measurement. Crystaleye, a dental spectrophotometer (Olympus, Tokyo, Japan) with a custom shading cover, was tested. For precision assessment, two experienced experimenters measured anterior maxillary incisors five times for each tooth. A total of 20 healthy gingival sites (attached gingiva, free gingiva, and medial gingival papilla in the anterior maxillary region) were measured, and the Commission Internationale de l'Eclairage (CIE) color parameters (CIE L*a*b*) were analyzed using the supporting software. For accuracy assessment, a rectangular area of approximately 3 mm×3 mm was chosen in the attached gingival portion for spectral analysis. The PR715 (SpectraScan; Photo Research Inc., California, USA), a spectroradiometer, was used as the standard control. Average color differences (ΔE) between the values from the PR715 and the Crystaleye were calculated. In the precision assessment, ΔL* between the values at all test sites and the average values ranged from 0.28±0.16 to 0.78±0.57, with Δa* from 0.28±0.15 to 0.87±0.65 and Δb* from 0.19±0.09 to 0.58±0.78. Average ΔE between the values at all test sites and the average values ranged from 0.62±0.17 to 1.25±0.98 CIELAB units, with a total average ΔE of 0.90±0.18. In the accuracy assessment, ΔL* relative to the control device ranged from 0.58±0.50 to 2.22±1.89, with Δa* from 1.03±0.67 to 2.99±1.32 and Δb* from 0.68±0.78 to 1.26±0.83. Average ΔE relative to the control device ranged from 2.44±0.82 to 3.51±1.03 CIELAB units, with a total average ΔE of 2.96±1.08. With appropriate modification, the Crystaleye spectrophotometer demonstrated relatively minor color variations and can be useful in gingival color measurement.
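The ΔE values above are CIELAB color differences; a minimal sketch of the standard CIE76 formula, assuming it matches the paper's definition, with hypothetical L*a*b* triples:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL**2 + da**2 + db**2)

# Hypothetical gingival measurements from two devices
print(delta_e_cie76((52.3, 28.1, 14.6), (53.0, 30.2, 15.1)))  # ~2.27
```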
Skinner, Kenneth D.
2011-01-01
High-quality elevation data in riverine environments are important for fisheries management applications and the accuracy of such data needs to be determined for its proper application. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging)-or EAARL-system was used to obtain topographic and bathymetric data along the Deadwood and South Fork Boise Rivers in west-central Idaho. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL surveys, real-time kinematic global positioning system surveys were made in three areas along each of the rivers to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived raster elevation values, determined in open, flat terrain, to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.134 to 0.347 m. Accuracies in the elevation values for the stream hydrogeomorphic settings had root mean square errors ranging from 0.251 to 0.782 m. The greater root mean square errors for the latter data are the result of complex hydrogeomorphic environments within the streams, such as submerged aquatic macrophytes and air bubble entrainment; and those along the banks, such as boulders, woody debris, and steep slopes. These complex environments reduce the accuracy of EAARL bathymetric and topographic measurements. Steep banks emphasize the horizontal location discrepancies between the EAARL and ground-survey data and may not be good representations of vertical accuracy. The EAARL point to ground-survey comparisons produced results with slightly higher but similar root mean square errors than those for the EAARL raster to ground-survey comparisons, emphasizing the minimized horizontal offset by using interpolated values from the raster dataset at the exact location of the ground-survey point as opposed to an actual EAARL point within a 1-meter distance. The average error for the wetted stream channel surface areas was -0.5 percent, while the average error for the wetted stream channel volume was -8.3 percent. The volume of the wetted river channel was underestimated by an average of 31 percent in half of the survey areas, and overestimated by an average of 14 percent in the remainder of the survey areas. The EAARL system is an efficient way to obtain topographic and bathymetric data in large areas of remote terrain. The elevation accuracy of the EAARL system varies throughout the area depending upon the hydrogeomorphic setting, preventing the use of a single accuracy value to describe the EAARL system. The elevation accuracy variations should be kept in mind when using the data, such as for hydraulic modeling or aquatic habitat assessments.
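A minimal sketch of the vertical root-mean-square-error computation between LiDAR-derived and RTK-GPS ground-survey elevations; the checkpoint values are hypothetical:

```python
import numpy as np

def vertical_rmse(lidar_z, survey_z):
    """Root mean square error (m) between LiDAR-derived elevations and
    RTK-GPS ground-survey elevations at the same horizontal locations."""
    d = np.asarray(lidar_z, dtype=float) - np.asarray(survey_z, dtype=float)
    return np.sqrt(np.mean(d**2))

# Hypothetical checkpoint elevations (m)
print(vertical_rmse([301.12, 300.85, 299.70], [301.00, 300.70, 299.95]))
```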
Improved hybrid information filtering based on limited time window
NASA Astrophysics Data System (ADS)
Song, Wen-Jun; Guo, Qiang; Liu, Jian-Guo
2014-12-01
Based on the entire collected information of users, the hybrid information filtering algorithm combining heat conduction and mass diffusion (HHM) (Zhou et al., 2010) was proposed to solve the apparent diversity-accuracy dilemma. Since recent behaviors are more effective in capturing users' potential interests, we present an improved hybrid information filtering algorithm that adopts only part of the recent information. We expand the time window to generate a series of training sets, each of which is treated as known information to predict the future links checked against the testing set. The experimental results on the benchmark dataset Netflix indicate that by only using approximately 31% of the recent rating records, the accuracy could be improved by an average of 4.22% and the diversity could be improved by 13.74%. In addition, the performance on the MovieLens dataset could be preserved by considering approximately 60% of the recent records. Furthermore, we find that the improved algorithm is effective in addressing the cold-start problem. This work could improve information filtering performance and shorten the computational time.
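A minimal sketch of the limited-time-window idea: keep only the most recent fraction of rating records as the training set. The record format and helper below are hypothetical, not the authors' implementation:

```python
def recent_window(records, fraction):
    """Keep only the most recent `fraction` of (user, item, timestamp)
    rating records as the training set."""
    ordered = sorted(records, key=lambda r: r[2])  # sort by timestamp
    cut = int(len(ordered) * (1 - fraction))
    return ordered[cut:]

# Hypothetical records; keep roughly the most recent 31% as in the Netflix result
records = [("u1", "i1", 3), ("u1", "i2", 9), ("u2", "i1", 5), ("u2", "i3", 7)]
print(recent_window(records, 0.31))
```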
Automated tracking of whiskers in videos of head fixed rodents.
Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W
2012-01-01
We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.
Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2016-01-01
Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians. PMID:28269867
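A minimal sketch of rule-based extraction of trial attributes from record text; the record snippet, field names, and regular expressions are hypothetical and do not reproduce the authors' rules:

```python
import re

# Hypothetical trial record excerpt rendered as plain text
record = """Condition: Type 2 Diabetes
Enrollment: 250 participants
Phase: Phase 3
Primary Outcome: Change in HbA1c at 24 weeks"""

rules = {
    "condition": re.compile(r"^Condition:\s*(.+)$", re.M),
    "enrollment": re.compile(r"^Enrollment:\s*(\d+)", re.M),
    "phase": re.compile(r"^Phase:\s*(.+)$", re.M),
    "primary_outcome": re.compile(r"^Primary Outcome:\s*(.+)$", re.M),
}
extracted = {k: (m.group(1) if (m := rx.search(record)) else None)
             for k, rx in rules.items()}
print(extracted)
```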
Covariances for the 56Fe radiation damage cross sections
NASA Astrophysics Data System (ADS)
Simakov, Stanislav P.; Koning, Arjan; Konobeyev, Alexander Yu.
2017-09-01
The energy-energy and reaction-reaction covariance matrices were calculated for the n + 56Fe damage cross-sections by the Total Monte Carlo method using the TENDL-2013 random files. They were represented in the ENDF-6 format and added to the unperturbed evaluation file. The uncertainties of the spectrum-averaged radiation damage quantities in representative fission, fusion and spallation facilities were assessed for the first time, at 5-25%. An additional 5 to 20% has to be added to the atom displacement rate uncertainties to account for the accuracy of the primary defect simulation in materials. The reaction-reaction correlations were shown to be 1% or less.
State-space decoding of primary afferent neuron firing rates
NASA Astrophysics Data System (ADS)
Wagenaar, J. B.; Ventura, V.; Weber, D. J.
2011-02-01
Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity. State information, represented in the firing rates of populations of primary afferent (PA) neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, reverse regression does not make efficient use of the information embedded in the firing rates of the neural population. In this paper, we present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates in an ensemble of PA neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing rate models significantly increases the accuracy of the decoded trajectory. We show that, on average, state-space decoding is twice as efficient as reverse regression for decoding joint and endpoint kinematics.
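A minimal linear-Gaussian state-space (Kalman filter) decoder sketch, with the kinematic state as the hidden variable and firing rates as observations; all matrices and data below are hypothetical stand-ins, not the paper's fitted models:

```python
import numpy as np

def kalman_decode(rates, A, C, Q, R, x0, P0):
    """Minimal linear-Gaussian state-space decoder: the kinematic state x
    evolves as x_t = A x_{t-1} + w, w ~ N(0, Q); firing rates are observed
    as y_t = C x_t + v, v ~ N(0, R). Returns the filtered state estimates."""
    x, P = x0, P0
    out = []
    for y in rates:
        # Predict
        x = A @ x
        P = A @ P @ A.T + Q
        # Update
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())
    return np.array(out)

# Hypothetical 2-D state (position, velocity) observed through 3 neurons
A = np.array([[1.0, 0.05], [0.0, 1.0]])            # constant-velocity dynamics
C = np.random.default_rng(1).normal(size=(3, 2))   # tuning (rate vs. state)
Q, R = 0.01 * np.eye(2), 0.5 * np.eye(3)
rates = np.random.default_rng(2).normal(size=(100, 3))
traj = kalman_decode(rates, A, C, Q, R, np.zeros(2), np.eye(2))
print(traj.shape)
```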
Zhang, Zhilin; Pi, Zhouyue; Liu, Benyuan
2015-02-01
Heart rate monitoring using wrist-type photoplethysmographic signals during subjects' intensive exercise is a difficult problem, since the signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. So far, few works have studied this problem. In this study, a general framework, termed TROIKA, is proposed, which consists of signal decomposiTion for denoising, sparse signal RecOnstructIon for high-resolution spectrum estimation, and spectral peaK trAcking with verification. The TROIKA framework has high estimation accuracy and is robust to strong motion artifacts. Many variants can be straightforwardly derived from this framework. Experimental results on datasets recorded from 12 subjects during fast running at a peak speed of 15 km/h showed that the average absolute error of heart rate estimation was 2.34 beats per minute, and the Pearson correlation between the estimates and the ground truth of heart rate was 0.992. This framework is of great value to wearable devices, such as smartwatches, that use PPG signals to monitor heart rate for fitness.
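The full TROIKA pipeline (decomposition, sparse reconstruction, peak tracking with verification) is beyond a short sketch, but its core idea of reading heart rate off the dominant spectral peak can be illustrated; the PPG segment here is synthetic:

```python
import numpy as np
from scipy.signal import welch

def spectral_hr(ppg, fs):
    """Estimate heart rate (bpm) as the dominant spectral peak of a PPG
    segment, restricted to a plausible 40-220 bpm band."""
    f, pxx = welch(ppg, fs=fs, nperseg=min(len(ppg), 8 * fs))
    band = (f >= 40 / 60) & (f <= 220 / 60)
    return 60.0 * f[band][np.argmax(pxx[band])]

# Hypothetical clean 90-bpm PPG sampled at 125 Hz
fs = 125
t = np.arange(0, 8, 1 / fs)
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(spectral_hr(ppg, fs))  # ~90 bpm
```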
Deviation Value for Conventional X-ray in Hospitals in South Sulawesi Province from 2014 to 2016
NASA Astrophysics Data System (ADS)
Bachtiar, Ilham; Abdullah, Bualkar; Tahir, Dahlan
2018-03-01
This paper describes the conventional X-ray machine parameters tested in the region of South Sulawesi from 2014 to 2016. The objective of this research is to determine the deviation of every parameter of a conventional X-ray machine. The testing parameters were analyzed using quantitative methods with a participatory observational approach. Data collection was performed by testing the output of conventional X-ray machines using a non-invasive X-ray multimeter. The test parameters include tube voltage (kV) accuracy, radiation output linearity, reproducibility, and radiation beam quality (half-value layer, HVL). The results of the analysis show that the four conventional X-ray test parameters have varying deviation spans: the tube voltage (kV) accuracy has an average value of 4.12%, the average radiation output linearity is 4.47%, the average reproducibility is 0.62%, and the average radiation beam quality (HVL) is 3.00 mm.
Automatic detection and quantitative analysis of cells in the mouse primary motor cortex
NASA Astrophysics Data System (ADS)
Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui
2014-09-01
Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired at 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant variations in cellular density distribution across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
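A minimal sketch of recall and precision for centroid detection, using greedy matching of detected to ground-truth centroids within a distance tolerance; the tolerance and coordinates are hypothetical:

```python
import numpy as np

def detection_metrics(detected, truth, tol=5.0):
    """Greedy matching of detected centroids to ground-truth centroids
    within a distance tolerance (same units as the coordinates);
    returns (recall, precision)."""
    detected, truth = np.asarray(detected, float), np.asarray(truth, float)
    unmatched = list(range(len(truth)))
    tp = 0
    for d in detected:
        if not unmatched:
            break
        dists = [np.linalg.norm(d - truth[j]) for j in unmatched]
        k = int(np.argmin(dists))
        if dists[k] <= tol:
            tp += 1
            unmatched.pop(k)
    return tp / len(truth), tp / len(detected)

# Hypothetical centroids (x, y) in micrometres
truth = [(10, 10), (40, 12), (70, 30)]
detected = [(11, 9), (39, 14), (90, 90)]
print(detection_metrics(detected, truth))  # (0.67, 0.67)
```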
Virtual reality technology prevents accidents in extreme situations
NASA Astrophysics Data System (ADS)
Badihi, Y.; Reiff, M. N.; Beychok, S.
2012-03-01
This research is aimed at examining the added value of using Virtual Reality (VR) in a driving simulator to prevent road accidents, specifically by improving drivers' skills when confronted with extreme situations. In an experiment, subjects completed a driving scenario using two platforms: a 3-D Virtual Reality display system using an HMD (Head-Mounted Display), and a standard computerized display system based on a standard computer monitor. The results show that the average rate of errors (deviating from the driving path) in the VR environment is significantly lower than in the standard one. In addition, there was no trade-off between speed and accuracy in completing the driving mission. On the contrary, the average speed was even slightly faster in the VR simulation than in the standard environment. Thus, despite the lower rate of deviation in the VR setting, the result was not achieved by driving more slowly. When the subjects were asked about their personal experiences from the training session, most responded that, among other things, the VR session gave them a higher sense of commitment to the task and their performance. Some even stated that the VR session gave them a real sensation of driving.
Goss, Donald L; Lewek, Michael; Yu, Bing; Ware, William B; Teyhen, Deydre S; Gross, Michael T
2015-06-01
The injury incidence rate among runners is approximately 50%. Some individuals have advocated using an anterior-foot-strike pattern to reduce ground reaction forces and injury rates that they attribute to a rear-foot-strike pattern. The proportion of minimalist shoe wearers who adopt an anterior-foot-strike pattern remains unclear. To evaluate the accuracy of self-reported foot-strike patterns, compare negative ankle- and knee-joint angular work among runners using different foot-strike patterns and wearing traditional or minimalist shoes, and describe average vertical-loading rates. Descriptive laboratory study. Research laboratory. A total of 60 healthy volunteers (37 men, 23 women; age = 34.9 ± 8.9 years, height = 1.74 ± 0.08 m, mass = 70.9 ± 13.4 kg) with more than 6 months of experience wearing traditional or minimalist shoes were instructed to classify their foot-strike patterns. Participants ran in their preferred shoes on an instrumented treadmill with 3-dimensional motion capture. Self-reported foot-strike patterns were compared with 2-dimensional video assessments. Runners were classified into 3 groups based on video assessment: traditional-shoe rear-foot strikers (TSR; n = 22), minimalist-shoe anterior-foot strikers (MSA; n = 21), and minimalist-shoe rear-foot strikers (MSR; n = 17). Ankle and knee negative angular work and average vertical-loading rates during stance phase were compared among groups. Only 41 (68.3%) runners reported foot-strike patterns that agreed with the video assessment (κ = 0.42, P < .001). The TSR runners demonstrated greater ankle-dorsiflexion and knee-extension negative work than MSA and MSR runners (P < .05). The MSA (P < .001) and MSR (P = .01) runners demonstrated greater ankle plantar-flexion negative work than TSR runners. The MSR runners demonstrated a greater average vertical-loading rate than MSA and TSR runners (P < .001). Runners often cannot report their foot-strike patterns accurately and may not automatically adopt an anterior-foot-strike pattern after transitioning to minimalist running shoes.
Evapotranspiration and the water budget of prairie potholes in North Dakota
Shjeflo, J.B.
1968-01-01
The mass-transfer method was used to study the hydrologic behavior of 10 prairie potholes in central North Dakota during the 5-year period 1960-64. Many of the potholes went dry when precipitation was low. The average evapotranspiration during the May to October period each year was 2.11 feet, and the average seepage was 0.60 foot. These averages remained nearly constant for both wet and dry years. The greatest source of water for the potholes was the direct rainfall on the pond surface; this supplied 1.21 feet per year. Spring snowmelt supplied 0.79 foot of water and runoff from the land surface during the summer supplied 0.53 foot. Even though the water received from snowmelt was only 31 percent of the total, it was probably the most vital part of the annual water supply. This water was available in the spring, when waterfowl were nesting, and generally lasted until about July 1, even with no additional direct rainfall on the pond or runoff from the drainage basin. The average runoff from the land surface into pothole 3 was found to be 1.2 inches per year: 1 inch from snowmelt and 0.2 inch from rainfall. The presence of growing aquatic plants, such as bulrushes and cattails, was a complicating factor in making measurements. New computation procedures had to be devised to define the variable mass-transfer coefficient. Rating periods were divided into 6-hour units for the vegetated potholes. The instruments had to be carefully maintained, as water levels had to be recorded with such accuracy that changes of 0.001 foot could be detected. In any research project involving the measurements of physical quantities, the results are dependent upon the accuracy and dependability of the instruments used; this was especially true during this project.
Accuracies of univariate and multivariate genomic prediction models in African cassava.
Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2017-12-04
Genomic selection (GS) promises to accelerate genetic gain in plant breeding programs, especially for crop species such as cassava that have long breeding cycles. Practically, to implement GS in cassava breeding, it is necessary to evaluate different GS models and to develop suitable models for an optimized breeding pipeline. In this paper, we compared (1) prediction accuracies from a single-trait (uT) and a multi-trait (MT) mixed model for single-environment genetic evaluation (Scenario 1), and (2) accuracies from a compound-symmetric multi-environment model (uE), parameterized as a univariate multi-kernel model, and a multivariate multi-environment mixed model (ME) that accounts for genotype-by-environment interaction, for multi-environment genetic evaluation (Scenario 2). For these analyses, we used 16 years of public cassava breeding data for six target cassava traits and a fivefold cross-validation scheme with 10 repeat cycles to assess model prediction accuracies. In Scenario 1, the MT models had higher prediction accuracies than the uT models for all traits and locations analyzed, which amounted on average to a 40% improvement in prediction accuracy. For Scenario 2, we observed that the ME model had on average (across all locations and traits) a 12% higher prediction accuracy than the uE model. We recommend the use of multivariate mixed models (MT and ME) for cassava genetic evaluation. These models may be useful for other plant species.
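A minimal sketch of the repeated five-fold cross-validation accuracy assessment, with a simple ridge-regression predictor standing in for the paper's mixed models; the marker and phenotype data are simulated:

```python
import numpy as np

def cv_prediction_accuracy(X, y, folds=5, reps=10, alpha=1.0, seed=0):
    """Repeated k-fold cross-validation accuracy for a ridge-regression
    genomic prediction model: mean correlation between observed and
    predicted phenotypes in the held-out folds."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(reps):
        idx = rng.permutation(len(y))
        for f in range(folds):
            test = idx[f::folds]
            train = np.setdiff1d(idx, test)
            # Ridge solution beta = (X'X + alpha*I)^-1 X'y on the training fold
            Xt, yt = X[train], y[train]
            beta = np.linalg.solve(Xt.T @ Xt + alpha * np.eye(X.shape[1]),
                                   Xt.T @ yt)
            accs.append(np.corrcoef(y[test], X[test] @ beta)[0, 1])
    return float(np.mean(accs))

# Hypothetical marker matrix (100 lines x 50 SNPs) and phenotype
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(100, 50)).astype(float)
y = X @ rng.normal(size=50) * 0.1 + rng.normal(size=100)
print(cv_prediction_accuracy(X, y))
```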
Wang, Liang; Li, Zishen; Zhao, Jiaojiao; Zhou, Kai; Wang, Zhiyu; Yuan, Hong
2016-01-01
Using mobile smart devices to provide urban location-based services (LBS) with sub-meter-level accuracy (around 0.5 m) is a major application field for future global navigation satellite system (GNSS) development. Real-time kinematic (RTK) positioning, which is a widely used GNSS-based positioning approach, can improve the accuracy from about 10–20 m (achieved by the standard positioning services) to about 3–5 cm with geodetic receivers. To achieve sub-meter-level positioning with smart devices, a feasible solution combining a low-cost GNSS module and the smart device is proposed in this work, and a user-side GNSS RTK positioning software was developed from scratch on the Android platform. Its real-time positioning performance was validated with BeiDou Navigation Satellite System/Global Positioning System (BDS/GPS) combined RTK positioning under static and kinematic (rover velocity of 50–80 km/h) conditions in a real urban environment with a SAMSUNG Galaxy A7 smartphone. The results show that the fixed rates of ambiguity resolution (the proportion of epochs with fixed ambiguities) for BDS/GPS combined RTK in the static and kinematic tests were about 97% and 90%, respectively, and the average positioning accuracies (RMS) were better than 0.15 m (horizontal) and 0.25 m (vertical) for the static test, and 0.30 m (horizontal) and 0.45 m (vertical) for the kinematic test. PMID:28009835
Pierres, A; Benoliel, A M; Zhu, C; Bongrand, P
2001-01-01
The rate and distance-dependence of association between surface-attached molecules may be determined by monitoring the motion of receptor-bearing spheres along ligand-coated surfaces in a flow chamber (Pierres et al., Proc. Natl. Acad. Sci. U.S.A. 95:9256-9261, 1998). Particle arrests reveal bond formation, and the particle-to-surface distance may be estimated from the ratio between the velocity and the wall shear rate. However, several problems are raised. First, data interpretation requires extensive computer simulations. Second, the relevance of standard results from fluid mechanics to micrometer-size particles separated from surfaces by nanometer distances is not fully demonstrated. Third, the wall shear rate must be known with high accuracy. Here we present a simple derivation of an algorithm permitting one to simulate the motion of spheres near a plane in shear flow. We check that theoretical predictions are consistent with the experimental dependence of motion on medium viscosity or particle size, and with the requirement that the equilibrium particle height distribution follow Boltzmann's law. The determination of the statistical relationship between particle velocity and acceleration allows one to derive the wall shear rate with 1 s⁻¹ accuracy and the Hamaker constant of interaction between the particle and the wall with a sensitivity better than 10⁻²¹ J. It is demonstrated that the correlation between particle height and mean velocity during a time interval Δt is maximal when Δt is about 0.1-0.2 s for a particle of 1.4-μm radius. When the particle-to-surface distance ranges between 10 and 40 nm, the particle height distribution may be obtained with a standard deviation ranging between 8 and 25 nm, provided the average velocity during a 160-ms period of time is determined with 10% accuracy. It is concluded that the flow chamber allows one to detect the formation of individual bonds with a minimal lifetime of 40 ms in the presence of a disruptive force of approximately 5 pN and to assess the distance dependence within the tens-of-nanometers range. PMID:11423392
Baxter, Suzanne Domel; Smith, Albert F; Hardin, James W; Nichols, Michele D
2007-04-01
Validation study data are used to illustrate that conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information: conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Children were observed eating school meals on 1 day (n=12), or 2 (n=13) or 3 (n=79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (ie, protein, carbohydrate, and fat), and compared. For energy and each macronutrient: report rates (reported/reference), correspondence rates (genuine accuracy measures), and inflation ratios (error measures). Mixed-model analyses. Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (all four P values >0.61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (all four P values <0.04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. When analyzed using the reporting-error-sensitive approach, children's dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients.
Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.
2008-01-01
Objective Validation-study data are used to illustrate that conclusions about children’s reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information—conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Subjects and design Children were observed eating school meals on one day (n = 12), or two (n = 13) or three (n = 79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (protein, carbohydrate, fat), and compared. Main outcome measures For energy and each macronutrient: report rates (reported/reference), correspondence rates (genuine accuracy measures), inflation ratios (error measures). Statistical analyses Mixed-model analyses. Results Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (Ps > .61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (Ps < .04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. Conclusions When analyzed using the reporting-error-sensitive approach, children’s dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. Applications The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients. PMID:17383265
Singh, Omkar; Sunkaria, Ramesh Kumar
2017-12-01
This paper presents a novel technique to identify heartbeats in multimodal data using electrocardiogram (ECG) and arterial blood pressure (ABP) signals. Multiple physiological signals such as ECG, ABP, and respiration are often recorded in parallel from the activity of the heart. These signals generally possess related information as they are generated by the same physical system. The ECG and ABP correspond to the same phenomenon of contraction and relaxation activity of the heart. Multiple signals acquired from various sensors are generally processed independently, thus discarding the information from other measurements. In the estimation of heart rate and heart rate variability, the R peaks are generally identified from the ECG signal. Efficient detection of R peaks in the ECG is a key component in the estimation of clinically relevant parameters from ECG. However, when the signal is severely affected by undesired artifacts, this becomes a challenging task. Sometimes in the clinical environment, other physiological signals reflecting the cardiac activity, such as the ABP signal, are also acquired simultaneously. Under the availability of such multimodal signals, the accuracy of R peak detection methods can be improved using sensor-fusion techniques. In the proposed method, the sample entropy (SampEn) is used as a metric for assessing the noise content in the physiological signal, and the R peaks in the ECG and the systolic peaks in the ABP signals are fused together to enhance the efficiency of heartbeat detection. The proposed method was evaluated on the 100 records from the Computing in Cardiology Challenge 2014 training data set. The performance parameters are sensitivity (Se) and positive predictivity (PPV). The unimodal R peak detector achieved: Se gross = 99.40%, PPV gross = 99.29%, Se average = 99.37%, PPV average = 99.29%. Similarly, the unimodal BP delineator achieved Se gross = 99.93%, PPV gross = 99.99%, Se average = 99.93%, PPV average = 99.99%, whereas the proposed multimodal beat detector achieved: Se gross = 99.65%, PPV gross = 99.91%, Se average = 99.68%, PPV average = 99.91%.
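A minimal sketch of sample entropy, the noise metric named above, in a common formulation (template matching with tolerance r times the signal SD); details may differ from the authors' implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal: -ln(A/B), where B and A
    count template matches of length m and m+1 within tolerance r*std(x),
    excluding self-matches (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d <= tol)
        return c
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Hypothetical check: a noisy segment scores higher than a regular one
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 20 * np.pi, 500))
noisy = clean + rng.normal(scale=0.5, size=500)
print(sample_entropy(clean), sample_entropy(noisy))
```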
Stinson, Jennifer N; Tucker, Lori; Huber, Adam; Harris, Heather; Lin, Carmen; Cohen, Lindsay; Gill, Navreet; Lukas-Bretzler, Jacqueline; Proulx, Laurie; Prowten, David
2009-08-01
To determine the quality and content of English language Internet information about juvenile idiopathic arthritis (JIA) from the perspectives of consumers and healthcare professionals. Key words relevant to JIA were searched across 10 search engines. Quality of information was appraised independently by 2 health professionals, 1 young adult with JIA, and a parent using the DISCERN tool. Concordance of the website content (i.e., accuracy and completeness) with available evidence about the management of JIA was determined. Readability was determined using Flesch-Kincaid grade level and Reading Ease Score. Out of the 3000 Web pages accessed, only 58 unique sites met the inclusion criteria. Of these sites only 16 had DISCERN scores above 50% (indicating fair quality). These sites were then rated by consumers. Most sites targeted parents and none were specifically developed for youth with JIA. The overall quality of website information was fair, with a mean DISCERN quality rating score of 48.92 out of 75 (+/- 6.56, range 34.0-59.5). Overall completeness of sites was 9.07 out of 16 (+/- 2.28, range 5.25-13.25) and accuracy was 3.09 out of 4 (+/- 0.86, range 2-4), indicating a moderate level of accuracy. Average Flesch-Kincaid grade level and Reading Ease Score were 11.48 (+/- 0.74, range 10.1-12.0) and 36.36 (+/- 10.86, range 6.30-48.1), respectively, indicating that the material was difficult to read. Our study highlights the paucity of high quality Internet health information at an appropriate reading level for youth with JIA and their parents.
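The readability measures cited are computed from word, sentence, and syllable counts using the standard published formulas; a minimal sketch with hypothetical counts:

```python
def flesch_scores(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid grade level and Flesch Reading Ease from raw counts
    (standard published formulas)."""
    wps = total_words / total_sentences      # words per sentence
    spw = total_syllables / total_words      # syllables per word
    grade = 0.39 * wps + 11.8 * spw - 15.59
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    return grade, ease

# Hypothetical page: 1200 words, 48 sentences, 2100 syllables
print(flesch_scores(1200, 48, 2100))  # high grade, low ease = hard to read
```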
Using Mathematical Algorithms to Modify Glomerular Filtration Rate Estimation Equations
Zhu, Bei; Wu, Jianqing; Zhu, Jin; Zhao, Weihong
2013-01-01
Background Estimating equations provide a rapid and low-cost method of evaluating the glomerular filtration rate (GFR). Previous studies indicated that the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease-Epidemiology (CKD-EPI) and MacIsaac equations need further modification for application in a Chinese population. Thus, this study was designed to modify the three equations and compare the diagnostic accuracy of the equations before and after modification. Methodology With the use of 99mTc-DTPA renal dynamic imaging as the reference GFR (rGFR), the MDRD, CKD-EPI and MacIsaac equations were modified by two mathematical algorithms: the hill-climbing and the simulated-annealing algorithms. Results A total of 703 Chinese subjects were recruited, with an average rGFR of 77.14±25.93 ml/min. The entire modification process was based on a random sample of 80% of subjects in each GFR level as a training sample set, with the remaining 20% of subjects as a validation sample set. After modification, the three equations showed significant improvement in slope, intercept, correlation coefficient, root mean square error (RMSE), total deviation index (TDI), and the proportion of estimated GFR (eGFR) within 10% and 30% deviation of rGFR (P10 and P30). Of the three modified equations, the modified CKD-EPI equation showed the best accuracy. Conclusions Mathematical algorithms could be a considerable tool for modifying the GFR equations. The accuracy of all three modified equations was significantly improved, and the modified CKD-EPI equation could be the optimal one. PMID:23472113
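A generic simulated-annealing sketch of the kind of coefficient search described; the equation form, perturbation scheme, and data below are hypothetical, not the authors' algorithm:

```python
import math
import random

def simulated_annealing(loss, coeffs, steps=20000, t0=1.0, seed=0):
    """Generic simulated-annealing search over equation coefficients:
    perturb one coefficient at a time and accept worse candidates with
    probability exp(-delta/T) under a geometric cooling schedule."""
    rng = random.Random(seed)
    cur = list(coeffs)
    cur_loss = loss(cur)
    best, best_loss = list(cur), cur_loss
    for k in range(steps):
        t = t0 * 0.999 ** k                      # cooling schedule
        cand = list(cur)
        i = rng.randrange(len(cand))
        cand[i] *= 1 + rng.gauss(0, 0.02)        # small relative perturbation
        cand_loss = loss(cand)
        delta = cand_loss - cur_loss
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur, cur_loss = cand, cand_loss
            if cur_loss < best_loss:
                best, best_loss = list(cur), cur_loss
    return best, best_loss

# Hypothetical toy problem: tune a, b in eGFR = a * SCr**b toward reference GFR
scr = [0.8, 1.1, 1.6, 2.4]                       # serum creatinine (mg/dL)
rgfr = [98.0, 72.0, 45.0, 28.0]                  # reference GFR (mL/min)
def rmse(c):
    return math.sqrt(sum((c[0] * s ** c[1] - g) ** 2
                         for s, g in zip(scr, rgfr)) / len(scr))
print(simulated_annealing(rmse, [80.0, -1.0]))
```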
Ramsthaler, F; Kreutz, K; Verhoff, M A
2007-11-01
It has been generally accepted in skeletal sex determination that the use of metric methods is limited due to the population dependence of the multivariate algorithms. The aim of the study was to verify the applicability of software-based sex estimation outside the reference population group for which the discriminant equations have been developed. We examined 98 skulls from recent forensic cases of known age, sex, and Caucasian ancestry from cranium collections in Frankfurt and Mainz (Germany) to determine the accuracy of sex determination using the statistical software solution Fordisc, which derives its database and functions from the US American Forensic Database. In a comparison between metric analysis using Fordisc and morphological determination of sex, the average accuracy for both sexes was 86% vs 94%, respectively, and males were identified more accurately than females. The ratio of the true test result rate to the false test result rate was not statistically different for the two methodological approaches at a significance level of 0.05 but was statistically different at a level of 0.10 (p=0.06). Possible explanations for this difference include different ancestry, age distribution, and socio-economic status compared with the Fordisc reference sample. It is likely that a discriminant function analysis based on more similar European reference samples would lead to more valid and reliable sexing results. The use of Fordisc as a single method for estimating the sex of recent skeletal remains in Europe cannot be recommended without additional morphological assessment and without a built-in software update based on modern European reference samples.
Stec, James; Wang, Jing; Coombes, Kevin; Ayers, Mark; Hoersch, Sebastian; Gold, David L.; Ross, Jeffrey S; Hess, Kenneth R.; Tirrell, Stephen; Linette, Gerald; Hortobagyi, Gabriel N.; Symmans, W. Fraser; Pusztai, Lajos
2005-01-01
We examined how well differentially expressed genes and multigene outcome classifiers retain their class-discriminating values when tested on data generated by different transcriptional profiling platforms. RNA from 33 stage I-III breast cancers was hybridized to both Affymetrix GeneChip and Millennium Pharmaceuticals cDNA arrays. Only 30% of all corresponding gene expression measurements on the two platforms had a Pearson correlation coefficient r ≥ 0.7 when UniGene was used to match probes. There was substantial variation in correlation between different Affymetrix probe sets matched to the same cDNA probe. When cDNA and Affymetrix probes were matched by Basic Local Alignment Search Tool (BLAST) sequence identity, the correlation increased substantially. We identified 182 genes in the Affymetrix and 45 in the cDNA data (including 17 common genes) that accurately separated 91% of cases in supervised hierarchical clustering in each data set. Cross-platform testing of these informative genes resulted in lower clustering accuracy of 45% and 79%, respectively. Several sets of accurate five-gene classifiers were developed on each platform using linear discriminant analysis. The best 100 classifiers showed an average misclassification error rate of 2% on the original data, which rose to 19.5% when tested on data from the other platform. Random five-gene classifiers showed a misclassification error rate of 33%. We conclude that multigene predictors optimized for one platform lose accuracy when applied to data from another platform due to missing genes and sequence differences in probes that result in differing measurements for the same gene. PMID:16049308
Time-optimized laser micro machining by using a new high dynamic and high precision galvo scanner
NASA Astrophysics Data System (ADS)
Jaeggi, Beat; Neuenschwander, Beat; Zimmermann, Markus; Zecherle, Markus; Boeckler, Ernst W.
2016-03-01
High accuracy, quality and throughput are key factors in laser micro machining. To achieve these goals, the ablation process, the machining strategy and the scanning device have to be optimized. The precision is influenced by the accuracy of the galvo scanner and can be further enhanced by synchronizing the movement of the mirrors with the laser pulse train. To maintain a high machining quality, i.e. minimum surface roughness, the pulse-to-pulse distance also has to be optimized. The highest ablation efficiency is obtained by choosing the proper laser peak fluence together with the highest specific removal rate. The throughput can then be enhanced by simultaneously increasing the average power, the repetition rate and the scanning speed so as to preserve the fluence and the pulse-to-pulse distance. Therefore a high scanning speed is of essential importance. To guarantee the required excellent accuracy even at high scanning speeds, a new interferometry-based encoder technology was used that provides a high-quality signal for closed-loop control of the galvo scanner position. The low-inertia encoder design enables a very dynamic scanner system, which can be driven to very high line speeds by a specially adapted control solution. We will present results with marking speeds up to 25 m/s, obtained with a new scanning system and adapted scanner tuning using an f = 100 mm objective, while maintaining a precision of about 5 μm. Further, it will be shown that, especially for short line lengths, the machining time can be minimized by choosing the proper speed, which need not be the maximum one.
Zygouris, Stelios; Ntovas, Konstantinos; Giakoumis, Dimitrios; Votis, Konstantinos; Doumpoulakis, Stefanos; Segkouli, Sofia; Karagiannidis, Charalampos; Tzovaras, Dimitrios; Tsolaki, Magda
2017-01-01
It has been demonstrated that virtual reality (VR) applications can be used for the detection of mild cognitive impairment (MCI). The aim of this study is to provide a preliminary investigation of whether a VR cognitive training application can be used to detect MCI in persons using the application at home without the help of an examiner. Two groups, one of healthy older adults (n = 6) and one of MCI patients (n = 6), were recruited from Thessaloniki day centers for cognitive disorders and provided with a tablet PC with custom software enabling the self-administration of the Virtual Super Market (VSM) cognitive training exercise. The average performance (from 20 administrations of the exercise) of the two groups was compared and was also correlated with performance on established neuropsychological tests. Average performance, in terms of the duration to complete the given exercise, differed significantly between the healthy (μ = 247.41 s, SD = 89.006) and MCI (μ = 454.52 s, SD = 177.604) groups, yielding a correct classification rate of 91.8% with a sensitivity and specificity of 94% and 89%, respectively, for MCI detection. Average performance also correlated significantly with performance on the Functional Cognitive Assessment Scale (FUCAS), the Test of Everyday Attention (TEA), and the Rey-Osterrieth Complex Figure Test (ROCFT). The VR application exhibited very high accuracy in detecting MCI, and all participants were able to operate the tablet and application on their own. Diagnostic accuracy was improved compared with a previous study using data from only one administration of the exercise. The results of the present study suggest that remote MCI detection through VR applications can be feasible.
Miftari, Rame; Nura, Adem; Topçiu-Shufta, Valdete; Miftari, Valon; Murseli, Arbenita; Haxhibeqiri, Valdete
2017-01-01
Aim: The aim of this study was to determine the validity of 99mTc-DTPA estimation of GFR for early detection of chronic kidney failure. Material and methods: A total of 110 patients (54 males and 56 females) with kidney disease were referred for evaluation of renal function at the UCC of Kosovo. All patients were divided into two groups: the first group comprised 30 patients with confirmed renal failure, and the second group comprised 80 patients with other renal diseases. Only patients with available blood serum creatinine, urea, and glucose results were included. For estimation of GFR, we used the Gates 99mTc-DTPA method. Statistical data processing used the arithmetic average, Student's t-test, percentages or rates, and the sensitivity, specificity, and accuracy of the test. Results: The average age of all patients was 36 years. The average age of females was 37 years, and of males 35 years. Patients with renal failure were significantly older than patients with other renal disease (p<0.005). Renal failure was found in 30 patients (27.27%). The concentrations of urea and creatinine in the blood serum of patients with renal failure were significantly higher than in patients with other renal disease (p<0.00001). GFR in patients with renal failure was significantly lower than in patients with other renal disease (51.75 ml/min; p<0.00001). The sensitivity of uremia and creatininemia for detection of renal failure was 83.33%, whereas the sensitivity of 99mTc-DTPA GFR was 100%. The specificity of uremia and creatininemia was 63%, whereas the specificity of 99mTc-DTPA GFR was 47.5%. The diagnostic accuracy of blood urea and creatinine in detecting renal failure was 69%, whereas the diagnostic accuracy of 99mTc-DTPA GFR was 61.8%. Conclusion: Gates 99mTc-DTPA scintigraphy, in combination with biochemical tests, is a very sensitive method for early detection of patients with chronic renal failure. PMID:28883673
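The sensitivity, specificity, and diagnostic accuracy figures follow from the counts of a 2x2 confusion matrix; a minimal sketch, with counts back-calculated from the reported percentages purely for illustration:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and diagnostic accuracy from the
    counts of a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Counts consistent with the reported 99mTc-DTPA GFR results: all 30
# renal-failure cases flagged (TP=30, FN=0); 38 of 80 other-disease
# patients correctly not flagged (TN=38, FP=42)
print(diagnostic_metrics(tp=30, fp=42, tn=38, fn=0))  # (1.0, 0.475, 0.618)
```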
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.
Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart
2016-01-01
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNP selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal.
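A minimal sketch of locus-averaged Shannon entropy (LASE) for biallelic SNPs; the paper's version includes adjustments for uniformity of the SNP distribution and a haplotype-averaged variant (HASE) that are omitted here:

```python
import math

def locus_shannon_entropy(allele_freq):
    """Shannon entropy (bits) of a biallelic SNP given its allele
    frequency; maximal (1 bit) at a frequency of 0.5."""
    p = allele_freq
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def lase(freqs):
    """Locus-averaged Shannon entropy over a set of SNPs."""
    return sum(locus_shannon_entropy(p) for p in freqs) / len(freqs)

# Hypothetical panels: informative SNPs (MAF near 0.5) score near 1 bit
print(lase([0.50, 0.45, 0.30]), lase([0.05, 0.02, 0.10]))
```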
[Navigation in implantology: Accuracy assessment regarding the literature].
Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József
2016-06-01
Our objective was to assess the literature regarding the accuracy of the different static guided systems. After an electronic literature search, we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten were in vitro (model or cadaver). Analysis of variance (Tukey's post hoc test; p < 0.05) was conducted to summarize the selected publications. Across the 2,819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex, the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequence) in terms of deviation at the entry point, at the apex, and in angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and the new developments will further improve the accuracy of guided implant placement. To draw dependable conclusions and to further evaluate the parameters used for accuracy measurement, randomized controlled single- or multi-center clinical trials are necessary.
Classification of right-hand grasp movement based on EMOTIV Epoc+
NASA Astrophysics Data System (ADS)
Tobing, T. A. M. L.; Prawito, Wijaya, S. K.
2017-07-01
Combinations of BCT elements for right-hand grasp movement were obtained, providing the average values of their classification accuracy. The aim of this study is to find a suitable combination for the best classification accuracy of right-hand grasp movement based on an EEG headset, the EMOTIV Epoc+. There are three movement classifications: grasping hand, relax, and opening hand. These classifications take advantage of the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish the relaxation, imagery, and movement states from each other. The elements combined are the use of Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), the maximum mu and beta power with their frequencies as features, and the classifiers Probabilistic Neural Network (PNN) and Radial Basis Function (RBF). The average values of classification accuracy are approximately 83% for training and approximately 57% for testing. To give a better understanding of the signal quality recorded by the EMOTIV Epoc+, the classification accuracy for a left- or right-hand grasping movement EEG signal (provided by PhysioNet) is also given, i.e., approximately 85% for training and approximately 70% for testing. Comparisons of the accuracy values from each combination, experimental condition, and the external EEG data are provided for the analysis of classification accuracy.
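A minimal sketch of the spectral features named above (maximum mu- and beta-band power with their frequencies); the channel count, sampling rate, and data are hypothetical, and the ICA and classifier stages are omitted:

```python
import numpy as np
from scipy.signal import welch

def mu_beta_features(eeg, fs):
    """Per-channel features: maximum power and its frequency within the
    mu (8-13 Hz) and beta (13-30 Hz) bands of the EEG spectrum."""
    feats = []
    for ch in np.atleast_2d(eeg):
        f, pxx = welch(ch, fs=fs, nperseg=min(len(ch), 2 * fs))
        for lo, hi in ((8, 13), (13, 30)):
            band = (f >= lo) & (f < hi)
            k = np.argmax(pxx[band])
            feats.extend([pxx[band][k], f[band][k]])  # peak power, peak freq
    return np.array(feats)

# Hypothetical 2-channel, 4-s EEG segment at 128 Hz (EMOTIV-like rate)
rng = np.random.default_rng(0)
eeg = rng.normal(size=(2, 512))
print(mu_beta_features(eeg, fs=128))
```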
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, R; Wang, J
2014-06-01
Purpose: To investigate the feasibility, efficiency, and delivery accuracy of volumetric modulated arc therapy with constant dose rate (VMAT-CDR) for whole-pelvic radiotherapy (WPRT) of endometrial cancer. Methods: Nine-field intensity-modulated radiotherapy (IMRT), VMAT with variable dose rate (VMAT-VDR), and VMAT-CDR plans were created for 9 patients with endometrial cancer undergoing WPRT. The dose distributions of the planning target volume (PTV), organs at risk (OARs), and normal tissue (NT) were compared. The monitor units (MUs) and treatment delivery time were also evaluated. For each VMAT-CDR plan, a dry run was performed to assess the dosimetric accuracy with a MatriXX device from IBA. Results: Compared with IMRT, the VMAT-CDR plans delivered a slightly greater V20 to the bowel, bladder, pelvic bone, and NT, but significantly decreased the dose to the high-dose region of the rectum and pelvic bone. The MUs decreased from 1105 with IMRT to 628 with VMAT-CDR. The delivery time also decreased from 9.5 to 3.2 minutes. The average gamma pass rate was 95.6% at the 3%/3 mm criteria with MatriXX pretreatment verification for the 9 patients. Conclusion: VMAT-CDR can achieve comparable plan quality with a significantly shorter delivery time and a smaller number of MUs compared with IMRT for patients with endometrial cancer undergoing WPRT. It can be accurately delivered and can be an alternative to IMRT on linear accelerators without VDR capability. This work is supported by the National Natural Science Foundation of China (No. 81071237).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borot de Battisti, Maxence, E-mail: M.E.P.Borot@um
Purpose: The development of MR-guided high dose rate (HDR) brachytherapy is under investigation because of the excellent tumor and organs-at-risk visualization of MRI. However, MR-based localization of needles (including catheters or tubes) inherently has a low update rate, and the required image interpretation can be hampered by signal voids arising from blood vessels or calcifications, limiting the precision of needle guidance and reconstruction. In this paper, a new needle-tracking prototype is investigated using fiber Bragg grating (FBG)-based sensing: this prototype involves an MR-compatible stylet composed of three optic fibers with nine sets of embedded FBG sensors each. This stylet can be inserted into brachytherapy needles and allows a fast measurement of the needle deflection. This study aims to assess the potential of FBG-based sensing for real-time needle (including catheter or tube) tracking during MR-guided intervention. Methods: First, the MR compatibility of FBG-based sensing and its accuracy were evaluated. Different known needle deflections were measured using FBG-based sensing during simultaneous MR imaging. Then, a needle-tracking procedure using FBG-based sensing was proposed. This procedure involved an MR-based calibration of the FBG-based system performed prior to the interventional procedure. The needle-tracking system was assessed in an experiment with a moving phantom during MR imaging. The FBG-based system was quantified by comparing the gold-standard shapes, the shapes manually segmented on MRI, and the FBG-based measurements. Results: The evaluation of the MR compatibility of FBG-based sensing and its accuracy showed that the needle deflection could be measured with an accuracy of 0.27 mm on average. Moreover, the FBG-based measurements were comparable to the uncertainty of MR-based measurements, estimated at half the voxel size in the MR image. Finally, the mean (standard deviation) Euclidean distance between MR- and FBG-based needle position measurements was 0.79 mm (0.37 mm). The update rate and latency of the FBG-based needle position measurement were 100 and 300 ms, respectively. Conclusions: The FBG-based needle-tracking procedure proposed in this paper is able to determine the position of the complete needle, under MR imaging, with better accuracy and precision, a higher update rate, and lower latency than current MR-based needle localization methods. This system would be eligible for MR-guided brachytherapy, in particular for improved needle guidance and reconstruction.
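The abstract does not spell out how strains map to needle shape; a common single-plane model converts each FBG strain to local curvature (kappa = strain / r, with r the fiber's offset from the needle axis) and integrates twice to obtain deflection. The sketch below follows that assumed model with hypothetical strain readings:

```python
# Hedged sketch: needle deflection from FBG strains at nine sensor
# locations, assuming pure bending in one plane. kappa = strain / r_offset;
# deflection = double integral of curvature (small-angle approximation).
import numpy as np
from scipy.integrate import cumulative_trapezoid

r_offset = 0.25e-3                      # fiber offset from neutral axis (m), assumed
z = np.linspace(0.0, 0.18, 9)           # FBG positions along a 180 mm needle (m)
strain = 1e-6 * np.array([0, 5, 12, 22, 34, 48, 63, 80, 98])  # hypothetical

kappa = strain / r_offset               # local curvature (1/m)
z_fine = np.linspace(z[0], z[-1], 200)
kappa_fine = np.interp(z_fine, z, kappa)
slope = cumulative_trapezoid(kappa_fine, z_fine, initial=0.0)
deflection = cumulative_trapezoid(slope, z_fine, initial=0.0)
print(f"tip deflection: {deflection[-1] * 1e3:.2f} mm")
```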
McCormick, J. L.; Whitney, D.; Schill, D. J.; Quist, Michael C.
2015-01-01
Accuracy of angler-reported data on steelhead, Oncorhynchus mykiss (Walbaum), harvest in Idaho, USA, was quantified by comparing data recorded on angler harvest permits to the numbers that the same group of anglers reported in an off-site survey. Anglers could respond to the off-site survey using mail or Internet; if they did not respond using these methods, they were called on the telephone. A majority of anglers responded through the mail, and the probability of responding by Internet decreased with increasing age of the respondent. The actual number of steelhead harvested did not appear to influence the response type. Anglers in the autumn 2012 survey overreported harvest by 24%, whereas anglers in the spring 2013 survey underreported steelhead harvest by 16%. The direction of reporting bias may have been a function of actual harvest, where anglers harvested on average 2.6 times more fish during the spring fishery than the autumn. Reporting bias that is a function of actual harvest can have substantial management and conservation implications because the fishery will be perceived to be performing better at lower harvest rates and worse when harvest rates are higher. Thus, these findings warrant consideration when designing surveys and evaluating management actions.
Thorne, John C; Coggins, Truman E; Carmichael Olson, Heather; Astley, Susan J
2007-04-01
To evaluate classification accuracy and clinical feasibility of a narrative analysis tool for identifying children with a fetal alcohol spectrum disorder (FASD). Picture-elicited narratives generated by 16 age-matched pairs of school-aged children (FASD vs. typical development [TD]) were coded for semantic elaboration and reference strategy by judges who were unaware of age, gender, and group membership of the participants. Receiver operating characteristic (ROC) curves were used to examine the classification accuracy of the resulting set of narrative measures for making 2 classifications: (a) for the 16 children diagnosed with FASD, low performance (n = 7) versus average performance (n = 9) on a standardized expressive language task and (b) FASD (n = 16) versus TD (n = 16). Combining the rates of semantic elaboration and pragmatically inappropriate reference perfectly matched a classification based on performance on the standardized language task. More importantly, the rate of ambiguous nominal reference was highly accurate in classifying children with an FASD regardless of their performance on the standardized language task (area under the ROC curve = .863, confidence interval = .736-.991). Results support further study of the diagnostic utility of narrative analysis using discourse level measures of elaboration and children's strategic use of reference.
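A small sketch of the ROC analysis used above, with hypothetical per-child rates of ambiguous nominal reference standing in for the real narrative measures:

```python
# Sketch of the ROC analysis described above: classification accuracy of
# a single narrative measure (rate of ambiguous nominal reference) for
# FASD vs. TD. The per-child rates below are invented for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

labels = np.array([1] * 16 + [0] * 16)          # 1 = FASD, 0 = TD
rates = np.concatenate([np.random.default_rng(1).uniform(0.05, 0.30, 16),
                        np.random.default_rng(2).uniform(0.00, 0.12, 16)])

auc = roc_auc_score(labels, rates)
fpr, tpr, thresholds = roc_curve(labels, rates)
best = np.argmax(tpr - fpr)                      # Youden's J picks a cutoff
print(f"AUC={auc:.3f}, cutoff={thresholds[best]:.3f}")
```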
Comparing supervised learning techniques on the task of physical activity recognition.
Dalton, A; OLaighin, G
2013-01-01
The objective of this study was to compare the performance of base-level and meta-level classifiers on the task of physical activity recognition. Five wireless kinematic sensors were attached to each subject (n = 25) while they completed a range of basic physical activities in a controlled laboratory setting. Subjects were then asked to carry out similar self-annotated physical activities in a random order and in an unsupervised environment. A combination of time-domain and frequency-domain features was extracted from the sensor data, including the first four central moments, zero-crossing rate, average magnitude, sensor cross-correlation, sensor auto-correlation, spectral entropy and dominant frequency components. A reduced feature set was generated using a wrapper subset evaluation technique with a linear forward search, and this feature set was employed for classifier comparison. The meta-level classifier AdaBoostM1 with C4.5 Graft as its base-level classifier achieved an overall accuracy of 95%. Equal-sized datasets of subject-independent and subject-dependent data were used to train this classifier, and high recognition rates could be achieved without the need for user-specific training. Furthermore, it was found that an accuracy of 88% could be achieved using data from the ankle and wrist sensors only.
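A sketch of several of the listed features for a single sensor axis; the window length and sampling rate are assumptions, not values from the study:

```python
# Sketch of window features named above: first four central moments,
# zero-crossing rate, average magnitude, spectral entropy, dominant freq.
import numpy as np
from scipy import stats

def window_features(x, fs=50.0):
    centered = x - x.mean()
    zcr = np.mean(np.signbit(centered[:-1]) != np.signbit(centered[1:]))
    psd = np.abs(np.fft.rfft(centered)) ** 2
    p = psd / psd.sum()
    spectral_entropy = -np.sum(p * np.log2(p + 1e-12))
    dom_freq = np.fft.rfftfreq(len(x), 1.0 / fs)[np.argmax(psd)]
    return {
        "mean": x.mean(), "var": x.var(),
        "skew": stats.skew(x), "kurtosis": stats.kurtosis(x),
        "zcr": zcr, "avg_magnitude": np.mean(np.abs(x)),
        "spectral_entropy": spectral_entropy, "dominant_freq": dom_freq,
    }

print(window_features(np.random.randn(256)))  # one 256-sample window
```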
Piquado, Tepring; Benichov, Jonathan I.; Brownell, Hiram; Wingfield, Arthur
2013-01-01
Objective: The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input. Design: Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech-rate of 150 words per minute; and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced). Study sample: Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable for age, years of formal education, and vocabulary. Results: When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input. Conclusion: Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall. PMID:22731919
Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First, we show the upper and lower bounds of the prediction accuracies (i.e., the best and worst possible prediction accuracies) of ensemble methods. Next, we show that an ensemble method can achieve greater than 0.5 prediction accuracy even when the individual classifiers have less than 0.5 prediction accuracy. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to achieve the upper- and lower-bound accuracies with random individual classifiers, and that better algorithms need to be developed. PMID:21853162
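The abstract does not give its proofs; the following sketch derives bounds of the same flavor for majority voting over k (odd) classifiers with a fixed average individual accuracy, via a simple vote-counting argument. This is our illustration, not the paper's notation:

```python
# Hedged sketch of upper/lower bounds on majority-vote accuracy for k
# (odd) binary classifiers with average individual accuracy p_bar.
# Counting argument: each instance receives v correct votes, the total
# vote mass is fixed at k * p_bar, and the ensemble is correct on an
# instance iff v >= m = (k + 1) / 2.
def majority_vote_bounds(k, p_bar):
    m = (k + 1) // 2
    # Best case: correct instances get exactly m votes, the rest get 0.
    upper = min(1.0, k * p_bar / m)
    # Worst case: wrong instances soak up m - 1 votes each,
    # correct instances take all k.
    lower = max(0.0, (k * p_bar - (m - 1)) / (k - m + 1))
    return lower, upper

for p_bar in (0.45, 0.55, 0.7):
    lo, hi = majority_vote_bounds(k=5, p_bar=p_bar)
    print(f"p_bar={p_bar:.2f}: ensemble accuracy in [{lo:.3f}, {hi:.3f}]")
```

Note that for p_bar = 0.45 the upper bound exceeds 0.5, consistent with the abstract's claim that an ensemble can beat 0.5 even when every individual classifier is below it.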
Larsson, Anne; Johansson, Adam; Axelsson, Jan; Nyholm, Tufve; Asklund, Thomas; Riklund, Katrine; Karlsson, Mikael
2013-02-01
The aim of this study was to evaluate MR-based attenuation correction of PET emission data of the head, based on a previously described technique that calculates substitute CT (sCT) images from a set of MR images. Images from eight patients, examined with ¹⁸F-FLT PET/CT and MRI, were included. sCT images were calculated and co-registered to the corresponding CT images, and transferred to the PET/CT scanner for reconstruction. The new reconstructions were then compared with the originals. The effect of replacing bone with soft tissue in the sCT images was also evaluated. The average relative difference between the sCT-corrected PET images and the CT-corrected PET images was 1.6% for the head and 1.9% for the brain. The average standard deviations of the relative differences within the head were relatively high, at 13.2%, primarily because of large differences in the nasal septa region. For the brain, the average standard deviation was lower, 4.1%. The global average difference in the head when replacing bone with soft tissue was 11%. The method presented here has a high rate of accuracy, but high-precision quantitative imaging of the nasal septa region is not possible at the moment.
Chang, Hsiang-Chih; Lee, Po-Lei; Lo, Men-Tzung; Lee, I-Hui; Yeh, Ting-Kuang; Chang, Chun-Yen
2012-05-01
This study proposes a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) that is independent of amplitude-frequency and phase calibrations. Six stepping delay flickering sequences (SDFSs) at a 32-Hz flickering frequency were used to implement a six-command BCI system. EEG signals recorded from the Oz position were first filtered within 29-35 Hz, segmented based on the trigger events of the SDFSs to obtain SDFS epochs, and then stored separately in epoch registers. An epoch-average process suppressed the inter-SDFS interference. For each detection point, the latest six SDFS epochs in each epoch register were averaged and the normalized power of the averaged responses was calculated. The stimulus that induced the maximum normalized power was identified as the attended visual target. Eight subjects were recruited in this study. All subjects were requested to produce the "563241" command sequence four times. The average accuracy, command transfer interval, and information transfer rate (mean ± std.) values for all eight subjects were 97.38 ± 5.97%, 3.56 ± 0.68 s, and 42.46 ± 11.17 bits/min, respectively. The proposed system requires no calibration of either the amplitude-frequency characteristic or the reference phase of the SSVEP, which may provide an efficient and reliable channel for the neuromuscularly disabled to communicate with external environments.
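A rough sketch of the decoding chain described above (band-pass filtering, epoch registers, epoch averaging, normalized power); the sampling rate, epoch length, and data are assumptions:

```python
# Sketch of the SSVEP decoding chain: band-pass 29-35 Hz, per-target
# epoch registers, average of the latest six epochs, argmax of power.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0                                   # sampling rate (assumed)
b, a = butter(4, [29.0 / (fs / 2), 35.0 / (fs / 2)], btype="band")
epoch_len = int(0.5 * fs)                    # epoch length (assumed)

def normalized_power(register):
    """Average the latest six epochs of one SDFS target, return its power."""
    avg = np.mean(register, axis=0)          # epoch-average suppresses
    return np.sum(avg ** 2) / len(avg)       # inter-SDFS interference

rng = np.random.default_rng(3)
registers = [[filtfilt(b, a, rng.standard_normal(epoch_len)) for _ in range(6)]
             for _ in range(6)]              # six targets x six stored epochs
powers = [normalized_power(r) for r in registers]
print("identified target:", int(np.argmax(powers)))
```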
NASA Technical Reports Server (NTRS)
Estes, N. A. 3rd; Michaud, G.; Zipes, D. P.; El-Sherif, N.; Venditti, F. J.; Rosenbaum, D. S.; Albrecht, P.; Wang, P. J.; Cohen, R. J.
1997-01-01
This investigation was performed to evaluate the feasibility of detecting repolarization alternans with the heart rate elevated by a bicycle exercise protocol. Sensitive spectral signal-processing techniques are able to detect beat-to-beat alternation of the amplitude of the T wave that is not visible on the standard electrocardiogram. Previous animal and human investigations using atrial or ventricular pacing have demonstrated that T-wave alternans is a marker of vulnerability to ventricular arrhythmias. Using a spectral analysis technique incorporating noise-reduction signal-processing software, we evaluated electrical alternans at rest and with the heart rate elevated during a bicycle exercise protocol. In this study we defined optimal criteria for electrical alternans to separate patients with inducible arrhythmias from those without. Alternans and signal-averaged electrocardiographic results were compared with vulnerability to ventricular arrhythmias as defined by induction of sustained ventricular tachycardia or fibrillation at electrophysiologic evaluation. In 27 patients, alternans recorded at rest and with exercise had a sensitivity of 89%, a specificity of 75%, and an overall clinical accuracy of 80% (p <0.003). In this patient population the signal-averaged electrocardiogram was not a significant predictor of arrhythmia vulnerability. This is the first study to report that repolarization alternans can be detected with the heart rate elevated by a bicycle exercise protocol. Alternans measured using this technique is an accurate predictor of arrhythmia inducibility.
Robust sleep quality quantification method for a personal handheld device.
Shin, Hangsik; Choi, Byunghun; Kim, Doyoon; Cho, Jaegeol
2014-06-01
The purpose of this study was to develop and validate a novel method for sleep quality quantification using personal handheld devices. The proposed method used 3- or 6-axis signals, including acceleration and angular velocity, obtained from built-in sensors in a smartphone, and applied a real-time wavelet denoising technique to minimize nonstationary noise. Sleep or wake status was decided on each axis, and the totals were summed to calculate sleep efficiency (SE), generally regarded as sleep quality. A sleep experiment was carried out to evaluate the performance of the proposed method, with 14 participating subjects. An experimental protocol was designed for comparative analysis. Activity during sleep was recorded not only by the proposed method but also simultaneously by well-known commercial applications; moreover, activity was recorded on different mattresses and locations to verify reliability in practical use. Every calculated SE was compared with the SE of a clinically certified medical device, the Philips (Amsterdam, The Netherlands) Actiwatch. In these experiments, the proposed method proved its reliability in quantifying sleep quality. Compared with the Actiwatch, the accuracy and average bias error of SE calculated by the proposed method were 96.50% and −1.91%, respectively. The proposed method outperformed the comparison applications by at least 11.41% in average accuracy and at least 6.10% in average bias; the average accuracy and average absolute bias error of the comparison applications were 76.33% and 17.52%, respectively.
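A minimal sketch of the sleep-efficiency computation described above, with per-axis wake/sleep decisions from thresholded motion; the threshold and epoch length are assumptions, and the paper's wavelet-denoising step is omitted:

```python
# Sketch: SE = sleep epochs / total epochs, with an epoch scored asleep
# only if every axis shows motion below a threshold. Values are invented.
import numpy as np

def sleep_efficiency(accel, fs=50.0, epoch_s=30.0, threshold=0.02):
    """accel: (n_samples, n_axes) acceleration with gravity removed."""
    n_epoch = int(fs * epoch_s)
    n = (accel.shape[0] // n_epoch) * n_epoch
    epochs = accel[:n].reshape(-1, n_epoch, accel.shape[1])
    activity = epochs.std(axis=1)            # motion level per epoch, per axis
    asleep = (activity < threshold).all(axis=1)
    return 100.0 * asleep.mean()

night = 0.005 * np.random.randn(8 * 3600 * 50, 3)   # quiet 8-hour night
night[::40000] += 2.0                                # sporadic movements
print(f"SE = {sleep_efficiency(night):.1f}%")
```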
Heba, Elhamy R.; Desai, Ajinkya; Zand, Kevin A.; Hamilton, Gavin; Wolfson, Tanya; Schlein, Alexandra N.; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B.; Middleton, Michael S.
2016-01-01
Purpose: To determine the accuracy and the effect of possible subject-based confounders of magnitude-based magnetic resonance imaging (MRI) for estimating hepatic proton density fat fraction (PDFF) for different numbers of echoes in adults with known or suspected nonalcoholic fatty liver disease, using MR spectroscopy (MRS) as a reference. Materials and Methods: In this retrospective analysis of 506 adults, hepatic PDFF was estimated by unenhanced 3.0T MRI, using right-lobe MRS as reference. Regions of interest placed on source images and on six-echo parametric PDFF maps were colocalized to the MRS voxel location. Accuracy using different numbers of echoes was assessed by regression and Bland–Altman analysis; slope, intercept, average bias, and R² were calculated. The effect of age, sex, and body mass index (BMI) on hepatic PDFF accuracy was investigated using multivariate linear regression analyses. Results: MRI closely agreed with MRS for all tested methods. For three- to six-echo methods, slope, regression intercept, average bias, and R² were 1.01–0.99, 0.11–0.62%, 0.24–0.56%, and 0.981–0.982, respectively. Slope was closest to unity for the five-echo method. The two-echo method was least accurate, underestimating PDFF by an average of 2.93%, compared to an average of 0.23–0.69% for the other methods. Statistically significant but clinically nonmeaningful effects on PDFF error were found for subject BMI (P range: 0.0016 to 0.0783) and male sex (P range: 0.015 to 0.037), and no statistically significant effect was found for subject age (P range: 0.18–0.24). Conclusion: Hepatic magnitude-based MRI PDFF estimates using three, four, five, and six echoes, and six-echo parametric maps, are accurate compared to reference MRS values, and their accuracy is not meaningfully confounded by age, sex, or BMI. PMID:26201284
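A short sketch of the agreement statistics reported above (regression slope, intercept, R², and Bland-Altman bias), run on hypothetical paired PDFF estimates:

```python
# Sketch of the regression and Bland-Altman agreement analysis, using
# invented MRI-vs-MRS PDFF pairs of the same cohort size as the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pdff_mrs = rng.uniform(0, 40, 506)                         # reference (%)
pdff_mri = 1.0 * pdff_mrs + 0.4 + rng.normal(0, 1.2, 506)  # hypothetical MRI

fit = stats.linregress(pdff_mrs, pdff_mri)
diff = pdff_mri - pdff_mrs
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.2f}%, "
      f"R^2={fit.rvalue ** 2:.3f}")
print(f"Bland-Altman bias={diff.mean():.2f}%, limits of agreement "
      f"{diff.mean() - 1.96 * diff.std():.2f}% to "
      f"{diff.mean() + 1.96 * diff.std():.2f}%")
```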
Belley, Matthew D.; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J.; Chen, Benny J.; Dewhirst, Mark W.; Yoshizumi, Terry T.
2014-01-01
Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution, however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs. PMID:24593746
Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens; Comani, Silvia
2018-01-01
EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed into 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity (p). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements and myogenic artefacts, which is comparable to that of existing methods. Being also independent of simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings.
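A sketch of the per-artefact classification step, with an RBF-kernel SVM trained on hypothetical 14-feature IC fingerprints and the same metrics the study reports (accuracy, HR, FAR, FOR):

```python
# Sketch: nonlinear SVM on IC fingerprints. The fingerprint values and
# the toy labeling rule are invented; only the pipeline shape is real.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
X = rng.standard_normal((400, 14))                 # 400 ICs x 14 features
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)    # 1 = artefactual (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
pred = clf.predict(X_te)

tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
fp = np.sum((pred == 1) & (y_te == 0)); tn = np.sum((pred == 0) & (y_te == 0))
print(f"accuracy={(tp + tn) / len(y_te):.2f}, HR={tp / (tp + fn):.2f}, "
      f"FAR={fp / (fp + tn):.2f}, FOR={fn / (fn + tn):.2f}")
```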
Protein contact prediction using patterns of correlation.
Hamilton, Nicholas; Burrage, Kevin; Ragan, Mark A; Huber, Thomas
2004-09-01
We describe a new method for using neural networks to predict residue contact pairs in a protein. The main inputs to the neural network are a set of 25 measures of correlated mutation between all pairs of residues in two "windows" of size 5 centered on the residues of interest. While the individual pair-wise correlations are a relatively weak predictor of contact, by training the network on windows of correlation the accuracy of prediction is significantly improved. The neural network is trained on a set of 100 proteins and then tested on a disjoint set of 1033 proteins of known structure. An average predictive accuracy of 21.7% is obtained taking the best L/2 predictions for each protein, where L is the sequence length. Taking the best L/10 predictions gives an average accuracy of 30.7%. The predictor is also tested on a set of 59 proteins from the CASP5 experiment. The accuracy is found to be relatively consistent across different sequence lengths, but to vary widely according to the secondary structure. Predictive accuracy is also found to improve by using multiple sequence alignments containing many sequences to calculate the correlations.
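A small sketch of the top-L/k evaluation used above: rank all residue pairs by predicted score and take the fraction of true contacts among the best L/2 or L/10. The minimum sequence separation of six residues is our assumption, not stated in the abstract:

```python
# Sketch of top-L/k contact accuracy on an invented contact map.
import numpy as np

def top_lk_accuracy(scores, contacts, L, k):
    """scores, contacts: (L, L) symmetric arrays; contacts is 0/1."""
    iu = np.triu_indices(L, k=6)          # skip near-diagonal pairs (assumed)
    order = np.argsort(scores[iu])[::-1]  # rank pairs by predicted score
    top = order[: L // k]
    return contacts[iu][top].mean()

L = 120
rng = np.random.default_rng(5)
contacts = (rng.random((L, L)) < 0.03).astype(int)
contacts = np.triu(contacts) + np.triu(contacts).T
scores = contacts + 0.8 * rng.random((L, L))   # noisy correlation-like scores
scores = (scores + scores.T) / 2
print(f"top-L/2: {top_lk_accuracy(scores, contacts, L, 2):.3f}, "
      f"top-L/10: {top_lk_accuracy(scores, contacts, L, 10):.3f}")
```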
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
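The correlation area method's weighting formulas are not given in the abstract; a generic inverse-variance weighted mean conveys the basic idea of combining measurements of different accuracies into a basin average with an accuracy measure. All numbers are hypothetical:

```python
# Hedged sketch: combine a point gauge, a flight-line estimate, and an
# areal remote-sensing estimate into one basin mean, weighting each by
# its error variance (a stand-in for the correlation-area weights).
import numpy as np

values = np.array([12.0, 15.5, 14.0])        # snow water equivalent (cm)
variances = np.array([4.0, 1.0, 2.25])       # measurement error variances

weights = (1.0 / variances) / np.sum(1.0 / variances)
mean_areal = np.sum(weights * values)
var_areal = 1.0 / np.sum(1.0 / variances)    # variance of the combined mean
print(f"mean areal value: {mean_areal:.2f} cm, "
      f"std: {np.sqrt(var_areal):.2f} cm")
```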
Colvill, Emma; Booth, Jeremy; Nill, Simeon; Fast, Martin; Bedford, James; Oelfke, Uwe; Nakamura, Mitsuhiro; Poulsen, Per; Worm, Esben; Hansen, Rune; Ravkilde, Thomas; Scherman Rydhög, Jonas; Pommer, Tobias; Munck Af Rosenschold, Per; Lang, Stephanie; Guckenberger, Matthias; Groh, Christian; Herrmann, Christian; Verellen, Dirk; Poels, Kenneth; Wang, Lei; Hadsell, Michael; Sothmann, Thilo; Blanck, Oliver; Keall, Paul
2016-04-01
A study of real-time adaptive radiotherapy systems was performed to test the hypothesis that, across delivery systems and institutions, dosimetric accuracy is improved with adaptive treatments over non-adaptive radiotherapy in the presence of patient-measured tumor motion. Ten institutions with robotic (2), gimbaled (2), MLC (4), or couch (2) tracking used common materials, including CT and structure sets, motion traces, and planning protocols, to create a lung and a prostate plan. For each motion trace, the plan was delivered twice to a moving dosimeter, with and without real-time adaptation. Each measurement was compared to a static measurement and the percentage of failed points for γ-tests recorded. For all lung traces, all measurement sets show improved dose accuracy, with a mean 2%/2 mm γ-fail rate of 1.6% with adaptation and 15.2% without adaptation (p<0.001). For all prostate traces, the mean 2%/2 mm γ-fail rate was 1.4% with adaptation and 17.3% without adaptation (p<0.001). The difference between the four systems was small, with an average 2%/2 mm γ-fail rate of <3% for all systems with adaptation for lung and prostate. The investigated systems all accounted for realistic tumor motion accurately and performed to a similarly high standard, with real-time adaptation significantly outperforming non-adaptive delivery methods.
Visual Inspection Reliability for Precision Manufactured Parts.
See, Judi E
2015-12-01
Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied. Eighty-two inspectors in the U.S. Nuclear Security Enterprise inspected 140 parts for eight different defects. Inspectors correctly rejected 85% of defective items and incorrectly rejected 35% of acceptable parts. Use of a phased inspection approach based on inspector confidence ratings was not an effective or efficient technique to improve the overall accuracy of the process. Results did verify that inspection is a workload-intensive task, dominated by mental demand and effort. Hits for Nuclear Security Enterprise inspection were not vastly superior to the industry average of 80%, and they were achieved at the expense of a high scrap rate not typically observed during visual inspection tasks. This study provides the first empirical data to address the reliability of visual inspection for precision manufactured parts used in nuclear weapons. Results enhance current understanding of the process of visual inspection and can be applied to improve reliability for precision manufactured parts.
The accuracy of stated energy contents of reduced-energy, commercially prepared foods
USDA-ARS?s Scientific Manuscript database
The accuracy of stated energy contents of reduced calorie restaurant foods and frozen meals purchased from supermarkets was evaluated. Measured energy values of 29 quick-serve and sit-down restaurant foods averaged 18% more than stated values, and measured energy values of 10 frozen meals purchased ...
Nonintrusive dynamic flowmeter
NASA Technical Reports Server (NTRS)
Pedersen, N. E.; Lynnworth, L. C.
1973-01-01
Description of some of the design and performance characteristics of an ultrasonic dynamic flowmeter that combines nonintrusiveness, fast response, high accuracy, and high resolution and is intended for use with cryogenic liquids and water. The flowmeter measures, to 1% accuracy, both dynamic and steady flow velocity averaged over the pipe area.
Oh-Oka, Hitoshi; Nose, Ryuichiro
2005-09-01
Using a portable three-dimensional ultrasound scanning device (the Bladder Scan BVI6100, Diagnostic Ultrasound Corporation), we examined measured values of bladder volume, especially focusing on volumes lower than 100 ml. A total of 100 patients (male: 66, female: 34) were enrolled in the study. We made a comparative study between the measured value (the average of three measurements of bladder urine volume after a trial in male and female modes) using the BVI6100 and the actual measured value of the sample obtained by urethral catheterization in each patient. We examined the factors which could increase the error rate. We also introduce effective techniques to reduce measurement errors. The actual measured values in all patients correlated well with the average value of three measurements after a trial in the male mode of the BVI6100. The correlation coefficient was 0.887, the error rate was −4.6 ± 24.5%, and the average coefficient of variation was 15.2. It was observed that the measurement result using the BVI6100 is influenced by patient-side factors (extracted edges between bladder wall and urine, thickened bladder wall, irregular bladder wall, degree of bladder flattening, mistaking the prostate for the bladder in males, mistaking the bladder for the uterus in the female mode, etc.) or examiner-side factors (angle between the BVI and the abdominal wall, compatibility between the abdominal wall and the ultrasound probe, controlling deflection while using the probe, etc.). When appropriate patients are chosen and proper measurement is performed, the BVI6100 provides significantly higher accuracy in determining bladder volume compared with existing abdominal ultrasound methods. The BVI6100 is a convenient and extremely effective device also for the measurement of bladder urine volumes over 100 ml.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A., E-mail: mmalin@wisc.edu, E-mail: ladewerd@wisc.edu
2015-05-15
Purpose: To investigate why dose-rate constants for ¹²⁵I and ¹⁰³Pd seeds computed using the spectroscopic technique, Λ_spec, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique's use of approximations of the true fluence distribution leaving the source, φ_full. In particular, the fluence distribution used in the spectroscopic technique, φ_spec, approximates the spatial, angular, and energy distributions of φ_full. This work quantified the extent to which each of these approximations affects the accuracy of Λ_spec. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ_spec. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ_full, were computed with MC simulations using the full source geometry for each of 14 different ¹²⁵I and 6 different ¹⁰³Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ_spec. Λ_spec was compared to Λ_full to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ_full were extracted from the phase spaces and were qualitatively compared to those used by φ_spec. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ_spec. The dose-rate constant resulting from using approximated distribution i, Λ_approx,i, was computed using the modified phase space and compared to Λ_full. For each source, this process was repeated for each approximation in order to determine which approximations used in the spectroscopic technique affect the accuracy of Λ_spec. Results: For all sources studied, the angular and spatial distributions of φ_full were more complex than the distributions used in φ_spec. Differences between Λ_spec and Λ_full ranged from −0.6% to +6.4%, confirming the discrepancies found by Rodriguez and Rogers. The largest contribution to the discrepancy was the assumption of isotropic emission in φ_spec, which caused differences in Λ of up to +5.3% relative to Λ_full. Use of the approximated spatial and energy distributions caused smaller average discrepancies in Λ of −0.4% and +0.1%, respectively. The water-only model introduced an average discrepancy in Λ of −0.4%. Conclusions: The approximations used in φ_spec caused discrepancies between Λ_approx,i and Λ_full of up to 7.8%. With the exception of the energy distribution, the approximations used in φ_spec contributed to this discrepancy for all source models studied. To improve the accuracy of Λ_spec, the spatial and angular distributions of φ_full could be measured, with the measurements replacing the approximated distributions. The methodology used in this work could be used to determine the resolution that such measurements would require by computing the dose-rate constants from phase spaces modified to reflect φ_full binned at different spatial and angular resolutions.
Estimation of Symptom Severity During Chemotherapy From Passively Sensed Data: Exploratory Study
Low, Carissa A; Dey, Anind K; Ferreira, Denzil; Kamarck, Thomas; Sun, Weijing; Bae, Sangwon; Doryab, Afsaneh
2017-01-01
Background: Physical and psychological symptoms are common during chemotherapy in cancer patients, and real-time monitoring of these symptoms can improve patient outcomes. Sensors embedded in mobile phones and wearable activity trackers could be potentially useful in monitoring symptoms passively, with minimal patient burden. Objective: The aim of this study was to explore whether passively sensed mobile phone and Fitbit data could be used to estimate daily symptom burden during chemotherapy. Methods: A total of 14 patients undergoing chemotherapy for gastrointestinal cancer participated in the 4-week study. Participants carried an Android phone and wore a Fitbit device for the duration of the study and also completed daily severity ratings of 12 common symptoms. Symptom severity ratings were summed to create a total symptom burden score for each day, and ratings were centered on individual patient means and categorized into low, average, and high symptom burden days. Day-level features were extracted from raw mobile phone sensor and Fitbit data and included features reflecting mobility and activity, sleep, phone usage (eg, duration of interaction with phone and apps), and communication (eg, number of incoming and outgoing calls and messages). We used a rotation random forests classifier with cross-validation and resampling with replacement to evaluate population and individual model performance and correlation-based feature subset selection to select nonredundant features with the best predictive ability. Results: Across 295 days of data with both symptom and sensor data, a number of mobile phone and Fitbit features were correlated with patient-reported symptom burden scores. We achieved an accuracy of 88.1% for our population model. The subset of features with the best accuracy included sedentary behavior as the most frequent activity, fewer minutes in light physical activity, less variable and average acceleration of the phone, and longer screen-on time and interactions with apps on the phone. Mobile phone features had better predictive ability than Fitbit features. Accuracy of individual models ranged from 78.1% to 100% (mean 88.4%), and subsets of relevant features varied across participants. Conclusions: Passive sensor data, including mobile phone accelerometer and usage and Fitbit-assessed activity and sleep, were related to daily symptom burden during chemotherapy. These findings highlight opportunities for long-term monitoring of cancer patients during chemotherapy with minimal patient burden as well as real-time adaptive interventions aimed at early management of worsening or severe symptoms. PMID:29258977
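A sketch of the modeling step: scikit-learn has no rotation-forest implementation, so a plain random forest stands in, and the four day-level features and the class rule are illustrative stand-ins, not the study's variables:

```python
# Hedged sketch: classify low/average/high symptom-burden days from
# day-level sensor features. Features, labels, and sizes are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(13)
n_days = 295
X = np.column_stack([
    rng.uniform(0, 600, n_days),     # sedentary minutes
    rng.uniform(0, 120, n_days),     # light-activity minutes
    rng.uniform(0, 2, n_days),       # acceleration variability
    rng.uniform(0, 300, n_days),     # screen-on minutes
])
y = np.digitize(X[:, 0] - X[:, 1] + rng.normal(0, 80, n_days),
                [150, 450])          # 0=low, 1=average, 2=high burden (toy)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```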
Random noise effects in pulse-mode digital multilayer neural networks.
Kim, Y C; Shanblatt, M A
1995-01-01
A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
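A minimal sketch of the stochastic-computing idea described above: values in [0, 1] travel as Bernoulli pulse streams, an AND gate multiplies them, and the result is read back as an average pulse occurrence rate, with variance following the simple binomial model that the paper refines with the hypergeometric distribution:

```python
# Sketch: stochastic multiplication with pseudorandom pulse sequences.
import numpy as np

rng = np.random.default_rng(17)
n = 4096                       # pulse-sequence length
w, x = 0.6, 0.7                # synaptic weight and neuron state

w_stream = rng.random(n) < w   # Bernoulli pulse sequences encode values
x_stream = rng.random(n) < x
product_stream = w_stream & x_stream      # AND gate = multiplication

estimate = product_stream.mean()          # average pulse occurrence rate
print(f"true={w * x:.4f}, estimate={estimate:.4f}, "
      f"binomial std={np.sqrt(w * x * (1 - w * x) / n):.4f}")
```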
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Feature-Based Retinal Image Registration Using D-Saddle Feature
Hasikin, Khairunnisa; A. Karim, Noor Khairiah; Ahmedy, Fatimah
2017-01-01
Retinal image registration is important to assist diagnosis and monitor retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on the low-quality region that consists of vessels of varying contrast and sizes. A recent feature detector known as Saddle detects feature points on vessels, but these points are poorly distributed and densely positioned on strong-contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with Saddle detector (D-Saddle) to detect feature points on the low-quality region that consists of vessels with varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) Dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates are observed for four other state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, the paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle. PMID:29204257
Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.
Zhao, Jing; Li, Wei; Li, Mengfan
2015-01-01
In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operational safety, we set a model classification accuracy above 90.0% as a mandatory requirement for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets and five repetitions per trial was able to achieve accuracies over 90.0%. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
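The ITRs quoted above follow from the standard Wolpaw formula; the sketch below reproduces the computation, noting that the exact figures depend on which trial time is used, so small differences from the reported values are expected:

```python
# Wolpaw ITR: bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
# scaled to bits/min by the trial duration. Valid for 0 < P < 1.
import math

def itr_bits_per_min(n_targets, accuracy, trial_s):
    p, n = accuracy, n_targets
    bits = (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_s

print(f"SSVEP 4-class: {itr_bits_per_min(4, 0.903, 3.65):.1f} bits/min")
print(f"P300  6-class: {itr_bits_per_min(6, 0.913, 6.60):.1f} bits/min")
```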
NASA Astrophysics Data System (ADS)
Liang, Kun; Niu, Qunjie; Wu, Xiangkui; Xu, Jiaqi; Peng, Li; Zhou, Bo
2017-09-01
A lidar system with a Fabry-Pérot etalon and an intensified charge-coupled device can be used to obtain the scattering spectrum of the ocean and retrieve oceanic temperature profiles. However, the spectrum can be polluted by noise, resulting in measurement error. To analyze the effect of signal-to-noise ratio (SNR) on the measurement accuracy of Brillouin lidar in water, the theoretical model and characteristics of the SNR were investigated. Noise spectra with different SNRs were measured repeatedly in both simulation and experiment. The results show that accuracy is related to SNR and, balancing time consumption against quality, the average of five measurements was adopted for real remote sensing under pulse laser conditions of wavelength 532 nm, pulse energy 650 mJ, repetition rate 10 Hz, pulse width 8 ns and linewidth 0.003 cm⁻¹ (90 MHz). Measuring with the Brillouin linewidth gives better accuracy at lower temperatures (<15 °C), while measuring with the Brillouin shift is the more appropriate method at higher temperatures (>15 °C), based on the classical retrieval model we adopt. The experimental results show that the temperature error is 0.71 °C and 0.06 °C based on shift and linewidth, respectively, when the image SNR is in the range of 3.2 dB-3.9 dB.
Huang, Yu-Ting; Huang, Chao-Ya; Su, Hsiu-Ya; Ma, Chen-Te
2018-06-01
Ventilator-associated pneumonia (VAP) is a common healthcare-associated infection in the neonatal intensive care unit (NICU). The average VAP infection density was 4.7‰ in our unit between June and August 2015. The results of a status survey indicated that in-service education lacked specialization, leading to inadequate awareness among staff regarding the proper care of newborns with VAP and a lack of related care guides. This, in turn, resulted in inconsistencies in care measures for newborns with VAP. The aim was to improve the accuracy of implementation of preventive measures for VAP among medical staff and reduce the density of VAP infections in the NICU. We conducted a literature search and adopted medical team resource management methods; established effective team communication; established monitoring mechanisms and incentives; established mandatory specialized in-service education content and a VAP preventive care guide exclusively for newborns as a reference for medical staff during care execution; and installed additional equipment and aids and set reminders to ensure the implementation of VAP preventive measures. The accuracy rate of preventive measure execution by medical staff improved from 70.1% to 97.9%, and the VAP infection density in the NICU decreased from 4.7‰ to 0.52‰. Team integration effectively improved the accuracy of implementation of VAP-prevention measures, reduced the density of VAP infections, enhanced quality of care, and ensured that newborns received care that was more in line with specialization needs.
Li, Eldon Y; Tung, Chen-Yuan; Chang, Shu-Hsun
2016-08-01
The quest for an effective system capable of monitoring and predicting the trends of epidemic diseases is a critical issue for communities worldwide. With the prevalence of Internet access, more and more researchers today are using data from both search engines and social media to improve the prediction accuracy. In particular, a prediction market system (PMS) exploits the wisdom of crowds on the Internet to achieve relatively high accuracy. This study presents the architecture of a PMS and demonstrates the matching mechanism of logarithmic market scoring rules. The system was implemented to predict infectious diseases in Taiwan with the wisdom of crowds in order to improve the accuracy of epidemic forecasting. The PMS architecture contains three design components: database clusters, market engine, and Web applications. The system accumulated knowledge from 126 health professionals for 31 weeks to predict five disease indicators: the confirmed cases of dengue fever, the confirmed cases of severe and complicated influenza, the rate of enterovirus infections, the rate of influenza-like illnesses, and the confirmed cases of severe and complicated enterovirus infection. Based on the winning ratio, the PMS predicts the trends of three out of five disease indicators more accurately than does the existing system that uses the five-year average values of historical data for the same weeks. In addition, the PMS with the matching mechanism of logarithmic market scoring rules is easy to understand for health professionals and applicable to predicting all five disease indicators. The PMS architecture of this study allows organizations and individuals to implement it for various purposes in our society. The system can continuously update the data and improve prediction accuracy in monitoring and forecasting the trends of epidemic diseases. Future researchers could replicate and apply the PMS demonstrated in this study to more infectious diseases and wider geographical areas, especially the under-developed countries across Asia and Africa. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
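Where the abstract points to the matching mechanism of logarithmic market scoring rules, the underlying mechanics are Hanson's LMSR: a cost function prices each trade, and the instantaneous prices double as probability estimates. A minimal sketch; the liquidity parameter b, the two-outcome market, and the trade size are illustrative assumptions, not details from the study.

```python
import math

def lmsr_cost(q, b=100.0):
    """Hanson's LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous price (implicit probability estimate) of each outcome."""
    z = [math.exp(qi / b) for qi in q]
    s = sum(z)
    return [zi / s for zi in z]

def trade_cost(q, outcome, shares, b=100.0):
    """Cost a trader pays to buy `shares` of one outcome: C(q') - C(q)."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

q = [0.0, 0.0]               # hypothetical two-outcome market, e.g. "cases rise" vs "cases fall"
print(lmsr_prices(q))        # [0.5, 0.5] before any trades
print(trade_cost(q, 0, 50))  # price of 50 shares of outcome 0
```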
Busse, Harald; Riedel, Tim; Garnov, Nikita; Thörmer, Gregor; Kahn, Thomas; Moche, Michael
2015-01-01
MRI is of great clinical utility for the guidance of special diagnostic and therapeutic interventions. The majority of such procedures are performed iteratively ("in-and-out") in standard, closed-bore MRI systems, with control imaging inside the bore and needle adjustments outside the bore. The fundamental limitations of such an approach have led to the development of various assistance techniques, from simple guidance tools to advanced navigation systems. The purpose of this work was to thoroughly assess the targeting accuracy, workflow and usability of a clinical add-on navigation solution on 240 simulated biopsies by different medical operators. Navigation relied on a virtual 3D MRI scene with real-time overlay of the optically tracked biopsy needle. Smart reference markers on a freely adjustable arm ensured proper registration. Twenty-four operators - attending (AR) and resident radiologists (RR) as well as medical students (MS) - performed well-controlled biopsies of 10 embedded model targets (mean diameter: 8.5 mm, insertion depths: 17-76 mm). Targeting accuracy, procedure times and 13 Likert scores on system performance were determined (strong agreement: 5.0). Differences in diagnostic success rates (AR: 93%, RR: 88%, MS: 81%) were not significant. In contrast, biopsy times (AR: 4:15, RR: 4:40, MS: 5:06 min:sec) differed significantly between groups (p<0.01). The mean overall rating was 4.2. The average operator would use the system again (4.8) and stated that the outcome justifies the extra effort (4.4). Lowest agreement was reported for the robustness against external perturbations (2.8). The described combination of optical tracking technology with an automatic MRI registration appears to be sufficiently accurate for instrument guidance in a standard (closed-bore) MRI environment. High targeting accuracy and usability were demonstrated on a relatively large number of procedures and operators. Between groups with different expertise there were significant differences in experimental procedure times but not in the number of successful biopsies.
Targeting accuracy of single-isocenter intensity-modulated radiosurgery for multiple lesions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvo-Ortega, J.F., E-mail: jfcdrr@yahoo.es; Pozo, M.; Moragues, S.
To investigate the targeting accuracy of intensity-modulated SRS (IMRS) plans designed to simultaneously treat multiple brain metastases with a single isocenter. A home-made acrylic phantom able to support a film (EBT3) in its coronal plane was used. The phantom was CT scanned and three coplanar small targets (a central and two peripheral) were outlined in the Eclipse system. Peripheral targets were 6 cm apart from the central one. A reference IMRS plan was designed to simultaneously treat the three targets, using only a single isocenter located at the center of the central target. After positioning the phantom on the linac using the room lasers, a CBCT scan was acquired and the reference plan was mapped onto it by placing the planned isocenter at the intersection of the landmarks used in the film to show the linac isocenter. The mapped plan was then recalculated and delivered. The film dose distribution was derived using a cloud computing application (www.radiochromic.com) that uses a triple-channel dosimetry algorithm. Comparisons of dose distributions using the gamma index (5%/1 mm) were performed over a 5 × 5 cm² region centered over each target. 2D shifts required to get the best gamma passing rates on the peripheral target regions were compared with those reported for the central target. The experiment was repeated ten times in different sessions. Average 2D shifts required to achieve optimal gamma passing rates (99%, 97%, 99%) were 0.7 mm (SD: 0.3 mm), 0.8 mm (SD: 0.4 mm) and 0.8 mm (SD: 0.3 mm) for the central and the two peripheral targets, respectively. No statistical differences (p > 0.05) were found for targeting accuracy between the central and the two peripheral targets. The study revealed a targeting accuracy within 1 mm for off-isocenter targets within 6 cm of the linac isocenter when a single-isocenter IMRS plan is designed.
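The gamma analysis used above can be reproduced with a brute-force implementation: an evaluated pixel passes when some reference pixel lies within the combined dose-difference/distance-to-agreement tolerance ellipsoid. A minimal 2-D sketch under a global 5%/1 mm criterion; the random dose maps and pixel size are placeholders, not the study's film data.

```python
import numpy as np

def gamma_pass_rate(ref, evalu, pixel_mm, dose_crit=0.05, dist_mm=1.0):
    """Brute-force global gamma analysis (e.g. 5%/1 mm) on 2-D dose maps.

    For each evaluated pixel, gamma is the minimum over all reference pixels of
    sqrt((d_dose/dose_tol)^2 + (d_dist/dist_mm)^2); the pixel passes if gamma <= 1.
    """
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    dose_tol = dose_crit * ref.max()          # global dose criterion
    gammas = np.empty_like(evalu, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * pixel_mm ** 2
            dose2 = (ref - evalu[iy, ix]) ** 2
            gammas[iy, ix] = np.sqrt(dist2 / dist_mm ** 2 + dose2 / dose_tol ** 2).min()
    return 100.0 * np.mean(gammas <= 1.0)

ref = np.random.rand(50, 50)       # stand-ins for planned and film dose maps
shifted = np.roll(ref, 1, axis=1)  # evaluated distribution shifted by one pixel
print(gamma_pass_rate(ref, shifted, pixel_mm=0.5))
```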
Koopman, Daniëlle; van Dalen, Jorn A; Arkies, Hester; Oostdijk, Ad H J; Francken, Anne Brecht; Bart, Jos; Slump, Cornelis H; Knollema, Siert; Jager, Pieter L
2018-01-16
We evaluated the diagnostic implications of a small-voxel reconstruction for lymph node characterization in breast cancer patients, using state-of-the-art FDG-PET/CT. We included 69 FDG-PET/CT scans from breast cancer patients. PET data were reconstructed using standard 4 × 4 × 4 mm³ and small 2 × 2 × 2 mm³ voxels. Two hundred thirty loco-regional lymph nodes were included, of which 209 nodes were visualised on PET/CT. All nodes were visually scored as benign or malignant, and SUVmax and TB ratio (= SUVmax/SUVbackground) were measured. Final diagnosis was based on histological or imaging information. We determined the accuracy, sensitivity and specificity for both reconstruction methods and calculated optimal cut-off values to distinguish benign from malignant nodes. Sixty-one benign and 169 malignant lymph nodes were included. Visual evaluation accuracy was 73% (sensitivity 67%, specificity 89%) on standard-voxel images and 77% (sensitivity 78%, specificity 74%) on small-voxel images (p = 0.13). Across malignant nodes visualised on PET/CT, the small-voxel score was more often correct compared with the standard-voxel score (89 vs. 76%, p < 0.001). In benign nodes, the standard-voxel score was more often correct (89 vs. 74%, p = 0.04). Quantitative data were based on the 61 benign and 148 malignant lymph nodes visualised on PET/CT. SUVs and TB ratios were on average 3.0 and 1.6 times higher in malignant nodes compared to those in benign nodes (p < 0.001), on standard- and small-voxel PET images respectively. Small-voxel PET showed average increases in SUVmax and TB ratio of typically 40% over standard-voxel PET. The optimal SUVmax cut-off using standard voxels was 1.8 (sensitivity 81%, specificity 95%, accuracy 85%), while for small voxels the optimal SUVmax cut-off was 2.6 (sensitivity 78%, specificity 98%, accuracy 84%). Differences in accuracy were non-significant. Small-voxel PET/CT improves the sensitivity of visual lymph node characterization and provides a higher detection rate of malignant lymph nodes. However, small-voxel PET/CT also introduced more false-positive results in benign nodes. Across all nodes, differences in accuracy were non-significant. Quantitatively, small-voxel images require higher cut-off values. Readers have to adapt their reference standards.
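Optimal cut-off values like the 1.8 and 2.6 SUVmax thresholds above are typically chosen by scanning candidate thresholds for the point maximizing Youden's J (sensitivity + specificity - 1). A minimal sketch on synthetic SUVmax values; the distributions are invented for illustration and do not reproduce the study's data.

```python
import numpy as np

def best_cutoff(values, is_malignant):
    """Scan candidate cut-offs and pick the one maximizing Youden's J."""
    values = np.asarray(values, dtype=float)
    truth = np.asarray(is_malignant, dtype=bool)
    best = (None, -1.0)
    for c in np.unique(values):
        pred = values >= c                  # classify node as malignant
        sens = np.mean(pred[truth])         # true-positive rate
        spec = np.mean(~pred[~truth])       # true-negative rate
        j = sens + spec - 1.0
        if j > best[1]:
            best = (c, j)
    return best

# Hypothetical SUVmax values for 61 benign and 148 malignant nodes:
rng = np.random.default_rng(0)
suv = np.concatenate([rng.normal(1.2, 0.4, 61), rng.normal(3.5, 1.2, 148)])
truth = np.concatenate([np.zeros(61, bool), np.ones(148, bool)])
print(best_cutoff(suv, truth))  # (optimal cut-off, Youden's J)
```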
The Impact of Sea Ice Concentration Accuracies on Climate Model Simulations with the GISS GCM
NASA Technical Reports Server (NTRS)
Parkinson, Claire L.; Rind, David; Healy, Richard J.; Martinson, Douglas G.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The Goddard Institute for Space Studies global climate model (GISS GCM) is used to examine the sensitivity of the simulated climate to sea ice concentration specifications in the type of simulation done in the Atmospheric Modeling Intercomparison Project (AMIP), with specified oceanic boundary conditions. Results show that sea ice concentration uncertainties of ±7% can affect simulated regional temperatures by more than 6 °C, and biases in sea ice concentrations of +7% and -7% alter simulated annually averaged global surface air temperatures by -0.10 °C and +0.17 °C, respectively, over those in the control simulation. The resulting 0.27 °C difference in simulated annual global surface air temperatures is reduced by a third, to 0.18 °C, when considering instead biases of +4% and -4%. More broadly, least-squares fits through the temperature results of 17 simulations, with ice concentration input changes ranging from increases of 50% to decreases of 50% versus the control simulation, yield a yearly average global impact of 0.0107 °C warming for every 1% ice concentration decrease, i.e., 1.07 °C warming for the full +50% to -50% range. Regionally and on a monthly average basis, the differences can be far greater, especially in the polar regions, where wintertime contrasts between the +50% and -50% cases can exceed 30 °C. However, few statistically significant effects are found outside the polar latitudes, and temperature effects over the non-polar oceans tend to be under 1 °C, due in part to the specification of an unvarying annual cycle of sea surface temperatures. The ±7% and ±4% results provide bounds on the impact (on GISS GCM simulations making use of satellite data) of satellite-derived ice concentration inaccuracies, ±7% being the current estimated average accuracy of satellite retrievals and ±4% being the anticipated improved average accuracy for upcoming satellite instruments. Results show that the impact on simulated temperatures of imposed ice concentration changes is least in summer, encouragingly the same season in which the satellite accuracies are thought to be worst. Hence the impact of satellite inaccuracies is probably less than the use of an annually averaged satellite inaccuracy would suggest.
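The 0.0107 °C-per-percent figure above is the slope of an ordinary least-squares fit through the 17 simulations. A minimal sketch with synthetic (ice change, temperature change) pairs standing in for the model output; only the interpretation of the fitted slope comes from the abstract.

```python
import numpy as np

# Hypothetical (delta ice concentration %, delta global T degC) pairs standing
# in for the 17 simulations; the abstract reports roughly 0.0107 degC of
# warming per 1% ice concentration decrease, i.e. a slope near -0.0107.
d_ice = np.array([-50, -30, -14, -7, -4, 0, 4, 7, 14, 30, 50], dtype=float)
d_temp = -0.0107 * d_ice + np.random.default_rng(1).normal(0, 0.02, d_ice.size)

slope, intercept = np.polyfit(d_ice, d_temp, 1)
print(f"{slope:.4f} degC per 1% ice concentration change")
```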
Incidence of stomach cancer in Oman and the other Gulf Cooperation Council countries.
Al-Mahrouqi, Haitham; Parkin, Lianne; Sharples, Katrina
2011-07-01
Stomach cancer is the most common cancer among males in Oman and the second most frequent among females from 1997 to 2007. Reports have suggested the rate is higher in Oman than in the other GCC countries. This study aims to describe the epidemiology of stomach cancer in Oman and to explore the apparent differences in the incidence of stomach cancer between Oman and the other Gulf Cooperation Council (GCC) countries. Data were obtained from the Omani National Cancer Registry (1997 - 2007) and from Gulf Centre for Cancer Registration reports (1998 - 2004). The annual average age-adjusted incidence rates for stomach cancer in Oman were 10.1 per 100,000 for males and 5.6 per 100,000 for females between 1997 and 2007. The age-adjusted incidence varied by region within Oman, and the incidence rate was higher in Oman than in most other GCC countries between 1998 and 2004. Further investigation of the completeness and accuracy of cancer registration is essential for exploration of variations in stomach cancer rates in the GCC countries.
Incidence of Stomach Cancer in Oman and the Other Gulf Cooperation Council Countries
Al-Mahrouqi, Haitham; Parkin, Lianne; Sharples, Katrina
2011-01-01
Objectives Stomach cancer is the most common cancer among males in Oman and the second most frequent among females from 1997 to 2007. Reports have suggested the rate is higher in Oman than in the other GCC countries. This study aims to describe the epidemiology of stomach cancer in Oman and to explore the apparent differences in the incidence of stomach cancer between Oman and the other Gulf Cooperation Council (GCC) countries. Methods Data were obtained from the Omani National Cancer Registry (1997 - 2007) and from Gulf Centre for Cancer Registration reports (1998 - 2004). Results The annual average age-adjusted incidence rates for stomach cancer in Oman were 10.1 per 100,000 for males and 5.6 per 100,000 for females between 1997 and 2007. The age-adjusted incidence varied by region within Oman, and the incidence rate was higher in Oman than in most other GCC countries between 1998 and 2004. Conclusion Further investigation of the completeness and accuracy of cancer registration is essential for exploration of variations in stomach cancer rates in the GCC countries. PMID:22043430
Time averaging of NMR chemical shifts in the MLF peptide in the solid state.
De Gortari, Itzam; Portella, Guillem; Salvatella, Xavier; Bajaj, Vikram S; van der Wel, Patrick C A; Yates, Jonathan R; Segall, Matthew D; Pickard, Chris J; Payne, Mike C; Vendruscolo, Michele
2010-05-05
Since experimental measurements of NMR chemical shifts provide time- and ensemble-averaged values, we investigated how these effects should be included when chemical shifts are computed using density functional theory (DFT). We measured the chemical shifts of the N-formyl-L-methionyl-L-leucyl-L-phenylalanine-OMe (MLF) peptide in the solid state, and then used the X-ray structure to calculate the ¹³C chemical shifts using the gauge including projector augmented wave (GIPAW) method, which accounts for the periodic nature of the crystal structure, obtaining an overall accuracy of 4.2 ppm. In order to understand the origin of the difference between experimental and calculated chemical shifts, we carried out first-principles molecular dynamics simulations to characterize the molecular motion of the MLF peptide on the picosecond time scale. We found that ¹³C chemical shifts experience very rapid fluctuations of more than 20 ppm that are averaged out over less than 200 fs. Taking account of these fluctuations in the calculation of the chemical shifts resulted in an accuracy of 3.3 ppm. To investigate the effects of averaging over longer time scales, we sampled the rotameric states populated by the MLF peptide in the solid state by performing a total of 5 μs of classical molecular dynamics simulations. By averaging the chemical shifts over these rotameric states, we increased the accuracy of the chemical shift calculations to 3.0 ppm, with less than 1 ppm error in 10 out of 22 cases. These results suggest that better DFT-based predictions of chemical shifts of peptides and proteins will be achieved by developing improved computational strategies capable of taking into account the averaging process up to the millisecond time scale on which the chemical shift measurements report.
Hoover, Stephen; Jackson, Eric V.; Paul, David; Locke, Robert
2016-01-01
Summary Background Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to the limited ability to control the census and clinical trajectories. The fixed average census approach, which uses the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Objective Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate the accuracy of the models compared with the fixed average census approach. Methods We used five years of retrospective daily NICU census data for model development (January 2008 – December 2012, N=1827 observations) and one year of data for validation (January – December 2013, N=365 observations). Best-fitting ARIMA and linear regression models were applied to various 7-day prediction periods and compared using error statistics. Results The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0); seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14; and a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Conclusions Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning. PMID:27437040
Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert
2016-01-01
Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to the limited ability to control the census and clinical trajectories. The fixed average census approach, which uses the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate the accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting ARIMA and linear regression models were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0); seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14; and a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning.
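As a concrete illustration of the seasonal ARIMA fits named above, a minimal sketch using statsmodels' SARIMAX with the reported ARIMA(1,0,0)x(1,1,2)7 specification; the synthetic census series (trend plus weekly cycle plus noise) is an assumption standing in for the actual NICU data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic daily census stands in for the 2008-2012 development data.
idx = pd.date_range("2008-01-01", "2012-12-31", freq="D")
rng = np.random.default_rng(42)
census = pd.Series(30 + 0.002 * np.arange(idx.size)
                   + 2 * np.sin(2 * np.pi * np.arange(idx.size) / 7)
                   + rng.normal(0, 2, idx.size), index=idx)

# One of the reported best-fitting models: ARIMA(1,0,0)x(1,1,2) with a 7-day
# season; forecast the next 7-day prediction period.
model = SARIMAX(census, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7))
fit = model.fit(disp=False)
print(fit.forecast(steps=7))
```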
Acquisition of decision making criteria: reward rate ultimately beats accuracy.
Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D
2011-02-01
Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
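A common formalization of the reward rate these participants were implicitly optimizing (cf. Bogacz et al., 2006) divides the probability of a correct response by the total time consumed per trial. A minimal sketch; the non-decision time, inter-trial interval, and error penalty values are illustrative assumptions, not the study's parameters.

```python
def reward_rate(error_rate, decision_time_s, non_decision_s=0.3,
                rsi_s=2.0, error_penalty_s=0.0):
    """Correct responses per second in a free-response 2AFC block:
    RR = (1 - ER) / (DT + T_nd + RSI + ER * penalty)."""
    denom = decision_time_s + non_decision_s + rsi_s + error_rate * error_penalty_s
    return (1.0 - error_rate) / denom

# A faster, slightly less accurate criterion can still earn more reward:
print(reward_rate(error_rate=0.02, decision_time_s=0.80))  # cautious threshold
print(reward_rate(error_rate=0.08, decision_time_s=0.45))  # faster threshold
```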
NASA Astrophysics Data System (ADS)
Mafanya, Madodomzi; Tsele, Philemon; Botai, Joel; Manyama, Phetole; Swart, Barend; Monate, Thabang
2017-07-01
Invasive alien plants (IAPs) not only pose a serious threat to biodiversity and water resources but also have impacts on human and animal wellbeing. To support decision making in IAP monitoring, semi-automated image classifiers capable of extracting valuable information from remotely sensed data are vital. This study evaluated the mapping accuracies of supervised and unsupervised image classifiers for mapping Harrisia pomanensis (a cactus plant commonly known as the Midnight Lady) using two interlinked evaluation strategies, i.e., point- and area-based accuracy assessment. Results of the point-based accuracy assessment show that, with reference to 219 ground control points, the supervised image classifiers (i.e., Maxver and Bhattacharya) mapped H. pomanensis better than the unsupervised image classifiers (i.e., K-means, Euclidian Length and Isoseg). In this regard, user and producer accuracies were 82.4% and 84%, respectively, for the Maxver classifier. The user and producer accuracies for the Bhattacharya classifier were 90% and 95.7%, respectively. Though the Maxver classifier produced a higher overall accuracy and Kappa estimate than the Bhattacharya classifier, the Maxver Kappa estimate of 0.8305 is not statistically significantly greater than the Bhattacharya Kappa estimate of 0.8088 at a 95% confidence interval. The area-based accuracy assessment results show that the Bhattacharya classifier estimated the spatial extent of H. pomanensis with an average mapping accuracy of 86.1%, whereas the Maxver classifier only gave an average mapping accuracy of 65.2%. Based on these results, the Bhattacharya classifier is therefore recommended for mapping H. pomanensis. These findings will aid in algorithm choice for the development of a semi-automated image classification system for mapping IAPs.
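The user's and producer's accuracies and Kappa statistics quoted above all derive from the classification confusion matrix. A minimal sketch; the 2-class matrix is hypothetical, for illustration only.

```python
import numpy as np

def accuracy_report(cm):
    """User's, producer's, overall accuracy and Cohen's kappa from a confusion
    matrix with rows = classified, columns = reference."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)       # commission-error view
    producers = np.diag(cm) / cm.sum(axis=0)   # omission-error view
    overall = np.trace(cm) / total
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1 - expected)
    return users, producers, overall, kappa

# Hypothetical 2-class matrix (target vs. background):
cm = [[90, 10],
      [ 5, 114]]
users, producers, overall, kappa = accuracy_report(cm)
print(users, producers, overall, round(kappa, 4))
```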
Assessment of the effects of CT dose in averaged x-ray CT images of a dose-sensitive polymer gel
NASA Astrophysics Data System (ADS)
Kairn, T.; Kakakhel, M. B.; Johnston, H.; Jirasek, A.; Trapp, J. V.
2015-01-01
The signal-to-noise ratio achievable in x-ray computed tomography (CT) images of polymer gels can be increased by averaging over multiple scans of each sample. However, repeated scanning delivers a small additional dose to the gel, which may compromise the accuracy of the dose measurement. In this study, a NIPAM-based polymer gel was irradiated and then CT scanned 25 times, with the resulting data used to derive an averaged image and a "zero-scan" image of the gel. Comparison between these two results and the first scan of the gel showed that the averaged and zero-scan images provided better contrast, higher contrast-to-noise and higher signal-to-noise than the initial scan. The pixel values (Hounsfield units, HU) in the averaged image were not noticeably elevated compared to the zero-scan result, and the gradients used in the linear extrapolation of the zero-scan images were small and symmetrically distributed around zero. These results indicate that the averaged image was not artificially lightened by the small additional dose delivered during CT scanning. This work demonstrates the broader usefulness of the zero-scan method as a means to verify the dosimetric accuracy of gel images derived from averaged x-ray CT data.
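The zero-scan construction described above amounts to a per-pixel linear fit of Hounsfield units against scan number, extrapolated back to zero scans. A minimal sketch on a synthetic scan stack; the normal-noise stack is a stand-in for the 25 CT acquisitions.

```python
import numpy as np

def zero_scan_image(stack):
    """Per-pixel linear fit of HU versus scan number, extrapolated to scan 0.

    `stack` has shape (n_scans, ny, nx); returns the intercept image (the
    zero-scan estimate) and the slope image (expected to be small and centred
    on zero if repeated scanning adds no systematic HU shift)."""
    n = stack.shape[0]
    scan_no = np.arange(1, n + 1, dtype=float)
    flat = stack.reshape(n, -1)
    slope, intercept = np.polyfit(scan_no, flat, 1)  # vectorized over pixels
    ny, nx = stack.shape[1:]
    return intercept.reshape(ny, nx), slope.reshape(ny, nx)

stack = np.random.normal(50, 5, size=(25, 64, 64))  # stand-in for 25 CT scans
zero, slopes = zero_scan_image(stack)
averaged = stack.mean(axis=0)                        # the averaged image
print(zero.shape, slopes.mean(), averaged.mean())
```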
Sys, Gwen; Eykens, Hannelore; Lenaerts, Gerlinde; Shumelinsky, Felix; Robbrecht, Cedric; Poffyn, Bart
2017-06-01
This study analyses the accuracy of three-dimensional pre-operative planning and patient-specific guides for orthopaedic osteotomies. To this end, patient-specific guides were compared to the classical freehand method in an experimental setup with saw bones in two phases. In the first phase, the effect of guide design and of oscillating versus reciprocating saws was analysed. The difference between target and performed cuts was quantified by the average distance deviation and the average angular deviations in the sagittal and coronal planes for the different osteotomies. The results indicated that for one model osteotomy, the use of guides resulted in a more accurate cut than the freehand technique. Reciprocating saws and slot guides improved accuracy in all planes, while oscillating saws and open guides led to larger deviations from the planned cut. In the second phase, the accuracy of transferring the planning to the surgical field with slot guides and a reciprocating saw was assessed and compared to the classical planning and freehand cutting method. The pre-operative plan was transferred with high accuracy. Three-dimensional-printed patient-specific guides improve the accuracy of osteotomies and bony resections in an experimental setup compared to conventional freehand methods. The improved accuracy is related to (1) a detailed and qualitative pre-operative plan and (2) an accurate transfer of the planning to the operating room with patient-specific guides that accurately guide the surgical tools to perform the desired cuts.
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty when determining the organ doses to patients who are not at the 50th percentile on either height or weight. This study aims to better personalize internal radiation dose estimates for individual patients by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10-year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that could be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms was then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using the scaling factors computed in this study, and they can be effectively scaled using sitting height alone as the patient-specific morphometric parameter.
Wayson, Michael B; Bolch, Wesley E
2018-04-13
Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty when determining the organ doses to patients who are not at the 50th percentile on either height or weight. This study aims to better personalize internal radiation dose estimates for individual patients by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10-year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that could be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms was then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using the scaling factors computed in this study, and they can be effectively scaled using sitting height alone as the patient-specific morphometric parameter.
Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2016-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.
NASA Astrophysics Data System (ADS)
Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei
2018-03-01
Classical SEM metrology, CD-SEM, uses a low data rate and an extensive frame-averaging technique to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper introduces a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low noise electron optics design. At the same level of metrology precision, this high-speed e-beam metrology system can significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area coverage of LFOV is >100x larger than classical SEM. Superior metrology precision throughout the whole image has been achieved, and high-quality metrology data can be extracted from the full field. This new capability will further improve metrology data collection speed to support the need for large volumes of metrology data for OPC model calibration of next-generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirements on OPC model accuracy, which is increasingly limited by metrology errors. In the current practice of the metrology data collection, data processing, and model calibration flow, CD-SEM throughput becomes a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high-speed e-beam metrology system and a new computational software solution to take full advantage of the large volume of data and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate a large quantity of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage, with up to a 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed a >2x improvement in OPC model accuracy at a faster model turn-around time.
NASA Astrophysics Data System (ADS)
Harrington, Seán T.; Harrington, Joseph R.
2013-03-01
This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the South of Ireland, are underlain by sandstone, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by applying a power curve to normal data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are estimated and compared to the turbidity based loads for each river. Loads are also estimated using stage and seasonally separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon, with errors of -65% to +359% found on the River Owenabue. The average monthly error on the River Bandon is -12%, with an average error of +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on either river. Historic load estimates (with a 95% confidence interval) were hindcast from the flow record and average annual loads of 7253 ± 673 tonnes on the River Bandon and 1935 ± 325 tonnes on the River Owenabue were estimated to be passing the gauging stations.
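The rating-curve fitting described above is commonly done by linear regression in log-log space, with a bias correction applied on back-transformation (Ferguson's correction is one standard choice, used here as an assumption since the abstract does not name one). A minimal sketch on synthetic paired flow and concentration samples; the load calculation assumes a 15-min record and concentrations in mg/L.

```python
import numpy as np

def fit_rating_curve(flow, ssc):
    """Fit the power-law rating curve SSC = a * Q**b by linear regression on
    log-transformed data, with a log-space bias correction factor."""
    logq, logc = np.log(flow), np.log(ssc)
    b, log_a = np.polyfit(logq, logc, 1)
    resid = logc - (log_a + b * logq)
    cf = np.exp(resid.var(ddof=2) / 2.0)   # Ferguson-style correction
    return np.exp(log_a) * cf, b

def estimate_load_tonnes(flow_series, a, b, dt_s=900.0):
    """Load from a 15-min flow record: sum of Q [m3/s] * SSC [g/m3] * dt [s]."""
    conc = a * flow_series ** b            # mg/L is equivalent to g/m3
    return float(np.sum(flow_series * conc * dt_s)) / 1e6  # grams -> tonnes

rng = np.random.default_rng(7)
q = rng.lognormal(1.0, 0.6, 200)                      # paired flow samples (m3/s)
c = 5.0 * q ** 1.4 * rng.lognormal(0, 0.3, 200)       # paired SSC samples (mg/L)
a, b = fit_rating_curve(q, c)
print(a, b, estimate_load_tonnes(q, a, b))
```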
NASA Astrophysics Data System (ADS)
Kruger, J. M.
2016-12-01
This study determines the rates of subsidence or uplift in coastal areas of SE Texas by comparing recent GNSS measurements to the original orthometric heights of previously installed National Geodetic Survey (NGS) benchmarks. Understanding subsidence rates in coastal areas of SE Texas is critical when determining its vulnerability to local sea level rise and flooding, as well as for accurate survey control. The study area includes major metropolitan and industrial areas as well as more rural areas at risk for flooding and hurricane surge. The resurveying methods used in this RTK GNSS study allow a large area to be covered relatively quickly with enough detail to determine subsidence rates that are averaged over several decades, and identify at-risk regions that can be monitored more closely with permanent or campaign-style measurements. The most recent measurements were acquired using a Trimble R8 GNSS system on all NGS benchmarks found in the study area. Differential corrections were applied in real time using a VRS network of base stations. Nominal vertical accuracies were 1.5 to 3.0 cm for a 2 to 5 minute reading. Usually three readings were measured and averaged for the final result. A total of 340 benchmarks were used for vertical rate calculations. Original NGS elevations were subtracted from the new elevations and divided by the number of years between the two elevation measurements to determine the average subsidence or uplift rate of the benchmark. Besides inaccuracies in the NGS datasheet and re-measured elevations, another source of error includes uncertainties in the year the NGS datasheet elevations were measured. Overall, vertical rates of change vary from -6 to -15 mm/yr subsidence in Port Arthur, Nederland, and other areas of Jefferson County, as well as in areas northwest of Beaumont, Texas. Other areas with subsidence rates between -10 and -4 mm/yr include parts of the Bolivar Peninsula in Galveston County, northeastern Chambers County, and the Mont Belvieu area. Surprisingly, areas of uplift, with rates as great as +5 mm/yr, were found in some parts of the study area, mostly around Liberty, Texas, western Chambers County, east-central Beaumont, and in the northern part of the study area near Jasper, Texas.
NASA Technical Reports Server (NTRS)
Schlegel, T. T.; Arenare, B.; Greco, E. C.; DePalma, J. L.; Starc, V.; Nunez, T.; Medina, R.; Jugo, D.; Rahman, M.A.; Delgado, R.
2007-01-01
We investigated the accuracy of several conventional and advanced resting ECG parameters for identifying obstructive coronary artery disease (CAD) and cardiomyopathy (CM). Advanced high-fidelity 12-lead ECG tests (approx. 5-min supine) were first performed on a "training set" of 99 individuals: 33 with ischemic or dilated CM and low ejection fraction (EF less than 40%); 33 with catheterization-proven obstructive CAD but normal EF; and 33 age-/gender-matched healthy controls. Multiple conventional and advanced ECG parameters were studied for their individual and combined retrospective accuracies in detecting underlying disease, the advanced parameters falling within the following categories: 1) Signal averaged ECG, including 12-lead high frequency QRS (150-250 Hz) plus multiple filtered and unfiltered parameters from the derived Frank leads; 2) 12-lead P, QRS and T-wave morphology via singular value decomposition (SVD) plus signal averaging; 3) Multichannel (12-lead, derived Frank lead, SVD lead) beat-to-beat QT interval variability; 4) Spatial ventricular gradient (and gradient component) variability; and 5) Heart rate variability. Several multiparameter ECG SuperScores were derivable, using stepwise and then generalized additive logistic modeling, that each had 100% retrospective accuracy in detecting underlying CM or CAD. The performance of these same SuperScores was then prospectively evaluated using a test set of another 120 individuals (40 new individuals in each of the CM, CAD and control groups, respectively). All 12-lead ECG SuperScores retrospectively generated for CM continued to perform well in prospectively identifying CM (i.e., areas under the ROC curve greater than 0.95), with one such score (containing just 4 components) maintaining 100% prospective accuracy. SuperScores retrospectively generated for CAD performed somewhat less accurately, with prospective areas under the ROC curve typically in the 0.90-0.95 range. We conclude that resting 12-lead high-fidelity ECG employing and combining the results of several advanced ECG software techniques shows great promise as a rapid and inexpensive tool for screening of heart disease.
Kim, Junetae; Lim, Sanghee; Min, Yul Ha; Shin, Yong-Wook; Lee, Byungtae; Sohn, Guiyun; Jung, Kyung Hae; Lee, Jae-Ho; Son, Byung Ho; Ahn, Sei Hyun; Shin, Soo-Yong
2016-01-01
Background Mobile mental-health trackers are mobile phone apps that gather self-reported mental-health ratings from users. They have received great attention from clinicians as tools to screen for depression in individual patients. While several apps that ask simple questions using face emoticons have been developed, there has been no study examining the validity of their screening performance. Objective In this study, we (1) evaluate the potential of a mobile mental-health tracker that uses three daily mental-health ratings (sleep satisfaction, mood, and anxiety) as indicators for depression, (2) discuss three approaches to data processing (ratio, average, and frequency) for generating indicator variables, and (3) examine the impact of adherence on reporting using a mobile mental-health tracker and accuracy in depression screening. Methods We analyzed 5792 sets of daily mental-health ratings collected from 78 breast cancer patients over a 48-week period. Using the Patient Health Questionnaire-9 (PHQ-9) as the measure of true depression status, we conducted a random-effect logistic panel regression and receiver operating characteristic (ROC) analysis to evaluate the screening performance of the mobile mental-health tracker. In addition, we classified patients into two subgroups based on their adherence level (higher adherence and lower adherence) using a k-means clustering algorithm and compared the screening accuracy between the two groups. Results With the ratio approach, the area under the ROC curve (AUC) is 0.8012, indicating that the performance of depression screening using daily mental-health ratings gathered via mobile mental-health trackers is comparable to the results of PHQ-9 tests. Also, the AUC is significantly higher (P=.002) for the higher adherence group (AUC=0.8524) than for the lower adherence group (AUC=0.7234). This result shows that adherence to self-reporting is associated with a higher accuracy of depression screening. Conclusions Our results support the potential of a mobile mental-health tracker as a tool for screening for depression in practice. Also, this study provides clinicians with a guideline for generating indicator variables from daily mental-health ratings. Furthermore, our results provide empirical evidence for the critical role of adherence to self-reporting, which represents crucial information for both doctors and patients. PMID:27492880
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2016-09-01
A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
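One way to realize the proposed deterministic sampling is Gauss-Hermite quadrature, which computes the expectation of a function of a normally distributed sub-grid quantity with a handful of evaluations. A minimal sketch, assuming a Kessler-style ramp autoconversion and a normal PDF for cloud water (negative-tail truncation ignored for brevity); the parameter values are illustrative, not taken from the cited work.

```python
import numpy as np

def kessler_autoconversion(qc, k=1e-3, q_crit=5e-4):
    """Kessler-type autoconversion: rate = k * (qc - q_crit) above threshold."""
    return k * np.maximum(qc - q_crit, 0.0)

def gh_average(mu, sigma, n_points=8):
    """E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature:
    E[f] ~= (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*x_i)."""
    x, w = np.polynomial.hermite.hermgauss(n_points)
    vals = kessler_autoconversion(mu + np.sqrt(2.0) * sigma * x)
    return np.sum(w * vals) / np.sqrt(np.pi)

mu, sigma = 6e-4, 3e-4                  # assumed sub-grid cloud water PDF (kg/kg)
print(gh_average(mu, sigma))            # deterministic, only 8 evaluations

# Monte Carlo reference needs vastly more samples for comparable accuracy:
samples = np.random.default_rng(0).normal(mu, sigma, 100_000)
print(kessler_autoconversion(samples).mean())
```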
Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules
Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh
2011-01-01
This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
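The Modified Stacked Generalization idea, feeding the combiner the base classifiers' outputs concatenated with the raw input pattern, can be sketched with generic scikit-learn components. Synthetic data stands in for the ECG beats, and the base networks and combiner here are illustrative choices, not the paper's exact architecture.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bases = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=s)
         for s in range(3)]

# Out-of-fold base outputs on the training set avoid leaking labels to the combiner.
meta_tr = [cross_val_predict(b, X_tr, y_tr, cv=5, method="predict_proba") for b in bases]
for b in bases:
    b.fit(X_tr, y_tr)
meta_te = [b.predict_proba(X_te) for b in bases]

# Modified Stacked Generalization: combiner sees base outputs AND the raw input.
Z_tr = np.hstack(meta_tr + [X_tr])
Z_te = np.hstack(meta_te + [X_te])
combiner = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("stacked+input accuracy:", combiner.score(Z_te, y_te))
```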
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves an average locational accuracy of the vessel wall of 22 microns, and Dice coefficient and Jaccard similarity index values of 0.985 and 0.970, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of the vessel lumen in an intraoperative time frame. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
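The Dice and Jaccard overlap scores reported above have direct definitions on binary masks. A minimal sketch; the rectangular masks are placeholders for the predicted and manually traced lumen regions.

```python
import numpy as np

def dice_jaccard(mask_a, mask_b):
    """Overlap metrics between two binary segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

auto = np.zeros((256, 256), bool)
auto[60:200, 60:200] = True        # stand-in for the predicted lumen
manual = np.zeros((256, 256), bool)
manual[64:204, 58:198] = True      # stand-in for the gold-standard tracing
print(dice_jaccard(auto, manual))
```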
Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.
Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A
2016-03-01
Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
The content of Ca, Cu, Fe, Mg and Mn and antioxidant activity of green coffee brews.
Stelmach, Ewelina; Pohl, Pawel; Szymczycha-Madeja, Anna
2015-09-01
A simple and fast method for the analysis of green coffee infusions was developed to measure total concentrations of Ca, Cu, Fe, Mg and Mn by high resolution-continuum source flame atomic absorption spectrometry. The precision of the method was within 1-8%, while the accuracy was within -1% to 2%. The method was used for the analysis of infusions of twelve green coffees of different geographical origin. It was found that Ca and Mg were leached most easily, i.e., on average 75% and 70%, respectively. Compared to the mug coffee preparation, the rate of extraction of the elements increased when infusions were prepared using the dripper or Turkish coffee preparation methods. Additionally, it was established that the antioxidant activity of green coffee infusions prepared using the mug coffee preparation was high, 75% on average, and positively correlated with the total content of phenolic compounds and the concentration of Ca in the brew. Copyright © 2015 Elsevier Ltd. All rights reserved.
Measurement and interpretation of skin prick test results.
van der Valk, J P M; Gerth van Wijk, R; Hoorn, E; Groenendijk, L; Groenendijk, I M; de Jong, N W
2015-01-01
There are several methods to read skin prick test results in type-I allergy testing. A commonly used method is to characterize the wheal size by its 'average diameter'. A more accurate method is to scan the area of the wheal to calculate the actual size. In both methods, skin prick test (SPT) results can be corrected for the histamine sensitivity of the skin by dividing the results of the allergic reaction by the histamine control. The objectives of this study are to compare different techniques of quantifying SPT results, to determine a cut-off value for a positive SPT for the histamine equivalent prick-index (HEP) area, and to study the accuracy of predicting cashew nut reactions in double-blind placebo-controlled food challenge (DBPCFC) tests with the different SPT methods. Data from 172 children with cashew nut sensitisation were used for the analysis. All patients underwent a DBPCFC with cashew nut. Per patient, the average diameter and scanned area of the wheal size were recorded. In addition, the same data for the histamine-induced wheal were collected for each patient. The accuracies of predicting the outcome of the DBPCFC using four different SPT readings (i.e. average diameter, area, HEP-index diameter, HEP-index area) were compared in a Receiver-Operating Characteristic (ROC) plot. Characterizing the wheal size by the average diameter method is inaccurate compared to the scanning method. A wheal average diameter of 3 mm is generally considered as a positive SPT cut-off value, and an equivalent HEP-index area cut-off value of 0.4 was calculated. The four SPT methods yielded comparable areas under the curve (AUC) of 0.84, 0.85, 0.83 and 0.83, respectively. The four methods showed comparable accuracy in predicting cashew nut reactions in a DBPCFC. The 'scanned area method' is theoretically more accurate in determining the wheal area than the 'average diameter method' and is recommended in academic research. A HEP-index area of 0.4 was determined as the cut-off value for a positive SPT. However, in clinical practice, the 'average diameter method' is also useful, because this method provides similar accuracy in predicting cashew nut allergic reactions in the DBPCFC. Trial number NTR3572.
Jamil, Muhammad; Ahmad, Omar; Poh, Kian Keong; Yap, Choon Hwai
2017-07-01
Current Doppler echocardiography quantification of mitral regurgitation (MR) severity has shortcomings. Proximal isovelocity surface area (PISA)-based methods, for example, are unable to account for the fact that ultrasound Doppler can measure only one velocity component: toward or away from the transducer. In the present study, we used ultrasound-based computational fluid dynamics (Ub-CFD) to quantify mitral regurgitation and to study its advantages and disadvantages compared with 2-D and 3-D PISA methods. For Ub-CFD, patient-specific mitral valve geometry and velocity data were obtained from clinical ultrasound, followed by 3-D CFD simulations at an assumed flow rate. We then obtained the average ratio of the ultrasound Doppler velocities to the CFD velocities in the flow convergence region and scaled the CFD flow rate by this ratio to obtain the final measured flow rate. We evaluated Ub-CFD, 2-D PISA and 3-D PISA with an in vitro flow loop, which featured regurgitant flow through (i) a simplified flat plate with a round orifice and (ii) a 3-D printed realistic mitral valve and regurgitation orifice. The Ub-CFD and 3-D PISA methods had higher precision than the 2-D PISA method. Ub-CFD had consistent accuracy under all conditions tested, whereas 2-D PISA had the lowest overall accuracy. In vitro investigations indicated that the accuracy of 2-D and 3-D PISA depended significantly on the choice of aliasing velocity. Evaluation of these techniques was also performed for two clinical cases, and the dependency of PISA on aliasing velocity was similarly observed. Ub-CFD was robustly accurate and precise and holds promise for future translation to clinical practice. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
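The scaling step the abstract describes reduces to simple arithmetic; a sketch with hypothetical beam-projected velocities sampled in the flow convergence region (all numbers are illustrative):

```python
import numpy as np

# Hypothetical sampled velocities in the flow convergence region:
# Doppler measures only the component toward/away from the transducer,
# while the CFD solution (run at an assumed flow rate q_assumed) is
# projected onto the same beam direction for comparability.
v_doppler = np.array([0.42, 0.55, 0.61, 0.47])  # m/s
v_cfd = np.array([0.50, 0.63, 0.72, 0.52])      # m/s, at assumed rate
q_assumed = 60.0                                # mL/s

scale = np.mean(v_doppler / v_cfd)  # average ratio, per the abstract
q_measured = scale * q_assumed
print(f"estimated regurgitant flow rate = {q_measured:.1f} mL/s")
```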
Pairwise comparisons and visual perceptions of equal area polygons.
Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R
2009-02-01
Studies of visual perception have been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error made by respondents using the unit square was 25.75%. The error decreased substantially, to 5.51%, when the shapes were compared to one another in pairs. This gain of 20.24 percentage points in this two-dimensional experiment was substantially better than the 11.78-point gain reported in previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.
Improving the Quality of Welding Seam of Automatic Welding of Buckets Based on TCP
NASA Astrophysics Data System (ADS)
Hu, Min
2018-02-01
Since February 2014, welding defects had frequently appeared on the automatic welding line for buckets. The average repair time for each bucket was 26 min, which seriously affected production efficiency and welding quality. We conducted troubleshooting and found that the main causes of the welding defects were deviations of the robot tool center points (TCP) and the poor quality of the locating welding. We corrected the gripper, the welding torch, and the repeat-positioning accuracy of the robots to control the quality of the positioning welding. The welding defect rate of the buckets was greatly reduced, ensuring production efficiency and welding quality.
The Design of Integrated Information System for High Voltage Metering Lab
NASA Astrophysics Data System (ADS)
Ma, Yan; Yang, Yi; Xu, Guangke; Gu, Chao; Zou, Lida; Yang, Feng
2018-01-01
With the development of the smart grid, intelligent and informatized management of the high-voltage metering lab has become increasingly urgent. In this paper we design an integrated information system that automates the entire workflow, from accepting instruments, performing experiments, generating reports and signing reports to handling instrument claims. By creating a database of all calibrated instruments, using two-dimensional codes, integrating report templates in advance, establishing bookmarks and transmitting electronic signatures online, manual procedures are largely reduced. These techniques simplify the complex processes of account management and report transmission. After more than a year of operation, work efficiency has improved by about forty percent on average, and the accuracy rate and data reliability are much higher as well.
Frémont, P.; Labrecque, M.; Légaré, F.; Baillargeon, L.; Misson, L.
2001-01-01
OBJECTIVE: To develop and test the reliability of a tool for rating websites that provide information on evidence-based medicine. DESIGN: For each site, 60% of the score was given for content (eight criteria) and 40% was given for organization and presentation (nine criteria). Five of 10 randomly selected sites met the inclusion criteria and were used by three observers to test the accuracy of the tool. Each site was rated twice by each observer, with a 3-week interval between ratings. SETTING: Laval University, Quebec city. PARTICIPANTS: Three observers. MAIN OUTCOME MEASURES: The intraclass correlation coefficient (ICC) was used to rate the reliability of the tool. RESULTS: Average overall scores for the five sites were 40%, 79%, 83%, 88%, and 89%. All three observers rated the same two sites in fourth and fifth place and gave the top three ratings to the other three sites. The overall rating of the five sites by the three observers yielded an ICC of 0.93 to 0.97. An ICC of 0.87 was obtained for the two overall ratings conducted 3 weeks apart. CONCLUSION: This new tool offers excellent intraobserver and interobserver measurement reliability and is an excellent means of distinguishing between medical websites of varying quality. For best results, we recommend that the tool be used simultaneously by two observers and that differences be resolved by consensus. PMID:11768925
Wu, C; de Jong, J R; Gratama van Andel, H A; van der Have, F; Vastenhouw, B; Laverman, P; Boerman, O C; Dierckx, R A J O; Beekman, F J
2011-09-21
Attenuation of photon flux on trajectories between the source and pinhole apertures affects the quantitative accuracy of reconstructed single-photon emission computed tomography (SPECT) images. We propose a Chang-based non-uniform attenuation correction (NUA-CT) for small-animal SPECT/CT with focusing pinhole collimation, and compare its quantitative accuracy with uniform Chang correction based on (i) body outlines extracted from x-ray CT (UA-CT) and (ii) body contours hand-drawn on the images obtained with three integrated optical cameras (UA-BC). Measurements in phantoms and rats containing known activities of isotopes were conducted for evaluation. In (125)I, (201)Tl, (99m)Tc and (111)In phantom experiments, average relative errors compared with the gold standards measured in a dose calibrator were reduced to 5.5%, 6.8%, 4.9% and 2.8%, respectively, with NUA-CT. In animal studies, these errors were 2.1%, 3.3%, 2.0% and 2.0%, respectively. Differences in accuracy on average between the results of NUA-CT, UA-CT and UA-BC were less than 2.3% in phantom studies and 3.1% in animal studies, except for (125)I (3.6% and 5.1%, respectively). All methods tested provide reasonable attenuation correction and result in high quantitative accuracy. NUA-CT shows superior accuracy except for (125)I, where other factors may have more impact on the quantitative accuracy than the selected attenuation correction.
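As a rough illustration of the underlying idea, a first-order Chang correction divides each reconstructed value by the attenuation factor averaged over projection directions. Below is a simplified 2-D sketch under that assumption, not the authors' focusing-pinhole implementation; a uniform Chang correction just uses a constant attenuation coefficient inside the body contour:

```python
import numpy as np

def chang_correction(recon, mu_map, angles, spacing=1.0):
    """First-order Chang attenuation correction (2-D slice sketch).

    For each pixel, average exp(-sum(mu * dl)) over rays marched
    toward each projection angle, then divide the reconstructed
    value by that average. mu_map is in 1/pixel units when spacing=1.
    """
    ny, nx = recon.shape
    corrected = np.zeros_like(recon)
    for iy in range(ny):
        for ix in range(nx):
            factors = []
            for theta in angles:
                dx, dy = np.cos(theta), np.sin(theta)
                x, y, path = float(ix), float(iy), 0.0
                # march from the pixel to the edge of the image
                while 0 <= int(round(x)) < nx and 0 <= int(round(y)) < ny:
                    path += mu_map[int(round(y)), int(round(x))] * spacing
                    x += dx
                    y += dy
                factors.append(np.exp(-path))
            corrected[iy, ix] = recon[iy, ix] / np.mean(factors)
    return corrected

# Toy usage: uniform water-like mu inside a 32x32 image.
recon = np.ones((32, 32))
mu = np.full((32, 32), 0.02)
print(chang_correction(recon, mu, np.linspace(0, 2 * np.pi, 8))[16, 16])
```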
Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.
Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua
2011-01-01
Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still little studied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Lumina were extracted from prostate histology images based on local entropy and Otsu thresholding; recognition was then performed using PCA-SVM based on the texture features of the prostatic calculi. The SVM classifier showed an average processing time of 0.1432 s, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We conclude that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and its visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
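A minimal sketch of a PCA-SVM recognition stage of the kind described, using random stand-in features (the paper's entropy/Otsu lumen extraction and texture features are not reproduced; scikit-learn is assumed):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical texture-feature matrix (one row per lumen region) and
# calculus / non-calculus labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))
y = rng.integers(0, 2, size=120)

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")
```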
NASA Technical Reports Server (NTRS)
Petit, Gerard; Thomas, Claudine; Tavella, Patrizia
1993-01-01
Millisecond pulsars are galactic objects that exhibit a very stable spinning period. Several tens of these celestial clocks have now been discovered, which opens the possibility that an average time scale may be deduced through a long-term stability algorithm. Such an ensemble average makes it possible to reduce the level of the instabilities originating from the pulsars or from other sources of noise, which are unknown but independent. The basis for such an algorithm is presented and applied to real pulsar data. It is shown that pulsar time could shortly become more stable than the present atomic time, for averaging times of a few years. Pulsar time can also be used as a flywheel to maintain the accuracy of atomic time in case of temporary failure of the primary standards, or to transfer the improved accuracy of future standards back to the present.
NASA Astrophysics Data System (ADS)
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterizations of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the most widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points into leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits: higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. In computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude at the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
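The abstract does not spell out the step-adjustment rule, so the following is only a generic step-doubling sketch of automatic step control for marching an average-power equation dP/dz = f(z, P) along the fiber; the toy gain model and its coefficients are hypothetical:

```python
import numpy as np

def rk4_step(f, z, p, h):
    k1 = f(z, p)
    k2 = f(z + h / 2, p + h * k1 / 2)
    k3 = f(z + h / 2, p + h * k2 / 2)
    k4 = f(z + h, p + h * k3)
    return p + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate_adaptive(f, p0, z_end, h0=0.1, tol=1e-8):
    """March dP/dz = f(z, P), adapting the step by comparing one
    full RK4 step against two half steps (step doubling)."""
    z, p, h = 0.0, np.asarray(p0, dtype=float), h0
    while z < z_end:
        h = min(h, z_end - z)
        full = rk4_step(f, z, p, h)
        half = rk4_step(f, z + h / 2, rk4_step(f, z, p, h / 2), h / 2)
        err = np.max(np.abs(full - half))
        if err <= tol:            # accept and try a larger step
            z, p = z + h, half
            h *= 1.5
        else:                     # reject and shrink the step
            h /= 2.0
    return p

# Toy two-level model (hypothetical): pump decays, signal grows.
f = lambda z, p: np.array([-0.5 * p[0], 0.3 * p[0] * p[1]])
print(integrate_adaptive(f, [1.0, 1e-3], z_end=10.0))
```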
Airborne and ground based lidar measurements of the atmospheric pressure profile
NASA Technical Reports Server (NTRS)
Korb, C. Laurence; Schwemmer, Geary K.; Dombrowski, Mark; Weng, Chi Y.
1989-01-01
The first high accuracy remote measurements of the atmospheric pressure profile have been made. The measurements were made with a differential absorption lidar system that utilizes tunable alexandrite lasers. The absorption in the trough between two lines in the oxygen A-band near 760 nm was used for probing the atmosphere. Measurements of the two-dimensional structure of the pressure field were made in the troposphere from an aircraft looking down. Also, measurements of the one-dimensional structure were made from the ground looking up. Typical pressure accuracies for the aircraft measurements were 1.5-2 mbar with a 30-m vertical resolution and a 100-shot average (20 s), which corresponds to a 2-km horizontal resolution. Typical accuracies for the upward viewing ground based measurements were 2.0 mbar for a 30-m resolution and a 100-shot average.
An alternative sensor-based method for glucose monitoring in children and young people with diabetes
Edge, Julie; Acerini, Carlo; Campbell, Fiona; Hamilton-Shield, Julian; Moudiotis, Chris; Rahman, Shakeel; Randell, Tabitha; Smith, Anne; Trevelyan, Nicola
2017-01-01
Objective To determine accuracy, safety and acceptability of the FreeStyle Libre Flash Glucose Monitoring System in the paediatric population. Design, setting and patients Eighty-nine study participants, aged 4–17 years, with type 1 diabetes were enrolled across 9 diabetes centres in the UK. A factory calibrated sensor was inserted on the back of the upper arm and used for up to 14 days. Sensor glucose measurements were compared with capillary blood glucose (BG) measurements. Sensor results were masked to participants. Results Clinical accuracy of sensor results versus BG results was demonstrated, with 83.8% of results in zone A and 99.4% of results in zones A and B of the consensus error grid. Overall mean absolute relative difference (MARD) was 13.9%. Sensor accuracy was unaffected by patient factors such as age, body weight, sex, method of insulin administration or time of use (day vs night). Participants were in the target glucose range (3.9–10.0 mmol/L) ∼50% of the time (mean 12.1 hours/day), with an average of 2.2 hours/day and 9.5 hours/day in hypoglycaemia and hyperglycaemia, respectively. Sensor application, wear/use of the device and comparison to self-monitoring of blood glucose were rated favourably by most participants/caregivers (84.3–100%). Five device related adverse events were reported across a range of participant ages. Conclusions Accuracy, safety and user acceptability of the FreeStyle Libre System were demonstrated for the paediatric population. Accuracy of the system was unaffected by subject characteristics, making it suitable for a broad range of children and young people with diabetes. Trial registration number NCT02388815. PMID:28137708
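MARD, the headline accuracy metric here, is straightforward to compute from paired sensor/reference readings; a minimal sketch with hypothetical values:

```python
import numpy as np

def mard(sensor, reference):
    """Mean absolute relative difference (%), the accuracy metric
    reported for sensor vs capillary blood glucose pairs."""
    sensor = np.asarray(sensor, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(sensor - reference) / reference)

# Hypothetical paired readings in mmol/L:
print(f"MARD = {mard([5.2, 9.8, 3.6, 12.1], [5.6, 9.1, 4.0, 11.2]):.1f}%")
```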
Boissin, C; Laflamme, L; Wallis, L; Fleming, J; Hasselberg, M
2015-09-01
This study assessed whether photographs of burns on patients with dark skin types could be used for accurate diagnosis and whether the accuracy was affected by physicians' clinical background or case characteristics. 21 South African cases (Fitzpatrick grades 4-6) of varying complexity were photographed using a camera phone and uploaded to a web survey. Respondents were asked to assess wound depth (3 categories) and size (as a percentage). A sample of 24 burn surgeons and emergency physicians was recruited in South Africa, the USA and Sweden. Measurements of accuracy (using percentage agreement with the bedside diagnosis), inter-rater (n=24), and intra-rater (n=6) reliability (using percentage agreement and kappa) were computed for all cases aggregated and by case characteristic. Overall diagnostic accuracy was 67.5% and 66.0% for burn size and depth, respectively. It was comparable between burn surgeons and emergency physicians and between countries of practice. However, the standard deviations were smaller for burn surgeons and South African clinicians than for emergency physicians and clinicians from other countries, indicating greater similarity in their diagnoses. Case characteristics (child/adult, simple/complex wound, partial/full thickness) affected the results for burn size but not for depth. Inter- and intra-rater reliability for burn depth was 55% and 77%, respectively. Size and depth of burns on patients with dark skin types could thus be assessed at least as well using photographs as at the bedside, with 67.5% and 66.0% average accuracy rates. Case characteristics significantly affected the accuracy for burn size, but medical specialty and country of practice seldom did so in a statistically significant manner. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
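A sketch of the two agreement statistics used here (percentage agreement and kappa), on hypothetical depth categories; scikit-learn's cohen_kappa_score is assumed:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical depth categories (0 = superficial, 1 = partial,
# 2 = full thickness): bedside diagnosis vs one photo-based rating.
bedside = np.array([0, 1, 2, 1, 2, 0, 1, 2, 1, 0])
photo = np.array([0, 1, 2, 2, 2, 0, 1, 1, 1, 0])

agreement = 100.0 * np.mean(bedside == photo)  # percentage agreement
kappa = cohen_kappa_score(bedside, photo)      # chance-corrected
print(f"agreement = {agreement:.0f}%, kappa = {kappa:.2f}")
```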
A Single-Channel EOG-Based Speller.
He, Shenghong; Li, Yuanqing
2017-11-01
Electrooculography (EOG) signals, which can be used to infer the intentions of a user based on eye movements, are widely used in human-computer interface (HCI) systems. Most existing EOG-based HCI systems incorporate a limited number of commands because they generally associate different commands with a few different types of eye movements, such as looking up, down, left, or right. This paper presents a novel single-channel EOG-based HCI that allows users to spell asynchronously by only blinking. Forty buttons corresponding to 40 characters displayed to the user via a graphical user interface are intensified in a random order. To select a button, the user must blink his/her eyes in synchrony as the target button is flashed. Two data processing procedures, specifically support vector machine (SVM) classification and waveform detection, are combined to detect eye blinks. During detection, we simultaneously feed the feature vectors extracted from the ongoing EOG signal into the SVM classification and waveform detection modules. Decisions are made based on the results of the SVM classification and waveform detection. Three online experiments were conducted with eight healthy subjects. We achieved an average accuracy of 94.4% and a response time of 4.14 s for selecting a character in synchronous mode, as well as an average accuracy of 93.43% and a false positive rate of 0.03/min in the idle state in asynchronous mode. The experimental results, therefore, demonstrated the effectiveness of this single-channel EOG-based speller.
Evaluation of Small Mass Spectrometer Systems
NASA Technical Reports Server (NTRS)
Arkin, C. Richard; Griffin, Timothy P.; Ottens, Andrew K.; Diaz, Jorge A.; Follistein, Duke W.; Adams, Fredrick W.; Helms, William R.; Voska, N. (Technical Monitor)
2002-01-01
This work is aimed at understanding the aspects of designing a miniature mass spectrometer (MS) system. A multitude of commercial and government sectors, such as the military, environmental agencies and industrial manufacturers of semiconductors, refrigerants, and petroleum products, would find a small, portable, rugged and reliable MS system beneficial. Several types of small MS systems are evaluated and discussed, including linear quadrupole, quadrupole ion trap, time-of-flight and sector instruments. The performance of each system in terms of accuracy, precision, limits of detection, response time, recovery time, scan rate, volume and weight is assessed. A performance scale is set up to rank the systems, and an overall performance score is given to each. All experiments involved the analysis of hydrogen, helium, oxygen and argon in a nitrogen background, with the concentrations of the components of interest ranging from 0 to 5000 parts-per-million (ppm). The relative accuracies of the systems vary from <1% to approximately 40%, with an average below 10%. Relative precisions varied from 1% to 20%, with an average below 5%. The detection limits had a large distribution, ranging from 0.2 to 170 ppm. The systems had diverse response times, ranging from 4 s to 210 s, as did the recovery times, with a 6 s to 210 s distribution. Most instruments had scan times near 1 s; however, one instrument exceeded 13 s. System weights varied from 9 to 52 kg and sizes from 15 × 10³ cm³ to 110 × 10³ cm³.
GBAS Ionospheric Anomaly Monitoring Based on a Two-Step Approach
Zhao, Lin; Yang, Fuxin; Li, Liang; Ding, Jicheng; Zhao, Yuxin
2016-01-01
As one significant component of space environmental weather, the ionosphere has to be monitored using Global Positioning System (GPS) receivers for the Ground-Based Augmentation System (GBAS). This is because an ionospheric anomaly can pose a potential threat to GBAS support of safety-critical services. The traditional code-carrier divergence (CCD) methods, which have been widely used to detect variations in the ionospheric gradient for GBAS, adopt a linear time-invariant low-pass filter to suppress the effect of high-frequency noise on the detection of the ionospheric anomaly. However, there is a trade-off between response time and estimation accuracy due to the fixed time constants. In order to relax this limitation, a two-step approach (TSA) is proposed, integrating cascaded linear time-invariant low-pass filters with an adaptive Kalman filter to detect ionospheric gradient anomalies. The performance of the proposed method is tested using simulated and real-world data, respectively. The simulation results show that the TSA can detect ionospheric gradient anomalies quickly, even when the noise is severe. Compared to the traditional CCD methods, the experiments on real-world GPS data indicate that the average estimation accuracy of the ionospheric gradient improves by more than 31.3%, and the average response time to an ionospheric gradient at a rate of 0.018 m/s improves by more than 59.3%, which demonstrates the ability of the TSA to detect a small ionospheric gradient more rapidly. PMID:27240367
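For orientation, here is a sketch of the traditional fixed-time-constant CCD monitor that the TSA extends: the code-minus-carrier observable is differenced and smoothed by cascaded first-order low-pass filters. The 0.5 s sample interval and 200 s time constant are illustrative assumptions, not the paper's values:

```python
import numpy as np

def ccd_rate(code, carrier, dt=0.5, tau=200.0):
    """Estimate the code-carrier divergence rate (m/s) with two
    cascaded first-order low-pass filters of time constant tau (s),
    as in a fixed-constant CCD monitor (sketch)."""
    z = 0.5 * (code - carrier)    # halve the doubled ionosphere term
    raw_rate = np.diff(z) / dt
    a = dt / tau
    d1 = np.zeros_like(raw_rate)
    d2 = np.zeros_like(raw_rate)
    for k in range(1, len(raw_rate)):
        d1[k] = (1 - a) * d1[k - 1] + a * raw_rate[k]
        d2[k] = (1 - a) * d2[k - 1] + a * d1[k]
    return d2  # flag an anomaly when |d2| exceeds its threshold

# Toy usage: a slow divergence ramp buried in noise (hypothetical).
t = np.arange(0, 600, 0.5)
rng = np.random.default_rng(1)
code = 0.018 * t + rng.normal(0, 0.3, t.size)
carrier = -0.018 * t + rng.normal(0, 0.003, t.size)
print(ccd_rate(code, carrier)[-1])  # should approach ~0.018 m/s
```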
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M
Purpose: PTW's Octavius 1000 SRS array performs IMRT QA measurements with liquid filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency, and measurements are corrected by multiplying detector dose by the ratio of calibration to measured collection efficiency. For the second correction, the MU/min in daily 1000 SRS calibration was chosen to match the average MU/min of the VMAT plan. Usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%, 0.40%, 1.17%] for 6MV and [0.29%, 1.40%, 4.57%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%, 1.63%, 3.05%] for 6MV and [1.00%, 4.80%, 11.2%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. On average, pass rates of the simple daily calibration corrections were within 1% of the complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher-pulse-dose unflattened beams when using tighter gamma tolerances. Matching the daily 1000 SRS calibration MU/min to the average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
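The first correction reduces to scaling each detector's dose by a ratio of collection efficiencies; a minimal sketch in which eff_from_pulse is a hypothetical stand-in for the fitted efficiency-vs-(pulse dose, pulse frequency) model:

```python
# Minimal sketch of the per-detector recombination correction the
# abstract describes: dose is scaled by the ratio of the collection
# efficiency at calibration to the efficiency during the measurement.
def correct_dose(measured_dose, pulse_dose, pulse_freq,
                 eff_from_pulse, calib_pulse_dose, calib_pulse_freq):
    eff_meas = eff_from_pulse(pulse_dose, pulse_freq)
    eff_cal = eff_from_pulse(calib_pulse_dose, calib_pulse_freq)
    return measured_dose * (eff_cal / eff_meas)

# Toy efficiency model (hypothetical coefficients, for illustration).
toy_model = lambda pd, pf: 1.0 - 0.02 * pd - 1e-4 * pf
print(correct_dose(2.00, 0.8, 120.0, toy_model, 0.5, 60.0))
```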
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2004-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes being developed at Glenn. The two-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1×10⁻²⁰ moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times for smaller water concentrations. It gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane, with and without water injection, at water mass loadings of up to 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water-to-fuel mass ratio, and pressure.
Application of Template Matching for Improving Classification of Urban Railroad Point Clouds
Arastounia, Mostafa; Oude Elberink, Sander
2016-01-01
This study develops an integrated data-driven and model-driven approach (template matching) that clusters urban railroad point clouds into three classes: rail track, contact cable, and catenary cable. The employed dataset covers 630 m of Dutch urban railroad corridor containing four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometric information (the three-dimensional (3D) coordinates of the points), with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy are achieved at the point cloud level. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stem from the great impact of the employed template matching method on excluding false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452
Using Time-Series Regression to Predict Academic Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…
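A minimal sketch of the dummy time-series regression idea described above: a linear trend plus 11 monthly indicator (dummy) variables, fit to hypothetical circulation counts (scikit-learn assumed):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly circulation counts over 3 years.
y = np.array([410, 395, 460, 430, 300, 210, 180, 240, 520, 540, 470, 350,
              420, 405, 475, 445, 310, 215, 190, 250, 535, 555, 480, 360,
              430, 410, 480, 450, 315, 220, 195, 255, 545, 565, 490, 365])
n = len(y)
t = np.arange(n).reshape(-1, 1)               # linear trend
month = np.eye(12)[np.arange(n) % 12][:, 1:]  # 11 month dummies
X = np.hstack([t, month])

model = LinearRegression().fit(X, y)
t_next = np.array([[n]])
m_next = np.eye(12)[n % 12][1:].reshape(1, -1)
print(model.predict(np.hstack([t_next, m_next])))  # next-month forecast
```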
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
NASA Astrophysics Data System (ADS)
Minkov, D. A.; Gavrilov, G. M.; Moreno, J. M. D.; Vazquez, C. G.; Marquez, E.
2017-03-01
The accuracy of the popular graphical method of Swanepoel (SGM) for the characterization of a thin film on a substrate from its interference transmittance spectrum depends on the subjective choice of four characterization parameters: the slope of the graph, the order number of the longest-wavelength extremum, and the numbers of the two extrema used in the approximations for calculating the average film thickness. Here, an error metric is introduced for estimating the accuracy of SGM characterization. An algorithm, named the OGM algorithm, is proposed for the optimization of SGM based on the minimization of this error metric. Its execution provides optimized values of the four characterization parameters and the corresponding computation of the most accurate film characteristics achievable within the framework of SGM. Moreover, substrate absorption is accounted for, unlike in the classical SGM, which is beneficial when using modern UV/visible/NIR spectrophotometers due to the relatively large absorption in commonly used glass substrates at wavelengths above 1700 nm. A significant increase in the accuracy of the film characteristics is obtained with the OGM algorithm compared to the SGM algorithm for two model specimens. The improvement in accuracy increases with increasing film absorption. The results of film characterization by the OGM algorithm are presented for two specimens containing RF-magnetron-sputtered a-Si films with disparate film thicknesses. The computed average film thicknesses are within 1.1% of the respective film thicknesses measured by SEM for both films. Achieving such high film characterization accuracy is particularly significant for the film with a computed average thickness of 3934 nm, since we are not aware of any other film of such large thickness that has been characterized by SGM.
Kawahara, Daisuke; Ozawa, Shuichi; Yokomachi, Kazushi; Tanaka, Sodai; Higaki, Toru; Fujioka, Chikako; Suzuki, Tatsuhiko; Tsuneda, Masato; Nakashima, Takeo; Ohno, Yoshimi; Nagata, Yasushi
2018-02-01
To evaluate the accuracy of raw-data-based effective atomic number (Z_eff) values and monochromatic CT numbers for contrast material of varying iodine concentrations, obtained using dual-energy CT. We used a tissue characterization phantom and varying concentrations of iodinated contrast medium. A comparison between the theoretical Z_eff values and those provided by the manufacturer was performed. The measured and theoretical monochromatic CT numbers at 40-130 keV were compared. The average difference between the Z_eff values of lung (inhale) inserts in the tissue characterization phantom was 81.3%, and the average Z_eff difference was within 8.4%. The average difference between the Z_eff values of the varying concentrations of iodinated contrast medium was within 11.2%. For the varying concentrations of iodinated contrast medium, the differences between the measured and theoretical monochromatic CT numbers increased with decreasing monochromatic energy. The Z_eff values and monochromatic CT numbers in the tissue characterization phantom were reasonably accurate. The accuracy of the raw-data-based Z_eff values was higher than that of image-based Z_eff values in the tissue-equivalent phantom. The accuracy of the Z_eff values in the contrast medium was in good agreement within the maximum SD found over the iodine concentration range of clinical dynamic CT imaging. Moreover, the optimum monochromatic energy for human tissue and iodinated contrast medium was found to be 70 keV. Advances in knowledge: The accuracy of the Z_eff values and monochromatic CT numbers of the contrast medium obtained by raw-data-based dual-energy CT could be sufficient under clinical conditions.
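For context, effective atomic number is commonly defined by a power-law average over electron fractions; a sketch using the Mayneord exponent m ≈ 2.94 (a common textbook choice, not necessarily the exponent used by the scanner's raw-data-based algorithm):

```python
import numpy as np

def z_eff(mass_fractions, z_numbers, a_weights, m=2.94):
    """Power-law effective atomic number (Mayneord form).

    Mass fractions are converted to electron fractions via Z/A
    before averaging Z^m.
    """
    w = np.asarray(mass_fractions, float)
    z = np.asarray(z_numbers, float)
    a = np.asarray(a_weights, float)
    electron_frac = w * z / a
    electron_frac /= electron_frac.sum()
    return (electron_frac * z ** m).sum() ** (1.0 / m)

# Water: 11.19% H, 88.81% O by mass -> Z_eff about 7.4
print(z_eff([0.1119, 0.8881], [1, 8], [1.008, 15.999]))
```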
NASA Astrophysics Data System (ADS)
Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai
2014-06-01
Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper firstly investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and there being fewer movement artifacts in performing tongue movements compared to swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-classes distances versus within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subjectwise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrated the feasibility of detecting MI-SW from MI-Ton models.
Cuff-less PPG based continuous blood pressure monitoring: a smartphone based approach.
Gaurav, Aman; Maheedhar, Maram; Tiwari, Vijay N; Narayanan, Rangavittal
2016-08-01
Cuff-less estimation of systolic (SBP) and diastolic (DBP) blood pressure is an efficient approach for non-invasive and continuous monitoring of an individual's vitals. Although pulse transit time (PTT) based approaches have been successful in estimating systolic and diastolic blood pressures to a reasonable degree of accuracy, there is still scope for improvement in accuracy. Moreover, the PTT approach requires data from sensors placed at two different locations, along with individual calibration of physiological parameters, to derive correct estimates of systolic and diastolic blood pressure (BP), and hence is not suitable for smartphone deployment. Heart rate variability (HRV) is one of the most extensively used non-invasive parameters for assessing the cardiovascular autonomic nervous system and is known to be indirectly associated with SBP and DBP. In this work, we propose a novel method to extract a comprehensive set of features by combining PPG-signal-based and HRV-related features using a single PPG sensor. These features are then fed into a DBP-feedback-based combinatorial neural network model to arrive at a common weighted-average output of DBP and subsequently SBP. Our results show that with this approach, an accuracy of ±6.8 mmHg for SBP and ±4.7 mmHg for DBP is achievable on 1,750,000 pulses extracted from a public database (comprising 3000 people). Since most smartphones are now equipped with a PPG sensor, mobile cuff-less BP estimation will enable users to monitor their BP as a vital parameter on demand. This will open new avenues toward the development of pervasive, continuous BP monitoring systems, leading to early detection and prevention of cardiovascular diseases.
2015-01-01
The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe a new, integrated procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spike-in experiment was used to evaluate the ability of ICan to detect small changes. In this study, E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by an IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined to be significantly altered, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed markedly inferior performance. ICan can be broadly applied to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707
2015-01-01
Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications. PMID:26267377
Code of Federal Regulations, 2012 CFR
2012-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2013 CFR
2013-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2014 CFR
2014-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2011 CFR
2011-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2010 CFR
2010-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Grogan, Katie; Bramham, Jessica
2016-12-01
Given that the diagnosis of adulthood ADHD depends on the retrospective self-report of childhood ADHD symptoms, this study aimed to establish whether current mood affects the accuracy of retrospective self-ratings of childhood ADHD. Barkley's Adult ADHD Rating Scale (BAARS) was used to assess the retrospective self- and parent-reports of childhood ADHD symptoms of 160 adults with ADHD and 92 adults without ADHD. Self-rated current mood was also measured using the Hospital Anxiety and Depression Scale (HADS). Higher BAARS self-ratings correlated with higher HADS self-ratings. Strongest correlations were evident between hyperactive/impulsive symptoms and anxiety symptoms. There was no relationship between current mood and accuracy of self-report. Current mood does not affect the accuracy of retrospective self-ratings of ADHD. Future research should aim to provide new measures of anxiety in ADHD to avoid the double counting of hyperactive/impulsive and anxiety symptoms. © The Author(s) 2014.
Invasive advance of an advantageous mutation: nucleation theory.
O'Malley, Lauren; Basham, James; Yasi, Joseph A; Korniss, G; Allstadt, Andrew; Caraco, Thomas
2006-12-01
For sedentary organisms with localized reproduction, spatially clustered growth drives the invasive advance of a favorable mutation. We model competition between two alleles where recurrent mutation introduces a genotype with a rate of local propagation exceeding the resident's rate. We capture ecologically important properties of the rare invader's stochastic dynamics by assuming discrete individuals and local neighborhood interactions. To understand how individual-level processes may govern population patterns, we invoke the physical theory for nucleation of spatial systems. Nucleation theory discriminates between single-cluster and multi-cluster dynamics. A sufficiently low mutation rate, or a sufficiently small environment, generates single-cluster dynamics, an inherently stochastic process; a favorable mutation advances only if the invader cluster reaches a critical radius. For this mode of invasion, we identify the probability distribution of waiting times until the favored allele advances to competitive dominance, and we ask how the critical cluster size varies as propagation or mortality rates vary. Increasing the mutation rate or system size generates multi-cluster invasion, where spatial averaging produces nearly deterministic global dynamics. For this process, an analytical approximation from nucleation theory, called Avrami's Law, describes the time-dependent behavior of the genotype densities with remarkable accuracy.
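Avrami's Law, invoked above for the multi-cluster regime, can be stated compactly. In the hedged form below, I is the nucleation (successful-mutation) rate per unit volume, v the radial cluster growth velocity, d the spatial dimension, and c a geometry-dependent constant (c = π/3 for d = 2 with constant nucleation):

```latex
% Avrami's Law (KJMA form) for the fraction not yet converted:
\[
  1 - \phi(t) = \exp\!\left(-\, c\, I\, v^{d}\, t^{\,d+1}\right)
\]
```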
Comparison between uroflowmetry and sonouroflowmetry in recording of urinary flow in healthy men.
Krhut, Jan; Gärtner, Marcel; Sýkora, Radek; Hurtík, Petr; Burda, Michal; Luňáček, Libor; Zvarová, Katarína; Zvara, Peter
2015-08-01
To evaluate the accuracy of sonouroflowmetry in recording urinary flow parameters and voided volume. A total of 25 healthy male volunteers (age 18-63 years) were included in the study. All participants were asked to carry out uroflowmetry synchronous with recording of the sound generated by the urine stream hitting the water level in the urine collection receptacle, using a dedicated cell phone. From 188 recordings, 34 were excluded, because of voided volume <150 mL or technical problems during recording. Sonouroflowmetry recording was visualized in a form of a trace, representing sound intensity over time. Subsequently, the matching datasets of uroflowmetry and sonouroflowmetry were compared with respect to flow time, voided volume, maximum flow rate and average flow rate. Pearson's correlation coefficient was used to compare parameters recorded by uroflowmetry with those calculated based on sonouroflowmetry recordings. The flow pattern recorded by sonouroflowmetry showed a good correlation with the uroflowmetry trace. A strong correlation (Pearson's correlation coefficient 0.87) was documented between uroflowmetry-recorded flow time and duration of the sound signal recorded with sonouroflowmetry. A moderate correlation was observed in voided volume (Pearson's correlation coefficient 0.68) and average flow rate (Pearson's correlation coefficient 0.57). A weak correlation (Pearson's correlation coefficient 0.38) between maximum flow rate recorded using uroflowmetry and sonouroflowmetry-recorded peak sound intensity was documented. The present study shows that the basic concept utilizing sound analysis for estimation of urinary flow parameters and voided volume is valid. However, further development of this technology and standardization of recording algorithm are required. © 2015 The Japanese Urological Association.
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating the hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they are not periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, in which case the time-invariance assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariance assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines for this technique are presented in this paper. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Feature Detection of Curve Traffic Sign Image on The Bandung - Jakarta Highway
NASA Astrophysics Data System (ADS)
Naseer, M.; Supriadi, I.; Supangkat, S. H.
2018-03-01
An unsealed roadside and problems with the road surface are common causes of road crashes, particularly when combined with curves. The curve traffic sign is an important component for giving early warning to drivers, especially in high-speed traffic such as on the highway. Traffic sign detection has become a very interesting research topic, and this paper discusses the detection of curve traffic signs. Two types of curve signs are discussed, namely the curve turning to the left and the curve turning to the right, and all data samples used are curves recorded from signs on the Bandung - Jakarta Highway. Feature detection of the curve signs uses the Speeded Up Robust Features (SURF) method, where the detected scene image is 800x450. Of 45 curve-turn-to-the-right images, the system detected the features well in 35 images, a success rate of 77.78%, while of the 45 curve-turn-to-the-left images, the system detected the features well in 34 images, a success rate of 75.56%; the average detection accuracy is therefore 76.67%. The average time for the detection process is 0.411 seconds.
Movement amplitude and tempo change in piano performance
NASA Astrophysics Data System (ADS)
Palmer, Caroline
2004-05-01
Music performance places stringent temporal and cognitive demands on individuals that should yield large speed/accuracy tradeoffs. Skilled piano performance, however, shows consistently high accuracy across a wide variety of rates. Movement amplitude may affect the speed/accuracy tradeoff, so that high accuracy can be obtained even at very fast tempi. The contribution of movement amplitude to changes in rate (tempo) is investigated with motion capture. Cameras recorded pianists, with passive markers on their hands and fingers, as they performed on an electronic (MIDI) keyboard. Pianists performed short melodies at faster and faster tempi until they made errors (altering the speed/accuracy function). Variability of finger movements in the three motion planes indicated the most change in the plane perpendicular to the keyboard across tempi. Surprisingly, peak amplitudes of motion before striking the keys increased as tempo increased. Increased movement amplitudes at faster rates may reduce or compensate for speed/accuracy tradeoffs. [Work supported by the Canada Research Chairs program, NIMH R01 45764.]
Validation of the Acoustic Voice Quality Index in the Lithuanian Language.
Uloza, Virgilijus; Petrauskas, Tadas; Padervinskis, Evaldas; Ulozaitė, Nora; Barsties, Ben; Maryn, Youri
2017-03-01
The aim of the present study was to validate the Acoustic Voice Quality Index in the Lithuanian language (AVQI-LT) and to investigate the feasibility and robustness of its diagnostic accuracy in differentiating normal and dysphonic voice. A total of 184 native Lithuanian subjects with normal voices (n = 46) and with various voice disorders (n = 138) were asked to read a Lithuanian text aloud and to sustain the vowel /a/. A sentence with 13 syllables and a 3-second midvowel portion of the sustained vowel were edited. Both speech tasks were concatenated and perceptually rated for dysphonia severity by five voice clinicians. They rated the Grade (G) from the Grade, Roughness, Breathiness, Asthenia, Strain (GRBAS) protocol and the overall severity from the Consensus Auditory-Perceptual Evaluation of Voice protocol with a visual analog scale (VAS). The average scores (G_mean and VAS_mean) were taken as the perceptual dysphonia severity level for every voice sample. All concatenated voice samples were acoustically analyzed to obtain an AVQI-LT score. Both auditory-perceptual judgment procedures showed sufficient strength of agreement between the five raters. The results showed significant and marked concurrent validity between both auditory-perceptual judgment procedures and the AVQI-LT. The diagnostic accuracy of the AVQI-LT was comparable for both auditory-perceptual judgment procedures, with two different AVQI-LT thresholds. The AVQI-LT threshold of 2.97 for the G_mean rating obtained reasonable sensitivity = 0.838 and excellent specificity = 0.937. For the VAS_mean rating, an AVQI-LT threshold of 3.48 was determined, with sensitivity = 0.840 and specificity = 0.922. The AVQI-LT is considered a valid and reliable tool for assessing dysphonia severity in the Lithuanian-speaking population. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
The SIST-M: Predictive validity of a brief structured Clinical Dementia Rating interview
Okereke, Olivia I.; Pantoja-Galicia, Norberto; Copeland, Maura; Hyman, Bradley T.; Wanggaard, Taylor; Albert, Marilyn S.; Betensky, Rebecca A.; Blacker, Deborah
2011-01-01
Background We previously established reliability and cross-sectional validity of the SIST-M (Structured Interview and Scoring Tool–Massachusetts Alzheimer's Disease Research Center), a shortened version of an instrument shown to predict progression to Alzheimer disease (AD), even among persons with very mild cognitive impairment (vMCI). Objective To test predictive validity of the SIST-M. Methods Participants were 342 community-dwelling, non-demented older adults in a longitudinal study. Baseline Clinical Dementia Rating (CDR) ratings were determined by either: 1) clinician interviews or 2) a previously developed computer algorithm based on 60 questions (of a possible 131) extracted from clinician interviews. We developed age+gender+education-adjusted Cox proportional hazards models using CDR-sum-of-boxes (CDR-SB) as the predictor, where CDR-SB was determined by either clinician interview or algorithm; models were run for the full sample (n=342) and among those jointly classified as vMCI using clinician- and algorithm-based CDR ratings (n=156). We directly compared predictive accuracy using time-dependent Receiver Operating Characteristic (ROC) curves. Results AD hazard ratios (HRs) were similar for clinician-based and algorithm-based CDR-SB: for a 1-point increment in CDR-SB, respective HRs (95% CI)=3.1 (2.5,3.9) and 2.8 (2.2,3.5); among those with vMCI, respective HRs (95% CI) were 2.2 (1.6,3.2) and 2.1 (1.5,3.0). Similarly high predictive accuracy was achieved: the concordance probability (weighted average of the area-under-the-ROC curves) over follow-up was 0.78 vs. 0.76 using clinician-based vs. algorithm-based CDR-SB. Conclusion CDR scores based on items from this shortened interview had high predictive ability for AD – comparable to that using a lengthy clinical interview. PMID:21986342
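A minimal sketch of the age+gender+education-adjusted Cox model described above, using the lifelines library on hypothetical data (all column names and values are illustrative):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: years to AD onset (or censoring), event flag,
# baseline CDR sum-of-boxes, and the adjustment covariates.
df = pd.DataFrame({
    "years":  [4.1, 6.0, 2.3, 5.5, 3.2, 7.0, 1.8, 6.4, 4.8, 5.1],
    "ad":     [1,   0,   1,   0,   1,   0,   1,   0,   1,   0],
    "cdr_sb": [1.5, 0.5, 2.0, 0.5, 1.0, 1.5, 2.5, 0.0, 0.5, 1.0],
    "age":    [74,  68,  80,  71,  77,  65,  82,  69,  75,  72],
    "female": [1,   0,   1,   1,   0,   1,   0,   0,   1,   0],
    "educ":   [12,  16,  10,  14,  12,  18,  8,   16,  12,  14],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="ad")
# exp(coef) for cdr_sb is the AD hazard ratio per 1-point CDR-SB step
print(cph.hazard_ratios_["cdr_sb"])
```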
Jesus, Gilmar Mercês de; Assis, Maria Alice Altenburg de; Kupek, Emil
2017-06-05
The study evaluated the validity and reproducibility of the food consumption section of the Food Intake and Physical Activity of School Children questionnaire (Web-CAAFE), Internet-based software for the qualitative measurement of food consumption by previous-day recall. A total of 390 students in grades 2 to 5 (7 to 15 years) of a semi-integral public school participated in the study. Validity was tested by comparing reports in the Web-CAAFE with direct observation of the food consumed at school on the previous day. Reproducibility was evaluated in a sub-sample of 92 schoolchildren by comparing repeated Web-CAAFE reports made on the same day. Probabilities of accuracy of the Web-CAAFE report relative to observation (matches, omissions and intrusions, with respective 95% confidence intervals) among seven food groups were estimated through multinomial logistic regression. The average match rate was 81.4% (ranging from 62% for sweets to 98% for beans); the average omission rate was 16.2% (from 2.1% for dairy products to 28.5% for sweets); and the average intrusion rate was 7.1% (from 1.3% for beans to 13.8% for cereals). Sweets, cereals and processed foods, snack foods and fried foods simultaneously exhibited higher rates of both omission and intrusion. Students 10 years of age or older had lower probabilities of intruding food items. There were no significant variations in the accuracy of the report between repeated measures. The Web-CAAFE was a valid and reliable instrument for the evaluation of food consumption when applied to students in grades 2 to 5 of public schools.
Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D
2008-07-01
English speech acquisition by typically developing 3- to 4-year-old children from monolingual English backgrounds was compared to English speech acquisition by typically developing 3- to 4-year-old children from bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time in phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the patterns that decreased were not always the same across language groups. Some group differences in error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points was similar for all 3 groups, suggesting that all will reach an adult-like system in English with exposure and practice.
Performance of a multi leaf collimator system for MR-guided radiation therapy.
Cai, Bin; Li, Harold; Yang, Deshan; Rodriguez, Vivian; Curcuru, Austen; Wang, Yuhe; Wen, Jie; Kashani, Rojano; Mutic, Sasa; Green, Olga
2017-12-01
The purpose of this study was to investigate and characterize the performance of a multileaf collimator (MLC) designed for a Cobalt-60-based MR-guided radiation therapy system in a 0.35 T magnetic field. The MLC design and unique assembly features of the ViewRay MRIdian system were first reviewed. The RF shielding of the MLC motor and cables was evaluated using ACR phantoms with real-time imaging and quantified by signal-to-noise ratio (SNR). The dosimetric characteristics, including leaf transmission, leaf penumbra, and the tongue-and-groove (TG) effect, were investigated using radiosensitive films. The output factor of MLC-defined fields was measured with ionization chambers for both symmetric fields from 2.1 × 2.1 cm² to 27.3 × 27.3 cm² and asymmetric fields from 10.5 × 10.5 cm² to 10.5 × 2.0 cm². MLC positional accuracy was assessed by delivering either a picket fence (PF) style pattern on radiochromic films with a wire-jig phantom or double- and triple-rectangular patterns on ArcCheck-MR (Sun Nuclear, Melbourne, FL, USA), with gamma analysis as the pass/fail indicator. Leaf speed tests were performed to assess the capability of full-range leaf travel within the manufacturer's specifications. MLC plan delivery reproducibility was tested by repeatedly delivering both open fields and fields with irregularly shaped segments over a 1-month period. Comparable SNRs within 4% were observed for MLC moving and stationary plans on vendor-reconstructed images, and the direct k-space reconstructed images showed that the three SNRs were within 1%. The maximum leaf transmission for all three MLCs was less than 0.35%, and the average leakage was 0.153 ± 0.006%, 0.151 ± 0.008%, and 0.159 ± 0.015% for heads 1, 2, and 3, respectively. The leaf edge and leaf end penumbra showed comparable values within 0.05 cm, and the measured values agreed with TPS values within 0.1 cm. The leaf edge TG effect indicated 10% underdose, and the leaf end TG effect showed a shifted dose distribution with a 0.3 cm offset. The leaf positioning test showed 0.2 cm accuracy in the PF style test, and a gamma passing rate above 96% was observed with a 3%/2 mm criterion when comparing the measured double/triple-rectangular pattern fluence with TPS-calculated fluence. The average leaf speed when executing the test plan fell in a range from 1.86 to 1.95 cm/s. The measured and TPS-calculated output factors were within 2% for square fields and within 3% for rectangular fields. The reproducibility test showed that deviations of output factors were well within 2% for square fields, and the gamma passing rate varied within 1.5% for fields with irregular segments. The Monte Carlo predicted output factors were within 2% of TPS values. Fifteen of the 16 IMRT plans had gamma passing rates above 98% compared to the TPS fluence, with an average passing rate of 99.1 ± 0.6%. The MRIdian MLC has a good RF noise shielding design, low radiation leakage, good positioning accuracy, a comparable TG effect, and can be modeled by an independent Monte Carlo calculation platform. © 2017 American Association of Physicists in Medicine.
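Because the positional tests above rely on gamma analysis, a minimal brute-force global 2D gamma index with a 3%/2 mm criterion is sketched below; the dose grids, global normalization, and 10% low-dose threshold are assumptions for illustration, not the paper's implementation.

```python
# Sketch: brute-force global 2D gamma index (3%/2 mm). Synthetic grids.
import numpy as np

def gamma_pass_rate(ref, meas, px_mm, dose_tol=0.03, dist_mm=2.0, thresh=0.10):
    dmax = ref.max()
    search = int(np.ceil(dist_mm / px_mm)) + 1
    ny, nx = ref.shape
    passed, total = 0, 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < thresh * dmax:      # skip low-dose region
                continue
            total += 1
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    dd = (meas[ii, jj] - ref[i, j]) / (dose_tol * dmax)
                    dr = px_mm * np.hypot(di, dj) / dist_mm
                    best = min(best, dd * dd + dr * dr)
            passed += best <= 1.0               # gamma^2 <= 1 means pass
    return 100.0 * passed / max(total, 1)

ref = np.random.default_rng(2).random((40, 40))
meas = ref + 0.01 * np.random.default_rng(3).standard_normal((40, 40))
print(gamma_pass_rate(ref, meas, px_mm=1.0))    # expect a high passing rate
```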
The contribution of Multi-GNSS Experiment (MGEX) to precise point positioning
NASA Astrophysics Data System (ADS)
Guo, Fei; Li, Xingxing; Zhang, Xiaohong; Wang, Jinling
2017-06-01
In response to the changing world of GNSS, the International GNSS Service (IGS) has initiated the Multi-GNSS Experiment (MGEX). As part of the MGEX project, initial precise orbit and clock products have been released for public use; these are the key prerequisites for multi-GNSS precise point positioning (PPP). In particular, precise orbits and clocks at intervals of 5 min and 30 s are presently available for the new emerging systems. This paper investigates the benefits of multi-GNSS for PPP. Firstly, orbit and clock consistency tests (between different providers) were performed for GPS, GLONASS, Galileo and BeiDou. In general, the orbit and clock differences for GPS are 1.0-1.5 cm and 0.1 ns, respectively. The consistency of GLONASS is worse than that of GPS by a factor of 2-3, i.e., 2-4 cm for orbit and 0.2 ns for clock. However, the corresponding differences for Galileo and BeiDou are significantly larger than those of GPS and GLONASS, particularly for the BeiDou GEO satellites. Galileo as well as BeiDou IGSO/MEO products have a consistency of 0.1-0.2 m for orbit and 0.2-0.3 ns for clock. For the BeiDou GEO satellites, orbit differences reach 3-4 m in the along-track, 0.5-0.6 m in the cross-track, and 0.2-0.3 m in the radial direction, together with an average RMS of 0.6 ns for clock. Furthermore, the short-term stability of multi-GNSS clocks was analyzed by Allan deviation. Results show that the stability of the onboard GNSS clocks is highly dependent on satellite generation, operational lifetime, orbit type, and frequency standard. Finally, kinematic PPP tests were conducted to investigate the contribution of multi-GNSS and higher-rate clock corrections. As expected, both positioning accuracy and convergence speed benefit from the fusion of multi-GNSS and a higher rate of precise clock corrections. Multi-GNSS PPP improves the positioning accuracy by 10-20%, 40-60%, and 60-80% relative to GPS-only, GLONASS-only, and BeiDou-only PPP, respectively. The use of 30 s interval clock products decreases interpolation errors, and the positioning accuracy is improved by an average of 30-50% for all cases except BeiDou-only PPP.
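For reference, the short-term clock stability analysis mentioned above is typically computed as an overlapping Allan deviation from the clock bias (phase) series; a minimal numpy sketch, with synthetic white-noise data and invented sampling parameters, might look like this.

```python
# Sketch: overlapping Allan deviation from a clock bias series x[i]
# sampled every tau0 seconds. Input data are synthetic.
import numpy as np

def allan_deviation(x, tau0, m):
    """sigma_y(m*tau0) from phase data x via second differences."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / (m * tau0)

rng = np.random.default_rng(4)
bias = np.cumsum(rng.normal(0, 1e-10, 5760))    # 30 s samples over 2 days
for m in (1, 10, 100):
    print(f"tau = {30 * m:6d} s  sigma_y = {allan_deviation(bias, 30.0, m):.3e}")
```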
Dynamic testing and test anxiety amongst gifted and average-ability children.
Vogelaar, Bart; Bakker, Merel; Elliott, Julian G; Resing, Wilma C M
2017-03-01
Dynamic testing has been proposed as a testing approach that is less disadvantageous for children who may be potentially subject to bias when undertaking conventional assessments. For example, those who encounter high levels of test anxiety, or who are unfamiliar with standardized test procedures, may fail to demonstrate their true potential or capabilities. While dynamic testing has proven particularly useful for special groups of children, it has rarely been used with gifted children. We investigated whether it would be useful to conduct a dynamic test to measure the cognitive abilities of intellectually gifted children. We also investigated whether test anxiety scores would be related to a progression in the children's test scores after dynamic training. Participants were 113 children aged between 7 and 8 years from several schools in the western part of the Netherlands. The children were categorized as either gifted or average-ability and assigned to either an unguided practice condition or a dynamic testing condition. The study employed a pre-test-training-post-test design. Using linear mixed modelling with a multilevel approach, we inspected the growth trajectories of children in the various conditions and examined the impact of ability and test anxiety on progression and training benefits. Dynamic testing proved successful in improving the children's scores, although no differences in training benefits were found between gifted and average-ability children. Test anxiety was shown to influence the children's rate of change across all test sessions and their improvement in performance accuracy after dynamic training. © 2016 The British Psychological Society.
Evaluating Rater Accuracy in Rater-Mediated Assessments Using an Unfolding Model
ERIC Educational Resources Information Center
Wang, Jue; Engelhard, George, Jr.; Wolfe, Edward W.
2016-01-01
The number of performance assessments continues to increase around the world, and it is important to explore new methods for evaluating the quality of ratings obtained from raters. This study describes an unfolding model for examining rater accuracy. Accuracy is defined as the difference between observed and expert ratings. Dichotomous accuracy…
NASA Astrophysics Data System (ADS)
Baker, Erik Reese
A repeated-measures, within-subjects design was conducted with 58 pilot participants to assess mean differences in energy management situation awareness response time and response accuracy between a conventional electronic aircraft display, a primary flight display (PFD), and an ecological interface design aircraft display, the OZ concept display. Participants were associated with a small Midwestern aviation university, including student pilots, flight instructors, and faculty with piloting experience. Testing consisted of observing 15 static screenshots of each cockpit display type and then selecting applicable responses from 27 standardized responses for each screen. A paired-samples t-test was computed comparing accuracy and response time for the two displays. There was no significant difference in means between PFD Response Time and OZ Response Time. On average, mean PFD Accuracy was significantly higher than mean OZ Accuracy (M_diff = 13.17, SD_diff = 20.96), t(57) = 4.78, p < .001, d = 0.63. This finding showed operational potential for the OZ display: even without first training to proficiency on the previously unseen OZ display, participant performance differences were not operationally remarkable. There was no significant correlation between PFD Response Time and PFD Accuracy, but there was a significant correlation between OZ Response Time and OZ Accuracy, r(58) = .353, p < .01. These findings suggest that participants' familiarity with the PFD yielded accuracy scores unrelated to response time, whereas for the unfamiliar OZ display longer response times manifested in greater understanding of the display. PFD Response Time and PFD Accuracy were not correlated with pilot flight hours, which was not expected; it was thought that increased experience would translate into faster and more accurate assessment of the aircraft stimuli. OZ Response Time and OZ Accuracy were also not correlated with pilot flight hours, but this was expected and consistent with previous research in which novice operators performed as well as experienced professional pilots on dynamic flight tasks with the OZ display. A demographic questionnaire and a feedback survey were included in the trial. Roughly three-quarters of participants rated the PFD as "easy" and the OZ as "confusing," yet performance accuracy and response times between the two displays were not operationally different.
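The paired-samples comparison reported above has this general shape in code; the synthetic accuracy scores below are invented and only mimic the reported effect size.

```python
# Sketch: paired-samples t-test with effect size d = mean(diff)/sd(diff).
# Scores are synthetic, roughly matching the reported M_diff and SD_diff.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
pfd_acc = rng.normal(80, 15, 58)            # hypothetical PFD accuracy (%)
oz_acc = pfd_acc - rng.normal(13, 21, 58)   # hypothetical OZ accuracy (%)

t, p = stats.ttest_rel(pfd_acc, oz_acc)
diff = pfd_acc - oz_acc
d = diff.mean() / diff.std(ddof=1)
print(f"t(57) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```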
The accuracy of mothers' reports about their children's vaccination status.
Gareaballah, E T; Loevinsohn, B P
1989-01-01
Estimates of measles vaccination coverage in the Sudan vary on average by 23 percentage points, depending on whether or not information supplied by mothers who have lost their children's vaccination cards is included. To determine the accuracy of mothers' reports, we collected data during four large coverage surveys in which illiterate mothers with vaccination cards were asked about their children's vaccination status and their answers were compared with the information given on the cards. Mothers' replies were very accurate. For example, for measles vaccination, the data supplied were both sensitive (87%) and specific (79%) compared with those on the vaccination cards. For both DPT and measles vaccination, accurate estimates of the true coverage rates could therefore be obtained by relying solely on mothers' reports. Within +/- 1 month, 78% of the women knew the age at which their children had received their first dose of polio vaccine. Ignoring mothers' reports of their children's vaccination status could therefore result in serious underestimates of the true vaccination coverage. A simple method of dealing with the problem posed by lost vaccination cards during coverage surveys is also suggested.
Johnson, LeeAnn K; Brown, Mary B; Carruthers, Ethan A; Ferguson, John A; Dombek, Priscilla E; Sadowsky, Michael J
2004-08-01
A horizontal, fluorophore-enhanced, repetitive extragenic palindromic-PCR (rep-PCR) DNA fingerprinting technique (HFERP) was developed and evaluated as a means to differentiate human from animal sources of Escherichia coli. Box A1R primers and PCR were used to generate 2,466 rep-PCR and 1,531 HFERP DNA fingerprints from E. coli strains isolated from fecal material from known human and 12 animal sources: dogs, cats, horses, deer, geese, ducks, chickens, turkeys, cows, pigs, goats, and sheep. HFERP DNA fingerprinting reduced within-gel grouping of DNA fingerprints and improved alignment of DNA fingerprints between gels, relative to that achieved using rep-PCR DNA fingerprinting. Jackknife analysis of the complete rep-PCR DNA fingerprint library, done using Pearson's product-moment correlation coefficient, indicated that animal and human isolates were assigned to the correct source groups with an 82.2% average rate of correct classification. However, when only unique isolates were examined (isolates from a single animal having a unique DNA fingerprint), jackknife analysis showed that isolates were assigned to the correct source groups with a 60.5% average rate of correct classification. The percentages of correctly classified isolates were about 15 and 17% greater for rep-PCR and HFERP, respectively, when analyses were done using the curve-based Pearson's product-moment correlation coefficient rather than the band-based Jaccard algorithm. Rarefaction analysis indicated that, despite the relatively large size of the known-source database, genetic diversity in E. coli is very great and most likely accounts for our inability to correctly classify many environmental E. coli isolates. Our data indicate that removal of duplicate genotypes within DNA fingerprint libraries, increased database size, proper methods of statistical analysis, and correct alignment of band data within and between gels improve the accuracy of microbial source tracking methods.
Reducing unnecessary lab testing in the ICU with artificial intelligence.
Cismondi, F; Celi, L A; Fialho, A S; Vieira, S M; Reti, S R; Sousa, J M C; Finkelstein, S N
2013-05-01
To reduce unnecessary lab testing by predicting when a proposed future lab test is likely to contribute information gain and thereby influence clinical management in patients with gastrointestinal bleeding. Recent studies have demonstrated that frequent laboratory testing does not necessarily relate to better outcomes. Data preprocessing, feature selection, and classification were performed, and an artificial intelligence tool, fuzzy modeling, was used to identify lab tests that do not contribute an information gain. There were 11 input variables in total. Ten of these were derived from bedside monitor trends: heart rate, oxygen saturation, respiratory rate, temperature, blood pressure, and urine collections, as well as infusion products and transfusions. The final input variable was a previous value from one of the eight lab tests being predicted: calcium, PTT, hematocrit, fibrinogen, lactate, platelets, INR and hemoglobin. The outcome for each test was a binary framework defining whether a test result contributed information gain or not. Predictive modeling was applied to recognize unnecessary lab tests in a real-world ICU database extract comprising 746 patients with gastrointestinal bleeding. Classification accuracy of necessary and unnecessary lab tests of greater than 80% was achieved for all eight lab tests. Sensitivity and specificity were satisfactory for all the outcomes. An average reduction of 50% of the lab tests was obtained. This is an improvement over previously reported similar studies, whose average performance was 37% [1-3]. Reducing frequent lab testing and the potential clinical and financial implications are an important issue in intensive care. In this work we present an artificial intelligence method to predict the benefit of proposed future laboratory tests. Using ICU data from 746 patients with gastrointestinal bleeding, and eleven measurements, we demonstrate high accuracy in predicting the likely information to be gained from proposed future lab testing for eight common GI-related lab tests. Future work will explore applications of this approach to a range of underlying medical conditions and laboratory tests. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
A new weighted mean temperature model in China
NASA Astrophysics Data System (ADS)
Liu, Jinghong; Yao, Yibin; Sang, Jizhang
2018-01-01
The Global Positioning System (GPS) has been applied in meteorology to monitor changes of Precipitable Water Vapor (PWV) in the atmosphere, converted from the Zenith Wet Delay (ZWD). A key factor in converting the ZWD into the PWV is the weighted mean temperature (Tm), which has a direct impact on the accuracy of the transformation. A number of Bevis-type models, such as Tm-Ts and Tm-(Ts, Ps) models, have been developed by statistical approaches and are not able to clearly depict the relationship between Tm and the surface temperature, Ts. A new model for Tm, called the weighted mean temperature norm model (abbreviated as norm model), is derived as a function of Ts, the lapse rate of temperature, δ, the tropopause height, htrop, and the radiosonde station height, hs. It is found that Tm is better related to Ts through an intermediate temperature. The small effects of the lapse rate can be ignored, and the tropopause height can be obtained from an empirical model. The norm model is then reduced to a simplified form, which incurs little loss of accuracy and needs only two inputs, Ts and hs. In site-specific fittings, the norm model performs much better, with RMS values reduced on average by 0.45 K and Mean of Absolute Differences (MAD) values by 0.2 K. The norm model is also found more appropriate than the linear models for fitting Tm over a large area: the RMS value is reduced from 4.3 K to 3.80 K, the correlation coefficient R² increases from 0.84 to 0.88, and the MAD decreases from 3.24 K to 2.90 K, while the distribution of simplified-model values is also more reasonable. The RMS and MAD values of the differences between reference and computed PWVs are reduced on average by 16.3% and 14.27%, respectively, when using the new norm model instead of the linear model.
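For context, the ZWD-to-PWV conversion that Tm feeds into is commonly written as PWV = Π(Tm)·ZWD; the sketch below uses the widely cited Bevis et al. (1992) refractivity constants, which are an assumption here rather than values taken from this paper.

```python
# Sketch: ZWD -> PWV via the dimensionless factor Pi(Tm).
# Constants follow Bevis et al. (1992); they are assumptions, not the paper's.
RHO_W = 1000.0      # water density, kg m^-3
R_V = 461.5         # specific gas constant of water vapor, J kg^-1 K^-1
K2P = 22.1          # k2', K hPa^-1
K3 = 3.739e5        # k3,  K^2 hPa^-1

def pwv_from_zwd(zwd_m, tm_k):
    """PWV (m) from zenith wet delay (m) and weighted mean temperature (K)."""
    k_pa = (K3 / tm_k + K2P) / 100.0           # convert K/hPa to K/Pa
    pi = 1e6 / (RHO_W * R_V * k_pa)            # dimensionless, ~0.15
    return pi * zwd_m

print(pwv_from_zwd(0.150, 270.0))   # ~0.023 m, i.e. ~23 mm of PWV
```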
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
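To make the affine-invariance claim concrete, the sketch below fits a Gaussian maximum likelihood (quadratic discriminant) classifier before and after a non-singular affine transform of synthetic three-band data and checks that the class assignments coincide; the data and transform are invented for illustration.

```python
# Sketch: Gaussian ML (quadratic discriminant) classification is unchanged
# by a non-singular affine transform y = Ax + b. Synthetic 3-band "spectra".
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(1.5, 1, (100, 3))])
y = np.repeat([0, 1], 100)

A = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # non-singular mixing matrix
b = rng.normal(size=3)                        # offset
X_affine = X @ A.T + b                        # "reflectance-like" transform

pred_raw = QuadraticDiscriminantAnalysis().fit(X, y).predict(X)
pred_aff = QuadraticDiscriminantAnalysis().fit(X_affine, y).predict(X_affine)
# Expected fraction of agreement: 1.0, up to floating-point ties.
print((pred_raw == pred_aff).mean())
```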
Baxter, Suzanne Domel; Guinn, Caroline H.; Smith, Albert F.; Hitchcock, David B.; Royer, Julie A.; Puryear, Megan P.; Collins, Kathleen L.; Smith, Alyssa L.
2017-01-01
Validation-study data were analyzed to investigate retention interval (RI) and prompt effects on accuracy of fourth-grade children’s reports of school-breakfast and school-lunch (in 24-hour recalls), and accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly-selected fourth-grade children at 10 schools in four districts were observed eating school-provided breakfast and lunch, and interviewed under one of eight conditions (two RIs [short (prior-24-hour recall obtained in afternoon); long (previous-day recall obtained in morning)] crossed with four prompts [forward (distant-to-recent), meal-name (breakfast, etc.), open (no instructions), reverse (recent-to-distant)]). Each condition had 60 children (half girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure—report rate—and reporting-error-sensitive measures—correspondence rate and inflation ratio—were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio—but not report rate—showed better accuracy for school-breakfast and school-lunch reports with the short than long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select short RIs to maximize accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID:26865356
NASA Astrophysics Data System (ADS)
Zhang, Junwei; Hong, Xuezhi; Liu, Jie; Guo, Changjian
2018-04-01
In this work, we investigate and experimentally demonstrate an orthogonal frequency division multiplexing (OFDM) based high-speed wavelength-division multiplexed (WDM) visible light communication (VLC) system using an inter-block data precoding and superimposed pilots (DP-SP) based channel estimation (CE) scheme. The residual signal-to-pilot interference (SPI) can be eliminated by using inter-block data precoding, resulting in a significant improvement in estimation accuracy and overall system performance compared with the uncoded SP based CE scheme. We also study the training power allocation/overhead problem for the DP-SP, uncoded SP, and conventional preamble based CE schemes, from which we obtain the optimum signal-to-pilot power ratio (SPR)/overhead percentage for all the above cases. Intra-symbol frequency-domain averaging (ISFA) is also adopted to further enhance the accuracy of CE. By using the DP-SP based CE scheme, aggregate data rates of 1.87-Gbit/s and 1.57-Gbit/s are experimentally demonstrated over 0.8-m and 2-m indoor free-space transmission, respectively, using a commercially available red, green and blue (RGB) light emitting diode (LED) with WDM. Experimental results show that the DP-SP based CE scheme is comparable to the conventional preamble based CE scheme in terms of received Q factor and data rate while requiring a much smaller overhead.
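A minimal sketch of the ISFA step mentioned above: least-squares channel estimates are averaged over adjacent subcarriers, which suppresses noise when the channel varies slowly in frequency. The channel model and noise level below are invented.

```python
# Sketch: intra-symbol frequency-domain averaging (ISFA) over 2m+1 adjacent
# subcarriers to smooth a least-squares channel estimate. Synthetic values.
import numpy as np

def isfa(h_ls, m):
    """Average complex LS estimates over 2m+1 adjacent subcarriers."""
    k = np.ones(2 * m + 1)
    num = np.convolve(h_ls, k, mode="same")
    den = np.convolve(np.ones(len(h_ls)), k, mode="same")  # edge correction
    return num / den

rng = np.random.default_rng(7)
h_true = np.exp(-1j * 2 * np.pi * 0.01 * np.arange(256))   # smooth channel
h_ls = h_true + 0.3 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
h_hat = isfa(h_ls, m=4)
# The averaged estimate should have lower mean squared error:
print(np.mean(np.abs(h_hat - h_true) ** 2) < np.mean(np.abs(h_ls - h_true) ** 2))
```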
A Classification Method for Seed Viability Assessment with Infrared Thermography.
Men, Sen; Yan, Lei; Liu, Jiaxin; Qian, Hua; Luo, Qinjuan
2017-04-12
This paper presents a viability assessment method for Pisum sativum L. seeds based on the infrared thermography technique. In this work, different artificial treatments were conducted to prepare seed samples with different viability. Thermal images and visible images were recorded every five minutes during the standard five-day germination test. After the test, the root length of each sample was measured and used as the viability index of that seed. Each individual seed area in the visible images was segmented with an edge detection method, and the average temperature of the corresponding area in the infrared images was calculated as the representative temperature for that seed at that time. The temperature curve of each seed during germination was plotted. Thirteen characteristic parameters extracted from the temperature curve were analyzed to show the difference in temperature fluctuations between seed samples with different viability. Using the above parameters, a support vector machine (SVM) classified the seed samples into three categories according to root length: viable, aged, and dead; the classification accuracy rate was 95%. On this basis, using the temperature data from only the first three hours of germination, another SVM model was proposed to classify the seed samples, and the accuracy rate was about 91.67%. These experimental results show that infrared thermography, combined with the SVM algorithm, can be applied to predict seed viability.
Frog sound identification using extended k-nearest neighbor classifier
NASA Astrophysics Data System (ADS)
Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati
2017-09-01
Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest neighbors and mutual sharing of neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN), and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, for all cases.
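One plausible reading of the EKNN voting rule is sketched below: the prediction pools the test sample's k nearest training neighbors with training samples that would count the test sample among their own k nearest. This is a simplified interpretation, not the authors' exact formulation.

```python
# Sketch: "extended" k-NN vote combining nearest and mutual neighbors.
import numpy as np

def eknn_predict(X_train, y_train, x, k):
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = set(np.argsort(d)[:k])              # x's k nearest neighbors
    mutual = set()
    for i in range(len(X_train)):
        d_i = np.linalg.norm(X_train - X_train[i], axis=1)
        d_i[i] = np.inf                           # exclude self-distance
        kth = np.partition(d_i, k - 1)[k - 1]     # i's k-th neighbor distance
        if d[i] <= kth:                           # x would be a neighbor of i
            mutual.add(i)
    votes = y_train[list(nearest | mutual)]
    return np.bincount(votes).argmax()

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.repeat([0, 1], 30)
print(eknn_predict(X, y, np.array([2.5, 2.5]), k=5))   # expect class 1
```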
Quantifying rapid changes in cardiovascular state with a moving ensemble average.
Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T
2018-04-01
MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinematogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
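The fixed-window versus moving ensemble distinction can be sketched as follows: a fixed ensemble collapses all beats into one average, while a moving ensemble averages each beat with its neighbors, preserving slow changes in cardiovascular state. Beat alignment and data below are synthetic, not MEAP's implementation.

```python
# Sketch: fixed vs. moving ensemble averaging of beat-aligned waveforms.
# `beats` is an (n_beats, n_samples) array of R-peak-aligned snippets.
import numpy as np

def moving_ensemble(beats, half_width):
    """Ensemble average centered on each beat, over 2*half_width+1 beats."""
    out = np.empty_like(beats, dtype=float)
    n = len(beats)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out[i] = beats[lo:hi].mean(axis=0)
    return out

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 200)
drift = np.linspace(0, 2, 150)[:, None]      # slow cardiovascular change
beats = np.sin(2 * np.pi * 3 * t) + drift + 0.5 * rng.standard_normal((150, 200))

fixed = beats.mean(axis=0)                   # one average: drift smeared away
moving = moving_ensemble(beats, 10)          # per-beat estimates keep the drift
print(fixed.shape, moving.shape)
```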
NASA Technical Reports Server (NTRS)
Haag, Thomas W.
1995-01-01
A torsional-type thrust stand has been designed and built to test Pulsed Plasma Thrusters (PPT's) in both single-shot and repetitive operating modes. Using this stand, momentum per pulse was determined strictly as a function of thrust stand deflection, spring constant, and natural frequency. No empirical corrections were required. The accuracy of the method was verified using a swinging impact pendulum. Momentum transfer data between the thrust stand and the pendulum were consistent to within 1%. Following initial calibrations, the stand was used to test a Lincoln Experimental Satellite (LES-8/9) thruster. The LES-8/9 system had a mass of approximately 7.5 kg, with a nominal thrust-to-weight ratio of 1.3 x 10^(-5). A total of 34 single-shot thruster pulses were individually measured. The average impulse bit per pulse was 266 microN-s, which was slightly less than the value of 300 microN-s published in previous reports on this device. Repetitive pulse measurements were performed similarly to ordinary steady-state thrust measurements. The thruster was operated for 30 minutes at a repetition rate of 132 pulses per minute and yielded an average thrust of 573 microN. Using the average thrust, the average impulse bit per pulse was estimated to be 260 microN-s, in agreement with the single-shot data. Zero drift during the repetitive pulse test was found to be approximately 1% of the measured thrust.
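A sketch of why deflection, spring constant, and natural frequency suffice, from standard torsional-pendulum dynamics (an assumption about the method, not equations reproduced from the report):

```latex
% Impulse bit J applied at moment arm L to a stand with inertia I,
% torsional spring constant k, and natural frequency \omega_n = \sqrt{k/I}:
%   angular momentum:  J L = I \omega_0
%   energy balance:    \tfrac12 I \omega_0^2 = \tfrac12 k \theta_{\max}^2
%                      \;\Rightarrow\; \theta_{\max} = \omega_0 / \omega_n
% Eliminating \omega_0 and I gives
\[
  J \;=\; \frac{I\,\omega_n\,\theta_{\max}}{L}
    \;=\; \frac{k\,\theta_{\max}}{L\,\omega_n},
\]
% so momentum per pulse follows from measured deflection, the spring
% constant, and the natural frequency alone, consistent with the abstract.
```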
Accuracy of Person-Fit Statistics: A Monte Carlo Study of the Influence of Aberrance Rates
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2011-01-01
Using a Monte Carlo experimental design, this research examined the relationship between answer patterns' aberrance rates and person-fit statistics (PFS) accuracy. It was observed that as the aberrance rate increased, the detection rates of PFS also increased until, in some situations, a peak was reached and then the detection rates of PFS…
Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM
Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua
2011-01-01
Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still understudied. We studied the extraction of prostatic lumina and automated recognition for calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average run time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can readily recognize the concentric structure and visual features. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364
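A generic PCA-SVM stage of the kind described can be sketched with scikit-learn; the feature dimensions, class structure, and train/test split below are synthetic placeholders for the paper's texture features.

```python
# Sketch: PCA for dimensionality reduction followed by an RBF-kernel SVM.
# Synthetic "texture features"; labels and sizes are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(10)
X = np.vstack([rng.normal(0, 1, (80, 32)), rng.normal(0.8, 1, (80, 32))])
y = np.repeat([0, 1], 80)                 # 0 = no calculus, 1 = calculus
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```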
Performance of some numerical Laplace inversion methods on American put option formula
NASA Astrophysics Data System (ADS)
Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.
2018-03-01
Numerical inversion of the Laplace transform is used to obtain a semianalytic solution. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to calculate American put options through the optimal exercise price in the Laplace space. The methods are first compared on some simple functions to establish their accuracy and the parameters to be used in the calculation of American put options. The result obtained is the performance of each method regarding accuracy and computational speed. The Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
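As a concrete example of this family, an unaccelerated Durbin-type Fourier-series inversion (the Crump variant adds sequence acceleration on top of it) can be sketched as below and checked against F(s) = 1/(s+1), whose inverse is exp(-t); the damping heuristic and truncation level are illustrative choices, not the paper's parameters.

```python
# Sketch: unaccelerated Durbin Fourier-series Laplace inversion, validated
# on F(s) = 1/(s+1) <-> f(t) = exp(-t). Parameters are illustrative.
import numpy as np

def durbin_invert(F, t, T=10.0, N=4000, a=0.0):
    """Approximate f(t), 0 < t < 2T, from the Laplace transform F."""
    sigma = a - np.log(1e-10) / (2.0 * T)   # damping keeps aliasing ~1e-10
    k = np.arange(1, N + 1)
    s = sigma + 1j * k * np.pi / T
    Fk = F(s)                                # vectorized evaluation of F
    series = np.sum(Fk.real * np.cos(k * np.pi * t / T)
                    - Fk.imag * np.sin(k * np.pi * t / T))
    return (np.exp(sigma * t) / T) * (0.5 * F(sigma) + series)

F = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(f"t={t}: durbin={durbin_invert(F, t):.6f}  exact={np.exp(-t):.6f}")
```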
NASA Astrophysics Data System (ADS)
Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan
2017-12-01
Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model (JBC) in calm water using two computational fluid dynamics solvers, SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the Finite Volume Method (FVM). This paper compares the numerical results of the calm-water test for the JBC model with available experimental results. The calm-water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimal computational resources.
Accuracy of smartphone apps for heart rate measurement.
Coppetti, Thomas; Brauchlin, Andreas; Müggler, Simon; Attinger-Toller, Adrian; Templin, Christian; Schönrath, Felix; Hellermann, Jens; Lüscher, Thomas F; Biaggi, Patric; Wyss, Christophe A
2017-08-01
Background Smartphone manufacturers offer mobile health monitoring technology to their customers, including apps that use the built-in camera for heart rate assessment. This study aimed to test the diagnostic accuracy of such heart rate measuring apps in clinical practice. Methods The feasibility and accuracy of measuring heart rate was tested on four commercially available apps using both iPhone 4 and iPhone 5. 'Instant Heart Rate' (IHR) and 'Heart Fitness' (HF) work with contact photoplethysmography (contact of fingertip to built-in camera), while 'Whats My Heart Rate' (WMH) and 'Cardiio Version' (CAR) work with non-contact photoplethysmography. The measurements were compared to electrocardiogram and pulse oximetry-derived heart rate. Results Heart rate measurement using app-based photoplethysmography was performed on 108 randomly selected patients. The electrocardiogram-derived heart rate correlated well with pulse oximetry (r = 0.92), IHR (r = 0.83) and HF (r = 0.96), but somewhat less with WMH (r = 0.62) and CAR (r = 0.60). The accuracy of app-measured heart rate as compared to electrocardiogram, reported as mean absolute error (in bpm ± standard error), was 2 ± 0.35 (pulse oximetry), 4.5 ± 1.1 (IHR), 2 ± 0.5 (HF), 7.1 ± 1.4 (WMH) and 8.1 ± 1.4 (CAR). Conclusions We found substantial performance differences between the four studied heart rate measuring apps. The two contact photoplethysmography-based apps had higher feasibility and better accuracy for heart rate measurement than the two non-contact photoplethysmography-based apps.
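The core of a contact-photoplethysmography app reduces to peak detection on the camera brightness signal; a sketch on a synthetic 30 Hz signal follows, with all parameters invented rather than taken from any of the apps studied.

```python
# Sketch: beats-per-minute from a camera photoplethysmogram via peak
# detection. Signal, frame rate, and refractory period are synthetic.
import numpy as np
from scipy.signal import find_peaks

fs = 30.0                                  # camera frame rate, Hz
t = np.arange(0, 30, 1 / fs)               # 30 s of brightness samples
hr_true = 72.0
ppg = (np.sin(2 * np.pi * hr_true / 60 * t)
       + 0.2 * np.random.default_rng(11).standard_normal(len(t)))

peaks, _ = find_peaks(ppg, distance=fs * 0.4)   # refractory ~0.4 s
bpm = 60.0 * fs / np.mean(np.diff(peaks))
print(f"estimated heart rate: {bpm:.1f} bpm")
```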
Aytac Korkmaz, Sevcan
2016-05-05
The aim of this article is to provide early detection of cervical cancer by using both Atomic Force Microscope (AFM) and Scanning Electron Microscope (SEM) images of the same patient. A review of the literature shows that AFM and SEM images of the same patient have not previously been used together for early diagnosis of cervical cancer. Either modality alone can be limiting for early detection; therefore, multi-modality solutions, which give more accurate results than single-modality solutions, are realized in this paper. An optimum feature space was obtained by applying Discrete Wavelet Entropy Energy (DWEE) to the 3×180 AFM and SEM images. The optimum features of these images were then classified with the Jensen-Shannon, Hellinger, and Triangle Measure (JHT) classifier for early diagnosis of cervical cancer. The Jensen-Shannon, Hellinger, and triangle distance measures were validated via the relationships between them. Afterwards, the diagnostic accuracy for normal, benign, and malignant cervical cells was found by combining the mean success rates of the Jensen-Shannon, Hellinger, and Triangle measures, which are connected with each other. Average diagnostic accuracies for AFM and SEM images, obtained by averaging the results of these three classifiers, were 98.29% and 97.10%, respectively. AFM images showed higher performance for early diagnosis of cervical cancer than SEM images. The analysis of the AFM images also showed that malignant cells exhibit larger surface roughness than normal and benign cells, while their particle volume is smaller. Copyright © 2016 Elsevier B.V. All rights reserved.
On the Utility of the Molecular Oxygen Dayglow Emissions as Proxies for Middle Atmospheric Ozone
NASA Technical Reports Server (NTRS)
Mlynczak, Martin G.; Olander, Daphne S.
1995-01-01
Molecular oxygen dayglow emissions arise in part from processes related to the Hartley band photolysis of ozone. It is therefore possible to derive daytime ozone concentrations from measurements of the volume emission rate of either dayglow. The accuracy to which the ozone concentration can be inferred depends on the accuracy to which numerous kinetic and spectroscopic rate constants are known, including rates which describe the excitation of molecular oxygen by processes that are not related to the ozone concentration. We find that several key rate constants must be known to better than 7 percent accuracy in order to achieve an inferred ozone concentration accurate to 15 percent from measurements of either dayglow. Currently, accuracies for various parameters typically range from 5 to 100 percent.
RHIC BPM system average orbit calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michnoff, R.; Cerniglia, P.; Degen, C.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging the positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and its performance with beam.
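The gain from averaging over many 10 Hz periods can be sketched with a simple running mean over turn-by-turn positions; the revolution frequency, oscillation amplitude, and noise level below are illustrative numbers, not machine parameters taken from this paper.

```python
# Sketch: suppressing a ~10 Hz orbit oscillation by averaging turn-by-turn
# BPM positions over a window spanning many oscillation periods.
import numpy as np

f_rev = 78e3                      # assumed revolution frequency, ~78 kHz
f_osc = 10.0                      # perturbation frequency, Hz
turns = np.arange(200_000)
pos = (0.5 * np.sin(2 * np.pi * f_osc * turns / f_rev)
       + 0.01 * np.random.default_rng(12).standard_normal(len(turns)))

window = int(10 * f_rev / f_osc)  # 10 full periods of the 10 Hz line
avg = np.convolve(pos, np.ones(window) / window, mode="valid")
print(pos.std(), avg.std())       # residual fluctuation shrinks sharply
```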
It Matters Whether Reading Comprehension Is Conceptualised as Rate or Accuracy
ERIC Educational Resources Information Center
Rønberg, Louise Flensted; Petersen, Dorthe Klint
2016-01-01
This study shows that it makes a difference whether accuracy measures or rate measures are used when assessing reading comprehension. When the outcome is reading comprehension accuracy (i.e., the number of correct responses), word reading skills (measured as access to orthographic representations) account for a modest amount of the variance in the…
Ultrasound-guided versus computed tomography-scan guided biopsy of pleural-based lung lesions
Khosla, Rahul; McLean, Anna W; Smith, Jessica A
2016-01-01
Background: Computed tomography (CT) guided biopsies have long been the standard technique to obtain tissue from the thoracic cavity and are traditionally performed by interventional radiologists. Ultrasound (US) guided biopsy of pleural-based lesions, performed by pulmonologists, is gaining popularity and has the advantages of multi-planar imaging, real-time technique, and the absence of radiation exposure to patients. In this study, we aim to determine the diagnostic accuracy, the time to diagnosis after the initial consult placement, and the complication rates of the two modalities. Methods: A retrospective study of electronic medical records was performed for patients who underwent CT-guided or US-guided biopsies of pleural-based lesions between 2005 and 2014, and the data collected were analyzed to compare the two groups. Results: A total of 158 patients underwent 162 procedures during the study period: 86 patients underwent 89 procedures in the US group, and 72 patients underwent 73 procedures in the CT group. The overall yield was 82/89 (92.1%) in the US group versus 67/73 (91.8%) in the CT group (P = 1.0). The average number of days to the procedure was 7.2 versus 17.5 (P = 0.00001) in the US and CT groups, respectively. The complication rate was higher in the CT group, 17/73 (23.3%), versus 1/89 (1.1%) in the US group (P < 0.0001). Conclusions: For pleural-based lesions, the diagnostic accuracy of US-guided biopsy is similar to that of CT-guided biopsy, with a lower complication rate and a significantly reduced time to the procedure. PMID:27625440
Lu, Yan; Wei, Jin-Ying; Yao, De-Sheng; Pan, Zhong-Mian; Yao, Yao
2017-01-01
To investigate the value of carbon nanoparticles in identifying sentinel lymph nodes in early-stage cervical cancer. From January 2014 to January 2016, 40 patients with cervical cancer stage IA2-IIA, based on the International Federation of Gynecology and Obstetrics (FIGO) 2009 criteria, were included in this study. The normal cervix around the tumor was injected with a total of 1 mL of carbon nanoparticles (CNP) at 3 and 9 o'clock. All patients then underwent laparoscopic pelvic lymph node dissection and radical hysterectomy. The black-dyed sentinel lymph nodes were removed for routine pathological examination and immunohistochemical staining. Among the 40 patients, 38 had at least one sentinel lymph node (SLN); the detection rate was 95% (38/40). One hundred seventy-three SLNs were detected, with an average of 3.9 SLNs per side. Twenty-five positive lymph nodes, including 21 positive SLNs, were detected in 8 (20%) patients. Sentinel lymph nodes were localized in the obturator (47.97%), internal iliac (13.87%), external iliac (26.59%), parametrial (1.16%), and common iliac (8.67%) regions. The sensitivity of SLN detection was 100% (5/5), the accuracy was 97.37% (37/38), the negative predictive value was 100.0%, and the false negative rate was 0%. Sentinel lymph nodes can be used to accurately predict the pathological state of pelvic lymph nodes in early cervical cancer. The detection rate and accuracy of sentinel lymph node identification were high. Carbon nanoparticles can be used to trace sentinel lymph nodes in early cervical cancer.
Ceylan, Murat; Ceylan, Rahime; Ozbay, Yüksel; Kara, Sadik
2008-09-01
In biomedical signal classification, due to the huge amount of data, it is vital to compress the biomedical waveform data. This paper presents two different structures formed using feature extraction algorithms to decrease the size of the feature set in training and test data. The proposed structures, named wavelet transform-complex-valued artificial neural network (WT-CVANN) and complex wavelet transform-complex-valued artificial neural network (CWT-CVANN), use the real and complex discrete wavelet transforms for feature extraction. The aim of using the wavelet transform is to compress data and reduce the training time of the network without decreasing the accuracy rate. In this study, the presented structures were applied to the problem of classification of carotid arterial Doppler ultrasound signals. Carotid arterial Doppler ultrasound signals were acquired from the left carotid arteries of 38 patients and 40 healthy volunteers. The patient group included 22 males and 16 females with an established diagnosis of the early phase of atherosclerosis through coronary or aortofemoropopliteal (lower extremity) angiographies (mean age, 59 years; range, 48-72 years). The healthy volunteers were young non-smokers who did not appear to bear any risk of atherosclerosis, including 28 males and 12 females (mean age, 23 years; range, 19-27 years). Sensitivity, specificity, and average detection rate were calculated for comparison after the training and test phases of all structures were completed. These parameters demonstrated that the training times of the CVANN and the real-valued artificial neural network (RVANN) were reduced using feature extraction algorithms without decreasing the accuracy rate, in accordance with our aim.
Acoustic voice analysis of prelingually deaf adults before and after cochlear implantation.
Evans, Maegan K; Deliyski, Dimitar D
2007-11-01
It is widely accepted that many severely to profoundly deaf adults have benefited from cochlear implants (CIs). However, limited research has been conducted to investigate changes in the voice and speech of prelingually deaf adults who receive CIs, a population well known for presenting with a variety of voice and speech abnormalities. The purpose of this study was to use acoustic analysis to explore changes in voice and speech for three prelingually deaf males pre- and postimplantation over 6 months. The following measurements, some measured in varying contexts, were obtained: fundamental frequency (F0), jitter, shimmer, noise-to-harmonic ratio, voice turbulence index, soft phonation index, amplitude and F0 variation, F0 range, speech rate, nasalance, and vowel production. Characteristics of vowel production were measured by determining the first formant (F1) and second formant (F2) of vowels in various contexts, the magnitude of F2 variation, and the rate of F2 variation. Perceptual measurements of pitch, pitch variability, loudness variability, speech rate, and intonation were obtained for comparison. Results are reported using descriptive statistics. The results showed patterns of change for some of the parameters, with considerable variation across subjects. All participants demonstrated a decrease in F0 in at least one context and a change in nasalance toward the norm as compared to their normal-hearing control. The two participants who were oral-language communicators were judged to produce vowels with an average of 97.2% accuracy, and the sign-language user demonstrated low percent accuracy for vowel production.
Wong, Yau; Chao, Jerry; Lin, Zhiping; Ober, Raimund J.
2014-01-01
In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used. PMID:25321248
Goss, Donald L.; Lewek, Michael; Yu, Bing; Ware, William B.; Teyhen, Deydre S.; Gross, Michael T.
2015-01-01
Context The injury incidence rate among runners is approximately 50%. Some individuals have advocated using an anterior–foot-strike pattern to reduce ground reaction forces and injury rates that they attribute to a rear–foot-strike pattern. The proportion of minimalist shoe wearers who adopt an anterior–foot-strike pattern remains unclear. Objective To evaluate the accuracy of self-reported foot-strike patterns, compare negative ankle- and knee-joint angular work among runners using different foot-strike patterns and wearing traditional or minimalist shoes, and describe average vertical-loading rates. Design Descriptive laboratory study. Setting Research laboratory. Patients or Other Participants A total of 60 healthy volunteers (37 men, 23 women; age = 34.9 ± 8.9 years, height = 1.74 ± 0.08 m, mass = 70.9 ± 13.4 kg) with more than 6 months of experience wearing traditional or minimalist shoes were instructed to classify their foot-strike patterns. Intervention(s) Participants ran in their preferred shoes on an instrumented treadmill with 3-dimensional motion capture. Main Outcome Measure(s) Self-reported foot-strike patterns were compared with 2-dimensional video assessments. Runners were classified into 3 groups based on video assessment: traditional-shoe rear-foot strikers (TSR; n = 22), minimalist-shoe anterior-foot strikers (MSA; n = 21), and minimalist-shoe rear-foot strikers (MSR; n = 17). Ankle and knee negative angular work and average vertical-loading rates during stance phase were compared among groups. Results Only 41 (68.3%) runners reported foot-strike patterns that agreed with the video assessment (κ = 0.42, P < .001). The TSR runners demonstrated greater ankle-dorsiflexion and knee-extension negative work than MSA and MSR runners (P < .05). The MSA (P < .001) and MSR (P = .01) runners demonstrated greater ankle plantar-flexion negative work than TSR runners. The MSR runners demonstrated a greater average vertical-loading rate than MSA and TSR runners (P < .001). Conclusions Runners often cannot report their foot-strike patterns accurately and may not automatically adopt an anterior–foot-strike pattern after transitioning to minimalist running shoes. PMID:26098391
2014-01-01
Introduction Prolonged ventilation and failed extubation are associated with increased harm and cost. The added value of heart and respiratory rate variability (HRV and RRV) during spontaneous breathing trials (SBTs) to predict extubation failure remains unknown. Methods We enrolled 721 patients in a multicenter (12 sites), prospective, observational study, evaluating clinical estimates of risk of extubation failure, physiologic measures recorded during SBTs, HRV and RRV recorded before and during the last SBT prior to extubation, and extubation outcomes. We excluded 287 patients because of protocol or technical violations, or poor data quality. Measures of variability (97 HRV, 82 RRV) were calculated from electrocardiogram and capnography waveforms followed by automated cleaning and variability analysis using Continuous Individualized Multiorgan Variability Analysis (CIMVA™) software. Repeated randomized subsampling with training, validation, and testing was used to derive and compare predictive models. Results Of 434 patients with high-quality data, 51 (12%) failed extubation. Two HRV and eight RRV measures showed statistically significant association with extubation failure (P < 0.0041, 5% false discovery rate). An ensemble average of five univariate logistic regression models using RRV during SBT, yielding a probability of extubation failure (called the WAVE score), demonstrated optimal predictive capacity. With repeated random subsampling and testing, the model showed a mean receiver operating characteristic area under the curve (ROC AUC) of 0.69, higher than heart rate (0.51), rapid shallow breathing index (RSBI; 0.61), and respiratory rate (0.63). After deriving a WAVE model based on all data, training-set performance demonstrated that the model increased its predictive power when applied to patients conventionally considered high risk: a WAVE score >0.5 in patients with RSBI >105 and in patients with perceived high risk of failure yielded a 3.0-fold (95% confidence interval (CI) 1.2 to 5.2) and 3.5-fold (95% CI 1.9 to 5.4) increase in risk of extubation failure, respectively. Conclusions Altered HRV and RRV (during the SBT prior to extubation) are significantly associated with extubation failure. A predictive model using RRV during the last SBT provided optimal accuracy of prediction in all patients, with improved accuracy when combined with clinical impression or RSBI. This model requires a validation cohort to evaluate accuracy and generalizability. Trial registration ClinicalTrials.gov NCT01237886. Registered 13 October 2010. PMID:24713049
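The WAVE score is described as an ensemble average of five univariate logistic regression models. A minimal sketch of that idea follows, using synthetic data and hypothetical RRV features; none of the variable names, coefficients, or data come from the study.

```python
# Minimal sketch of the ensemble idea under assumed inputs: average the
# predicted probabilities of several univariate logistic regressions, each fit
# on a single hypothetical RRV measure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))     # 5 hypothetical RRV measures per patient
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.5).astype(int)  # synthetic failure labels

# One univariate logistic regression per feature.
models = [LogisticRegression().fit(X[:, [j]], y) for j in range(X.shape[1])]

def wave_style_score(x_row):
    """Ensemble-averaged probability of extubation failure for one patient."""
    probs = [m.predict_proba(x_row[[j]].reshape(1, -1))[0, 1]
             for j, m in enumerate(models)]
    return float(np.mean(probs))

print(f"Ensemble score for first patient: {wave_style_score(X[0]):.2f}")
```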
Evaluation of Techniques Used to Estimate Cortical Feature Maps
Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2011-01-01
Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
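A toy illustration of the abstract's central point, that averaging repeated noisy samples improves map estimates more than adding sites, assuming a simple sinusoidal "map" and Gaussian response noise (both invented for the example):

```python
# Illustrative sketch, not the paper's code: estimate a smooth 1-D feature map
# from noisy point samples, comparing one noisy sample per site against the
# average of 10 repeated samples. Map shape and noise level are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def true_map(x):
    return np.sin(2 * np.pi * x)          # hypothetical low-complexity map

x_fine = np.linspace(0.0, 1.0, 500)
sites = np.linspace(0.0, 1.0, 25)         # 25 sampled sites
noise_sd = 0.5

single = true_map(sites) + rng.normal(0.0, noise_sd, sites.size)
averaged = (true_map(sites)
            + rng.normal(0.0, noise_sd, (10, sites.size))).mean(axis=0)

for label, est in (("1 sample/site", single), ("10-sample average", averaged)):
    interp = np.interp(x_fine, sites, est)            # linear interpolation
    rmse = np.sqrt(np.mean((interp - true_map(x_fine)) ** 2))
    print(f"{label}: RMSE = {rmse:.3f}")
```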
NASA Astrophysics Data System (ADS)
Richards, Taylor; Sturgeon, Gregory M.; Ramirez-Giraldo, Juan Carlos; Rubin, Geoffrey; Segars, Paul; Samei, Ehsan
2017-03-01
The purpose of this study was to quantify the accuracy of coronary computed tomography angiography (CTA) stenosis measurements using newly developed physical coronary plaque models attached to a base dynamic cardiac phantom (Shelley Medical DHP-01). Coronary plaque models (5 mm diameter, 50% stenosis, and 32 mm long) were designed and 3D-printed with tissue-equivalent materials (calcified plaque with iodine-enhanced lumen). Realistic cardiac motion was achieved by fitting known cardiac motion vectors to left ventricle volume-time curves to create synchronized heart motion profiles executed by the base cardiac phantom. Realistic coronary CTA acquisition was accomplished by synthesizing corresponding ECG waveforms for gating and reconstruction purposes. All scans were acquired using a retrospective gating technique on a dual-source CT system (Siemens SOMATOM FLASH) with 75 ms temporal resolution. Multi-planar reformatted images were reconstructed along vessel centerlines and the enhanced lumens were manually segmented by 5 independent operators. On average, the stenosis measurement accuracy was a 0.9% positive bias for the motion-free condition (0 bpm). The measurement accuracy monotonically decreased to an 18.5% negative bias at 90 bpm. Contrast-to-noise ratio (CNR), vessel circularity, and segmentation conformity also decreased monotonically with increasing heart rate. These results demonstrate successful implementation of the base cardiac phantom with 3D-printed coronary plaque models, adjustable motion profiles, and coordinated ECG waveforms. They further show the utility of the model to ascertain metrics of coronary CT accuracy and image quality under a variety of plaque, motion, and acquisition conditions.
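For concreteness, the percent-diameter-stenosis arithmetic underlying the bias figures can be sketched as follows; the "measured" lumen diameter is hypothetical, chosen to mimic the underestimation reported at high heart rates.

```python
# Worked example of diameter-based percent stenosis; measured values are
# hypothetical, not the study's segmentation results.
def percent_stenosis(reference_diameter_mm, minimal_lumen_diameter_mm):
    """Percent diameter stenosis relative to the reference segment."""
    return 100.0 * (1.0 - minimal_lumen_diameter_mm / reference_diameter_mm)

truth = percent_stenosis(5.0, 2.5)       # designed plaque model: 50% stenosis
measured = percent_stenosis(5.0, 3.0)    # hypothetical blurred segmentation at high heart rate
print(f"true = {truth:.1f}%, measured = {measured:.1f}%, "
      f"bias = {measured - truth:+.1f} percentage points")
```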
Spreading a medication administration intervention organizationwide in six hospitals.
Kliger, Julie; Singer, Sara; Hoffman, Frank; O'Neil, Edward
2012-02-01
Six hospitals from the San Francisco Bay Area participated in a 12-month quality improvement project conducted by the Integrated Nurse Leadership Program (INLP). A quality improvement intervention that focused on improving medication administration accuracy was spread from two pilot units to all inpatient units in the hospitals. INLP developed a 12-month curriculum, presented in a combination of off-site training sessions and hospital-based training and consultant-led meetings, to teach clinicians the key skills needed to drive organizationwide change. Each hospital established a nurse-led project team, as well as unit teams to address six safety processes designed to improve medication administration accuracy: compare medication to the medication administration record; keep medication labeled throughout; check two patient identifications; explain drug to patient (if applicable); chart immediately after administration; and protect process from distractions and interruptions. From baseline until one year after project completion, the six hospitals improved their medication accuracy rates, on average, from 83.4% to 98.0% in the spread units. The spread units also improved safety processes overall from 83.1% to 97.2%. During the same time, the initial pilot units also continued to improve accuracy from 94.0% to 96.8% and safety processes overall from 95.3% to 97.2%. With thoughtful planning, engaging those doing the work early and focusing on the "human side of change" along with technical knowledge of improvement methodologies, organizations can spread initiatives enterprisewide. This program required significant training of frontline workers in problem-solving skills, leading change, team management, data tracking, and communication.
Negoita, Madalina; Zolgharni, Massoud; Dadkho, Elham; Pernigo, Matteo; Mielewczik, Michael; Cole, Graham D; Dhutia, Niti M; Francis, Darrel P
2016-09-01
To determine the optimal frame rate at which reliable heart wall velocities can be assessed by speckle tracking. Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, highlighting the temporal resolution at which reliable results can be obtained. 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Tissue velocities were estimated by automated speckle tracking. Above 40 frames/s the peak velocity was reliably measured. When the frame rate was lower, the inter-frame interval containing the instant of highest velocity also contained lower velocities, and therefore the average velocity in that interval was an underestimate of the clinically desired instantaneous maximum velocity. The higher the frame rate, the more accurately maximum velocities are identified by speckle tracking; above approximately 40 frames/s, however, further increases in frame rate yield little additional gain in measured peak velocity. We provide in an online supplement the vendor-independent software we used for automatic speckle-tracked velocity assessment to help others working in this field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
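The frame-rate effect described above can be reproduced with a toy simulation: decimating a synthetic tissue-velocity trace lowers the apparent peak because the true maximum falls between retained frames. The waveform shape, timing, and amplitude are assumptions made for the example.

```python
# Sketch of peak-velocity underestimation at low frame rates, under an assumed
# (hypothetical) Gaussian-shaped systolic velocity pulse.
import numpy as np

t = np.arange(0.0, 1.0, 0.001)                        # one cycle sampled at 1 kHz
velocity = 8.0 * np.exp(-((t - 0.157) / 0.02) ** 2)   # hypothetical peak, cm/s

for frame_rate in (100, 40, 10):                      # frames per second
    step = int(round(1000 / frame_rate))              # keep every step-th sample
    sampled_peak = velocity[::step].max()
    print(f"{frame_rate:3d} frames/s: peak = {sampled_peak:.2f} cm/s "
          f"(true {velocity.max():.2f} cm/s)")
```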
Paramedic Application of a Triage Sieve: A Paper-Based Exercise.
Cuttance, Glen; Dansie, Kathryn; Rayner, Tim
2017-02-01
Introduction Triage is the systematic prioritization of casualties when there is an imbalance between the needs of these casualties and resource availability. The triage sieve is a recognized process for prioritizing casualties for treatment during mass-casualty incidents (MCIs). While the application of a triage sieve generally is well-accepted, the measurement of its accuracy has been somewhat limited. Obtaining reliable measures for triage sieve accuracy rates is viewed as a necessity for future development in this area. The goal of this study was to investigate how theoretical knowledge acquisition and the practical application of an aide-memoire impacted triage sieve accuracy rates. Two hundred ninety-two paramedics were allocated randomly to one of four separate sub-groups: a non-intervention control group and three intervention groups that received an educational review session, an aide-memoire, or both. Participants were asked to triage sieve 20 casualties using a previously trialed questionnaire. The non-intervention control group had a correct accuracy rate of 47%, with a similar proportion of casualties under-triaged (37%) and a significantly lower proportion over-triaged (16%). The provision of either an educational review or an aide-memoire significantly increased the correct triage sieve accuracy rate, to 77% and 90%, respectively. Participants who received both the educational review and the aide-memoire had an overall accuracy rate of 89%. Over-triage rates were found not to differ significantly across any of the study groups. This study supports the use of an aide-memoire for maximizing MCI triage accuracy rates. A "just-in-time" educational refresher provided comparable benefits; however, its practical application to the MCI setting has significant operational limitations. In addition, this study provides some guidance on triage sieve accuracy rate measures that can be applied to define acceptable performance of a triage sieve during an MCI. Cuttance G, Dansie K, Rayner T. Paramedic application of a triage sieve: a paper-based exercise. Prehosp Disaster Med. 2017;32(1):3-13.
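A small sketch of the accuracy bookkeeping implied above: each assigned priority is compared with the correct priority, and the correct, under-triage, and over-triage proportions are tallied. The priority coding and casualty lists are hypothetical, not from the study's questionnaire.

```python
# Hypothetical triage scoring sketch: tally correct, under-triaged (assigned
# urgency too low), and over-triaged (assigned urgency too high) casualties.
PRIORITY = {"P1": 1, "P2": 2, "P3": 3}   # 1 = most urgent (assumed coding)

def triage_rates(assigned, correct):
    n = len(assigned)
    right = sum(a == c for a, c in zip(assigned, correct))
    # Under-triage: assigned a lower urgency (higher number) than correct.
    under = sum(PRIORITY[a] > PRIORITY[c] for a, c in zip(assigned, correct))
    over = n - right - under
    return right / n, under / n, over / n

assigned = ["P1", "P2", "P3", "P2", "P1"]     # illustrative answers
correct  = ["P1", "P1", "P3", "P3", "P1"]     # illustrative answer key
print("correct/under/over:", triage_rates(assigned, correct))
```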
Comparison of voice-automated transcription and human transcription in generating pathology reports.
Al-Aynati, Maamoun M; Chorneyko, Katherine A
2003-06-01
Software that can convert spoken words into written text has been available since the early 1980s. Early continuous speech systems were developed in 1994, with the latest commercially available editions having a claimed accuracy of up to 98% of speech recognition at natural speech rates. To evaluate the efficacy of one commercially available voice-recognition software system with pathology vocabulary in generating pathology reports and to compare this with human transcription. To draw cost analysis conclusions regarding human versus computer-based transcription. Two hundred six routine pathology reports from the surgical pathology material handled at St Joseph's Healthcare, Hamilton, Ontario, were generated simultaneously using computer-based transcription and human transcription. The following hardware and software were used: a desktop 450-MHz Intel Pentium III processor with 192 MB of RAM, a speech-quality sound card (Sound Blaster), a noise-canceling headset microphone, and IBM ViaVoice Pro version 8 with pathology vocabulary support (Voice Automated, Huntington Beach, Calif). The cost of the hardware and software used was approximately Can$2250. A total of 23,458 words were transcribed using both methods, with a mean of 114 words per report. The mean accuracy rate was 93.6% (range, 87.4%-96%) using the computer software, compared with a mean accuracy of 99.6% (range, 99.4%-99.8%) for human transcription (P < .001). Time needed to edit documents by the primary evaluator (M.A.) using the computer was on average twice that needed for editing the documents produced by human transcriptionists (range, 1.4-3.5 times). The extra time needed to edit documents was 67 minutes per week (13 minutes per day). Computer-based continuous speech-recognition systems in pathology can be successfully used in pathology practice, even during the handling of gross pathology specimens. The relatively low accuracy rate of this voice-recognition software, with its resultant increased editing burden on pathologists, may not encourage its application on a wide scale in pathology departments with sufficient human transcription services, despite significant potential financial savings. However, computer-based transcription represents an attractive and relatively inexpensive alternative to human transcription in departments where there is a shortage of transcription services, and will no doubt become more commonly used in pathology departments in the future.
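The editing-burden argument follows directly from the accuracy figures. A back-of-envelope calculation using only numbers taken from the abstract:

```python
# Error-count arithmetic from the abstract's own figures: accuracy is the
# fraction of correctly transcribed words, so the editing burden scales with
# the residual error count.
words_total = 23_458
reports = 206
print(f"mean words/report: {words_total / reports:.0f}")   # ~114

for method, accuracy in (("voice recognition", 0.936), ("human", 0.996)):
    errors = words_total * (1 - accuracy)
    print(f"{method}: ~{errors:.0f} erroneous words over {reports} reports "
          f"(~{errors / reports:.1f} per report)")
```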
The Impact of Target, Wording, and Duration on Rating Accuracy for Direct Behavior Rating
ERIC Educational Resources Information Center
Chafouleas, Sandra M.; Jaffery, Rose; Riley-Tillman, T. Chris; Christ, Theodore J.; Sen, Rohini
2013-01-01
The purpose of this study was to extend evaluation of rater accuracy using "Direct Behavior Rating--Single-Item Scales" (DBR-SIS). Extension of prior research was accomplished through use of criterion ratings derived from both systematic direct observation and expert DBR-SIS scores, and also through control of the durations over which…
Hall, Justin M; Azar, Frederick M; Miller, Robert H; Smith, Richard; Throckmorton, Thomas W
2014-09-01
We compared accuracy and reliability of a traditional method of measurement (the most cephalad vertebral spinous process that can be reached by a patient with the extended thumb) to estimates made with the shoulder in abduction to determine if there were differences between the two methods. Six physicians with fellowship training in sports medicine or shoulder surgery estimated measurements in 48 healthy volunteers. Three were randomly chosen to make estimates of both internal rotation measurements for each volunteer. An independent observer made objective measurements on lateral scoliosis films (spinous process method) or with a goniometer (abduction method). Examiners were blinded to objective measurements as well as to previous estimates. Intraclass correlation coefficients for interobserver reliability for the traditional method averaged 0.75, indicating good agreement among observers. The difference between the vertebral level estimated by the examiner and the actual radiographic level averaged 1.8 levels. The intraclass correlation coefficient for interobserver reliability for the abduction method averaged 0.81 for all examiners, indicating near-perfect agreement. Confidence intervals indicated that estimates were an average of 8° different from the objective goniometer measurements. Pearson correlation coefficients of intraobserver reliability for the abduction method averaged 0.94, indicating near-perfect agreement within observers. Confidence intervals demonstrated repeated estimates between 5° and 10° of the original. Internal rotation estimates made with the shoulder abducted demonstrated interobserver reliability superior to that of spinous process estimates, and reproducibility was high. On the basis of this finding, we now take glenohumeral internal rotation measurements with the shoulder in abduction and use a goniometer to maximize accuracy and objectivity. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
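The abstract does not state which ICC form was used; the sketch below implements one common choice, ICC(2,1) (two-way random effects, absolute agreement, single rater, following Shrout and Fleiss), applied to hypothetical internal-rotation estimates.

```python
# ICC(2,1) sketch on hypothetical ratings: rows are subjects, columns raters.
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)   # between raters
    ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical internal-rotation estimates (degrees), 6 subjects x 3 raters.
ratings = [[40, 45, 42],
           [55, 50, 53],
           [30, 35, 33],
           [60, 58, 62],
           [45, 44, 47],
           [50, 52, 49]]
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```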
Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi
2012-09-01
Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference Computed Tomography (CT) scan and final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. The patient setup process takes orthogonal X-ray flat panel detector (FPD) images and the therapists adjust the patient table position in six degrees of freedom to register the reference position by manual or auto- (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged in the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged in the 15th and 16th fractions)]. 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion beam scanning therapy was good and improved with increasing therapist experience.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, G
Purpose: To quantify the delivered activity accuracy of radium-223 dichloride injections administered for castration-resistant prostate cancer with symptomatic bone metastases, including the impact of residual activity in the spent syringe and the dispensing accuracy of Ra-223. Methods: The administration is by slow intravenous injection over 1 minute, followed by double flushing of the 10 mL syringe and IV line with saline. Eighty (80) procedures were used to investigate variations in activity from the prescribed amount, where prescribed activity (µCi) = 1.35 × patient weight (kg). Activity was dispensed into a 10 mL syringe using a NIST-traceable Capintec CRC-25R chamber, and a cross-calibrated Capintec CRC-15R was used to measure activity in the syringe immediately before and after administration. Results: Patient weights ranged from 121 lb to 235 lb, with doses ranging from 74.25 µCi to 144.2 µCi. The deviation of dispensed dose from prescribed dose averaged +2.1%, with a range of −1.1% to +5.7%. The dose measured before administration ranged from 79.3 µCi to 154.9 µCi; deviation from the dispensed dose averaged +2.9%, with a range of −0.8% to +7.3%. The average residual dose post injection was 2.5 µCi, or 2.2% of the pre-injection activity, ranging from 0.9 µCi to 6.2 µCi (0.7% to 5.4%, respectively). Subtracting the residual activity from the activity measured before injection and comparing the result to the prescribed dose gave an average variation of +2.7%, with a range of −0.8% to +7.4%. Conclusion: The case with the 6.2 µCi maximum residual dose involved two syringes. A small-activity case (82.8 µCi) produced the maximum variation (+7.4%) between the residual-corrected measured dose and the prescribed dose. The average +2.1% excess of dispensed Ra-223 activity over the prescribed dosage was seen to counteract the average 2.2% residual dosage remaining in the syringe.
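A worked example of the dosing arithmetic stated above, with a hypothetical patient weight and syringe assays (the 2.5 µCi residual is the abstract's reported average):

```python
# Worked example of the abstract's dosing formula:
#   prescribed activity (uCi) = 1.35 x patient weight (kg),
# with net administered activity = pre-injection assay - syringe residual.
# The patient weight and assay values below are hypothetical.
def prescribed_activity_uci(weight_kg):
    return 1.35 * weight_kg

weight_kg = 80.0                                  # hypothetical patient
prescribed = prescribed_activity_uci(weight_kg)   # 108.0 uCi
pre_injection = 110.5                             # hypothetical syringe assay, uCi
residual = 2.5                                    # abstract's average residual, uCi
administered = pre_injection - residual

deviation = 100.0 * (administered - prescribed) / prescribed
print(f"prescribed {prescribed:.1f} uCi, administered {administered:.1f} uCi "
      f"({deviation:+.1f}% vs prescription)")
```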
NASA Technical Reports Server (NTRS)
Tom, C.; Miller, L. D.; Christenson, J. W.
1978-01-01
A landscape model was constructed with 34 land-use, physiographic, socioeconomic, and transportation maps. A simple Markov land-use trend model was constructed from observed rates of change and nonchange from photointerpreted 1963 and 1970 airphotos. Seven multivariate land-use projection models predicting 1970 spatial land-use changes achieved accuracies from 42 to 57 percent. A final modeling strategy was designed, which combines both Markov trend and multivariate spatial projection processes. Landsat-1 image preprocessing included geometric rectification/resampling, spectral-band, and band/insolation ratioing operations. A new, systematic grid-sampled point training-set approach proved to be useful when tested on the four original MSS bands, ten image bands and ratios, and all 48 image and map variables (less land use). Ten-variable accuracy was raised more than 15 percentage points, from 38.4 to 53.9 percent, with the use of the 31 ancillary variables. A land-use classification map was produced with an optimal ten-channel subset of four image bands and six ancillary map variables. Point-by-point verification of 331,776 points against a 1972/1973 U.S. Geological Survey (USGS) land-use map prepared with airphotos and the same classification scheme showed average first-, second-, and third-order accuracies of 76.3, 58.4, and 33.0 percent, respectively.
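The Markov trend component can be sketched as a row-stochastic transition matrix applied to the current land-use distribution; the classes and probabilities below are hypothetical, not the study's 1963-1970 estimates.

```python
# Minimal Markov land-use trend sketch: project the landscape one period ahead
# by multiplying the current class distribution by a transition matrix.
import numpy as np

classes = ["urban", "agriculture", "forest"]
# P[i, j] = probability a cell in class i at time t is in class j at t+1
# (hypothetical values; each row sums to 1).
P = np.array([[0.95, 0.03, 0.02],
              [0.10, 0.85, 0.05],
              [0.04, 0.06, 0.90]])

state = np.array([0.20, 0.50, 0.30])   # current landscape proportions
projected = state @ P                  # one-period Markov projection
for c, p in zip(classes, projected):
    print(f"{c}: {p:.3f}")
```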
Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale.
Wilson, Adam M; Likens, Gene E
2015-01-01
Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for 'edit wars' surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically "controversial" (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider "noncontroversial" (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.
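The volatility summary above uses a geometric mean ± SD of daily edit counts. A sketch of that computation on hypothetical counts (log-transform, average, exponentiate); note that days with zero edits would need an offset before taking logs.

```python
# Geometric mean and geometric SD of daily edit counts, computed on the log
# scale. The daily counts below are hypothetical, not the article's data.
import numpy as np

daily_edits = np.array([1, 3, 2, 7, 1, 4, 2, 9, 1, 2], dtype=float)
logs = np.log(daily_edits)          # all counts > 0 here, so no offset needed
gmean = np.exp(logs.mean())
gsd = np.exp(logs.std(ddof=1))      # multiplicative spread factor
print(f"edits/day: geometric mean = {gmean:.1f}, geometric SD = {gsd:.1f}")
```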
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokhrel, Damodar, E-mail: damodar.pokhrel@uky.edu; Sood, Sumit; McClinton, Christopher
To retrospectively evaluate quality, efficiency, and delivery accuracy of volumetric-modulated arc therapy (VMAT) plans for single-fraction treatment of thoracic vertebral metastases using image-guided stereotactic body radiosurgery (SBRS) according to RTOG 0631 dosimetric compliance criteria. After obtaining credentialing via MD Anderson spine phantom irradiation validation, 10 patients with thoracic vertebral metastases previously treated with noncoplanar hybrid arcs (1 to 2 3D-conformal partial arcs plus 7 to 9 intensity-modulated radiation therapy beams) were retrospectively re-optimized with VMAT using 3 full coplanar arcs. Tumors were located between T2 and T12. Contrast-enhanced T1/T2-weighted magnetic resonance images were coregistered with the planning computed tomography, and planning target volumes (PTV) were between 14.4 and 230.1 cc (median = 38.0 cc). Prescription dose was 16 Gy in 1 fraction with 6 MV beams on a Novalis-TX linear accelerator equipped with micro multileaf collimators. Each plan was assessed for target coverage using the conformality index, the conformation number, the ratio of the volume receiving 50% of the prescription dose to the PTV (R50%), the homogeneity index (HI), and PTV-1600 coverage per RTOG 0631 requirements. Organ-at-risk doses were evaluated for maximum doses to spinal cord (D0.03cc, D0.35cc), partial spinal cord (D10%), esophagus (D0.03cc and D5cc), heart (D0.03cc and D15cc), and lung (V5, V10, and maximum dose to 1000 cc of lung). Dose delivery efficiency and accuracy of each VMAT-SBRS plan were assessed using a quality assurance (QA) plan on a MapCHECK device. Total beam-on time was recorded during the QA procedure, and a clinical gamma index (2%/2 mm and 3%/3 mm) was used to compare agreement between planned and measured doses. All 10 VMAT-SBRS plans met RTOG 0631 dosimetric requirements for PTV coverage. The plans demonstrated highly conformal and homogeneous coverage of the vertebral PTV, with mean HI, conformality index, conformation number, and R50% values of 0.13 ± 0.03 (range: 0.09 to 0.18), 1.03 ± 0.04 (range: 0.98 to 1.09), 0.81 ± 0.06 (range: 0.72 to 0.89), and 4.2 ± 0.94 (range: 2.7 to 5.4), respectively. All 10 patients met protocol guidelines, with maximum dose to spinal cord (average: 8.83 ± 1.9 Gy, range: 5.9 to 10.9 Gy), dose to 0.35 cc of spinal cord (average: 7.62 ± 1.7 Gy, range: 5.4 to 9.6 Gy), and dose to 10% of partial spinal cord (average: 6.31 ± 1.5 Gy, range: 3.5 to 8.5 Gy) less than 14, 10, and 10 Gy, respectively. For all 10 patients, the maximum dose to esophagus (average: 9.41 ± 4.3 Gy, range: 1.5 to 14.9 Gy) and dose to 5 cc of esophagus (average: 7.43 ± 3.8 Gy, range: 1.1 to 11.8 Gy) were kept below the protocol limits of 16 Gy and 11.9 Gy, respectively. Similarly, all 10 patients met protocol compliance criteria, with maximum dose to heart (average: 4.62 ± 3.5 Gy, range: 1.3 to 10.2 Gy) and dose to 15 cc of heart (average: 2.23 ± 1.8 Gy, range: 0.3 to 5.6 Gy) less than 22 and 16 Gy, respectively. Lung doses were kept much lower than protocol guidelines for all 10 patients. The total number of monitor units was, on average, 6919 ± 1187. The average beam-on time was 11.5 ± 2.0 minutes. The VMAT plans demonstrated dose delivery accuracy of 95.8 ± 0.7%, on average, for the clinical gamma passing rate with 2%/2 mm criteria, and 98.3 ± 0.8%, on average, with 3%/3 mm criteria.
All VMAT-SBRS plans were considered clinically acceptable per RTOG 0631 dosimetric compliance criteria. VMAT planning provided highly conformal and homogeneous dose distributions for the vertebral PTV with low doses to the spinal cord as well as organs-at-risk such as esophagus, heart, and lung. Higher QA pass rates and shorter beam-on time suggest that VMAT-SBRS is a clinically feasible, fast, and effective treatment option for patients with thoracic vertebral metastases.
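Definitions of the plan-quality indices above vary across reports; the sketch below uses one common set of choices (conformality index as prescription isodose volume over PTV, the van't Riet conformation number, and R50% as the half-prescription volume over PTV), which may not match RTOG 0631's exact formulas. The volumes are hypothetical, picked to land near the reported means.

```python
# Hedged sketch of common plan-quality indices; formula choices are
# assumptions, and all volumes below are hypothetical.
def conformality_index(prescription_isodose_volume_cc, ptv_cc):
    return prescription_isodose_volume_cc / ptv_cc

def conformation_number(ptv_covered_cc, ptv_cc, prescription_isodose_volume_cc):
    # van't Riet: coverage fraction x selectivity fraction.
    return (ptv_covered_cc / ptv_cc) * (ptv_covered_cc / prescription_isodose_volume_cc)

def r50(volume_receiving_half_rx_cc, ptv_cc):
    return volume_receiving_half_rx_cc / ptv_cc

# Hypothetical plan: 38.0 cc PTV, 36.1 cc of it covered by the 16 Gy isodose,
# which encloses 39.2 cc in total; 160 cc receives >= 8 Gy.
print(f"CI  = {conformality_index(39.2, 38.0):.2f}")
print(f"CN  = {conformation_number(36.1, 38.0, 39.2):.2f}")
print(f"R50% = {r50(160.0, 38.0):.1f}")
```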
Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
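The resubstitution-versus-cross-validation contrast made above is easy to demonstrate on synthetic data; the sketch below uses a generic scikit-learn classification tree rather than the authors' models or data.

```python
# Sketch: resubstitution accuracy (scoring a tree on its own training data) is
# optimistic compared with cross-validated accuracy, which better approximates
# performance on independent data. Synthetic data, not the lichen surveys.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

resub = tree.score(X, y)                                   # optimistic estimate
cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
print(f"resubstitution accuracy = {resub:.2f}, 10-fold CV accuracy = {cv:.2f}")
```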
SU-E-T-291: Dosimetric Accuracy of Multitarget Single Isocenter Radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tannazi, F; Huang, M; Thomas, E
2015-06-15
Purpose: To evaluate the accuracy of single-isocenter multiple-target VMAT radiosurgery (SIMT-VMAT-SRS) by analysis of pre-treatment verification measurements. Methods: Our QA procedure used a phantom having a coronal plane for EDR2 film and a 0.125 cm3 ionization chamber. Film measurements were obtained for the largest and smallest targets for each plan. An ionization chamber measurement (ICM) was obtained for sufficiently large targets. Films were converted to dose using a patient-specific calibration curve and compared to treatment planning system calculations. Alignment error was estimated using image registration. The gamma index was calculated for 3%/3 mm and 3%/1 mm criteria. The median dose in the target region and, for plans having an ICM, the average dose in the central 5 mm were calculated. Results: The average equivalent target diameter of the 48 targets was 15 mm (3–43 mm). Twenty of the 24 plans had an ICM for the largest target (diameter 11–43 mm); the mean ratio of chamber reading to expected dose (ED) and the mean ratio of film to ED (averaged over the central 5 mm) were 1.001 (0.025 SD) and 1.000 (0.029 SD), respectively. For all plans, the mean ratio of film to ED (from the median dose in the target region) was 0.997 (0.027 SD). The mean registration vector was (0.15, 0.29) mm, with an average magnitude of 0.96 mm. Before (after) registration, the average fraction of pixels having gamma < 1 was 99.3% (99.6%) and 89.1% (97.6%) for 3%/3 mm and 3%/1 mm criteria, respectively. Conclusion: Our results demonstrate the dosimetric accuracy of SIMT-VMAT-SRS for targets as small as 3 mm. Film dosimetry provides accurate assessment of the absolute dose delivered to targets too small for an ionization chamber measurement; however, the relatively large registration vector indicates that image guidance should replace laser-based setup for patient-specific evaluation of geometric accuracy.
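A simplified 1-D gamma-index sketch (global dose normalization, exhaustive search over reference points) illustrating the 3%/3 mm and 3%/1 mm criteria used above; clinical film analysis works on 2-D planes with finer interpolation, and the dose profiles here are hypothetical.

```python
# Toy 1-D gamma index: for each measured point, find the minimum combined
# dose-difference / distance-to-agreement metric over all reference points.
import numpy as np

def gamma_pass_rate(ref_dose, meas_dose, positions_mm, dose_tol, dta_mm):
    """Fraction of measured points with gamma <= 1 against the reference."""
    norm = dose_tol * ref_dose.max()                      # global dose criterion
    passed = 0
    for i in range(meas_dose.size):
        dd = (meas_dose[i] - ref_dose) / norm             # normalized dose diffs
        dr = (positions_mm[i] - positions_mm) / dta_mm    # normalized distances
        passed += np.sqrt(dd ** 2 + dr ** 2).min() <= 1.0
    return passed / meas_dose.size

x = np.arange(0.0, 50.0, 0.5)                    # 0.5 mm grid (hypothetical)
ref = np.exp(-((x - 25.0) / 8.0) ** 2)           # planned profile
meas = 1.05 * np.exp(-((x - 25.5) / 8.0) ** 2)   # 5% hotter, shifted 0.5 mm
for dta in (3.0, 1.0):
    print(f"3%/{dta:.0f} mm pass rate: "
          f"{gamma_pass_rate(ref, meas, x, 0.03, dta):.1%}")
```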