Sample records for normalized gross error

  1. A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data is one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods require massive resources in both space and time. This paper presents a new method that constructs a Kd-tree over the data, searches with the k-nearest-neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
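
The Kd-tree/k-NN scheme described above can be sketched as follows. This is a generic illustration, not the authors' exact method: the threshold rule (mean plus two standard deviations of the mean k-neighbor distance) and the parameter values are assumptions for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, n_sigma=2.0):
    """Return a boolean inlier mask based on each point's mean k-NN distance."""
    tree = cKDTree(points)                 # Kd-tree construction
    # query k+1 neighbors: the first hit is the point itself at distance 0
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)   # mean distance to the k neighbors
    threshold = mean_knn.mean() + n_sigma * mean_knn.std()
    return mean_knn <= threshold

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3))              # dense synthetic cluster
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])  # one gross outlier
mask = remove_outliers(cloud)
print(int(mask.sum()), "inliers of", len(cloud))
```

A point far from the cluster has a much larger mean neighbor distance than the bulk of the data and is rejected; only one k-NN query per point is needed, which is where the memory and time savings come from.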

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passarge, M; Fix, M K; Manser, P

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies infield radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
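
The standard global gamma evaluation (3%, 3 mm) mentioned in the abstract can be sketched in one dimension. This is a generic brute-force illustration of the gamma index, not the authors' EPID-specific implementation; the dose profiles below are synthetic.

```python
import numpy as np

def gamma_1d(x, ref_dose, eval_dose, dose_tol=0.03, dta_mm=3.0):
    """Per-point 1D global gamma index (dose_tol of max dose, dta_mm DTA)."""
    norm_dose = dose_tol * ref_dose.max()           # global dose normalization
    gam = np.empty_like(eval_dose, dtype=float)
    for i in range(len(x)):
        dd = (eval_dose[i] - ref_dose) / norm_dose  # dose-difference term
        dr = (x[i] - x) / dta_mm                    # distance-to-agreement term
        gam[i] = np.sqrt(dd**2 + dr**2).min()       # minimize over ref points
    return gam

x = np.linspace(0.0, 100.0, 201)            # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)     # synthetic reference profile
shifted = np.exp(-((x - 51.0) / 20.0) ** 2)  # same profile with 1 mm shift
pass_rate = (gamma_1d(x, ref, shifted) <= 1.0).mean()
print(f"gamma pass rate: {pass_rate:.1%}")
```

A 1 mm shift stays well inside the 3 mm distance-to-agreement criterion, so every point passes; a gross delivery error would push gamma above 1 over a large region.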

  3. Immortalization of normal human mammary epithelial cells in two steps by direct targeting of senescence barriers does not require gross genomic alterations

    DOE PAGES

    Garbe, James C.; Vrba, Lukas; Sputova, Klara; ...

    2014-10-29

    Telomerase reactivation and immortalization are critical for human carcinoma progression. However, little is known about the mechanisms controlling this crucial step, due in part to the paucity of experimentally tractable model systems that can examine human epithelial cell immortalization as it might occur in vivo. We achieved efficient non-clonal immortalization of normal human mammary epithelial cells (HMEC) by directly targeting the 2 main senescence barriers encountered by cultured HMEC. The stress-associated stasis barrier was bypassed using shRNA to p16INK4; replicative senescence due to critically shortened telomeres was bypassed in post-stasis HMEC by c-MYC transduction. Thus, 2 pathologically relevant oncogenic agents are sufficient to immortally transform normal HMEC. The resultant non-clonal immortalized lines exhibited normal karyotypes. Most human carcinomas contain genomically unstable cells, with widespread instability first observed in vivo in pre-malignant stages; in vitro, instability is seen as finite cells with critically shortened telomeres approach replicative senescence. Our results support our hypotheses that: (1) telomere-dysfunction induced genomic instability in pre-malignant finite cells may generate the errors required for telomerase reactivation and immortalization, as well as many additional “passenger” errors carried forward into resulting carcinomas; (2) genomic instability during cancer progression is needed to generate errors that overcome tumor suppressive barriers, but not required per se; bypassing the senescence barriers by direct targeting eliminated a need for genomic errors to generate immortalization. Achieving efficient HMEC immortalization, in the absence of “passenger” genomic errors, should facilitate examination of telomerase regulation during human carcinoma progression, and exploration of agents that could prevent immortalization.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, David A.

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on August 21, 2013. Representatives from the U.S. Nuclear Regulatory Commission (NRC) and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and the comparison of results using the duplicate error ratio (DER), also known as the normalized absolute difference, is tabulated. All DER values were less than 3 and results are consistent with low (e.g., background) concentrations.
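
The duplicate error ratio used to compare the split-sample results is the absolute difference of the two measurements normalized by their combined one-sigma uncertainty. A minimal sketch follows; the activity values below are hypothetical, not taken from the report.

```python
import math

def duplicate_error_ratio(x1, s1, x2, s2):
    """DER (normalized absolute difference): |x1 - x2| / sqrt(s1^2 + s2^2).

    Uncertainties s1 and s2 must be expressed at one sigma.
    """
    return abs(x1 - x2) / math.sqrt(s1**2 + s2**2)

# hypothetical split-sample gross-alpha results (pCi/L) with 1-sigma errors
der = duplicate_error_ratio(5.2, 0.6, 4.8, 0.5)
print(f"DER = {der:.2f}")   # DER < 3 => no significant difference
```

A DER at or below 3 indicates agreement at roughly the 99% confidence level (ANSI N42.22); uncertainties reported at two sigma must first be divided back to one sigma, as noted in the later split-sample records.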

  5. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of monitoring results. Based on a gross error correction method, quasi-accurate detection (QUAD), an automatic unwrapping error correction method is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented, and the method is then compared with the L1-norm method on simulated data. Results show that both methods effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods complement each other when the proportion is relatively high. Finally, real SAR data are tested for phase unwrapping error correction; results show that the new method corrects phase unwrapping errors successfully in practical application.

  6. A strategy for reducing gross errors in the generalized Born models of implicit solvation

    PubMed Central

    Onufriev, Alexey V.; Sigalov, Grigori

    2011-01-01

    The “canonical” generalized Born (GB) formula [W. C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2 kBT relative to numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
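
The canonical (Still et al.) GB formula referenced above can be written down directly. The sketch below uses the standard textbook form of the GB energy; the charge, radius, and dielectric values are illustrative, with energies in kcal/mol, charges in e, and lengths in angstroms.

```python
import math

KE = 332.06  # Coulomb constant, kcal*A/(mol*e^2)

def f_gb(r, Ri, Rj):
    """Canonical GB function: sqrt(r^2 + Ri*Rj*exp(-r^2 / (4*Ri*Rj)))."""
    return math.sqrt(r * r + Ri * Rj * math.exp(-r * r / (4.0 * Ri * Rj)))

def gb_energy(charges, coords, radii, eps_in=1.0, eps_out=78.5):
    """Total GB solvation energy: -1/2 (1/e_in - 1/e_out) sum_ij qi*qj/f_gb.

    The double sum includes self-terms (i == j), which reduce to the Born
    self-energies since f_gb(0, R, R) = R.
    """
    total = 0.0
    for qi, ci, Ri in zip(charges, coords, radii):
        for qj, cj, Rj in zip(charges, coords, radii):
            r = math.dist(ci, cj)
            total += qi * qj / f_gb(r, Ri, Rj)
    return -0.5 * KE * (1.0 / eps_in - 1.0 / eps_out) * total

# single +1e charge with a 2 A effective Born radius reduces to the Born formula
print(round(gb_energy([1.0], [(0.0, 0.0, 0.0)], [2.0]), 2))  # about -81.96
```

The paper's point is that even with exact radii Ri, this Green function captures only one spatial mode of the Poisson solution, so individual pairwise terms can be off by more than 2 kBT; their proposed replacement additionally uses the gradients of the radii.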

  7. A model-data comparison of gross primary productivity: Results from the North American Carbon Program site synthesis

    Treesearch

    Kevin Schaefer; Christopher R. Schwalm; Chris Williams; M. Altaf Arain; Alan Barr; Jing M. Chen; Kenneth J. Davis; Dimitre Dimitrov; Timothy W. Hilton; David Y. Hollinger; Elyn Humphreys; Benjamin Poulter; Brett M. Raczka; Andrew D. Richardson; Alok Sahoo; Peter Thornton; Rodrigo Vargas; Hans Verbeeck; Ryan Anderson; Ian Baker; T. Andrew Black; Paul Bolstad; Jiquan Chen; Peter S. Curtis; Ankur R. Desai; Michael Dietze; Danilo Dragoni; Christopher Gough; Robert F. Grant; Lianhong Gu; Atul Jain; Chris Kucharik; Beverly Law; Shuguang Liu; Erandathie Lokipitiya; Hank A. Margolis; Roser Matamala; J. Harry McCaughey; Russ Monson; J. William Munger; Walter Oechel; Changhui Peng; David T. Price; Dan Ricciuto; William J. Riley; Nigel Roulet; Hanqin Tian; Christina Tonitto; Margaret Torn; Ensheng Weng; Xiaolu Zhou

    2012-01-01

    Accurately simulating gross primary productivity (GPP) in terrestrial ecosystem models is critical because errors in simulated GPP propagate through the model to introduce additional errors in simulated biomass and other fluxes. We evaluated simulated, daily average GPP from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States...

  8. A Generic Probabilistic Model and a Hierarchical Solution for Sensor Localization in Noisy and Restricted Conditions

    NASA Astrophysics Data System (ADS)

    Ji, S.; Yuan, X.

    2016-06-01

    A generic probabilistic model, based on fundamental Bayes' rule and the Markov assumption, is introduced to describe the localization of a mobile platform with optical sensors. From it, three relatively independent solutions, bundle adjustment, Kalman filtering and particle filtering, are deduced under different additional restrictions. We aim to show, first, that Kalman filtering may be a better initial-value supplier for bundle adjustment than traditional relative orientation in irregular strips and networks, or when tie-point extraction fails. Second, in highly noisy conditions, particle filtering can act as a bridge for gap binding when a large number of gross errors causes a Kalman filter or a bundle adjustment to fail. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, safeguard a global and static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation is the integrated treatment of stochastic errors and gross errors in sensor observations, and the integration of the three most-used solutions, bundle adjustment, Kalman filtering and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to support these claims.

  9. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face a major challenge in modeling and simulating after-market power systems due to system degradation and measurement errors. Currently, the majority of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and increases the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of the least squares based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated into the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
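
The least-squares data reconciliation at the core of such a scheme has a simple closed form for linear constraints, and the normalized constraint residual is the usual hypothesis-testing statistic for gross error detection. The sketch below is a generic illustration with a single mass-balance constraint; the flow values and uncertainties are made up.

```python
import numpy as np

def reconcile(y, sigma, A):
    """Closed-form WLS reconciliation: min (x-y)' S^-1 (x-y)  s.t.  A x = 0."""
    S = np.diag(np.asarray(sigma, dtype=float) ** 2)
    K = S @ A.T @ np.linalg.inv(A @ S @ A.T)  # gain distributing the residual
    return y - K @ (A @ y)

y = np.array([100.0, 60.0, 45.0])   # measured flows: inlet, outlet 1, outlet 2
sigma = np.array([2.0, 1.0, 1.0])   # measurement standard deviations
A = np.array([[1.0, -1.0, -1.0]])   # mass balance: inlet = outlet1 + outlet2

x = reconcile(y, sigma, A)          # adjusted flows satisfy A @ x = 0
residual = (A @ y)[0] / np.sqrt((A @ np.diag(sigma**2) @ A.T)[0, 0])
print(x, f"normalized balance residual: {residual:.2f}")
```

A normalized residual whose magnitude exceeds the critical value of a standard normal test (about 2 at the 95% level) would flag a suspected gross error before reconciliation, which is the kind of smearing the two-stage GED in the text is designed to avoid.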

  10. Evaluation of a seven-year air quality simulation using the Weather Research and Forecasting (WRF)/Community Multiscale Air Quality (CMAQ) models in the eastern United States.

    PubMed

    Zhang, Hongliang; Chen, Gang; Hu, Jianlin; Chen, Shu-Hua; Wiedinmyer, Christine; Kleeman, Michael; Ying, Qi

    2014-03-01

    The performance of the Weather Research and Forecasting (WRF)/Community Multi-scale Air Quality (CMAQ) system in the eastern United States is analyzed based on results from a seven-year modeling study with a 4-km spatial resolution. For 2-m temperature, the monthly averaged mean bias (MB) and gross error (GE) values are generally within the recommended performance criteria, although temperature is over-predicted with MB values up to 2 K. Water vapor at 2 m is well-predicted but significant biases (>2 g kg⁻¹) were observed in wintertime. Predictions for wind speed are satisfactory but biased towards over-prediction with 0

  11. Validating the Rett Syndrome Gross Motor Scale.

    PubMed

    Downs, Jenny; Stahlhut, Michelle; Wong, Kingsley; Syhler, Birgit; Bisgaard, Anne-Marie; Jacoby, Peter; Leonard, Helen

    2016-01-01

    Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation on the MECP2 gene. Impaired movement is a fundamental component and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities supplemented with parent report data was collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and relationships between motor scores, age and genotype were investigated. Clinical assessment scores for 38 girls and women with Rett syndrome who attended the Danish Center for Rett Syndrome were used to assess consistency of measurement. Principal components analysis enabled the calculation of three factor scores: Sitting, Standing and Walking, and Challenge. Motor scores were poorer with increasing age and those with the p.Arg133Cys, p.Arg294* or p.Arg306Cys mutation achieved higher scores than those with a large deletion. The repeatability of clinical assessment was excellent (intraclass correlation coefficient for total score 0.99, 95% CI 0.93-0.98). The standard error of measurement for the total score was 2 points and we would be 95% confident that a change of 4 points on the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice and clinical trials.

  12. Challenges of primate embryonic stem cell research.

    PubMed

    Bavister, Barry D; Wolf, Don P; Brenner, Carol A

    2005-01-01

    Embryonic stem (ES) cells hold great promise for treating degenerative diseases, including diabetes, Parkinson's, Alzheimer's, neural degeneration, and cardiomyopathies. This research is controversial to some because producing ES cells requires destroying embryos, which generally means human embryos. However, some of the surplus human embryos available from in vitro fertilization (IVF) clinics may have a high rate of genetic errors and therefore would be unsuitable for ES cell research. Although gross chromosome errors can readily be detected in ES cells, other anomalies such as mitochondrial DNA defects may have gone unrecognized. An insurmountable problem is that there are no human ES cells derived from in vivo-produced embryos to provide normal comparative data. In contrast, some monkey ES cell lines have been produced using in vivo-generated, normal embryos obtained from fertile animals; these can represent a "gold standard" for primate ES cells. In this review, we argue a need for strong research programs using rhesus monkey ES cells, conducted in parallel with studies on human ES and adult stem cells, to derive the maximum information about the biology of normal stem cells and to produce technical protocols for their directed differentiation into safe and functional replacement cells, tissues, and organs. In contrast, ES cell research using only human cell lines is likely to be incomplete, which could hinder research progress, and delay or diminish the effective application of ES cell technology to the treatment of human diseases.

  13. Air and smear sample calculational tool for Fluor Hanford Radiological control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAUMANN, B.L.

    2003-07-11

    A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and smear counting as outlined in HNF-13536, Section 5.2.7, ''Analyzing Air and Smear Samples''. This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwriting and calculation errors by using an electronic form for documenting and calculating workplace air samples. Current expectations are that RCTs will perform an air sample and collect the filter, or perform a smear for surface contamination. RCTs will then survey the filter for gross alpha and beta/gamma radioactivity and, with the gross counts, use either a hand-calculation method or a calculator to determine activity on the filter. The electronic form will allow the RCT, with a few keystrokes, to document the individual's name, payroll, gross counts, and instrument identifiers, producing an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) while at the same time documenting the air sample.

  14. Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.

    PubMed

    Barrs, H D

    1965-07-02

    A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.

  15. The sensitivity of derived estimates to the measurement quality objectives for independent variables

    Treesearch

    Francis A. Roesch

    2002-01-01

    The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...

  16. The Sensitivity of Derived Estimates to the Measurement Quality Objectives for Independent Variables

    Treesearch

    Francis A. Roesch

    2005-01-01

    The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...

  17. Prenatal Exposure to Alcohol, Caffeine, Tobacco, and Aspirin: Effects on Fine and Gross Motor Performance in 4-Year-Old Children.

    ERIC Educational Resources Information Center

    Barr, Helen M.; And Others

    1990-01-01

    Multiple regression analyses of data from 449 children indicated statistically significant relationships between moderate levels of prenatal alcohol exposure and increased errors, increased latency, and increased total time on the Wisconsin Fine Motor Steadiness Battery and poorer balance on the Gross Motor Scale. (RH)

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on June 12, 2013. Representatives from the U.S. Nuclear Regulatory Commission (NRC) and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and Table 1 presents the comparison of results using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates, at the 99% confidence level, that split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report specifies a 95% confidence level for reported uncertainties (NFS 2013); therefore, the reported two-sigma values were divided by 1.96. In conclusion, most DER values were less than 3 and results are consistent with low (e.g., background) concentrations. The gross beta result for sample 5198W0014 was the exception. The ORAU gross beta result of 6.30 ± 0.65 pCi/L from location NRD is well above NFS's non-detected result of 1.56 ± 0.59 pCi/L. NFS's data package includes no detected result for any radionuclide at location NRD. At NRC's request, ORAU performed gamma spectroscopic analysis of sample 5198W0014 to identify analytes contributing to the relatively elevated gross beta results. This analysis identified detected amounts of naturally-occurring constituents, most notably Ac-228 from the thorium decay series, and does not suggest the presence of site-related contamination.

  19. Measurement invariance of TGMD-3 in children with and without mental and behavioral disorders.

    PubMed

    Magistro, Daniele; Piumatti, Giovanni; Carlevaro, Fabio; Sherar, Lauren B; Esliger, Dale W; Bardaglio, Giulia; Magno, Francesca; Zecca, Massimiliano; Musella, Giovanni

    2018-05-24

    This study evaluated whether the Test of Gross Motor Development 3 (TGMD-3) is a reliable tool to compare children with and without mental and behavioral disorders across gross motor skill domains. A total of 1,075 children (aged 3-11 years), 98 with mental and behavioral disorders and 977 without (typically developing), were included in the analyses. The TGMD-3 evaluates fundamental gross motor skills of children across two domains: locomotor skills and ball skills. Two independent testers simultaneously observed children's performances (agreement over 95%). Each child completed one practice and then two formal trials. Scores were recorded only during the two formal trials. Multigroup confirmatory factor analysis tested the assumption of TGMD-3 measurement invariance across disability groups. According to the magnitude of changes in root mean square error of approximation and comparative fit index between nested models, the assumption of measurement invariance across groups was valid. Loadings of the manifest indicators on locomotor and ball skills were significant (p < .001) in both groups. Item response theory analysis showed good reliability across the full locomotor and ball skills latent traits. The present study confirmed the factorial structure of TGMD-3 and demonstrated its feasibility across typically developing children and children with mental and behavioral disorders. These findings provide new opportunities for understanding the effect of specific intervention strategies on this population.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on November 15, 2012. Representatives from the U.S. Nuclear Regulatory Commission and Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and the results are compared using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates, at the 99% confidence level, that split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report does not specify the confidence level of reported uncertainties (NFS 2012). Therefore, standard two-sigma reporting is assumed and uncertainty values were divided by 1.96. In conclusion, all DER values were less than 3 and results are consistent with low (e.g., background) concentrations.

  1. Prevention of gross setup errors in radiotherapy with an efficient automatic patient safety system.

    PubMed

    Yan, Guanghua; Mittauer, Kathryn; Huang, Yin; Lu, Bo; Liu, Chihray; Li, Jonathan G

    2013-11-04

    Treatment of the wrong body part due to incorrect setup is among the leading types of errors in radiotherapy. The purpose of this paper is to report an efficient automatic patient safety system (PSS) to prevent gross setup errors. The system consists of a pair of charge-coupled device (CCD) cameras mounted in the treatment room, a single infrared reflective marker (IRRM) affixed to the patient or immobilization device, and a set of in-house developed software. Patients are CT scanned with a CT BB placed on their surface close to the intended treatment site. Coordinates of the CT BB relative to the treatment isocenter are used as the reference for tracking. The CT BB is replaced with an IRRM before treatment starts. PSS evaluates setup accuracy by comparing the real-time IRRM position with the reference position. To automate system workflow, PSS synchronizes with the record-and-verify (R&V) system in real time and automatically loads the reference data for the patient under treatment. Special IRRMs, which can permanently stick to the patient's face mask or body mold throughout the course of treatment, were designed to minimize the therapist's workload. Accuracy of the system was examined on an anthropomorphic phantom with a designed end-to-end test. Its performance was also evaluated on head and neck as well as abdominal-pelvic patients using cone-beam CT (CBCT) as the standard. The PSS system achieved a seamless clinical workflow by synchronizing with the R&V system. By permanently mounting specially designed IRRMs on patient immobilization devices, therapist intervention is eliminated or minimized. Overall results showed that the PSS system has sufficient accuracy to catch gross setup errors greater than 1 cm in real time. An efficient automatic PSS with sufficient accuracy has been developed to prevent gross setup errors in radiotherapy. The system can be applied to all treatment sites for independent positioning verification.
It can be an ideal complement to complex image-guidance systems due to its advantages of continuous tracking ability, no radiation dose, and fully automated clinic workflow.

  2. Renal Parenchymal Area Growth Curves for Children 0 to 10 Months Old.

    PubMed

    Fischer, Katherine; Li, Chunming; Wang, Huixuan; Song, Yihua; Furth, Susan; Tasian, Gregory E

    2016-04-01

    Low renal parenchymal area, which is the gross area of the kidney in maximal longitudinal length minus the area of the collecting system, has been associated with increased risk of end stage renal disease during childhood in boys with posterior urethral valves. To our knowledge normal values do not exist. We aimed to increase the clinical usefulness of this measure by defining normal renal parenchymal area during infancy. In a cross-sectional study of children with prenatally detected mild unilateral hydronephrosis who were evaluated between 2000 and 2012 we measured the renal parenchymal area of normal kidney(s) opposite the kidney with mild hydronephrosis. Measurement was done with ultrasound from birth to post-gestational age 10 months. We used the LMS method to construct unilateral, bilateral, side and gender stratified normalized centile curves. We determined the z-score and the centile of a total renal parenchymal area of 12.4 cm² at post-gestational age 1 to 2 weeks, which has been associated with an increased risk of kidney failure before age 18 years in boys with posterior urethral valves. A total of 975 normal kidneys of children 0 to 10 months old were used to create renal parenchymal area centile curves. At the 97th centile for unilateral and single stratified curves the estimated margin of error was 4.4% to 8.8%. For bilateral and double stratified curves the estimated margin of error at the 97th centile was 6.6% to 13.2%. Total renal parenchymal area less than 12.4 cm² at post-gestational age 1 to 2 weeks had a z-score of -1.96 and fell at the 3rd percentile. These normal renal parenchymal area curves may be used to track kidney growth in infants and identify those at risk for chronic kidney disease progression.
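
The LMS method referenced above converts a measurement into a z-score via an age-specific Box-Cox transform, and the z-score into a centile via the standard normal CDF. The sketch below uses the standard LMS formula; the L, M, S parameter values are invented for illustration, not the study's fitted curves.

```python
import math

def lms_z(x, L, M, S):
    """z-score of measurement x given LMS (Box-Cox) parameters at that age."""
    if abs(L) > 1e-9:
        return ((x / M) ** L - 1.0) / (L * S)
    return math.log(x / M) / S       # limiting case as L -> 0

def centile(z):
    """Standard normal CDF, expressed as a percentage."""
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

# hypothetical parameters at 1-2 weeks: L=1 (no skew), median 16.4 cm^2, S=0.125
z = lms_z(12.4, L=1.0, M=16.4, S=0.125)
print(f"z = {z:.2f}, centile = {centile(z):.1f}%")
```

Under a plain normal model, a z-score of -1.96 sits at about the 2.5th centile; the abstract's "3rd percentile" is consistent with that after rounding and the skew handled by the fitted L parameter.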

  3. Comparing motor performance, praxis, coordination, and interpersonal synchrony between children with and without Autism Spectrum Disorder (ASD).

    PubMed

    Kaur, Maninderjit; M Srinivasan, Sudha; N Bhat, Anjana

    2018-01-01

    Children with Autism Spectrum Disorder (ASD) have basic motor impairments in balance, gait, and coordination as well as autism-specific impairments in praxis/motor planning and interpersonal synchrony. The majority of the current literature focuses on isolated motor behaviors or domains. Additionally, the relationship between cognition, symptom severity, and motor performance in ASD is unclear. We used a comprehensive set of measures to compare gross and fine motor, praxis/imitation, motor coordination, and interpersonal synchrony skills across three groups of children between 5 and 12 years of age: children with ASD with high IQ (HASD), children with ASD with low IQ (LASD), and typically developing (TD) children. We used the Bruininks-Oseretsky Test of Motor Proficiency and the Bilateral Motor Coordination subtest of the Sensory Integration and Praxis Tests to assess motor performance and praxis skills, respectively. Children were also examined while performing simple and complex rhythmic upper and lower limb actions on their own (solo context) and with a social partner (social context). Both ASD groups had lower gross and fine motor scores, greater praxis errors in total and within various error types, lower movement rates, greater movement variability, and weaker interpersonal synchrony compared to the TD group. In addition, the LASD group had lower gross motor scores and greater mirroring errors compared to the HASD group. Overall, a variety of motor impairments are present across the entire spectrum of children with ASD, regardless of their IQ scores. Both fine and gross motor performance significantly correlated with IQ but not with autism severity; however, praxis errors (mainly total, overflow, and rhythmicity errors) strongly correlated with autism severity and not IQ.
Our study findings highlight the need for clinicians and therapists to include motor evaluations and interventions in the standard-of-care of children with ASD and for the broader autism community to recognize dyspraxia as an integral part of the definition of ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography

    DTIC Science & Technology

    1980-03-01

    interpreting/smoothing data containing a significant percentage of gross errors, and thus is ideally suited for applications in automated image ... analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of the paper describes the application of
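
    The RANSAC paradigm this abstract refers to is easy to illustrate: repeatedly fit a model to a minimal random sample, count the points it explains, and keep the consensus winner, so gross errors never contaminate the fit. A minimal line-fitting sketch (parameter names and tolerances are illustrative, not from the paper):

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Minimal RANSAC sketch: fit y = a*x + b to 2-D points containing gross
    errors by repeatedly sampling 2 points, hypothesizing a line, and keeping
    the hypothesis with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:  # degenerate (vertical) sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten clean points on y = 2x + 1 plus two gross outliers:
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 20.0), (7.0, -5.0)]
line, consensus = ransac_line(pts)
```

    On this data the recovered line is (2.0, 1.0) and the consensus set contains exactly the ten clean points: an ordinary least-squares fit would instead be dragged toward the two outliers.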

  5. PGD and aneuploidy screening for 24 chromosomes: advantages and disadvantages of competing platforms.

    PubMed

    Bisignano, A; Wells, D; Harton, G; Munné, S

    2011-12-01

    Diagnosis of embryos for chromosome abnormalities, i.e. aneuploidy screening, has been invigorated by the introduction of microarray-based testing methods allowing analysis of 24 chromosomes in one test. Recent data have been suggestive of increased implantation and pregnancy rates following microarray testing. Preimplantation genetic diagnosis for infertility aims to test for gross chromosome changes with the hope that identification and transfer of normal embryos will improve IVF outcomes. Testing by some methods, specifically single-nucleotide polymorphism (SNP) microarrays, allows for more information and potential insight into the parental origin of aneuploidy and uniparental disomy. However, the usefulness and validity of reporting this information are questionable. Numerous papers have shown that the majority of meiotic errors occur in the egg, while mitotic errors in the embryo affect parental chromosomes at random. Potential mistakes made in assigning an error as meiotic or mitotic may lead to erroneous reporting of results with medical consequences. This study's data suggest that the bioinformatic cleaning used to 'fix' the miscalls that plague single-cell whole-genome amplification provides little improvement in the quality of useful data. Based on the information available, SNP-based aneuploidy screening suffers from a number of serious issues that must be resolved. Copyright © 2011 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  6. Error analysis on spinal motion measurement using skin mounted sensors.

    PubMed

    Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond

    2008-01-01

    Measurement errors of skin-mounted sensors in measuring the forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the positions of the entire lumbar spine were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Lightweight miniature sensors of an electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects underwent lateral radiography in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and the differences between the two sets of results were then analyzed. The relative movement between sensor and vertebra was decomposed into sensor sliding and tilting, from which a sliding error and a tilting error were defined. The gross motion range of forward bending of the lumbar spine measured from the bony markers of the vertebrae was 67.8° (SD 10.6°) and that from the sensors was 62.8° (SD 12.8°). The error and absolute error for the gross motion range were 5.0° (SD 7.2°) and 7.7° (SD 3.9°). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9° (SD 2.9°) and 4.4° (SD 2.8°), respectively.

  7. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
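
    For a known error covariance Σ, the GLS framing described above reduces to the standard estimator β = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y. A minimal sketch of that estimator (illustrative only, not the authors' MFA code):

```python
import numpy as np

def gls(X, y, Sigma):
    """Generalized least squares: beta = (X' S^-1 X)^-1 X' S^-1 y,
    where Sigma is the (known) covariance of the measurement errors."""
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# With Sigma = I this reduces to ordinary least squares; the data below
# lie exactly on y = 1 + 2x, so the fit is exact for any valid Sigma.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
beta = gls(X, y, np.eye(3))  # intercept 1, slope 2
```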

  8. A simplified gross thrust computing technique for an afterburning turbofan engine

    NASA Technical Reports Server (NTRS)

    Hamer, M. J.; Kurtenbach, F. J.

    1978-01-01

    A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
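
    The paper's calibrated method is engine-specific, but the underlying textbook relation is simple: gross thrust is the exit momentum flux plus the pressure-area term at the nozzle exit. A sketch of that ideal relation (symbols and values are illustrative, not the F100 calibration):

```python
def gross_thrust(m_dot, v_exit, p_exit, p_amb, a_exit):
    """Ideal nozzle gross thrust: momentum term plus pressure-area term.
    m_dot [kg/s], v_exit [m/s], pressures [Pa], a_exit [m^2] -> thrust [N]."""
    return m_dot * v_exit + (p_exit - p_amb) * a_exit

# Fully expanded nozzle (p_exit == p_amb): only the momentum term remains.
f_g = gross_thrust(100.0, 600.0, 101325.0, 101325.0, 0.5)  # 60000.0 N
```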

  9. Estimating of gross primary production in an Amazon-Cerrado transitional forest using MODIS and Landsat imagery.

    PubMed

    Danelichen, Victor H M; Biudes, Marcelo S; Velasque, Maísa C S; Machado, Nadja G; Gomes, Raphael S R; Vourlitis, George L; Nogueira, José S

    2015-09-01

    The acceleration of anthropogenic activity has increased the atmospheric carbon concentration, causing changes in regional climate. Gross Primary Production (GPP) is an important variable in global carbon cycle studies, since it defines the rate at which terrestrial ecosystems extract carbon from the atmosphere. The objective of this study was to estimate the GPP of an Amazon-Cerrado transitional forest with the Vegetation Photosynthesis Model (VPM) using local meteorological data and remote sensing reflectance data from MODIS and Landsat 5 TM from 2005 to 2008. The GPP was estimated using the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) calculated from MODIS and Landsat 5 TM images. The GPP estimates were compared with eddy covariance measurements from a flux tower. The GPP measured at the tower was consistently higher during the wet season and showed an increasing trend from 2005 to 2008. The GPP estimated by VPM showed the same increasing trend observed in the measured GPP and had high correlation, a high Willmott's coefficient, and low error metrics in comparison to the measured GPP. These results indicate the high potential of Landsat 5 TM images for estimating the GPP of the Amazon-Cerrado transitional forest with VPM.

  10. Vast Portfolio Selection with Gross-exposure Constraints*

    PubMed Central

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce large portfolio selection using gross-exposure constraints. We show that with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices have similar performance to the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. Applications to portfolio selection, tracking, and improvement are also addressed. The utility of our new approach is illustrated by simulation and by empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404

  11. Shared and differentiated motor skill impairments in children with dyslexia and/or attention deficit disorder: From simple to complex sequential coordination

    PubMed Central

    Morin-Moncet, Olivier; Bélanger, Anne-Marie; Beauchamp, Miriam H.; Leonard, Gabriel

    2017-01-01

    Dyslexia and attention deficit disorder (AD) are prevalent neurodevelopmental conditions in children and adolescents. They have high comorbidity rates and have both been associated with motor difficulties. Little is known, however, about what is shared or differentiated in dyslexia and AD in terms of motor abilities. Even when motor skill problems are identified, few studies have used the same measurement tools, resulting in inconsistent findings. The present study assessed increasingly complex gross motor skills in children and adolescents with dyslexia, AD, and both dyslexia and AD. Our results suggest normal performance on simple motor-speed tests, whereas all three groups share a common impairment on unimanual and bimanual sequential motor tasks. Children in these groups generally improve with practice to the same level as normal subjects, though they make more errors. In addition, children with AD are the most impaired on complex bimanual out-of-phase movements and with manual dexterity. These latter findings are examined in light of the Multiple Deficit Model. PMID:28542319

  12. Advantages of combined touch screen technology and text hyperlink for the pathology grossing manual: a simple approach to access instructive information in biohazardous environments.

    PubMed

    Qu, Zhenhong; Ghorbani, Rhonda P; Li, Hongyan; Hunter, Robert L; Hannah, Christina D

    2007-03-01

    Gross examination, encompassing description, dissection, and sampling, is a complex task and an essential component of surgical pathology. Because of the complexity of the task, standardized protocols to guide the gross examination often become a bulky manual that is difficult to use. This problem is further compounded by the high specimen volume and the biohazardous nature of the task. As a result, such a manual is often underused, leading to errors that are potentially harmful and time consuming to correct, a common chronic problem affecting many pathology laboratories. To combat this problem, we have developed a simple method that incorporates the complex text and graphic information of a typical procedure manual and yet allows easy access to any intended instructive information in the manual. The method uses the Object-Linking-and-Embedding function of Microsoft Word (Microsoft, Redmond, WA) to establish hyperlinks among different contents, and then uses touch screen technology to facilitate navigation through the manual on a computer screen installed at the cutting bench, with no need for a physical keyboard or a mouse. It takes less than 4 seconds to reach any intended information in the manual by 3 to 4 touches on the screen. A 3-year follow-up study shows that this method has increased use of the manual and has improved the quality of gross examination. The method is simple and can be easily tailored to different formats of instructive information, allowing flexible organization, easy access, and quick navigation. Increased compliance with instructive information reduces errors at the grossing bench and improves work efficiency.

  13. Why Three Heads Are a Better Bet than Four: A Reply to Sun, Tweney, and Wang (2010)

    ERIC Educational Resources Information Center

    Hahn, Ulrike; Warren, Paul A.

    2010-01-01

    We (Hahn & Warren, 2009) recently proposed a new account of the systematic errors and biases that appear to be present in people's perception of randomly generated events. In a comment on that article, Sun, Tweney, and Wang (2010) critiqued our treatment of the gambler's fallacy. We had argued that this fallacy was less gross an error than it…

  14. Potential lost productivity resulting from the global burden of uncorrected refractive error.

    PubMed

    Smith, T S T; Frick, K D; Holden, B A; Fricke, T R; Naidoo, K S

    2009-06-01

    To estimate the potential global economic productivity loss associated with the existing burden of visual impairment from uncorrected refractive error (URE). Conservative assumptions and national population, epidemiological and economic data were used to estimate the purchasing power parity-adjusted gross domestic product (PPP-adjusted GDP) loss for all individuals with impaired vision and blindness, and for individuals with normal sight who provide them with informal care. An estimated 158.1 million cases of visual impairment resulted from uncorrected or undercorrected refractive error in 2007; of these, 8.7 million were blind. We estimated the global economic productivity loss in international dollars (I$) associated with this burden at I$ 427.7 billion before, and I$ 268.8 billion after, adjustment for country-specific labour force participation and employment rates. With the same adjustment, but assuming no economic productivity for individuals aged ≥ 50 years, we estimated the potential productivity loss at I$ 121.4 billion. Even under the most conservative assumptions, the total estimated productivity loss, in I$, associated with visual impairment from URE is approximately a thousand times greater than the global number of cases. The cost of scaling up existing refractive services to meet this burden is unknown, but if each affected individual were to be provided with appropriate eyeglasses for less than I$ 1000, a net economic gain may be attainable.

  15. Potential lost productivity resulting from the global burden of uncorrected refractive error

    PubMed Central

    Frick, KD; Holden, BA; Fricke, TR; Naidoo, KS

    2009-01-01

    Abstract Objective To estimate the potential global economic productivity loss associated with the existing burden of visual impairment from uncorrected refractive error (URE). Methods Conservative assumptions and national population, epidemiological and economic data were used to estimate the purchasing power parity-adjusted gross domestic product (PPP-adjusted GDP) loss for all individuals with impaired vision and blindness, and for individuals with normal sight who provide them with informal care. Findings An estimated 158.1 million cases of visual impairment resulted from uncorrected or undercorrected refractive error in 2007; of these, 8.7 million were blind. We estimated the global economic productivity loss in international dollars (I$) associated with this burden at I$ 427.7 billion before, and I$ 268.8 billion after, adjustment for country-specific labour force participation and employment rates. With the same adjustment, but assuming no economic productivity for individuals aged ≥ 50 years, we estimated the potential productivity loss at I$ 121.4 billion. Conclusion Even under the most conservative assumptions, the total estimated productivity loss, in I$, associated with visual impairment from URE is approximately a thousand times greater than the global number of cases. The cost of scaling up existing refractive services to meet this burden is unknown, but if each affected individual were to be provided with appropriate eyeglasses for less than I$ 1000, a net economic gain may be attainable. PMID:19565121

  16. 47 CFR 80.853 - Radiotelephone station.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... deck above the ship's main deck. (d) The principal operating position of the radiotelephone station must be in the room from which the ship is normally steered while at sea. In installations on cargo ships of 300 gross tons and upwards but less than 500 gross tons on which the keel was laid prior to...

  17. 47 CFR 80.853 - Radiotelephone station.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... deck above the ship's main deck. (d) The principal operating position of the radiotelephone station must be in the room from which the ship is normally steered while at sea. In installations on cargo ships of 300 gross tons and upwards but less than 500 gross tons on which the keel was laid prior to...

  18. 47 CFR 80.853 - Radiotelephone station.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... deck above the ship's main deck. (d) The principal operating position of the radiotelephone station must be in the room from which the ship is normally steered while at sea. In installations on cargo ships of 300 gross tons and upwards but less than 500 gross tons on which the keel was laid prior to...

  19. 47 CFR 80.853 - Radiotelephone station.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... deck above the ship's main deck. (d) The principal operating position of the radiotelephone station must be in the room from which the ship is normally steered while at sea. In installations on cargo ships of 300 gross tons and upwards but less than 500 gross tons on which the keel was laid prior to...

  20. 47 CFR 80.853 - Radiotelephone station.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... deck above the ship's main deck. (d) The principal operating position of the radiotelephone station must be in the room from which the ship is normally steered while at sea. In installations on cargo ships of 300 gross tons and upwards but less than 500 gross tons on which the keel was laid prior to...

  1. 47 CFR 80.965 - Reserve power supply.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Reserve power supply. 80.965 Section 80.965... power supply. (a) Each passenger vessel of more than 100 gross tons and each cargo vessel of more than 300 gross tons must be provided with a reserve power supply independent of the vessel's normal...

  2. 47 CFR 80.965 - Reserve power supply.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Reserve power supply. 80.965 Section 80.965... power supply. (a) Each passenger vessel of more than 100 gross tons and each cargo vessel of more than 300 gross tons must be provided with a reserve power supply independent of the vessel's normal...

  3. 49 CFR 1037.3 - Claims.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computing the amount of the loss for which the carrier will pay there will be deducted from the gross amount... discrepancy is due to defective scales or other shipper facilities, or to inaccurate weighing or other error...

  4. Longitudinal motor development of "apparently normal" high-risk infants at 18 months, 3 and 5 years.

    PubMed

    Goyen, Traci Anne; Lui, Kei

    2002-12-01

    Motor development appears to be more affected by premature birth than other developmental domains; however, few studies have specifically investigated the development of gross and fine motor skills in this population. To examine longitudinal motor development in a group of "apparently normal" high-risk infants. Developmental follow-up clinic in a perinatal centre. Longitudinal observational cohort study. Fifty-eight infants born at less than 29 weeks gestation and/or less than 1000 g and without disabilities detected at 12 months. Longitudinal gross and fine motor skills at 18 months, 3 and 5 years using the Peabody Developmental Motor Scales. The HOME scale provided information on the home environment as a stimulus for development. A large proportion (54% at 18 months, 47% at 3 years and 64% at 5 years) of children continued to have fine motor deficits from 18 months to 5 years. The proportion of infants with gross motor deficits significantly increased over this period (14%, 33% and 81%, p<0.001), particularly for the 'micropreemies' (born <750 g). In multivariate analyses, gross motor development was positively influenced by the quality of the home environment. A large proportion of high-risk infants continued to have fine motor deficits, reflecting an underlying problem with fine motor skills. The proportion of infants with gross motor deficits significantly increased as test demands became more challenging. In addition, the development of gross and fine motor skills appears to be influenced differently by the home environment.

  5. Determination of stores pointing error due to wing flexibility under flight load

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.

    1995-01-01

    The in-flight elastic wing twist of a fighter-type aircraft was studied to provide an improved on-board real-time computed prediction of pointing variations at three wing store stations. This capability is important for correcting sensor pod alignment variation or for establishing the initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System (FDMS) measured the deformed wing shape in flight under maneuver loads to provide a higher-resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.

  6. Notes on power of normality tests of error terms in regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Střelec, Luboš

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences that are not misleading, which explains the necessity and importance of robust tests of normality. The aim of this contribution is therefore to discuss normality testing of error terms in regression models. We introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness for selected classical and robust normality tests of error terms in regression models.
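
    As a concrete instance of the classical normality tests such work compares against, the Jarque-Bera statistic checks residual skewness and kurtosis jointly (a generic sketch, not the RT class proposed in the contribution):

```python
import numpy as np

def jarque_bera(resid):
    """Jarque-Bera statistic for testing normality of regression residuals:
    JB = n/6 * (S^2 + (K - 3)^2 / 4), where S is sample skewness and K is
    sample kurtosis. Asymptotically chi-square with 2 degrees of freedom."""
    r = np.asarray(resid, dtype=float) - np.mean(resid)
    n = r.size
    s2 = np.mean(r**2)
    skew = np.mean(r**3) / s2**1.5
    kurt = np.mean(r**4) / s2**2
    return n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)
```

    Values far above the chi-square(2) critical value of about 5.99 suggest rejecting normality of the error terms at the 5% level.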

  7. Relationship between motor proficiency and body composition in 6- to 10-year-old children.

    PubMed

    Marmeleira, José; Veiga, Guida; Cansado, Hugo; Raimundo, Armando

    2017-04-01

    The aim of this study is to examine the relationship between motor skill competence and body composition of 6- to 10-year-old children. Seventy girls and 86 boys participated. Body composition was measured by body mass index and skinfold thickness. Motor proficiency was evaluated through the Bruininks-Oseretsky Test of Motor Proficiency-Short Form, which included measures of gross motor skills and fine motor skills. Significant associations were found for both sexes between the percentage of body fat and (i) the performance in each gross motor task, (ii) the composite score for gross motor skills and (iii) the motor proficiency score. The percentage of body fat was not significantly associated with the majority of the fine motor skills items and with the respective composite score. Considering body weight categories, children with normal weight had significantly higher scores than their peers with overweight or with obesity in gross motor skills and in overall motor proficiency. Children's motor proficiency is negatively associated with body fat, and normal weight children show better motor competence than those who are overweight or obese. The negative impact of excessive body weight is stronger for gross motor skills that involve dynamic body movements than for stationary object control skills; fine motor skills appear to be relatively independent of the constraints imposed by excessive body weight. © 2017 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  8. Prediction of Malaysian monthly GDP

    NASA Astrophysics Data System (ADS)

    Hin, Pooi Ah; Ching, Soo Huei; Yeing, Pan Wei

    2015-12-01

    The paper attempts to use a method based on the multivariate power-normal distribution to predict the Malaysian Gross Domestic Product for the next month. Letting r(t) be the vector consisting of the month-t values of m selected macroeconomic variables and GDP, we model the month-(t+1) GDP to be dependent on the present and l-1 past values r(t), r(t-1),…,r(t-l+1) via a conditional distribution which is derived from a [(m+1)l+1]-dimensional power-normal distribution. The 100(α/2)% and 100(1-α/2)% points of the conditional distribution may be used to form an out-of-sample prediction interval. This interval, together with the mean of the conditional distribution, may be used to predict the month-(t+1) GDP. The mean absolute percentage error (MAPE), estimated coverage probability and average length of the prediction interval are used as the criteria for selecting the suitable lag value l-1 and the subset from a pool of 17 macroeconomic variables. It is found that the relatively better models are those with 2 ≤ l ≤ 3, involving one or two of the macroeconomic variables given by Market Indicative Yield, Oil Prices, Exchange Rate and Import Trade.
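
    The MAPE criterion used above for model selection is straightforward to compute; a minimal sketch (pure Python, illustrative data only):

```python
def mape(actual, predicted):
    """Mean absolute percentage error: the average of |actual - predicted| / |actual|,
    expressed in percent. Assumes no actual value is zero."""
    errors = [abs((a - p) / a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(errors) / len(errors)

# Example: two months of GDP, each prediction off by 10% in opposite directions.
score = mape([100.0, 200.0], [110.0, 180.0])  # ≈ 10.0
```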

  9. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on March 20, 2013. Representatives from the U.S. Nuclear Regulatory Commission and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and Table 1 presents the comparison of results using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates that, at a 99% confidence interval, split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report does not specify the confidence level of reported uncertainties (NFS 2013). Therefore, standard two sigma reporting is assumed and uncertainty values were divided by 1.96. In conclusion, most DER values were less than 3 and results are consistent with low (e.g., background) concentrations. The gross beta result for sample 5198W0012 was the exception. The ORAU result of 9.23 ± 0.73 pCi/L from location MCD is well above NFS's result of -0.567 ± 0.63 pCi/L (non-detected). NFS's data package included a detected result for U-233/234, but no other uranium or plutonium detection, and nothing that would suggest the presence of beta-emitting radionuclides. The ORAU laboratory reanalyzed sample 5198W0012 using the remaining portion of the sample volume and a result of 11.3 ± 1.1 pCi/L was determined. As directed, the laboratory also counted the filtrate using gamma spectrometry analysis and identified only naturally occurring or ubiquitous man-made constituents, including beta emitters that are presumably responsible for the elevated gross beta values.

  11. Cross-validation of oxygen uptake prediction during walking in ambulatory persons with multiple sclerosis.

    PubMed

    Agiovlasitis, Stamatis; Motl, Robert W

    2016-01-01

    An equation for predicting the gross oxygen uptake (gross-VO2) during walking for persons with multiple sclerosis (MS) has been developed. Predictors included walking speed and total score from the 12-Item Multiple Sclerosis Walking Scale (MSWS-12). This study examined the validity of this prediction equation in another sample of persons with MS. Participants were 18 persons with MS with limited mobility problems (42 ± 13 years; 14 women). Participants completed the MSWS-12. Gross-VO2 was measured with open-circuit spirometry during treadmill walking at 2.0, 3.0, and 4.0 mph (0.89, 1.34, and 1.79 m·s(-1)). Absolute percent error was small: 8.3 ± 6.1%, 8.0 ± 5.6%, and 12.2 ± 9.0% at 2.0, 3.0, and 4.0 mph, respectively. Actual gross-VO2 did not differ significantly from predicted gross-VO2 at 2.0 and 3.0 mph, but was significantly higher than predicted gross-VO2 at 4.0 mph (p < 0.001). Bland-Altman plots indicated nearly zero mean difference between actual and predicted gross-VO2 with modest 95% confidence intervals at 2.0 and 3.0 mph, but there was some underestimation at 4.0 mph. Speed and MSWS-12 score provide valid prediction of gross-VO2 during treadmill walking at slow and moderate speeds in ambulatory persons with MS. However, there is a possibility of small underestimation for walking at 4.0 mph.
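
The agreement statistics reported here are simple to reproduce. A sketch with made-up values (not the study's data) computes absolute percent error and the Bland-Altman mean difference with 95% limits of agreement (mean ± 1.96 SD of the differences):

```python
import math

def abs_percent_error(actual, predicted):
    """Absolute percent error of each prediction relative to the actual value."""
    return [abs(a - p) / a * 100 for a, p in zip(actual, predicted)]

def bland_altman(actual, predicted):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD)."""
    diffs = [a - p for a, p in zip(actual, predicted)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean)**2 for d in diffs) / (n - 1))
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# Hypothetical gross-VO2 values (mL/kg/min) at one walking speed.
actual    = [12.1, 13.4, 11.8, 12.9, 13.0]
predicted = [11.5, 13.0, 12.2, 12.4, 13.3]
print(abs_percent_error(actual, predicted))
print(bland_altman(actual, predicted))
```

A mean difference near zero with narrow limits of agreement is what "valid prediction" means in the abstract; a systematically positive mean difference corresponds to the underestimation seen at 4.0 mph.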

  12. DC-Compensated Current Transformer.

    PubMed

    Ripka, Pavel; Draxler, Karel; Styblíková, Renata

    2016-01-20

    Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component.

  13. Gross yield and mortality tables for fully stocked stands of Douglas-fir.

    Treesearch

    George R. Staebler

    1955-01-01

    Increasing interest in the practice of intensive forestry has demonstrated the need for gross yield tables for Douglas-fir showing the volume of trees that die as well as volume of live trees. Net yield tables for Douglas-fir, published in 1930, give the live volume in fully stocked stands at different ages on different sites. As in all normal yield tables, no...

  14. 76 FR 23799 - Erie Boulevard Hydropower, L.P.; Notice of Application Accepted for Filing, Soliciting Motions To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... seasonal flash boards; (2) a 168-acre reservoir with a gross storage capacity of 3,234 acre-feet and a...; (2) a 159-acre reservoir with a gross storage capacity of 2,646 acre-feet and a normal maximum pool... reservoir, with concrete core walls and partially equipped with 10-inch-high flashboards; (3) a 79.2- acre...

  15. Getting Back on Course

    ERIC Educational Resources Information Center

    Griswold, John S.

    2009-01-01

    "Spectacular error" sounds euphemistic compared to "devastating," "catastrophic," or "meltdown"--terms more commonly summoned to describe the credit crisis and ensuing global economic carnage. Whatever they are labeled, gross miscalculations on Wall Street are having a deleterious effect on college campuses across the country, with many…

  16. Genetically induced abnormal cranial development in human trisomy 18 with holoprosencephaly: comparisons with the normal tempo of osteogenic-neural development.

    PubMed

    Reid, Shaina N; Ziermann, Janine M; Gondré-Lewis, Marjorie C

    2015-07-01

    Craniofacial malformations are common congenital defects caused by failed midline inductive signals. These midline defects are associated with exposure of the fetus to exogenous teratogens and with inborn genetic errors such as those found in Down, Patau, Edwards' and Smith-Lemli-Opitz syndromes. Yet, there are no studies that analyze contributions of synchronous neurocranial and neural development in these disorders. Here we present the first in-depth analysis of malformations of the basicranium of a holoprosencephalic (HPE) trisomy 18 (T18; Edwards' syndrome) fetus with synophthalmic cyclopia and alobar HPE. With a combination of traditional gross dissection and state-of-the-art computed tomography, we demonstrate the deleterious effects of T18 caused by a translocation at 18p11.31. Bony features included a single developmentally unseparated frontal bone, and complete dual absence of the anterior cranial fossa and ethmoid bone. From a superior view with the calvarium plates removed, there was direct visual access to the orbital foramen and hard palate. Both the eyes and the pituitary gland, normally protected by bony structures, were exposed in the cranial cavity and in direct contact with the brain. The middle cranial fossa was shifted anteriorly, and foramina were either missing or displaced to an abnormal location due to the absence or misplacement of its respective cranial nerve (CN). When CN development was conserved in its induction and placement, the respective foramen developed in its normal location albeit with abnormal gross anatomical features, as seen in the facial nerve (CNVII) and the internal acoustic meatus. More anteriorly localized CNs and their foramina were absent or heavily disrupted compared with posterior ones. 
The severe malformations exhibited in the cranial fossae, orbital region, pituitary gland and sella turcica highlight the crucial involvement of transcription factors such as TGIF, which is located on chromosome 18 and contributes to neural patterning, in the proper development of neural and cranial structures. Our study of a T18 specimen emphasizes the intricate interplay between bone and brain development in midline craniofacial abnormalities in general. © 2015 Anatomical Society.

  17. Texas hospitals with higher health information technology expenditures have higher revenue: A longitudinal data analysis using a generalized estimating equation model.

    PubMed

    Lee, Jinhyung; Choi, Jae-Young

    2016-04-05

    The benefits of health information technology (IT) adoption have been reported in the literature, but whether health IT investment increases revenue generation remains an important research question. Texas hospital data obtained from the American Hospital Association (AHA) for 2007-2010 were used to investigate the association between health IT expenses and hospital revenue. The generalized estimating equation (GEE) with an independent error component was used to model the data, controlling for clustered error within hospitals. We found that health IT expenses were significantly and positively associated with hospital revenue. Our model predicted that a 100% increase in health IT expenditure would result in an 8% increase in total revenue. The effect of health IT was more associated with gross outpatient revenue than gross inpatient revenue. Increased health IT expenses were associated with greater hospital revenue. Future research needs to confirm our findings with a national sample of hospitals.

  18. DC-Compensated Current Transformer †

    PubMed Central

    Ripka, Pavel; Draxler, Karel; Styblíková, Renata

    2016-01-01

    Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component. PMID:26805830

  19. Knee Muscle Strength at Varying Angular Velocities and Associations with Gross Motor Function in Ambulatory Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Hong, Wei-Hsien; Chen, Hseih-Ching; Shen, I-Hsuan; Chen, Chung-Yao; Chen, Chia-Ling; Chung, Chia-Ying

    2012-01-01

    The aim of this study was to evaluate the relationships of muscle strength at different angular velocities and gross motor functions in ambulatory children with cerebral palsy (CP). This study included 33 ambulatory children with spastic CP aged 6-15 years and 15 children with normal development. Children with CP were categorized into level I (n =…

  20. Association between Body Composition and Motor Performance in Preschool Children

    PubMed Central

    Kakebeeke, Tanja H.; Lanzi, Stefano; Zysset, Annina E.; Arhab, Amar; Messerli-Bürgy, Nadine; Stuelb, Kerstin; Leeger-Aschmann, Claudia S.; Schmutz, Einat A.; Meyer, Andrea H.; Kriemler, Susi; Munsch, Simone; Jenni, Oskar G.; Puder, Jardena J.

    2017-01-01

    Objective: Being overweight makes physical movement more difficult. Our aim was to investigate the association between body composition and motor performance in preschool children. Methods: A total of 476 predominantly normal-weight preschool children (age 3.9 ± 0.7 years; m/f: 251/225; BMI 16.0 ± 1.4 kg/m2) participated in the Swiss Preschoolers' Health Study (SPLASHY). Body composition assessments included skinfold thickness, waist circumference (WC), and BMI. The Zurich Neuromotor Assessment (ZNA) was used to assess gross and fine motor tasks. Results: After adjustment for age, sex, socioeconomic status, sociocultural characteristics, and physical activity (assessed with accelerometers), skinfold thickness and WC were both inversely correlated with jumping sideward (gross motor task β-coefficient −1.92, p = 0.027; and −3.34, p = 0.014, respectively), while BMI was positively correlated with running performance (gross motor task β-coefficient 9.12, p = 0.001). No significant associations were found between body composition measures and fine motor tasks. Conclusion: The inverse associations between skinfold thickness or WC and jumping sideward indicate that children with high fat mass may be less proficient in certain gross motor tasks. The positive association between BMI and running suggests that BMI might be an indicator of fat-free (i.e., muscle) mass in predominantly normal-weight preschool children. PMID:28934745

  1. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase of the phase-shifting approach can be unwrapped by using Gray code, but both the wrapped phase error and the Gray code decoding error can result in period jump errors, which lead to gross measurement error. Therefore, this paper presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain the high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, and a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis and further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
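
The period-jump correction at the heart of this approach can be sketched as standard reference-based phase unwrapping. This is a simplified stand-in for the paper's dual-frequency analog-code comparison, and the names are illustrative:

```python
import math

TWO_PI = 2 * math.pi

def unwrap_with_reference(wrapped, reference):
    """Recover the absolute phase from a wrapped value in [0, 2*pi)
    by choosing the period index k that best matches a coarse
    reference phase (e.g. a low-frequency absolute analog code).
    A wrongly decoded Gray code would shift k and cause a period
    jump of 2*pi; taking the nearest integer to
    (reference - wrapped) / (2*pi) removes that jump."""
    k = round((reference - wrapped) / TWO_PI)
    return wrapped + k * TWO_PI

# Wrapped high-frequency phase plus a coarse low-frequency estimate.
wrapped = 0.3          # measured wrapped phase, radians
reference = 19.2       # coarse absolute phase from low-frequency code
absolute = unwrap_with_reference(wrapped, reference)
print(absolute)        # 0.3 + 3 * 2*pi ≈ 19.15
```

The key property is that the coarse reference only needs to be accurate to within half a period for the rounding step to pick the correct k.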

  2. Normal Language Skills and Normal Intelligence in a Child with de Lange Syndrome.

    ERIC Educational Resources Information Center

    Cameron, Thomas H.; Kelly, Desmond P.

    1988-01-01

    The subject of this case report is a two-year, seven-month-old girl with de Lange syndrome, normal intelligence, and age-appropriate language skills. She demonstrated initial delays in gross motor skills and in receptive and expressive language but responded well to intensive speech and language intervention, as well as to physical therapy.…

  3. Divergence of fine and gross motor skills in prelingually deaf children: implications for cochlear implantation.

    PubMed

    Horn, David L; Pisoni, David B; Miyamoto, Richard T

    2006-08-01

    The objective of this study was to assess relations between fine and gross motor development and spoken language processing skills in pediatric cochlear implant users. The authors conducted a retrospective analysis of longitudinal data. Prelingually deaf children who received a cochlear implant before age 5 and had no known developmental delay or cognitive impairment were included in the study. Fine and gross motor development were assessed before implantation using the Vineland Adaptive Behavioral Scales, a standardized parental report of adaptive behavior. Fine and gross motor scores reflected a given child's motor functioning with respect to a normative sample of typically developing, normal-hearing children. Relations between these preimplant scores and postimplant spoken language outcomes were assessed. In general, gross motor scores were found to be positively related to chronologic age, whereas the opposite trend was observed for fine motor scores. Fine motor scores were more strongly correlated with postimplant expressive and receptive language scores than gross motor scores. Our findings suggest a dissociation between fine and gross motor development in prelingually deaf children: fine motor skills, in contrast to gross motor skills, tend to be delayed as the prelingually deaf children get older. These findings provide new knowledge about the links between motor and spoken language development and suggest that auditory deprivation may lead to atypical development of certain motor and language skills that share common cortical processing resources.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    David A. King, CHP, PMP

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on August 22, 2012. Representatives from the U.S. Nuclear Regulatory Commission and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses. Results were compared using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates that, at a 99% confidence interval, split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty. The NFS split sample report does not specify the confidence level of reported uncertainties. Therefore, standard two sigma reporting is assumed and uncertainty values were divided by 1.96. A comparison of split sample results, using the DER equation, indicates one set with a DER greater than 3. A DER of 3.1 is calculated for gross alpha results from ORAU sample 5198W0003 and NFS sample MCU-310212003. The ORAU result is 0.98 ± 0.30 pCi/L (value ± 2 sigma) compared to the NFS result of -0.08 ± 0.60 pCi/L. Relatively high DER values are not unexpected for low (e.g., background) analyte concentrations analyzed by separate laboratories, as is the case here. It is noted, however, that NFS uncertainties are at least twice the ORAU uncertainties, which contributes to the elevated DER value. Differences in ORAU and NFS minimum detectable activities are even more pronounced. In conclusion, the comparison of ORAU and NFS split samples produces reasonably consistent results for low (e.g., background) concentrations.
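
The DER used in both split-sample comparisons is straightforward to compute. A sketch (variable names are mine) using the gross alpha pair quoted above, with the 2-sigma uncertainties converted to 1-sigma by dividing by 1.96 as the report assumes:

```python
import math

def der(x1, u1, x2, u2):
    """Duplicate error ratio (normalized absolute difference):
    |x1 - x2| / sqrt(u1^2 + u2^2), with u1 and u2 at one sigma."""
    return abs(x1 - x2) / math.sqrt(u1**2 + u2**2)

# ORAU 5198W0003 vs NFS MCU-310212003 gross alpha (pCi/L, ± 2 sigma).
orau, orau_2s = 0.98, 0.30
nfs, nfs_2s = -0.08, 0.60
result = der(orau, orau_2s / 1.96, nfs, nfs_2s / 1.96)
print(round(result, 1))  # 3.1, matching the reported value
```

Values at or below 3 indicate agreement within the stated uncertainties; this pair just exceeds the criterion, as the abstract notes.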

  5. Relation between hand function and gross motor function in full term infants aged 4 to 8 months.

    PubMed

    Nogueira, Solange F; Figueiredo, Elyonara M; Gonçalves, Rejane V; Mancini, Marisa C

    2015-01-01

    In children, reaching emerges around four months of age and is followed by rapid changes in hand function and concomitant changes in gross motor function, including the acquisition of independent sitting. Although there is a close functional relationship between these domains, to date they have been investigated separately. The aim was to investigate the longitudinal profile of changes and the relationship between the development of hand function (i.e., reaching for and manipulating an object) and gross motor function in 13 normally developing children born at term who were evaluated every 15 days from 4 to 8 months of age. The number of reaches and the period (i.e., time) of manipulation of an object were extracted from video synchronized with the Qualisys® movement analysis system. Gross motor function was measured using the Alberta Infant Motor Scale. ANOVA for repeated measures was used to test the effect of age on the number of reaches, the time of manipulation, and gross motor function. Hierarchical regression models were used to test the associations of reaching and manipulation with gross motor function. Results revealed a significant increase in the number of reaches (p < 0.001), the time of manipulation (p < 0.001), and gross motor function (p < 0.001) over time, as well as associations between reaching and gross motor function (R2 = 0.84; p < 0.001) and between manipulation and gross motor function (R2 = 0.13; p = 0.02) from 4 to 6 months of age. Associations from 6 to 8 months of age were not significant. The relationship between hand function and gross motor function was not constant, and the age span from 4 to 6 months was a critical period of interdependency of hand function and gross motor function development.

  6. Biological Effects of Laser Radiation. Volume II. Review of Our Studies on Biological Effects of Laser Radiation-1965-1971.

    DTIC Science & Technology

    1978-10-17

    alter the immunological capability/virulence ratio of influenza virus ; gross and microscopic descriptions of lesions, their natural history, and...with Viruses 4 Chapter 4 Studies on Normal Animals 6 Chapter 5 Tumor-Related Laser Radiation Studies and Potential for Carcinogenesis 17 Chapter 6...affect the immunological capability/virulence ratio of influenza virus in order to explore facilitation of vaccine production; 5) extensive gross and

  7. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF.

    PubMed

    Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul

    2006-09-01

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction to normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2-week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) as the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; the phantom lung receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to tissue surrounding the tumor can theoretically be reduced by PDF-based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.
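
Building a position PDF from trajectory samples and scoring agreement between two PDFs can be sketched in a few lines. The abstract does not give its exact reproducibility metric, so histogram intersection is used here as an illustrative stand-in:

```python
def pdf_from_positions(positions, bins, lo, hi):
    """Histogram of tumor positions normalized to a discrete PDF."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in positions:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    total = len(positions)
    return [c / total for c in counts]

def reproducibility(p, q):
    """Histogram intersection: 1.0 for identical PDFs, 0.0 for disjoint."""
    return sum(min(a, b) for a, b in zip(p, q))

# Hypothetical positions (mm): a long scan vs a single breathing cycle.
long_scan = [0, 1, 1, 2, 2, 2, 3, 3, 4, 5]
short_scan = [1, 2, 2, 3]
p = pdf_from_positions(long_scan, bins=6, lo=0, hi=6)
q = pdf_from_positions(short_scan, bins=6, lo=0, hi=6)
print(reproducibility(p, q))
```

A short scan under-samples the breathing pattern, so its PDF overlaps the long-scan PDF only partially, which is exactly the low-reproducibility regime the study found to cause large dosimetric errors.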

  8. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Ke; Cai, Jing; Brookeman, James

    2006-09-15

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction to normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2-week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) as the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; the phantom lung receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to tissue surrounding the tumor can theoretically be reduced by PDF-based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.

  9. DEVELOPMENT AND APPLICATION OF A NEW AIR POLLUTION MODELING SYSTEM. PART III: AEROSOL-PHASE SIMULATIONS (R823186)

    EPA Science Inventory

    Result from a new air pollution model were tested against data from the Southern California Air Quality Study (SCAQS) period of 26-29 August 1987. Gross errors for sulfate, sodium, light absorption, temperatures, surface solar radiation, sulfur dioxide gas, formaldehyde gas, and ...

  10. 21 CFR 58.130 - Conduct of a nonclinical laboratory study.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... specimen in a manner that precludes error in the recording and storage of data. (d) Records of gross... that specimen histopathologically. (e) All data generated during the conduct of a nonclinical laboratory study, except those that are generated by automated data collection systems, shall be recorded...

  11. 21 CFR 58.130 - Conduct of a nonclinical laboratory study.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... specimen in a manner that precludes error in the recording and storage of data. (d) Records of gross... that specimen histopathologically. (e) All data generated during the conduct of a nonclinical laboratory study, except those that are generated by automated data collection systems, shall be recorded...

  12. 21 CFR 58.130 - Conduct of a nonclinical laboratory study.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... specimen in a manner that precludes error in the recording and storage of data. (d) Records of gross... that specimen histopathologically. (e) All data generated during the conduct of a nonclinical laboratory study, except those that are generated by automated data collection systems, shall be recorded...

  13. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, based on the dwell time method with constant normal spacing for flexible polishing, introduces a normal contour error when fine polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics of MRF. Based on continuously scanning the normal spacing between the workpiece and the laser range finder, a novel method is put forward to measure the normal contour errors on the machining track while polishing a complex surface. The normal contour errors were measured dynamically, by which the workpiece's clamping precision, the multi-axis machining NC program, and the dynamic performance of the MRF machine were verified and security-checked for the MRF process. A unit for measuring the normal contour errors of complex surfaces on-machine was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate for the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was controlled to less than 10 μm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics for engineering application since 2014, and is being used in national large-scale optical engineering projects for processing ultra-precision optical parts.

  14. Association between Body Composition and Motor Performance in Preschool Children.

    PubMed

    Kakebeeke, Tanja H; Lanzi, Stefano; Zysset, Annina E; Arhab, Amar; Messerli-Bürgy, Nadine; Stuelb, Kerstin; Leeger-Aschmann, Claudia S; Schmutz, Einat A; Meyer, Andrea H; Kriemler, Susi; Munsch, Simone; Jenni, Oskar G; Puder, Jardena J

    2017-01-01

    Being overweight makes physical movement more difficult. Our aim was to investigate the association between body composition and motor performance in preschool children. A total of 476 predominantly normal-weight preschool children (age 3.9 ± 0.7 years; m/f: 251/225; BMI 16.0 ± 1.4 kg/m2) participated in the Swiss Preschoolers' Health Study (SPLASHY). Body composition assessments included skinfold thickness, waist circumference (WC), and BMI. The Zurich Neuromotor Assessment (ZNA) was used to assess gross and fine motor tasks. After adjustment for age, sex, socioeconomic status, sociocultural characteristics, and physical activity (assessed with accelerometers), skinfold thickness and WC were both inversely correlated with jumping sideward (gross motor task β-coefficient -1.92, p = 0.027; and -3.34, p = 0.014, respectively), while BMI was positively correlated with running performance (gross motor task β-coefficient 9.12, p = 0.001). No significant associations were found between body composition measures and fine motor tasks. The inverse associations between skinfold thickness or WC and jumping sideward indicates that children with high fat mass may be less proficient in certain gross motor tasks. The positive association between BMI and running suggests that BMI might be an indicator of fat-free (i.e., muscle) mass in predominately normal-weight preschool children. © 2017 The Author(s) Published by S. Karger GmbH, Freiburg.

  15. Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging

    NASA Astrophysics Data System (ADS)

    Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ

    2015-01-01

    Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction-dependent gains. The antenna power pattern varies both as a function of time and frequency. The broadband, time-varying nature of the antenna power pattern, when not corrected, leads to gross errors in full Stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution when the time and frequency dependence of the antenna power pattern is not accounted for. Simulations were conducted with the wideband full Stokes power pattern of the Very Large Array (VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will be significant in wide-band full-polarization mosaic imaging as well, and algorithms to correct them will be crucial for many upcoming large-area surveys (e.g., VLASS).

  16. Prediction of static friction coefficient in rough contacts based on the junction growth theory

    NASA Astrophysics Data System (ADS)

    Spinu, S.; Cerlinca, D.

    2017-08-01

    The classic approach to the slip-stick contact is based on the framework advanced by Mindlin, in which localized slip occurs on the contact area when the local shear traction exceeds the product between the local pressure and the static friction coefficient. This assumption may be too conservative in the case of high tractions arising at the asperities tips in the contact of rough surfaces, because the shear traction may be allowed to exceed the shear strength of the softer material. Consequently, the classic frictional contact model is modified in this paper so that gross sliding occurs when the junctions formed between all contacting asperities are independently sheared. In this framework, when the contact tractions, normal and shear, exceed the hardness of the softer material on the entire contact area, the material of the asperities yields and the junction growth process ends in all contact regions, leading to gross sliding inception. This friction mechanism is implemented in a previously proposed numerical model for the Cattaneo-Mindlin slip-stick contact problem, which is modified to accommodate the junction growth theory. The frictionless normal contact problem is solved first, then the tangential force is gradually increased, until gross sliding inception. The contact problems in the normal and in the tangential direction are successively solved, until one is stabilized in relation to the other. The maximum tangential force leading to a non-vanishing stick area is the static friction force that can be sustained by the rough contact. The static friction coefficient is eventually derived as the ratio between the latter friction force and the normal force.

  17. Brain Research: The Necessity for Separating Sites, Actions and Functions.

    ERIC Educational Resources Information Center

    Meeker, Mary

    Educators, as applied scientists, must work in partnership with investigative scientists who are researching brain functions in order to reach a better understanding of gifted students and students who are intelligent but do not learn. Improper understanding of brain functions can cause gross errors in educational placement. Until recently, the…

  18. a Method of Generating dem from Dsm Based on Airborne Insar Data

    NASA Astrophysics Data System (ADS)

    Lu, W.; Zhang, J.; Xue, G.; Wang, C.

    2018-04-01

Traditional terrestrial survey methods for acquiring DEM cannot meet the requirement of acquiring large quantities of data in real time, whereas a DSM can be obtained quickly using dual-antenna synthetic aperture radar interferometry, and generating the DEM from that DSM is faster and more accurate. It is therefore important to acquire DEM from DSM based on airborne InSAR data. This paper presents a method for generating DEM from DSM accurately. Two steps are applied to acquire an accurate DEM. First, when the DSM is generated by interferometry, unavoidable factors such as layover and shadow produce gross errors that affect the data accuracy, so an adaptive threshold segmentation method is adopted to remove the gross errors, with the threshold selected according to the coherence of the interferometry. Second, the DEM is generated by a progressive triangulated irregular network densification filtering algorithm. Finally, the experimental results are compared with existing high-precision DEM results. The results show that this method can effectively filter out buildings, vegetation and other objects to obtain a high-precision DEM.
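The coherence-based gross-error removal step can be sketched as follows. This is a minimal illustration on synthetic data: the abstract does not specify the threshold-selection rule, so a low quantile of the coherence map stands in for it, and low-coherence DSM cells are simply marked NaN for later interpolation.

```python
import numpy as np

def mask_gross_errors(dsm, coherence, quantile=0.1):
    """Flag DSM cells as gross errors where interferometric coherence falls
    below an adaptive threshold (here: a low quantile of the coherence map,
    a stand-in for the paper's unspecified selection rule)."""
    threshold = float(np.quantile(coherence, quantile))
    cleaned = dsm.astype(float).copy()
    cleaned[coherence < threshold] = np.nan   # mark low-coherence cells for later interpolation
    return cleaned, threshold

rng = np.random.default_rng(0)
dsm = rng.normal(100.0, 5.0, size=(50, 50))         # synthetic heights, m
coherence = rng.uniform(0.2, 1.0, size=(50, 50))    # synthetic coherence map
cleaned, thr = mask_gross_errors(dsm, coherence)
print(np.isnan(cleaned).mean())   # fraction masked, ~0.1 by construction
```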

  19. Towards Complete, Geo-Referenced 3d Models from Crowd-Sourced Amateur Images

    NASA Astrophysics Data System (ADS)

    Hartmann, W.; Havlena, M.; Schindler, K.

    2016-06-01

Despite a lot of recent research, photogrammetric reconstruction from crowd-sourced imagery is plagued by a number of recurrent problems. (i) The resulting models are chronically incomplete, because even touristic landmarks are photographed mostly from a few "canonical" viewpoints. (ii) Man-made constructions tend to exhibit repetitive structure and rotational symmetries, which lead to gross errors in the 3D reconstruction and aggravate the problem of incomplete reconstruction. (iii) The models are normally not geo-referenced. In this paper, we investigate the possibility of using sparse GNSS geo-tags from digital cameras to address these issues and push the boundaries of crowd-sourced photogrammetry. A small proportion of the images in Internet collections (≈10%) do possess geo-tags. While the individual geo-tags are very inaccurate, they nevertheless can help to address the problems above. By providing approximate geo-reference for partial reconstructions, they make it possible to fuse those pieces into more complete models; the capability to fuse partial reconstructions opens up the possibility to be more restrictive in the matching phase and avoid errors due to repetitive structure; and collectively, the redundant set of low-quality geo-tags can provide reasonably accurate absolute geo-reference. We show that even a few, noisy geo-tags can help to improve architectural models, compared to pure structure-from-motion based only on image correspondence.

  20. An Application of Semi-parametric Estimator with Weighted Matrix of Data Depth in Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Pan, X. G.; Wang, J. Q.; Zhou, H. Y.

    2013-05-01

A variance component estimation (VCE) method based on a semi-parametric estimator with a weighted matrix of data depth is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of space- and ground-combined TT&C (Telemetry, Tracking and Command) technology. The uncertain model error is estimated with the semi-parametric estimator model, and outliers are restrained with the weighted matrix of data depth. With the model error and outliers restrained, the VCE can be improved and used to estimate the weight matrix for observation data containing uncertain model errors or outliers. A simulation experiment was carried out under space- and ground-combined TT&C conditions. The results show that the new VCE based on model error compensation can determine rational weights for the multi-source heterogeneous data and restrain outlier data.

  1. Isolation and Characterization of Prostate Cancer Stem Cells

    DTIC Science & Technology

    2012-08-01

    guidelines. Adjacent prostate tissue was snap frozen in liquid Nitrogen or fixed in formalin and paraffin-embedded to evaluate anatomy and glandular...phenotypically normal and fertile [35]. We examined the prostate at 8 and 20 weeks of age and found no difference in gross anatomy and histology among WT...gross anatomy of the prostate of WT and CD1662/2 mice at 8 weeks of age, scale bar: 2 mm. Bottom: HE staining of DLP section from WT and CD1662/2 mice

  2. Toddler Development

    MedlinePlus

    ... is exciting to watch your toddler learn new skills. The normal development of children aged 1-3 includes several areas: Gross motor - walking, running, climbing Fine motor - feeding themselves, drawing Sensory - seeing, hearing, tasting, ...

  3. A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.

    PubMed

    Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B

    2013-09-01

To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system operation in real-time was simulated offline using previously acquired images for 19 IMRT patient deliveries with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
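The cumulative signal check can be sketched as a running comparison of integrated frame signals. This is an illustrative reconstruction, not the published implementation: the frame contents, the 10% tolerance, and the flagging rule are assumptions loosely mirroring the 10% MU error case in the abstract.

```python
import numpy as np

def cumulative_signal_check(measured_frames, predicted_frames, tolerance=0.1):
    """Compare the running sum of integrated EPID frame signals against the
    prediction; return the index of the first frame where the relative
    deviation exceeds `tolerance`, or None if the delivery stays in bounds."""
    meas = np.cumsum([f.sum() for f in measured_frames])
    pred = np.cumsum([f.sum() for f in predicted_frames])
    rel_dev = np.abs(meas - pred) / np.maximum(pred, 1e-12)
    flagged = np.flatnonzero(rel_dev > tolerance)
    return int(flagged[0]) if flagged.size else None

# Synthetic delivery: 20 uniform frames, with a 25% over-delivery from frame 10 on.
frames_pred = [np.full((4, 4), 1.0) for _ in range(20)]
frames_meas = [f * (1.25 if i >= 10 else 1.0) for i, f in enumerate(frames_pred)]
print(cumulative_signal_check(frames_meas, frames_pred))   # frame 16 first breaches 10%
```

Because the check is cumulative, a sustained error is flagged only once its integrated effect crosses the tolerance, which trades detection latency for robustness to single-frame noise.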

  4. Improved assessment of gross and net primary productivity of Canada's landmass

    NASA Astrophysics Data System (ADS)

    Gonsamo, Alemu; Chen, Jing M.; Price, David T.; Kurz, Werner A.; Liu, Jane; Boisvenue, Céline; Hember, Robbie A.; Wu, Chaoyang; Chang, Kuo-Hsien

    2013-12-01

We assess Canada's gross primary productivity (GPP) and net primary productivity (NPP) using the boreal ecosystem productivity simulator (BEPS) at 250 m spatial resolution with improved input parameter and driver fields and phenology and nutrient release parameterization schemes. BEPS is a process-based two-leaf enzyme kinetic terrestrial ecosystem model designed to simulate energy, water, and carbon (C) fluxes using spatial data sets of meteorology, remotely sensed land surface variables, soil properties, and photosynthesis and respiration rate parameters. Two improved key land surface variables, leaf area index (LAI) and land cover type, are derived at 250 m from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. For diagnostic error assessment, we use nine forest flux tower sites where all measured C flux, meteorology, and ancillary data sets are available. The errors due to input drivers and parameters are then independently corrected for Canada-wide GPP and NPP simulations. The optimized LAI use, for example, reduced the absolute bias in GPP from 20.7% to 1.1% for hourly BEPS simulations. Following the error diagnostics and corrections, daily GPP and NPP are simulated over Canada at 250 m spatial resolution, the highest resolution simulation yet for the country or any other comparable region. Total NPP (GPP) for Canada's land area was 1.27 (2.68) Pg C for 2008, with forests contributing 1.02 (2.2) Pg C. The annual comparisons between measured and simulated GPP show that the mean differences are not statistically significant (p > 0.05, paired t test). The main BEPS simulation error sources are from the driver fields.

  5. The effectiveness of inking needle core prostate biopsies for preventing patient specimen identification errors: a technique to address Joint Commission patient safety goals in specialty laboratories.

    PubMed

    Raff, Lester J; Engel, George; Beck, Kenneth R; O'Brien, Andrea S; Bauer, Meagan E

    2009-02-01

The elimination or reduction of medical errors has been a main focus of health care enterprises in the United States since the year 2000. Elimination of errors in patient and specimen identification is a key component of this focus and is the number one goal in the Joint Commission's 2008 National Patient Safety Goals Laboratory Services Program. To evaluate the effectiveness of using permanent inks to maintain specimen identity in sequentially submitted prostate needle biopsies. For a 12-month period, a grossing technician stained each prostate core with permanent ink developed for inking of pathology specimens. A different color was used for each patient, with all the prostate cores from all vials for a particular patient inked with the same color. Five colors were used sequentially: green, blue, yellow, orange, and black. The ink was diluted with distilled water to a consistency that allowed application of a thin, uniform coating of ink along the edges of the prostate core. The time required to ink patient specimens comprising different numbers of vials and prostate biopsies was timed. The number and type of inked specimen discrepancies were evaluated. The identified discrepancy rate for prostate biopsy patients was 0.13%. The discrepancy rate in terms of total number of prostate blocks was 0.014%. Diluted inks adhered to biopsy contours throughout tissue processing. The tissue showed no untoward reactions to the inks. Inking did not affect staining (histochemical or immunohistochemical) or pathologic evaluation. On average, inking prostate needle biopsies increases grossing time by 20%. Inking of all prostate core biopsies with colored inks, in sequential order, is an aid in maintaining specimen identity. It is a simple and effective method of addressing Joint Commission patient safety goals by maintaining specimen identity during processing of similar types of gross specimens. This technique may be applicable in other specialty laboratories and high-volume laboratories, where many similar tissue specimens are processed.

  6. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
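A reasonableness check of the kind described (route length and magnitude of course changes) can be sketched as follows. The waypoint format, thresholds, and flat-earth geometry are illustrative assumptions, not from any avionics standard.

```python
import math

def reasonableness_check(old_waypoints, new_waypoints,
                         max_length_ratio=1.5, max_course_change_deg=90.0):
    """Flag a flight-plan modification whose total route length grows by more
    than `max_length_ratio`, or that introduces a course change sharper than
    `max_course_change_deg`. Waypoints are (x, y) pairs in a flat local frame."""
    def length(wps):
        return sum(math.dist(a, b) for a, b in zip(wps, wps[1:]))

    def max_turn(wps):
        worst = 0.0
        for a, b, c in zip(wps, wps[1:], wps[2:]):
            h1 = math.atan2(b[1] - a[1], b[0] - a[0])   # heading into waypoint b
            h2 = math.atan2(c[1] - b[1], c[0] - b[0])   # heading out of waypoint b
            turn = abs(math.degrees(h2 - h1))
            worst = max(worst, min(turn, 360.0 - turn))  # wrap to [0, 180]
        return worst

    warnings = []
    if length(new_waypoints) > max_length_ratio * length(old_waypoints):
        warnings.append("route length")
    if max_turn(new_waypoints) > max_course_change_deg:
        warnings.append("course change")
    return warnings

old = [(0, 0), (50, 0), (100, 0)]
bad = [(0, 0), (50, 0), (50, 120), (100, 0)]   # long detour with a sharp reversal
print(reasonableness_check(old, bad))           # ['route length', 'course change']
```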

  7. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
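The inflation effect can be reproduced in a small Monte Carlo sketch. The sample size, replicate count, genotype model, and trait samplers below are illustrative stand-ins, not the GAW 19 data, and rejection uses a normal approximation (|t| > 1.96 for a nominal alpha of 0.05) rather than exact p-values.

```python
import numpy as np

def type1_error_rate(maf, trait_sampler, n=1000, reps=2000, seed=1):
    """Regress a trait simulated under the null (no genetic effect) on an
    additively coded SNV genotype and count how often |t| exceeds 1.96."""
    rng = np.random.default_rng(seed)
    hits, valid = 0, 0
    for _ in range(reps):
        geno = rng.binomial(2, maf, size=n)   # genotype coded 0/1/2
        if geno.std() == 0:                   # monomorphic draw: no test possible
            continue
        trait = trait_sampler(rng, n)         # trait drawn independently of genotype
        r = np.corrcoef(geno, trait)[0, 1]
        t = r * np.sqrt((n - 2) / (1.0 - r * r))
        hits += abs(t) > 1.96
        valid += 1
    return hits / valid

normal_trait = lambda rng, n: rng.standard_normal(n)
skewed_trait = lambda rng, n: rng.gamma(0.5, size=n)   # strongly right-skewed

r_norm = type1_error_rate(0.01, normal_trait)
r_skew = type1_error_rate(0.01, skewed_trait)
print(r_norm)   # near the nominal 0.05 for a normal trait
print(r_skew)   # typically inflated for a rare SNV with a skewed trait
```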

  8. Relation between hand function and gross motor function in full term infants aged 4 to 8 months

    PubMed Central

    Nogueira, Solange F.; Figueiredo, Elyonara M.; Gonçalves, Rejane V.; Mancini, Marisa C.

    2015-01-01

Background: In children, reaching emerges around four months of age, which is followed by rapid changes in hand function and concomitant changes in gross motor function, including the acquisition of independent sitting. Although there is a close functional relationship between these domains, to date they have been investigated separately. Objective: To investigate the longitudinal profile of changes and the relationship between the development of hand function (i.e. reaching for and manipulating an object) and gross motor function in 13 normally developing children born at term who were evaluated every 15 days from 4 to 8 months of age. Method: The number of reaches and the duration (i.e. time) of manipulation of an object were extracted from video synchronized with the Qualisys® movement analysis system. Gross motor function was measured using the Alberta Infant Motor Scale. ANOVA for repeated measures was used to test the effect of age on the number of reaches, the time of manipulation and gross motor function. Hierarchical regression models were used to test the associations of reaching and manipulation with gross motor function. Results: Results revealed a significant increase in the number of reaches (p<0.001), the time of manipulation (p<0.001) and gross motor function (p<0.001) over time, as well as associations between reaching and gross motor function (R2=0.84; p<0.001) and manipulation and gross motor function (R2=0.13; p=0.02) from 4 to 6 months of age. Associations from 6 to 8 months of age were not significant. Conclusion: The relationship between hand function and gross motor function was not constant, and the age span from 4 to 6 months was a critical period of interdependency of hand function and gross motor function development. PMID:25714437

  9. A Developmental Study of Static Postural Control and Superimposed Arm Movements in Normal and Slowly Developing Children.

    ERIC Educational Resources Information Center

    Fisher, Janet M.

    Selected electromyographic parameters underlying static postural control in 4, 6, and 8 year old normally and slowly developing children during performance of selected arm movements were studied. Developmental delays in balance control were assessed by the Cashin Test of Motor Development (1974) and/or the Williams Gross Motor Coordination Test…

  10. Histologic characterization of the cat middle ear: in sickness and in health.

    PubMed

    Sula, M M; Njaa, B L; Payton, M E

    2014-09-01

The purpose of this study was to establish the normal microscopic appearance of the middle ear of the cat while concurrently characterizing gross and microscopic lesions reflecting spontaneous otitis media. Both ears from 50 cats were examined grossly and processed for histologic examination of the external, middle, and internal ear on a single slide. Gross lesions of the middle ear were present in 14 of 100 (14%) and included turbid fluid, frank pus, hemorrhage, and fibrous thickening of the auricular mucoperiosteum. Histologically, 48 of 100 (48%) ears had evidence of ongoing or previous inflammatory middle ear disease, including proteinaceous fluid; vascular ectasia; expansion of the auricular mucoperiosteum by neutrophils, lymphocytes, and macrophages; cholesterol clefts; hemorrhage; fibrin; granulation tissue; membranous pseudo-glands; fibrosis; and proliferation and/or osteolysis of the tympanic and septum bullae. Histologic lesions were identified in 34 of 100 (34%) ears lacking gross evidence of disease. Ears were classified histologically as either normal (52/100 [52%]) or diseased (48/100 [48%]). Diseased ears were further classified as mildly to moderately (37/100 [37%]) or severely (11/100 [11%]) affected. Internal ear involvement was present in 11 of 100 (11%) ears. Histologic evidence of middle ear disease in cats is far greater than gross lesions or clinical literature suggests; further investigation and correlation of clinical and histologic disease are warranted. With minimal additional preparation, diagnostic specimens may be readily prepared and evaluated for this integral sensing organ. © The Author(s) 2013.

  11. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen

    2005-10-01

Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error. Combined random and systematic dose errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
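The random-setup-error simulation described (convolving each beam's fluence with the setup-error probability density) can be sketched with a separable Gaussian convolution; the field geometry, pixel size, and sigma below are illustrative, not the study's beams.

```python
import numpy as np

def blur_fluence(fluence, sigma_mm, pixel_mm=1.0):
    """Convolve a 2D fluence map with a Gaussian setup-error PDF by applying
    a separable 1D kernel along each axis; sigma is the setup-error SD."""
    sigma_px = sigma_mm / pixel_mm
    radius = int(np.ceil(4 * sigma_px))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma_px) ** 2)
    kernel /= kernel.sum()                     # normalize so total fluence is preserved
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, fluence)
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, out)
    return out

field = np.zeros((60, 60))
field[20:40, 20:40] = 1.0                      # idealized open 20 x 20 mm field
blurred = blur_fluence(field, sigma_mm=3.0)
print(round(float(blurred[30, 30]), 3))        # field center: essentially unchanged
print(round(float(blurred[20, 30]), 3))        # field edge: a penumbra forms
```

The blur leaves the high-dose center nearly intact but smears the field edges, which is why random errors mainly degrade dose in gradient regions while systematic shifts move the whole distribution.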

  12. Is Coefficient Alpha Robust to Non-Normal Data?

    PubMed Central

    Sheng, Yanyan; Sheng, Zhaohui

    2011-01-01

    Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
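The kind of simulation described can be sketched for the essentially tau-equivalent case. This toy version assumes unit-variance true and error scores (so the population alpha is 10/11 ≈ 0.909 for k = 10 items) and uses a centered exponential as the skewed error distribution; it is not the paper's full design.

```python
import numpy as np

def cronbach_alpha(scores):
    """Sample alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def simulate_alpha(error_sampler, n=500, k=10, reps=300, seed=7):
    """Mean and SD of sample alpha across replications for tau-equivalent
    items X_ij = T_i + e_ij with Var(T) = Var(e) = 1."""
    rng = np.random.default_rng(seed)
    alphas = []
    for _ in range(reps):
        T = rng.standard_normal((n, 1))        # common true score per examinee
        e = error_sampler(rng, (n, k))         # item-specific error scores
        alphas.append(cronbach_alpha(T + e))
    return float(np.mean(alphas)), float(np.std(alphas))

normal_err = lambda rng, shape: rng.standard_normal(shape)
skewed_err = lambda rng, shape: rng.gamma(1.0, size=shape) - 1.0   # unit variance, skewness 2

m_norm, s_norm = simulate_alpha(normal_err)
m_skew, s_skew = simulate_alpha(skewed_err)
print(m_norm, s_norm)   # mean near the population value 10/11
print(m_skew, s_skew)
```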

  13. Interaction of Language Processing and Motor Skill in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    DiDonato Brumbach, Andrea C.; Goffman, Lisa

    2014-01-01

    Purpose: To examine how language production interacts with speech motor and gross and fine motor skill in children with specific language impairment (SLI). Method: Eleven children with SLI and 12 age-matched peers (4-6 years) produced structurally primed sentences containing particles and prepositions. Utterances were analyzed for errors and for…

  14. Simulation techniques for estimating error in the classification of normal patterns

    NASA Technical Reports Server (NTRS)

    Whitsitt, S. J.; Landgrebe, D. A.

    1974-01-01

    Methods of efficiently generating and classifying samples with specified multivariate normal distributions were discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measure for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
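Estimating the classification error for two normal patterns and comparing it with the Chernoff bound can be sketched as below; the bound is evaluated at s = 1/2 (the Bhattacharyya bound) for two equiprobable Gaussian classes with invented parameters.

```python
import numpy as np

def bhattacharyya_bound(mu1, mu2, cov1, cov2):
    """Chernoff bound at s = 1/2 on the Bayes error for two equiprobable
    Gaussian classes."""
    cov = 0.5 * (cov1 + cov2)
    d = mu2 - mu1
    dist = 0.125 * d @ np.linalg.solve(cov, d) + 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 0.5 * np.exp(-dist)

def empirical_error(mu1, mu2, cov1, cov2, n=200000, seed=3):
    """Monte Carlo estimate of the Bayes error: draw labeled samples and
    classify each with the true log-likelihood ratio."""
    rng = np.random.default_rng(seed)
    x1 = rng.multivariate_normal(mu1, cov1, n)
    x2 = rng.multivariate_normal(mu2, cov2, n)
    def loglik(x, mu, cov):
        d = x - mu
        icov = np.linalg.inv(cov)
        return -0.5 * np.einsum("ij,jk,ik->i", d, icov, d) - 0.5 * np.log(np.linalg.det(cov))
    err1 = np.mean(loglik(x1, mu2, cov2) > loglik(x1, mu1, cov1))
    err2 = np.mean(loglik(x2, mu1, cov1) > loglik(x2, mu2, cov2))
    return 0.5 * (err1 + err2)

mu1, mu2 = np.zeros(2), np.array([2.0, 0.0])
I = np.eye(2)
bayes_err = empirical_error(mu1, mu2, I, I)
bound = bhattacharyya_bound(mu1, mu2, I, I)
print(bayes_err, bound)   # ~0.159 (exactly Phi(-1)) vs bound ~0.303
```

For equal covariances the estimated error tracks the closed-form value Φ(−1), and the Bhattacharyya bound sits comfortably above it, illustrating why such bounds are conservative separability measures.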

  15. Dietetic Treatment of Diabetes Mellitus with Special Reference to High Blood-pressure

    PubMed Central

    Embleton, Dennis

    1938-01-01

    The error in a diabetic is essentially a carbohydrate intolerance, and correction of this defect should be aimed at in treatment. Dietetic treatment of diabetes is more readily studied in early cases or cases in the pre-diabetic state, before arterial degeneration and other catastrophes have become manifest. It is suggested that such a condition exists in obese subjects with a carbohydrate intolerance. A high protein diet based on a study of these cases is brought forward. This diet has been shown to operate favourably in diabetic states. Many cases of reasonable severity can be brought to develop a normal or nearly normal glucose tolerance curve and retain this state over a period of years. Cases in this state are better able to resist concomitant infections without deterioration of their tolerance than cases imperfectly balanced with insulin. The high protein diet can be used in cases of hyperpiesia in the absence of gross kidney damage. These cases show a steady and lasting drop in blood-pressure without the necessity of employing rest. The value of the pure fruit diet in increasing tolerance of certain diabetics to carbohydrate is demonstrated. The indiscriminate use of insulin in hyperglycæmic states is deprecated on the grounds that it is frequently unnecessary, and though it may balance it does not necessarily rectify the main deficiency of carbohydrate intolerance. By the use of this simple high protein diet, where no weighing, &c., is required, a large number of diabetics at present on insulin could be readily dealt with, a return to a normal or nearly normal glucose tolerance curve being obtained and maintained. PMID:19991654

  16. Assessment of gross motor skills of at-risk infants: predictive validity of the Alberta Infant Motor Scale.

    PubMed

    Darrah, J; Piper, M; Watt, M J

    1998-07-01

    The Alberta Infant Motor Scale (AIMS) is a norm-referenced measure of infant gross motor development. The objectives of this study were: (1) to establish the best cut-off scores on the AIMS for predictive purposes, and (2) to compare the predictive abilities of the AIMS with those of the Movement Assessment of Infants (MAI) and the Peabody Developmental Gross Motor Scale (PDGMS). One hundred and sixty-four infants were assessed at 4 and 8 months adjusted ages on the three measures. A pediatrician assessed each infant's gross motor development at 18 months as normal, suspicious, or abnormal. For the AIMS, two different cut-off points were identified: the 10th centile at 4 months and the 5th centile at 8 months. The MAI provided the best specificity rates at 4 months while the AIMS was superior in specificity at 8 months. Sensitivity rates were comparable between the two tests. The PDGMS in general demonstrated poor predictive abilities.

  17. Development and demonstration of a Lagrangian dispersion modeling system for real-time prediction of smoke haze pollution from biomass burning in Southeast Asia

    NASA Astrophysics Data System (ADS)

    Hertwig, Denise; Burgin, Laura; Gan, Christopher; Hort, Matthew; Jones, Andrew; Shaw, Felicia; Witham, Claire; Zhang, Kathy

    2015-12-01

    Transboundary smoke haze caused by biomass burning frequently causes extreme air pollution episodes in maritime and continental Southeast Asia. With millions of people being affected by this type of pollution every year, the task to introduce smoke haze related air quality forecasts is urgent. We investigate three severe haze episodes: June 2013 in Maritime SE Asia, induced by fires in central Sumatra, and March/April 2013 and 2014 on mainland SE Asia. Based on comparisons with surface measurements of PM10 we demonstrate that the combination of the Lagrangian dispersion model NAME with emissions derived from satellite-based active-fire detection provides reliable forecasts for the region. Contrasting two fire emission inventories shows that using algorithms to account for fire pixel obscuration by cloud or haze better captures the temporal variations and observed persistence of local pollution levels. Including up-to-date representations of fuel types in the area and using better conversion and emission factors is found to more accurately represent local concentration magnitudes, particularly for peat fires. With both emission inventories the overall spatial and temporal evolution of the haze events is captured qualitatively, with some error attributed to the resolution of the meteorological data driving the dispersion process. In order to arrive at a quantitative agreement with local PM10 levels, the simulation results need to be scaled. Considering the requirements of operational forecasts, we introduce a real-time bias correction technique to the modeling system to address systematic and random modeling errors, which successfully improves the results in terms of reduced normalized mean biases and fractional gross errors.
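The evaluation metrics named here, normalized mean bias (NMB) and fractional gross error (FGE), have standard definitions that can be sketched directly; the PM10 values and the simple multiplicative scaling below are invented for illustration and are not the paper's bias-correction technique.

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return (model - obs).sum() / obs.sum()

def fractional_gross_error(model, obs):
    """FGE = mean of 2 * |model - obs| / (model + obs)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.mean(2.0 * np.abs(model - obs) / (model + obs))

obs   = [80.0, 120.0, 150.0, 200.0]   # hypothetical PM10, ug/m3
model = [60.0, 100.0, 180.0, 150.0]

nmb = normalized_mean_bias(model, obs)
fge = fractional_gross_error(model, obs)
print(nmb, fge)   # NMB ~ -0.109 (overall underprediction), FGE ~ 0.234

scale = 1.0 / (1.0 + nmb)             # crude multiplicative bias correction
print(normalized_mean_bias(np.array(model) * scale, obs))   # ~0 after scaling
```

A single multiplicative factor drives the NMB to zero by construction; the FGE, being an unsigned error, is the metric that shows whether scatter remains after such a correction.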

  18. Raymond's Paragraph System: an alternative format for the organization of gross pathology reports and its implementation in an academic teaching hospital.

    PubMed

    Dayton, Annette S; Ro, Jae Y; Schwartz, Mary R; Ayala, Alberto G; Raymond, A Kevin

    2009-02-01

Traditionally organized gross pathology reports, which are widely used in pathology resident and pathologists' assistant training programs, may not offer the most efficient method of communicating pertinent information to treating physicians. Instructional materials for teaching gross pathology dictation are limited and the teaching methods used are inconsistent. Raymond's Paragraph System, a gross pathology report formatting system, was developed for use at a cancer center and has been implemented at The Methodist Hospital, Houston, Tex, an academic medical center. Unlike traditionally organized reports, in which everything is normally dictated in one long paragraph, this system separates the dictation into multiple paragraphs, creating an organized and comprehensible report. Recent literature regarding formatting of pathology reports focuses primarily on the organization of specimen diagnoses and overall report layout. However, little literature is available that highlights organization of the specimen gross descriptions. To provide instruction to pathologists, pathology residents and fellows, and pathologists' assistant students about an alternative method of organizing gross pathology reports. Review of pertinent literature relating to preparation of gross pathology reports, report formatting, and pathology laboratory credentialing requirements. The paragraph system offers a viable alternative to traditionally organized pathology reports. Primarily, it provides a working model for medical professionals in training. It helps create user-friendly pathology reports by giving precise and concise information in a standardized format. This article provides an overview of the system and discusses our experience in its implementation.

  19. 10 CFR 71.33 - Package description.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Classification as Type B(U), Type B(M), or fissile material packaging; (2) Gross weight; (3) Model number; (4... absorbers or moderators, and the atomic ratio of moderator to fissile constituents; (5) Maximum normal...

  20. Gross and microscopic pathology of hard and soft corals in New Caledonia

    USGS Publications Warehouse

    Work, Thierry M.; Aeby, Greta S.; Lasne, Gregory; Tribollet, Aline

    2014-01-01

    We surveyed the reefs of Grande Terre, New Caledonia, for coral diseases in 2010 and 2013. Lesions encountered in hard and soft corals were systematically described at the gross and microscopic level. We sampled paired and normal tissues from 101 and 65 colonies in 2010 and 2013, respectively, comprising 51 species of corals from 27 genera. Tissue loss was the most common gross lesion sampled (40%) followed by discoloration (28%), growth anomalies (13%), bleaching (10%), and flatworm infestation (1%). When grouped by gross lesions, the diversity of microscopic lesions as measured by Shannon–Wiener index was highest for tissue loss, followed by discoloration, bleaching, and growth anomaly. Our findings document an extension of the range of certain diseases such as Porites trematodiasis and endolithic hypermycosis (dark spots) to the Western Pacific as well as the presence of a putative cnidarian endosymbiont. We also expand the range of species infected by cell-associated microbial aggregates, and confirm the trend that these aggregates predominate in dominant genera of corals in the Indo-Pacific. This study highlights the importance of including histopathology as an integral component of baseline coral disease surveys, because a given gross lesion might be associated with multiple potential causative agents.

  1. FDDI network test adaptor error injection circuit

    NASA Technical Reports Server (NTRS)

    Eckenrode, Thomas (Inventor); Stauffer, David R. (Inventor); Stempski, Rebecca (Inventor)

    1994-01-01

    An apparatus for injecting errors into a FDDI token ring network is disclosed. The error injection scheme operates by fooling a FORMAC into thinking it sent a real frame of data. This is done by using two RAM buffers. The RAM buffer normally accessed by the RBC/DPC becomes a SHADOW RAM during error injection operation. A dummy frame is loaded into the shadow RAM in order to fool the FORMAC. This data is just like the data that would be used if sending a normal frame, with the restriction that it must be shorter than the error injection data. The other buffer, the error injection RAM, contains the error injection frame. The error injection data is sent out to the media by switching a multiplexor. When the FORMAC is done transmitting the data, the multiplexor is switched back to the normal mode. Thus, the FORMAC is unaware of what happened and the token ring remains operational.

  2. Natural radioactivity of riverbank sediments of the Maritza and Tundja Rivers in Turkey.

    PubMed

    Aytas, Sule; Yusan, Sabriye; Aslani, Mahmoud A A; Karali, Turgay; Turkozu, D Alkim; Gok, Cem; Erenturk, Sema; Gokce, Melis; Oguz, K Firat

    2012-01-01

This article presents the first results of the natural radionuclides in the Maritza and Tundja river sediments, in the vicinity of Edirne city, Turkey. The aim of the article is to describe the natural radioactivity concentrations as a baseline for further studies and to obtain the distribution patterns of radioactivity in trans-boundary river sediments of the Maritza and Tundja, which are shared by Turkey, Bulgaria and Greece. Sediment samples were collected during the period of August 2007-April 2010. The riverbank sediment samples were analyzed firstly for their pH, organic matter content and soil texture. The gross alpha/beta and (238)U, (232)Th and (40)K activity concentrations were then investigated in the collected sediment samples. The mean and standard error of mean values of gross alpha and gross beta activity concentrations were found to be 91 ± 11 and 410 ± 69 Bq/kg for the Maritza and 86 ± 11 and 583 ± 109 Bq/kg for the Tundja river sediments, respectively. Moreover, the mean and standard error of mean values of (238)U, (232)Th and (40)K activity concentrations were determined as 219 ± 68, 128 ± 55 and 298 ± 13 Bq/kg and as 186 ± 98, 121 ± 68 and 222 ± 30 Bq/kg for the Maritza and Tundja Rivers, respectively. Absorbed dose rates (D) and annual effective dose equivalents have been calculated for each sampling point. The average absorbed dose rates and annual effective dose equivalents were found to be 191 and 169 nGy/h, and 2 and 2 mSv/y, for the Maritza and the Tundja river sediments, respectively.

  3. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
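The record above refers to the normalized mean bias factor (NMBF) and normalized mean absolute error factor (NMAEF). As a hedged sketch of the commonly cited original formulations (the generalized variants for negative values that the record describes may differ), the two metrics can be computed as:

```python
import numpy as np

def nmbf(model, obs):
    """Normalized mean bias factor: symmetric treatment of over- and
    underestimation (positive when the model overestimates)."""
    m, o = np.mean(model), np.mean(obs)
    return m / o - 1.0 if m >= o else 1.0 - o / m

def nmaef(model, obs):
    """Normalized mean absolute error factor (same piecewise normalization)."""
    m, o = np.mean(model), np.mean(obs)
    mae = np.mean(np.abs(np.asarray(model) - np.asarray(obs)))
    return mae / o if m >= o else mae / m

# A model that overestimates by a factor of 2 gives NMBF = +1;
# the mirrored underestimation gives NMBF = -1 (symmetry).
print(nmbf([2.0, 4.0, 6.0], [1.0, 2.0, 3.0]))  # 1.0
print(nmbf([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # -1.0
```

The piecewise normalization is what makes the interpretation symmetric: a factor-of-2 overestimate and a factor-of-2 underestimate yield values of equal magnitude and opposite sign.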

  4. An analysis of the Kalman filter in the Gamma Ray Observatory (GRO) onboard attitude determination subsystem

    NASA Technical Reports Server (NTRS)

    Snow, Frank; Harman, Richard; Garrick, Joseph

    1988-01-01

The Gamma Ray Observatory (GRO) spacecraft needs highly accurate attitude knowledge to achieve its mission objectives. Utilizing the fixed-head star trackers (FHSTs) for observations and gyroscopes for attitude propagation, the discrete Kalman filter processes the attitude data to obtain an onboard accuracy of 86 arc seconds (3 sigma). A combination of linear analysis and simulations using the GRO Software Simulator (GROSS) is employed to investigate the Kalman filter for stability and for the effects of corrupted observations (misalignment, noise), incomplete dynamic modeling, and nonlinear errors on the Kalman filter. In the simulations, onboard attitude is compared with true attitude, the sensitivity of attitude error to model errors is graphed, and a statistical analysis is performed on the residuals of the Kalman filter. In this paper, the modeling and sensor errors that degrade the Kalman filter solution beyond mission requirements are studied, and methods are offered to identify the source of these errors.
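The discrete Kalman filter this record describes follows the standard predict/update cycle: propagate the state with the dynamics model (gyro data), then correct with an observation (star tracker). A minimal generic sketch, not the actual GRO flight software:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a discrete Kalman filter.
    x: state estimate, P: its covariance, z: new measurement,
    F/Q: dynamics model and process noise (gyro-style propagation),
    H/R: observation model and measurement noise (star-tracker-style)."""
    # Predict: propagate the state and grow the covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend in the measurement, weighted by the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Scalar attitude-angle toy: prior estimate 0, measurement 1.
x, P = kalman_step(np.array([0.0]), np.array([[1.0]]),
                   np.array([1.0]), np.array([[1.0]]),
                   np.array([[0.01]]), np.array([[1.0]]),
                   np.array([[0.1]]))
print(x, P)  # estimate pulled most of the way toward the measurement
```

The sensitivity studies the abstract mentions amount to perturbing F, H, Q, or R away from truth and observing how the residuals z - Hx drift from their expected zero-mean statistics.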

  5. Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors

    DTIC Science & Technology

    1989-08-21

Another quantity of interest is the vector potential δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potentials δa = eδA..., it then follows that the correlation of the normalized vector potential errors ⟨δa_x(z₁)δa_x(z₂)⟩ is given by a double integral ∫dz′∫dz″ over the field-error correlation ⟨δB_w(z′)δB_w(z″)⟩... Throughout the following, terms of order O(z₁/z) will be neglected. Similarly, for the y-component of the normalized vector potential errors, one

  6. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  7. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  8. Occurrence and Nonoccurrence of Random Sequences: Comment on Hahn and Warren (2009)

    ERIC Educational Resources Information Center

    Sun, Yanlong; Tweney, Ryan D.; Wang, Hongbin

    2010-01-01

    On the basis of the statistical concept of waiting time and on computer simulations of the "probabilities of nonoccurrence" (p. 457) for random sequences, Hahn and Warren (2009) proposed that given people's experience of a finite data stream from the environment, the gambler's fallacy is not as gross an error as it might seem. We deal with two…

  9. Can a sample of Landsat sensor scenes reliably estimate the global extent of tropical deforestation?

    Treesearch

    R. L. Czaplewski

    2003-01-01

Tucker and Townshend (2000) conclude that wall-to-wall coverage is needed to avoid gross errors in estimates of deforestation rates because tropical deforestation is concentrated along roads and rivers. They specifically question the reliability of the 10% sample of Landsat sensor scenes used in the global remote sensing survey conducted by the Food and...

  10. Harnessing Sparse and Low-Dimensional Structures for Robust Clustering of Imagery Data

    ERIC Educational Resources Information Center

    Rao, Shankar Ramamohan

    2009-01-01

    We propose a robust framework for clustering data. In practice, data obtained from real measurement devices can be incomplete, corrupted by gross errors, or not correspond to any assumed model. We show that, by properly harnessing the intrinsic low-dimensional structure of the data, these kinds of practical problems can be dealt with in a uniform…

  11. Modeling Error Distributions of Growth Curve Models through Bayesian Methods

    ERIC Educational Resources Information Center

    Zhang, Zhiyong

    2016-01-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…

  12. Motor-Enriched Learning Activities Can Improve Mathematical Performance in Preadolescent Children.

    PubMed

    Beck, Mikkel M; Lind, Rune R; Geertsen, Svend S; Ritz, Christian; Lundbye-Jensen, Jesper; Wienecke, Jacob

    2016-01-01

Objective: An emerging field of research indicates that physical activity can benefit cognitive functions and academic achievements in children. However, less is known about how academic achievements can benefit from specific types of motor activities (e.g., fine and gross) integrated into learning activities. Thus, the aim of this study was to investigate whether fine or gross motor activity integrated into math lessons (i.e., motor-enrichment) could improve children's mathematical performance. Methods: A 6-week within-school cluster-randomized intervention study investigated the effects of motor-enriched mathematical teaching in Danish preadolescent children (n = 165, age = 7.5 ± 0.02 years). Three groups were included: a control group (CON), which received non-motor-enriched conventional mathematical teaching, a fine motor math group (FMM) and a gross motor math group (GMM), which received mathematical teaching enriched with fine and gross motor activity, respectively. The children were tested before (T0), immediately after (T1) and 8 weeks after the intervention (T2). A standardized mathematical test (50 tasks) was used to evaluate mathematical performance. Furthermore, it was investigated whether motor-enriched math was accompanied by different effects in low and normal math performers. Additionally, the study investigated the potential contribution of cognitive functions and motor skills on mathematical performance. Results: All groups improved their mathematical performance from T0 to T1. However, from T0 to T1, the improvement was significantly greater in GMM compared to FMM (1.87 ± 0.71 correct answers) (p = 0.02). At T2 no significant differences in mathematical performance were observed. A subgroup analysis revealed that normal math performers benefitted from GMM compared to both CON (1.78 ± 0.73 correct answers, p = 0.04) and FMM (2.14 ± 0.72 correct answers, p = 0.008). These effects were not observed in low math performers. The effects were partly accounted for by visuo-spatial short-term memory and gross motor skills. Conclusion: The study demonstrates that motor-enriched learning activities can improve mathematical performance. In normal math performers GMM led to larger improvements than FMM and CON. This was not the case for the low math performers. Future studies should further elucidate the neurophysiological mechanisms underlying the observed behavioral effects.

  13. Motor-Enriched Learning Activities Can Improve Mathematical Performance in Preadolescent Children

    PubMed Central

    Beck, Mikkel M.; Lind, Rune R.; Geertsen, Svend S.; Ritz, Christian; Lundbye-Jensen, Jesper; Wienecke, Jacob

    2016-01-01

Objective: An emerging field of research indicates that physical activity can benefit cognitive functions and academic achievements in children. However, less is known about how academic achievements can benefit from specific types of motor activities (e.g., fine and gross) integrated into learning activities. Thus, the aim of this study was to investigate whether fine or gross motor activity integrated into math lessons (i.e., motor-enrichment) could improve children's mathematical performance. Methods: A 6-week within-school cluster-randomized intervention study investigated the effects of motor-enriched mathematical teaching in Danish preadolescent children (n = 165, age = 7.5 ± 0.02 years). Three groups were included: a control group (CON), which received non-motor-enriched conventional mathematical teaching, a fine motor math group (FMM) and a gross motor math group (GMM), which received mathematical teaching enriched with fine and gross motor activity, respectively. The children were tested before (T0), immediately after (T1) and 8 weeks after the intervention (T2). A standardized mathematical test (50 tasks) was used to evaluate mathematical performance. Furthermore, it was investigated whether motor-enriched math was accompanied by different effects in low and normal math performers. Additionally, the study investigated the potential contribution of cognitive functions and motor skills on mathematical performance. Results: All groups improved their mathematical performance from T0 to T1. However, from T0 to T1, the improvement was significantly greater in GMM compared to FMM (1.87 ± 0.71 correct answers) (p = 0.02). At T2 no significant differences in mathematical performance were observed. A subgroup analysis revealed that normal math performers benefitted from GMM compared to both CON (1.78 ± 0.73 correct answers, p = 0.04) and FMM (2.14 ± 0.72 correct answers, p = 0.008). These effects were not observed in low math performers. The effects were partly accounted for by visuo-spatial short-term memory and gross motor skills. Conclusion: The study demonstrates that motor-enriched learning activities can improve mathematical performance. In normal math performers GMM led to larger improvements than FMM and CON. This was not the case for the low math performers. Future studies should further elucidate the neurophysiological mechanisms underlying the observed behavioral effects. PMID:28066215

  14. Relationship between Motor Skill and Body Mass Index in 5- to 10-Year-Old Children

    ERIC Educational Resources Information Center

    D'Hondt, Eva; Deforche, Benedicte; De Bourdeaudhuij, Ilse; Lenoir, Matthieu

    2009-01-01

    The purpose of this study was to investigate gross and fine motor skill in overweight and obese children compared with normal-weight peers. According to international cut-off points for Body Mass Index (BMI) from Cole et al. (2000), all 117 participants (5-10 year) were classified as being normal-weight, overweight, or obese. Level of motor skill…

  15. Degeneration of the long biceps tendon: comparison of MRI with gross anatomy and histology.

    PubMed

    Buck, Florian M; Grehn, Holger; Hilbe, Monika; Pfirrmann, Christian W A; Manzanell, Silvana; Hodler, Jürg

    2009-11-01

The objective of our study was to relate alterations in biceps tendon diameter and signal on MR images to gross anatomy and histology. T1-weighted, T2-weighted fat-saturated, and proton density-weighted fat-saturated spin-echo sequences were acquired in 15 cadaveric shoulders. Biceps tendon diameter (normal, flattened, thickened, and partially or completely torn) and signal intensity (compared with bone, fat, muscle, and joint fluid) were graded by two readers independently and in a blinded fashion. The distance of tendon abnormalities from the attachment at the glenoid was noted in millimeters. MRI findings were related to gross anatomic and histologic findings. On the basis of gross anatomy, there were six normal, five flattened, two thickened, and two partially torn tendons. Reader 1 graded nine diameter changes correctly, missed two, and incorrectly graded four. The corresponding values for reader 2 were seven, one, and five, respectively, with kappa = 0.75. Histology showed mucoid degeneration (n = 13), lipoid degeneration (n = 7), and fatty infiltration (n = 6). At least one type of abnormality was found in each single tendon. Mucoid degeneration was hyperintense compared with fatty infiltration on T2-weighted fat-saturated images and hyperintense compared with magic-angle artifacts on proton density-weighted fat-saturated images. MRI-based localization of degeneration agreed well with histologic findings. Diameter changes are specific but not sensitive in diagnosing tendinopathy of the biceps tendon. Increased tendon signal is most typical for mucoid degeneration but should be used with care as a sign of tendon degeneration.

  16. Development of a wideband pulse quaternary modulation system. [for an operational 400 Mbps baseband laser communication system

    NASA Technical Reports Server (NTRS)

    Federhofer, J. A.

    1974-01-01

Laboratory data verifying the pulse quaternary modulation (PQM) theoretical predictions are presented. The first laboratory PQM laser communication system was successfully fabricated, integrated, tested and demonstrated. System bit error rate tests were performed and, in general, indicated approximately a 2 dB degradation from the theoretically predicted results. These tests indicated that no gross errors were made in the initial theoretical analysis of PQM. The relative ease with which the entire PQM laboratory system was integrated and tested indicates that PQM is a viable candidate modulation scheme for an operational 400 Mbps baseband laser communication system.

  17. Anatomic study of the canine stifle using low-field magnetic resonance imaging (MRI) and MRI arthrography.

    PubMed

    Pujol, Esteban; Van Bree, Henri; Cauzinille, Laurent; Poncet, Cyrill; Gielen, Ingrid; Bouvy, Bernard

    2011-06-01

    To investigate the use of low-field magnetic resonance imaging (MRI) and MR arthrography in normal canine stifles and to compare MRI images to gross dissection. Descriptive study. Adult canine pelvic limbs (n=17). Stifle joints from 12 dogs were examined by orthopedic and radiographic examination, synovial fluid analysis, and MRI performed using a 0.2 T system. Limbs 1 to 7 were used to develop the MR and MR arthrography imaging protocol. Limbs 8-17 were studied with the developed MR and MR arthrography protocol and by gross dissection. Three sequences were obtained: T1-weighted spin echo (SE) in sagittal, dorsal, and transverse plane; T2-weighted SE in sagittal plane and T1-gradient echo in sagittal plane. Specific bony and soft tissue structures were easily identifiable with the exception of articular cartilage. The cranial and caudal cruciate ligaments were identified. Medial and lateral menisci were seen as wedge-shaped hypointense areas. MR arthrography permitted further delineation of specific structures. MR images corresponded with gross dissection morphology. With the exception of poor delineation of articular cartilage, a low-field MRI and MR arthrography protocol provides images of adequate quality to assess the normal canine stifle joint. © Copyright 2011 by The American College of Veterinary Surgeons.

  18. Prevention of congenital defects induced by prenatal alcohol exposure (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Sheehan, Megan M.; Karunamuni, Ganga; Pedersen, Cameron J.; Gu, Shi; Doughman, Yong Qiu; Jenkins, Michael W.; Watanabe, Michiko; Rollins, Andrew M.

    2017-02-01

    Over 500,000 women per year in the United States drink during pregnancy, and 1 in 5 of this population also binge drink. Up to 40% of live-born children with prenatal alcohol exposure (PAE) present with congenital heart defects (CHDs) including life-threatening outflow and valvuloseptal anomalies. Previously we established a PAE model in the avian embryo and used optical coherence tomography (OCT) imaging to assay looping-stage (early) cardiac function/structure and septation-stage (late) cardiac defects. Early-stage ethanol-exposed embryos had smaller cardiac cushions (valve precursors) and increased retrograde flow, while late-stage embryos presented with gross head/body defects, and exhibited smaller atrio-ventricular (AV) valves, interventricular septae, and aortic vessels. However, supplementation with the methyl donor betaine reduced gross defects, prevented cardiac defects such as ventricular septal defects and abnormal AV valves, and normalized cardiac parameters. Immunofluorescent staining for 5-methylcytosine in transverse embryo sections also revealed that DNA methylation levels were reduced by ethanol but normalized by co-administration of betaine. Furthermore, supplementation with folate, another methyl donor, in the PAE model appeared to normalize retrograde flow levels which are typically elevated by ethanol exposure. Studies are underway to correlate retrograde flow numbers for folate with associated cushion volumes. Finally, preliminary findings have revealed that glutathione, a key endogenous antioxidant which also regulates methyl group donation, is particularly effective in improving alcohol-impacted survival and gross defect rates. Current investigations will determine whether glutathione has any positive effect on PAE-related CHDs. Our studies could have significant implications for public health, especially related to prenatal nutrition recommendations.

  19. Gross Motor Development in Children Aged 3-5 Years, United States 2012.

    PubMed

    Kit, Brian K; Akinbami, Lara J; Isfahani, Neda Sarafrazi; Ulrich, Dale A

    2017-07-01

Objective: Gross motor development in early childhood is important in fostering greater interaction with the environment. The purpose of this study is to describe gross motor skills among US children aged 3-5 years using the Test of Gross Motor Development (TGMD-2). Methods: We used 2012 NHANES National Youth Fitness Survey (NNYFS) data, which included TGMD-2 scores obtained according to an established protocol. Outcome measures included locomotor and object control raw and age-standardized scores. Means and standard errors were calculated for demographic and weight status with SUDAAN using sample weights to calculate nationally representative estimates, and survey design variables to account for the complex sampling methods. Results: The sample included 339 children aged 3-5 years. As expected, locomotor and object control raw scores increased with age. Overall mean standardized scores for locomotor and object control were similar to the mean value previously determined using a normative sample. Girls had a higher mean locomotor, but not mean object control, standardized score than boys (p < 0.05). However, the mean locomotor standardized scores for both boys and girls fell into the range categorized as "average." There were no other differences by age, race/Hispanic origin, weight status, or income in either of the subtest standardized scores (p > 0.05). Conclusions: In a nationally representative sample of US children aged 3-5 years, TGMD-2 mean locomotor and object control standardized scores were similar to the established mean. These results suggest that standardized gross motor development among young children generally did not differ by demographic or weight status.

  20. Bridge Analysis and Evaluation of Effects Under Overload Vehicles

    DOT National Transportation Integrated Search

    2009-12-01

    Movement of industrial freight infrequently requires special overload vehicles weighing 5 to 6 times the normal legal truck weight to move across highway systems. The gross vehicle weight of the overload vehicles frequently exceeds 400 kips while the...

  1. 24 CFR 3282.402 - General principles.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

... URBAN DEVELOPMENT MANUFACTURED HOME PROCEDURAL AND ENFORCEMENT REGULATIONS Consumer Complaint Handling.... (e) It is the policy of these regulations that all consumer complaints or other information... result of normal wear and aging, gross and unforeseeable consumer abuse, or unforeseeable neglect of...

  2. 24 CFR 3282.402 - General principles.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

... URBAN DEVELOPMENT MANUFACTURED HOME PROCEDURAL AND ENFORCEMENT REGULATIONS Consumer Complaint Handling.... (e) It is the policy of these regulations that all consumer complaints or other information... result of normal wear and aging, gross and unforeseeable consumer abuse, or unforeseeable neglect of...

  3. 24 CFR 3282.402 - General principles.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

... URBAN DEVELOPMENT MANUFACTURED HOME PROCEDURAL AND ENFORCEMENT REGULATIONS Consumer Complaint Handling.... (e) It is the policy of these regulations that all consumer complaints or other information... result of normal wear and aging, gross and unforeseeable consumer abuse, or unforeseeable neglect of...

  4. A Dynamic Game on Network Topology for Counterinsurgency Applications

    DTIC Science & Technology

    2015-03-26

scenario. This study creates a dynamic game on network topology to provide insight into the effectiveness of offensive targeting strategies determined by...focused upon the diffusion of thoughts and innovations throughout complex social networks. Coleman et al. (1966) and Ryan & Gross (1950) investigated...free networks make them extremely resilient against errors but very vulnerable to attack. Most interestingly, a determined attacker can remove well

  5. Use of the One-Minute Preceptor as a Teaching Tool in the Gross Anatomy Laboratory

    ERIC Educational Resources Information Center

    Chan, Lap Ki; Wiseman, Jeffrey

    2011-01-01

    The one-minute preceptor (OMP) is a time-efficient technique used for teaching in busy clinical settings. It consists of five microskills: (1) get a commitment from the student, (2) probe for supporting evidence, (3) reinforce what was done right, (4) correct errors and fill in omissions, and (5) teach a general rule. It can also be used to…

  6. 26 CFR 1.669(c)-2A - Computation of the beneficiary's income and tax for a prior taxable year.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... either the exact method or the short-cut method shall be determined by reference to the information... shows a mathematical error on its face which resulted in the wrong amount of tax being paid for such... amounts in such gross income, shall be based upon the return after the correction of such mathematical...

  7. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Silva, T; Ketcha, M; Siewerdsen, J H

Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE (± interquartile range) was 33.0 ± 43.6 mm; similarly, for GC, 23.0 ± 92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6 ± 9.4 mm and 8.1 ± 18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such registration capability could offer valuable assistance in target localization without disruption of clinical workflow. G. Kleinszig and S. Vogt are employees of Siemens Healthcare.
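The gradient correlation (GC) metric this record compares against is commonly computed as the average normalized cross-correlation of the two images' gradients. A minimal sketch for 2-D grayscale arrays, not the authors' exact implementation:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two same-shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def gradient_correlation(fixed, moving):
    """GC: average NCC of the row- and column-gradients of the two images."""
    gx_f, gy_f = np.gradient(fixed)
    gx_m, gy_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

rng = np.random.default_rng(0)
img = rng.random((8, 8))
print(gradient_correlation(img, img))   # identical images give GC ≈ 1.0
print(gradient_correlation(img, -img))  # inverted contrast gives GC ≈ -1.0
```

Because it correlates gradients rather than raw intensities, GC is insensitive to global brightness offsets; the robust GO/TGC variants in the record go further by down-weighting gradient mismatches caused by instrumentation.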

  8. Evaluating concentration estimation errors in ELISA microarray experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.

Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
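Propagation of error, central to the method in this record, can be illustrated by inverting a standard curve. Real ELISA standard curves are typically nonlinear (e.g., four-parameter logistic); a linear curve y = m·c + b is used here purely for clarity, with the first-order (delta-method) variance built from the partial derivatives:

```python
import numpy as np

def predicted_concentration(y, m, b, var_y, var_m, var_b):
    """Invert a linear standard curve y = m*c + b and propagate
    first-order (delta-method) uncertainty into the concentration."""
    c = (y - b) / m
    # Partial derivatives of c = (y - b)/m with respect to y, b, m.
    dc_dy = 1.0 / m
    dc_db = -1.0 / m
    dc_dm = -(y - b) / m**2
    var_c = dc_dy**2 * var_y + dc_db**2 * var_b + dc_dm**2 * var_m
    return c, np.sqrt(var_c)

# Signal 5 on a curve with slope 2, intercept 1 -> concentration 2;
# only the signal is uncertain here (variance 0.04, i.e., sd 0.2).
c, sd = predicted_concentration(5.0, 2.0, 1.0, 0.04, 0.0, 0.0)
print(c, sd)  # 2.0 0.1
```

As the abstract stresses, the variances fed into this formula are only comparable after the design, screening, and normalization steps have produced well-behaved replicate data.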

  9. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
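    The test-mask mechanism can be sketched as follows. The flag names and the `read_sensor` application stub are hypothetical, used only to illustrate how a single mask variable both preserves normal operation and injects faults for the monitor to observe.

```python
# Hypothetical error flags; one bit per injectable fault (names are
# illustrative, not from the patent).
ERR_NONE         = 0x0
ERR_TIMEOUT      = 0x1
ERR_BAD_CHECKSUM = 0x2

test_mask = ERR_NONE  # normal operation: no injected errors

def read_sensor():
    """Application code consults the mask and raises the injected fault."""
    if test_mask & ERR_TIMEOUT:
        raise TimeoutError("injected timeout")
    if test_mask & ERR_BAD_CHECKSUM:
        return {"value": 42, "checksum_ok": False}
    return {"value": 42, "checksum_ok": True}

# During testing, the harness flips a bit to inject a fault, then
# monitors whether the application responds to the error correctly.
test_mask = ERR_BAD_CHECKSUM
assert read_sensor()["checksum_ok"] is False
```

    Because the mask is ordinary application state, no separate instrumented build is needed: setting the mask to zero restores normal behavior.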

  10. Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas

    NASA Astrophysics Data System (ADS)

    Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.

    2017-12-01

    Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS could be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using normalized Hamming distance, is also calculated using normalized Manhattan distance and normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The measurement accuracy resulting from each of the three distance formulas is calculated using the mean absolute percentage error. In the training phase with several parameters, such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that normalized Euclidean distance is more accurate than both normalized Hamming distance and normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with normalized Manhattan distance compared to normalized Euclidean distance and normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
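    A minimal sketch of the three distances and the resulting node activation follows. The abstract does not spell out the exact normalizations used, so the formulas below are common textbook variants, chosen only to make the comparison concrete; the "activation = 1 − distance" form is a typical ECoS convention.

```python
import math

def normalized_hamming(a, b):
    # Fraction of differing components; a binary-style distance in [0, 1].
    return sum(x != y for x, y in zip(a, b)) / len(a)

def normalized_manhattan(a, b):
    # L1 distance divided by the summed magnitudes, mapping into [0, 1].
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(abs(x) + abs(y) for x, y in zip(a, b))
    return num / den if den else 0.0

def normalized_euclidean(a, b):
    # Root-mean-square difference; in [0, 1] for inputs scaled to [0, 1].
    n = len(a)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / n)

def activation(example, weights, dist=normalized_euclidean):
    # ECoS-style evolving-layer activation: 1 minus the chosen distance.
    return 1.0 - dist(example, weights)
```

    Swapping `dist` is the only change needed to rerun training under a different distance, which mirrors the comparison performed in the study.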

  11. Evaluation of the user seal check on gross leakage detection of 3 different designs of N95 filtering facepiece respirators.

    PubMed

    Lam, Simon C; Lui, Andrew K F; Lee, Linda Y K; Lee, Joseph K L; Wong, K F; Lee, Cathy N Y

    2016-05-01

    The use of N95 respirators prevents the spread of respiratory infectious agents, but leakage hampers their protection. Manufacturers recommend a user seal check to identify on-site gross leakage. However, no empirical evidence is provided. Therefore, this study aims to examine the validity of the user seal check for gross leakage detection in commonly used types of N95 respirators. A convenience sample of 638 nursing students was recruited. While wearing 3 different designs of N95 respirators, namely 3M-1860s, 3M-1862, and Kimberly-Clark 46827, participants carried out the standardized user seal check procedure to identify gross leakage. Leakage was then re-tested with a quantitative fit testing (QNFT) device during normal breathing and deep breathing exercises. Sensitivity, specificity, predictive values, and likelihood ratios were calculated accordingly. As indicated by QNFT, the prevalence of actual gross leakage was 31.0%-39.2% with the 3M respirators and 65.4%-65.8% with the Kimberly-Clark respirator. Sensitivity and specificity of the user seal check for identifying actual gross leakage were approximately 27.7% and 75.5% for 3M-1860s, 22.1% and 80.5% for 3M-1862, and 26.9% and 80.2% for Kimberly-Clark 46827, respectively. Likelihood ratios were close to 1 (range, 0.89-1.51) for all types of respirators. The results did not support user seal checks in detecting any actual gross leakage in the donning of N95 respirators. However, such a check might alert health care workers that donning a tight-fitting respirator should be performed carefully. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
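    The reported sensitivity, specificity, predictive values, and likelihood ratios all derive from a 2×2 table of user-seal-check results against the QNFT reference standard. A small sketch of those calculations (the counts below are illustrative only, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-test metrics from raw counts:
    tp/fp/fn/tn = true positive, false positive, false negative, true negative."""
    sens = tp / (tp + fn)        # sensitivity (true positive rate)
    spec = tn / (tn + fp)        # specificity (true negative rate)
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Illustrative counts; a likelihood ratio near 1 (as the study found)
# means the seal check barely shifts the odds of actual leakage.
sens, spec, ppv, npv, lr_pos, lr_neg = diagnostic_metrics(tp=55, fp=98, fn=143, tn=302)
```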

  12. Evaluation of physical growth in cerebral palsied children and its possible relationship with gross motor development.

    PubMed

    Ibrahim, Alaa I; Hawamdeh, Ziad M

    2007-03-01

    The object of this study was to detect any possible relation between the current gross motor function score for cerebral palsy children and their physical growth parameters. We measured 71 children with spastic cerebral palsy (35 diplegic, 25 quadriplegic and 11 hemiplegic) and a control group of 80 normal children. Measures taken for cerebral palsy and normal children included stature, weight, head circumference and mid upper-arm circumference, and, additionally for the cerebral palsied children, duration of the disease, birth weight, presence or absence of orofacial dysfunction, distribution of paralysis and degree of spasticity. Motor abilities were measured using the Gross Motor Function Measure. Results showed a significant decrease in the stature, current weight, head circumference and mid upper-arm circumference of both sexes of the quadriplegic children, and significant decreases in the current weight of the diplegic girls and the head circumference of the hemiplegic girls. There were also significant decreases in all scores of the quadriplegic children compared to the diplegic and hemiplegic children. Diplegic children had significantly decreased standing, walking and running, and total scores, compared to the hemiplegic children. Total score at age of testing was independently predicted by the duration of the disease, distribution of paralysis, presence or absence of orofacial dysfunction, spasticity index and the current body weight. Our findings indicate that in spastic cerebral palsy the physical growth parameters were markedly decreased in the quadriplegic form compared to other forms. Only current body weight, from the growth parameters, in addition to other relevant clinical data, can be considered predictors of the current gross motor abilities of those children.

  13. Obesity Leads to Declines in Motor Skills across Childhood

    PubMed Central

    Cheng, Jessica; East, Patricia; Blanco, Estela; Sim, Eastern Kang; Castillo, Marcela; Lozoff, Betsy; Gahagan, Sheila

    2016-01-01

    Background Poor motor skills have been consistently linked with a higher body weight in childhood, but the causal direction of this association is not fully understood. This study investigated the temporal ordering between children’s motor skills and weight status at 5 and 10 years. Methods Participants were 668 children (54% male) who were studied from infancy as part of an iron-deficiency anemia preventive trial and follow-up study in Santiago, Chile. All were healthy, full term, and weighing 3 kg or more at birth. Cross-lagged panel modeling was conducted to understand the temporal precedence between children’s weight status and motor proficiency. Analyses also examined differences in gross and fine motor skills among healthy weight, overweight, and obese children. Results A higher BMI at 5 years contributed to declines in motor proficiency from 5 to 10 years. There was no support for the reverse; that is, poor motor skills at 5 years did not predict increases in relative weight from 5 to 10 years. Obesity at 5 years also predicted declines in motor proficiency. When compared to normal weight children, obese children had significantly poorer total and gross motor skills at both 5 and 10 years. Overweight children had poorer total and gross motor skills at 10 years only. The differences in total and gross motor skills among normal-weight, overweight, and obese children appear to increase with age. There were small differences in fine motor skill between obese and non-obese children at 5 years only. Conclusions Obesity preceded declines in motor skills and not the reverse. Study findings suggest that early childhood obesity intervention efforts might help prevent declines in motor proficiency which, in turn, may positively impact children’s physical activity and overall fitness levels. PMID:27059409

  14. Obesity leads to declines in motor skills across childhood.

    PubMed

    Cheng, J; East, P; Blanco, E; Sim, E Kang; Castillo, M; Lozoff, B; Gahagan, S

    2016-05-01

    Poor motor skills have been consistently linked with a higher body weight in childhood, but the causal direction of this association is not fully understood. This study investigated the temporal ordering between children's motor skills and weight status at 5 and 10 years. Participants were 668 children (54% male) who were studied from infancy as part of an iron deficiency anaemia preventive trial and follow-up study in Santiago, Chile. All were healthy, full-term and weighing 3 kg or more at birth. Cross-lagged panel modelling was conducted to understand the temporal precedence between children's weight status and motor proficiency. Analyses also examined differences in gross and fine motor skills among healthy weight, overweight, and obese children. A higher BMI at 5 years contributed to declines in motor proficiency from 5 to 10 years. There was no support for the reverse, that is, poor motor skills at 5 years did not predict increases in relative weight from 5 to 10 years. Obesity at 5 years also predicted declines in motor proficiency. When compared with normal weight children, obese children had significantly poorer total and gross motor skills at both 5 and 10 years. Overweight children had poorer total and gross motor skills at 10 years only. The differences in total and gross motor skills among normal weight, overweight and obese children appear to increase with age. There were small differences in fine motor skill between obese and non-obese children at 5 years only. Obesity preceded declines in motor skills and not the reverse. Study findings suggest that early childhood obesity intervention efforts might help prevent declines in motor proficiency that, in turn, may positively impact children's physical activity and overall fitness levels. © 2016 John Wiley & Sons Ltd.

  15. Comparison of motor development of low birth weight (LBW) infants with and without using mechanical ventilation and normal birth weight infants

    PubMed Central

    Nazi, Sepideh; Aliabadi, Faranak

    2015-01-01

    Background: To determine whether using mechanical ventilation in neonatal intensive care unit (NICU) influences motor development of low birth weight (LBW) infants and to compare their motor development with normal birth weight (NBW) infants at the age of 8 to 12 months using Peabody Developmental Motor Scale 2 (PDMS-2). Methods: This cross sectional study was conducted on 70 LBW infants in two groups, mechanical ventilation (MV) group, n=35 and without mechanical ventilation (WMV) group, n=35 and 40 healthy NBW infants matched with LBW group for age. Motor quotients were determined using PDMS-2 and compared in all groups using ANOVA statistical method and SPSS version 17. Results: Comparison of the mean developmental motor quotient (DMQ) of both MV and WMV groups showed significant differences with NBW group (p< 0.05). Also, significant difference was found between the gross DMQ of MV group and WMV group (p< 0.05). Moreover, in MV group, both gross and fine motor quotients were considered as below average (16.12%). In WMV group, the gross motor quotient was considered as average (49.51%) and the fine motor quotient was considered as below average (16.12%). Conclusion: It seems that LBW infants have poor fine motor outcomes. The gross motor outcomes, on the other hand, will be significantly more influenced by using mechanical ventilation. In addition, more differences seem to be related to lower birth weight. Very Low Birth Weight (VLBW) infants are more prone to developmental difficulties than LBW infants with the history of using mechanical ventilation especially in fine motor development. PMID:26913264

  16. Speech Quality Measurement

    DTIC Science & Technology

    1977-06-10

    characterize. To detect distortion related to phonemic perception, spectral distance measures seem most important. Since the pitch contour plays such an...only gross gain errors should be detected. In the case of waveform coders, the distortions are not so easily related to perception. Pitch...spectral distance measures and related measures were studied in this project. Let V(θ), -π ≤ θ ≤ π, be the short-time power spectral envelope for a

  17. Air Force Construction Contract Disputes: An Analysis of Armed Services Board of Contract Appeals Cases to Identify Dispute Types and Causes

    DTIC Science & Technology

    1982-09-01

    Directing Work 7. Differing Site 19. Inspector Improperly Conditions Stopping Work 8. Changes in Specs 20. Fraud, Latent Defects, or Gross Errors 9. Challenges...1979. 346 11. Doyle, Peter G. "What the Contractor Expects of the Architect," Building Design and Construction, May 1978, pp. 94-98. 12. "Exam Required

  18. General subspace learning with corrupted training data via graph embedding.

    PubMed

    Bao, Bing-Kun; Liu, Guangcan; Hong, Richang; Yan, Shuicheng; Xu, Changsheng

    2013-11-01

    We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.

  19. Reliability and responsiveness of the gross motor function measure-88 in children with cerebral palsy.

    PubMed

    Ko, Jooyeon; Kim, MinYoung

    2013-03-01

    The Gross Motor Function Measure (GMFM-88) is commonly used in the evaluation of gross motor function in children with cerebral palsy (CP). The relative reliability of GMFM-88 has been assessed in children with CP. However, little information is available regarding the absolute reliability or responsiveness of GMFM-88. The purpose of this study was to determine the absolute and relative reliability and the responsiveness of the GMFM-88 in evaluating gross motor function in children with CP. A clinical measurement design was used. Ten raters scored the GMFM-88 in 84 children (mean age=3.7 years, SD=1.9, range=10 months to 9 years 9 months) from video records across all Gross Motor Function Classification System (GMFCS) levels to establish interrater reliability. Two raters participated to assess intrarater reliability. Responsiveness was determined from 3 additional assessments after the baseline assessment. The interrater and intrarater intraclass correlation coefficients (ICCs) with 95% confidence intervals, standard error of measurement (SEM), smallest real difference (SRD), effect size (ES), and standardized response mean (SRM) were calculated. The relative reliability of the GMFM was excellent (ICCs=.952-1.000). The SEM and SRD for total score of the GMFM were acceptable (1.60 and 3.14, respectively). Additionally, the ES and SRM of the dimension goal scores increased gradually in the 3 follow-up assessments (GMFCS levels I and II: ES=0.5, 0.6, and 0.8 and SRM=1.3, 1.8, and 2.0; GMFCS levels III-V: ES=0.4, 0.7, and 0.9 and SRM=1.5, 1.7, and 2.0). Children over 10 years of age with CP were not included in this study, so the results should not be generalized to all children with CP. Both the reliability and the responsiveness of the GMFM-88 are reasonable for measuring gross motor function in children with CP.

  20. The Effects of Non-Normality on Type III Error for Comparing Independent Means

    ERIC Educational Resources Information Center

    Mendes, Mehmet

    2007-01-01

    The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F test and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. Therefore, these tests were compared in terms of Type III error rates across a variety of population distributions,…

  1. Semantic error patterns on the Boston Naming Test in normal aging, amnestic mild cognitive impairment, and mild Alzheimer's disease: is there semantic disruption?

    PubMed

    Balthazar, Marcio Luiz Figueredo; Cendes, Fernando; Damasceno, Benito Pereira

    2008-11-01

    Naming difficulty is common in Alzheimer's disease (AD), but the nature of this problem is not well established. The authors investigated the presence of semantic breakdown and the pattern of general and semantic errors in patients with mild AD, patients with amnestic mild cognitive impairment (aMCI), and normal controls by examining their spontaneous answers on the Boston Naming Test (BNT) and verifying whether they needed or were benefited by semantic and phonemic cues. The errors in spontaneous answers were classified in four mutually exclusive categories (semantic errors, visual paragnosia, phonological errors, and omission errors), and the semantic errors were further subclassified as coordinate, superordinate, and circumlocutory. Patients with aMCI performed normally on the BNT and needed fewer semantic and phonemic cues than patients with mild AD. After semantic cues, subjects with aMCI and control subjects gave more correct answers than patients with mild AD, but after phonemic cues, there was no difference between the three groups, suggesting that the low performance of patients with AD cannot be completely explained by semantic breakdown. Patterns of spontaneous naming errors and subtypes of semantic errors were similar in the three groups, with decreasing error frequency from coordinate to superordinate to circumlocutory subtypes.

  2. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
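    The Taylor-series idea can be illustrated for the Manning-based slope-area formula Q = (k/n)·A·R^(2/3)·S^(1/2). This sketch drops the covariance terms (i.e., it assumes independent errors), which Kirby's full analysis retains; the input error magnitudes are illustrative.

```python
import math

def slope_area_discharge(n, A, R, S, k=1.486):
    """Manning slope-area discharge, US customary units:
    n = roughness, A = area (ft^2), R = hydraulic radius (ft), S = slope."""
    return (k / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)

def relative_discharge_error(rel_n, rel_A, rel_R, rel_S):
    """First-order (Taylor-series) relative standard error of Q.

    For Q = (k/n) A R^(2/3) S^(1/2), the log-derivative exponents are
    -1, 1, 2/3, and 1/2, so with independent errors:
    (sQ/Q)^2 = (sn/n)^2 + (sA/A)^2 + (2/3 sR/R)^2 + (1/2 sS/S)^2.
    """
    return math.sqrt(rel_n ** 2 + rel_A ** 2 +
                     (2.0 / 3.0 * rel_R) ** 2 + (0.5 * rel_S) ** 2)

# A 15% uncertainty in Manning's n dominates modest geometry errors:
err = relative_discharge_error(rel_n=0.15, rel_A=0.05, rel_R=0.05, rel_S=0.10)
```

    Note how the 1/2 exponent on slope halves the impact of fall-measurement error, while Manning's n enters at full weight, consistent with the abstract's emphasis on n being only estimable.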

  3. Standard Error of Linear Observed-Score Equating for the NEAT Design with Nonnormally Distributed Data

    ERIC Educational Resources Information Center

    Zu, Jiyun; Yuan, Ke-Hai

    2012-01-01

    In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…

  4. Simplified Approach Charts Improve Data Retrieval Performance

    PubMed Central

    Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.

    2016-01-01

    The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009

  5. The Effect of Patellar Thickness on Intraoperative Knee Flexion and Patellar Tracking in Patients With Arthrofibrosis Undergoing Total Knee Arthroplasty.

    PubMed

    Kim, Abraham D; Shah, Vivek M; Scott, Richard D

    2016-05-01

    We evaluated the intraoperative effect of patellar thickness on intraoperative passive knee flexion and patellar tracking during total knee arthroplasty (TKA) in patients with preoperative arthrofibrosis and compared them to patients with normal preoperative range of motion (ROM) documented in a prior study. Routine posterior cruciate ligament-retaining TKA was performed in a total of 34 knees, 23 with normal ROM and 11 with arthrofibrosis, defined as ≤100° of passive knee flexion against gravity under anesthesia. Once clinical balance and congruent patellar tracking were established, custom trial patellar components thicker than the standard trial by 2-mm increments (2-8 mm) were sequentially placed and trialed. Passive flexion against gravity was recorded using digital photograph goniometry. Gross mechanics of patellofemoral tracking were visually assessed. On average, passive knee flexion decreased 2° for every 2-mm increment of patellar thickness (P < .0001), which was similar to patients with normal preoperative ROM. In addition, increased patellar thickness had no gross effect on patellar subluxation and tilt in patients with arthrofibrosis as well as those with normal ROM. Patellar thickness had a modest effect on intraoperative passive flexion and no effect on patellar tracking in patients with arthrofibrosis undergoing TKA. There was no marked difference in intraoperative flexion and patellar tracking between patients with arthrofibrosis and patients with normal preoperative ROM. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. An investigation of reports of Controlled Flight Toward Terrain (CFTT)

    NASA Technical Reports Server (NTRS)

    Porter, R. F.; Loomis, J. P.

    1981-01-01

    Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to relate to the hazard of flight into terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a much more diverse nature and included a few instances of gross deviations from their assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances which otherwise would most probably have ended in disaster.

  7. A Review of Depth and Normal Fusion Algorithms

    PubMed Central

    Štolc, Svorad; Pock, Thomas

    2018-01-01

    Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903

  8. Childhood Psychosis and Computed Tomographic Brain Scan Findings.

    ERIC Educational Resources Information Center

    Gillberg, Christopher; Svendsen, Pal

    1983-01-01

    Computerized tomography (CT) of the brain was used to examine 27 infantile autistic children, 9 children with other kinds of childhood psychoses, 23 children with mental retardation, and 16 normal children. Gross abnormalities were seen in 26 percent of the autism cases. (Author/SEW)

  9. Estimating carbon fluxes on small rotationally grazed pastures

    USDA-ARS?s Scientific Manuscript database

    Satellite-based Normalized Difference Vegetation Index (NDVI) data have been extensively used for estimating gross primary productivity (GPP) and yield of grazing lands throughout the world. Large-scale estimates of GPP are a necessary component of efforts to monitor the soil carbon balance of grazi...

  10. Using NDVI to estimate carbon fluxes from small rotationally grazed pastures

    USDA-ARS?s Scientific Manuscript database

    Satellite-based Normalized Difference Vegetation Index (NDVI) data have been extensively used for estimating gross primary productivity (GPP) and yield of grazing lands throughout the world. However, the usefulness of satellite-based images for monitoring rotationally-grazed pastures in the northea...

  11. Study of Nephrotoxic Potential of Acetaminophen in Birds

    PubMed Central

    Jayakumar, K.; Mohan, K.; Swamy, H. D. Narayana; Shridhar, N. B.; Bayer, M. D.

    2010-01-01

    The present study was designed to evaluate the effect of acetaminophen on kidneys of birds by comparison with diclofenac that is used as positive control. The birds of Group I served as negative control and received normal saline, whereas Group II birds received diclofenac injection (2.5 mg/kg IM) and Group III birds received acetaminophen injection (10 mg/kg IM) for a period of seven days daily. The birds treated with diclofenac showed severe clinical signs of toxicity accompanied with high mortality and significant increase (P<0.001) in serum creatinine and uric acid concentration. The creatinine and uric acid concentrations were consistent with gross and histopathological findings. The negative control and acetaminophen-treated groups showed no adverse clinical signs, serum creatinine and uric acid concentrations were normal, and no gross or histopathological changes in kidneys were observed. Thus, it was concluded that acetaminophen can be used for treatment in birds without any adverse effect on kidneys. PMID:21170252

  12. Simultaneous uterine and urinary bladder rupture in an otherwise successful vaginal birth after cesarean delivery.

    PubMed

    Ho, Szu-Ying; Chang, Shuenn-Dhy; Liang, Ching-Chung

    2010-12-01

    Uterine rupture is the primary concern when a patient chooses a trial of labor after a cesarean section. Bladder rupture accompanied by uterine rupture should be taken into consideration if gross hematuria occurs. We report the case of a patient with uterine rupture during a trial of labor after cesarean delivery. She had a normal course of labor and no classic signs of uterine rupture. However, gross hematuria was noted after repair of the episiotomy. The patient began to complain of progressive abdominal pain, gross hematuria and oliguria. Cystoscopy revealed a direct communication between the bladder and the uterus. When opening the bladder peritoneum, rupture sites over the anterior uterus and posterior wall of the bladder were noted. Following primary repair of both wounds, a Foley catheter was left in place for 12 days. The patient had achieved a full recovery by the 2-year follow-up examination. Bladder injury and uterine rupture can occur at any time during labor. Gross hematuria immediately after delivery is the most common presentation. Cystoscopy is a good tool to identify the severity of bladder injury. Copyright © 2010 Elsevier. Published by Elsevier B.V. All rights reserved.

  13. Contributions of gross spectral properties and duration of spectral change to perception of stop consonants

    NASA Astrophysics Data System (ADS)

    Alexander, Joshua; Keith, Kluender

    2005-09-01

    All speech contrasts are multiply specified. For example, in addition to onsets and trajectories of formant transitions, gross spectral properties such as tilt, and duration of spectral change (both local and global) contribute to perception of contrasts between stops such as /b,d,g/. It is likely that listeners resort to different acoustic characteristics under different listening conditions. Hearing-impaired listeners, for whom spectral details are compromised, may be more likely to use short-term gross spectral characteristics as well as durational information. Here, contributions of broad spectral onset properties as well as duration of spectral change are investigated in perception experiments with normal-hearing listeners. Two series of CVs, each varying perceptually from /b/ to /d/, were synthesized. Onset frequency of F2, duration of formant transitions, and gross spectral tilts were manipulated parametrically. Perception of /b/ was encouraged by shorter formant transition durations and by more negative spectral tilt at onset independent of the rate of change in spectral tilt. Effects of spectral tilt at onset were contextual and depended on the tilt of the following vowel. Parallel studies with listeners with hearing impairment are ongoing. [Work supported by NIDCD.]

  14. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    PubMed Central

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-01-01

    Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques, which remove either the spatial-dependent dye bias (referred to below as the spatial effect) or the intensity-dependent dye bias (the intensity effect), moderately reduce LOOCV classification errors, whereas double-bias-removal techniques, which remove both the spatial and intensity effects, reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error.
Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803
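
    The evaluation loop the study describes (hold one sample out, fit a k-NN on the rest, test on the held-out sample, repeat) can be sketched as below. The data shapes and the choice k = 3 are illustrative, and the original work used R rather than Python.

```python
import numpy as np

def knn_loocv_error(X, y, k=3):
    """Leave-one-out cross-validation error rate of a k-NN classifier.

    X : (n_samples, n_features) array of normalized expression values
    y : (n_samples,) array of integer class labels
    """
    # Pairwise Euclidean distances between all samples.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # never count the held-out sample itself
    errors = 0
    for i in range(len(y)):
        nearest = np.argsort(d[i])[:k]     # k nearest neighbors of sample i
        votes = np.bincount(y[nearest])    # majority vote among the neighbors
        errors += int(votes.argmax() != y[i])
    return errors / len(y)
```

    Running this on the same data set after each candidate normalization, as the study does, makes the error rates directly comparable: the normalization yielding the lowest LOOCV error is preferred.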

  15. Evaluation of normalization methods for cDNA microarray data by k-NN classification.

    PubMed

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-07-26

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques, which remove either the spatial-dependent dye bias (referred to below as the spatial effect) or the intensity-dependent dye bias (the intensity effect), moderately reduce LOOCV classification errors, whereas double-bias-removal techniques, which remove both the spatial and intensity effects, reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error.
Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics.

  16. Estimating maize production in Kenya using NDVI: Some statistical considerations

    USGS Publications Warehouse

    Lewis, J.E.; Rowland, James; Nadeau, A.

    1998-01-01

    A regression model approach using a normalized difference vegetation index (NDVI) has the potential for estimating crop production in East Africa. However, before production estimation can become a reality, the underlying model assumptions and statistical nature of the sample data (NDVI and crop production) must be examined rigorously. Annual maize production statistics from 1982-90 for 36 agricultural districts within Kenya were used as the dependent variable; median area NDVI (independent variable) values from each agricultural district and year were extracted from the annual maximum NDVI data set. The input data and the statistical association of NDVI with maize production for Kenya were tested systematically for the following items: (1) homogeneity of the data when pooling the sample, (2) gross data errors and influence points, (3) serial (time) correlation, (4) spatial autocorrelation and (5) stability of the regression coefficients. The results of using a simple regression model with NDVI as the only independent variable are encouraging (r = 0.75, p < 0.05) and illustrate that NDVI can be a responsive indicator of maize production, especially in areas of high NDVI spatial variability, which coincide with areas of production variability in Kenya.
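
    The single-variable model tested here can be written as production = b0 + b1·NDVI. A minimal sketch of the fit follows, with a crude standardized-residual screen standing in for the paper's gross-error and influence-point checks; all numeric values are invented for illustration and are not Kenyan district data.

```python
import numpy as np

# Invented district-level values, for illustration only.
ndvi = np.array([0.31, 0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.35])
production = np.array([0.8, 1.4, 2.1, 1.1, 2.6, 1.7, 1.9, 1.0])  # relative units

# Ordinary least squares for production = b0 + b1 * NDVI.
A = np.column_stack([np.ones_like(ndvi), ndvi])
(b0, b1), *_ = np.linalg.lstsq(A, production, rcond=None)

# Screen for gross data errors: flag observations whose standardized
# residual exceeds 2, a simple stand-in for formal influence diagnostics.
resid = production - (b0 + b1 * ndvi)
suspect = np.abs(resid / resid.std(ddof=2)) > 2
```

    Flagged observations would be inspected (and corrected or excluded) before trusting the regression, which is the spirit of the paper's item (2) above.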

  17. Improving the quality of marine geophysical track line data: Along-track analysis

    NASA Astrophysics Data System (ADS)

    Chandler, Michael T.; Wessel, Paul

    2008-02-01

    We have examined 4918 track line geophysics cruises archived at the U.S. National Geophysical Data Center (NGDC) using comprehensive error checking methods. Each cruise was checked for observation outliers, excessive gradients, metadata consistency, and general agreement with satellite altimetry-derived gravity and predicted bathymetry grids. Thresholds for error checking were determined empirically through inspection of histograms for all geophysical values, gradients, and differences with gridded data sampled along ship tracks. Robust regression was used to detect systematic scale and offset errors found by comparing ship bathymetry and free-air anomalies to the corresponding values from global grids. We found many recurring error types in the NGDC archive, including poor navigation, inappropriately scaled or offset data, excessive gradients, and extended offsets in depth and gravity when compared to global grids. While ∼5-10% of bathymetry and free-air gravity records fail our conservative tests, residual magnetic errors may exceed twice this proportion. These errors hinder the effective use of the data and may lead to mistakes in interpretation. To enable the removal of gross errors without over-writing original cruise data, we developed an errata system that concisely reports all errors encountered in a cruise. With such errata files, scientists may share cruise corrections, thereby preventing redundant processing. We have implemented these quality control methods in the modified MGD77 supplement to the Generic Mapping Tools software suite.
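
    One of the simpler checks described, flagging excessive along-track gradients, can be sketched as follows. The depth profile and the threshold value are assumptions for illustration, not the empirically derived NGDC thresholds.

```python
import numpy as np

# Illustrative along-track depth records (metres) at 1 km spacing; the
# spike at index 3 mimics a gross error such as a dropped digit in the log.
depth = np.array([4120.0, 4118.0, 4121.0, 2500.0, 4119.0, 4117.0])
spacing_km = 1.0

# Along-track gradient between consecutive records, in metres per km.
gradient = np.abs(np.diff(depth)) / spacing_km

# Flag record pairs whose gradient exceeds the (assumed) threshold; real
# thresholds were set empirically from histograms of all cruises.
MAX_GRADIENT = 500.0  # m/km, illustrative
bad_pairs = np.flatnonzero(gradient > MAX_GRADIENT)
```

    In the spirit of the errata system, the flagged indices would be recorded in a per-cruise errata file rather than edited into the archived data, so corrections can be shared without overwriting originals.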

  18. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    PubMed

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. The ER program reduced errors by incrementally raising task difficulty, while the ES program incrementally lowered task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. The training program that reduced performance errors resulted in greater FMS learning than the program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  19. Size-dependent physiological responses of the branching coral Pocillopora verrucosa to elevated temperature and PCO2.

    PubMed

    Edmunds, Peter J; Burgess, Scott C

    2016-12-15

    Body size has large effects on organism physiology, but these effects remain poorly understood in modular animals with complex morphologies. Using two trials of a ∼24-day experiment conducted in 2014 and 2015, we tested the hypothesis that colony size of the coral Pocillopora verrucosa affects the response of calcification, aerobic respiration and gross photosynthesis to temperature (∼26.5 and ∼29.7°C) and PCO2 (∼40 and ∼1000 µatm). Large corals calcified more than small corals, but at a slower size-specific rate; area-normalized calcification declined with size. Whole-colony and area-normalized calcification were unaffected by temperature, PCO2, or the interaction between the two. Whole-colony respiration increased with colony size, but the slopes of these relationships differed between treatments. Area-normalized gross photosynthesis declined with colony size, but whole-colony photosynthesis was unaffected by PCO2, and showed a weak response to temperature. When scaled up to predict the response of large corals, area-normalized metrics of physiological performance measured using small corals provide inaccurate estimates of the physiological performance of large colonies. Together, these results demonstrate the importance of colony size in modulating the response of branching corals to elevated temperature and high PCO2. © 2016. Published by The Company of Biologists Ltd.

  20. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    PubMed Central

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808

  1. Scoring Methods in the International Land Benchmarking (ILAMB) Package

    NASA Astrophysics Data System (ADS)

    Collier, N.; Hoffman, F. M.; Keppel-Aleks, G.; Lawrence, D. M.; Mu, M.; Riley, W. J.; Randerson, J. T.

    2017-12-01

    The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to improve the performance of the land component of Earth system models. This effort is disseminated in the form of a Python package that is openly developed (https://bitbucket.org/ncollier/ilamb). ILAMB is more than a workflow system that automates the generation of common scalars and plot comparisons to observational data. We aim to provide scientists and model developers with a tool to gain insight into model behavior. Thus, a salient feature of the ILAMB package is our synthesis methodology, which provides users with a high-level understanding of model performance. Within ILAMB, we calculate a non-dimensional score of a model's performance in a given dimension of the physics, chemistry, or biology with respect to an observational dataset. For example, we compare the Fluxnet-MTE Gross Primary Productivity (GPP) product against model output in the corresponding historical period. We compute common statistics such as the bias, root mean squared error, phase shift, and spatial distribution. We take these measures and find relative errors by normalizing the values, and then use the exponential to map this relative error to the unit interval. This allows the scores to be combined into an overall score representing multiple aspects of model performance. In this presentation we give details of this process as well as a proposal for tuning the exponential mapping to make scores more cross-comparable. However, as many models are calibrated using these scalar measures with respect to observational datasets, we also score the relationships among relevant variables in the model. For example, in the case of GPP, we also consider its relationship to precipitation, evapotranspiration, and temperature. We do this by creating a mean response curve and a two-dimensional distribution based on the observational data and model results.
The response curves are then scored using a relative measure of the root mean squared error and the exponential as before. The distributions are scored using the so-called Hellinger distance, a statistical measure for how well one distribution is represented by another, and included in the model's overall score.
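
    The two scoring steps described here, mapping a normalized error onto the unit interval with an exponential and scoring distributions with the Hellinger distance, can be sketched as below. The example numbers and the geometric-mean combination rule are illustrative choices, not the exact ILAMB formulas.

```python
import numpy as np

def score(error, norm):
    """Map a non-negative error, normalized by `norm`, onto (0, 1]."""
    return float(np.exp(-abs(error) / norm))

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Illustrative use: bias and RMSE of a modelled GPP field, each normalized
# by an observational reference of 120 (arbitrary units), then combined.
s_bias = score(12.0, 120.0)           # exp(-0.10), about 0.905
s_rmse = score(30.0, 120.0)           # exp(-0.25), about 0.779
overall = (s_bias * s_rmse) ** 0.5    # geometric mean of component scores
```

    A zero error maps to a score of exactly 1 and larger relative errors decay smoothly toward 0, which is what makes scores from different statistics combinable; identical model and observed distributions give a Hellinger distance of 0.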

  2. Reversal and Rotation Errors by Normal and Retarded Readers

    ERIC Educational Resources Information Center

    Black, F. William

    1973-01-01

    Reports an investigation of the incidence of and relationships among word and letter reversals in writing and Bender-Gestalt rotation errors in matched samples of normal and retarded readers. No significant differences were found between the two groups. (TO)

  3. 12 CFR 27.3 - Recordkeeping requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... employed by the current employer of the applicant(s). For self-employed persons, the number of continuous years self-employed. (xiii) Gross total monthly income of each applicant, comprising the sum of normal... or disability income and income from part-time employment. For self-employed persons, include the...

  4. Cryptococcus neoformans of Unusual Morphology

    PubMed Central

    Cruickshank, J. G.; Cavill, R.; Jelbert, M.

    1973-01-01

    A case of primary cryptococcosis of the lungs was caused by an isolate of Cryptococcus neoformans that assumes a giant form in tissue but which has a normal appearance on artificial culture. Electron microscopy revealed gross enlargement of the capsule and plasma membranes in the tissue form. Images PMID:4121033

  5. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Samantha, E-mail: samantha.warren@oncology.ox.ac.uk; Partridge, Mike; Bolsi, Alessandra

    Purpose: Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials: For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results: SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions: The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer.
The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial.

  6. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning

    PubMed Central

    Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J.; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A.

    2016-01-01

    Purpose Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. 
The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. PMID:27084641

  7. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning.

    PubMed

    Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A

    2016-05-01

    Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose-volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. 
Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. The effect of a Lean quality improvement implementation program on surgical pathology specimen accessioning and gross preparation error frequency.

    PubMed

    Smith, Maxwell L; Wilkerson, Trent; Grzybicki, Dana M; Raab, Stephen S

    2012-09-01

    Few reports have documented the effectiveness of Lean quality improvement in changing anatomic pathology patient safety. We used Lean methods of education; hoshin kanri goal setting and culture change; kaizen events; observation of work activities, hand-offs, and pathways; A3-problem solving, metric development, and measurement; and frontline work redesign in the accessioning and gross examination areas of an anatomic pathology laboratory. We compared the pre- and post-Lean implementation proportion of near-miss events and changes made in specific work processes. In the implementation phase, we documented 29 individual A3-root cause analyses. The pre- and postimplementation proportions of process- and operator-dependent near-miss events were 5.5 and 1.8 (P < .002) and 0.6 and 0.6, respectively. We conclude that through culture change and implementation of specific work process changes, Lean implementation may improve pathology patient safety.

  9. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    PubMed

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  10. The constitutional t(11;22): implications for a novel mechanism responsible for gross chromosomal rearrangements

    PubMed Central

    Kurahashi, H; Inagaki, H; Ohye, T; Kogo, H; Tsutsumi, M; Kato, T; Tong, M; Emanuel, BS

    2012-01-01

    The constitutional t(11;22)(q23;q11) is the most common recurrent non-Robertsonian translocation in humans. The breakpoint sequences of both chromosomes are characterized by several hundred base pairs of palindromic AT-rich repeats (PATRRs). Similar PATRRs have also been identified at the breakpoints of other nonrecurrent translocations, suggesting that PATRR-mediated chromosomal translocation represents one of the universal pathways for gross chromosomal rearrangement in the human genome. We propose that PATRRs have the potential to form cruciform structures through intrastrand base pairing in single-stranded DNA, creating a source of genomic instability and leading to translocations. Indeed, de novo examples of the t(11;22) are detected at a high frequency in sperm from normal healthy males. This review synthesizes recent data illustrating a novel paradigm for an apparent spermatogenesis-specific translocation mechanism. This observation has important implications pertaining to the predominantly paternal origin of de novo gross chromosomal rearrangements in humans. PMID:20507342

  11. Normally-Closed Zero-Leak Valve with Magnetostrictive Actuator

    NASA Technical Reports Server (NTRS)

    Ramspacher, Daniel J. (Inventor); Richard, James A. (Inventor)

    2017-01-01

    A non-pyrotechnic, normally-closed, zero-leak valve is a replacement for the pyrovalve used for both in-space and launch vehicle applications. The valve utilizes a magnetostrictive alloy for actuation, rather than pyrotechnic charges. The alloy, such as Terfenol-D, experiences magnetostriction, i.e. a gross elongation, when exposed to a magnetic field. This elongation fractures a parent metal seal, allowing fluid flow through the valve. The required magnetic field is generated by redundant coils that are isolated from the working fluid.

  12. Chronic cholera-like lesions caused by Moraxella osloensis.

    PubMed

    Emerson, F G; Kolb, G E; VanNatta, F A

    1983-01-01

    Cholera-like lesions appeared in four house-confined flocks of tom turkeys on one farm from October 30, 1980, to December 2, 1980; Moraxella osloensis was isolated from the tissues. All flocks were treated with 0.04% sulfaquinoxaline in the water for 3 days. The flocks returned to normal and had normal condemnation rates at slaughter. An experiment was conducted in which six hen turkeys were inoculated with a M. osloensis isolate. The same gross lesions were produced as seen in the field cases.

  13. Effects of Caffeine on Classroom Behavior, Sustained Attention, and a Memory Task in Preschool Children.

    ERIC Educational Resources Information Center

    Baer, Ruth A.

    1987-01-01

    The investigation of the effect of normative amounts of caffeine on the behavior of six normal kindergarten children found that caffeine exerted only small and inconsistent effects on such classroom behaviors as time off-task and gross motor activity. (Author/DB)

  14. 78 FR 38308 - PK Ventures, Inc.; North Carolina; Notice Soliciting Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-26

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 4093-031] PK Ventures, Inc... at normal pool elevation of 315 feet mean sea level and a gross storage capacity of 100 acre-feet; and (5) appurtenant facilities. The project operates run-of-river and generates an estimated average...

  15. 49 CFR 178.704 - General IBC standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... sift-proof and water-resistant. (b) All service equipment must be so positioned or protected as to..., without loss of hazardous materials, the internal pressure of the contents and the stresses of normal... transportation without gross distortion or failure and must be positioned so as to cause no undue stress in any...

  16. A Social Ecology of Hyperactive Boys: Medication Effects in Structured Classroom Environments.

    ERIC Educational Resources Information Center

    Whalen, Carol K.; And Others

    1979-01-01

    Among the findings were that hyperactive Ss on placebo showed lower rates of task attention and higher rates of gross motor movement, regular and negative verbalization, noise making, physical contact, social initiation, and other responses than did normal Ss and hyperactive Ss on Ritalin. (Author/DLS)

  17. Infant and Newborn Development

    MedlinePlus

    ... During their first year, babies start to develop skills they will use for the rest of their lives. The normal growth of babies can be broken down into the following areas: Gross motor - controlling the head, sitting, crawling, maybe even starting to walk Fine motor - holding a spoon, picking up a piece ...

  18. 46 CFR 182.430 - Engine exhaust pipe installation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 7 2014-10-01 2014-10-01 false Engine exhaust pipe installation. 182.430 Section 182... 100 GROSS TONS) MACHINERY INSTALLATION Specific Machinery Requirements § 182.430 Engine exhaust pipe... must be so arranged as to prevent backflow of water from reaching engine exhaust ports under normal...

  19. 46 CFR 182.430 - Engine exhaust pipe installation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Engine exhaust pipe installation. 182.430 Section 182... 100 GROSS TONS) MACHINERY INSTALLATION Specific Machinery Requirements § 182.430 Engine exhaust pipe... must be so arranged as to prevent backflow of water from reaching engine exhaust ports under normal...

  20. Assessment of global motor performance and gross and fine motor skills of infants attending day care centers.

    PubMed

    Souza, Carolina T; Santos, Denise C C; Tolocka, Rute E; Baltieri, Letícia; Gibim, Nathália C; Habechian, Fernanda A P

    2010-01-01

    To analyze the global motor performance and the gross and fine motor skills of infants attending two public child care centers full-time. This was a longitudinal study that included 30 infants assessed at 12 and 17 months of age with the Motor Scale of the Bayley Scales of Infant and Toddler Development, Third Edition (Bayley-III). This scale allows the analysis of global motor performance, fine and gross motor performance, and the discrepancy between them. The Wilcoxon test and Spearman's correlation coefficient were used. Most of the participants showed global motor performance within the normal range, but below the reference mean at 12 and 17 months, with 30% classified as having "suspected delays" in at least one of the assessments. Gross motor development was poorer than fine motor development at 12 and at 17 months of age, with great discrepancy between these two subtests in the second assessment. A clear individual variability was observed in fine motor skills, with weak linear correlation between the first and the second assessment of this subtest. A lower individual variability was found in the gross motor skills and global motor performance with positive moderate correlation between assessments. Considering both performance measurements obtained at 12 and 17 months of age, four infants were identified as having a "possible delay in motor development". The study showed the need for closer attention to the motor development of children who attend day care centers during the first 17 months of life, with special attention to gross motor skills (which are considered an integral part of the child's overall development) and to children with suspected delays in two consecutive assessments.

  1. Gallbladder torsion with acute cholecystitis and gross necrosis

    PubMed Central

    Alkhalili, Eyas; Bencsath, Kalman

    2014-01-01

    A 92-year-old woman presented to the emergency department with a 2-week history of worsening right-sided abdominal pain. On examination she had right mid-abdominal tenderness. Laboratory studies demonstrated leukocytosis with normal liver function tests. A CT of the abdomen was remarkable for a large fluid collection in the right abdomen and no discernible gallbladder in the gallbladder fossa. An ultrasound confirmed the suspicion of a distended, floating gallbladder. The patient was taken to the operating room for laparoscopic cholecystectomy. The gallbladder was found to have volvulised in a counterclockwise manner around its pedicle, with gross necrosis of the gallbladder. Pathological examination revealed acute necrotising calculus cholecystitis. PMID:24862426

  2. Spatial interpolation of solar global radiation

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Uboldi, F.; Antoniazzi, C.

    2010-09-01

    Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Its direct knowledge plays a crucial role in many applications, from agrometeorology to environmental meteorology. ARPA Lombardia's meteorological network includes about one hundred pyranometers, mostly distributed in the southern part of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (The Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear-sky conditions and takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces information about cloud presence and influence into the analysis fields. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
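The gross-error screening step described in this abstract can be illustrated with a minimal background-check sketch (the error standard deviations and rejection threshold below are hypothetical; ARPA Lombardia's operational quality control is more elaborate):

```python
import numpy as np

def background_check(obs, background, sigma_o, sigma_b, threshold=4.0):
    # Flag observations whose departure from the background field exceeds
    # `threshold` times the combined observation/background error SD.
    # Returns True = keep, False = suspected gross error.
    combined_sd = np.sqrt(sigma_o**2 + sigma_b**2)
    return np.abs(obs - background) <= threshold * combined_sd

# Hourly global radiation (W/m^2); the third station reports an implausible value.
obs = np.array([510.0, 495.0, 20.0, 530.0])
bg = np.array([500.0, 500.0, 500.0, 500.0])  # clear-sky model background
mask = background_check(obs, bg, sigma_o=30.0, sigma_b=50.0)
print(mask)
```

An observation is rejected when its departure from the clear-sky background exceeds a multiple of the combined error standard deviation; representativity and systematic errors need separate diagnostics.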

  3. Performance of normal females and carriers of color-vision deficiencies on standard color-vision tests.

    PubMed

    Dees, Elise W; Baraas, Rigmor C

    2014-04-01

    Carriers of red-green color-vision deficiencies are generally thought to behave like normal trichromats, although it is known that they may make errors on Ishihara plates. The aim here was to compare the performance of carriers with that of normal females on seven standard color-vision tests, including Ishihara plates. One hundred and twenty-six normal females, 14 protan carriers, and 29 deutan carriers aged 9-66 years were included in the study. Generally, deutan carriers performed worse than protan carriers and normal females on six out of the seven tests. The difference in performance between carriers and normal females was independent of age, but the proportion of carriers that made errors on pseudo-isochromatic tests increased with age. It was the youngest carriers, however, who made the most errors. There was considerable variation in performance among individuals in each group of females. The results are discussed in relation to variability in the number of different L-cone pigments.

  4. Toddle temporal-spatial deviation index: Assessment of pediatric gait.

    PubMed

    Cahill-Rowley, Katelyn; Rose, Jessica

    2016-09-01

    This research aims to develop a gait index, for use in the pediatric clinic as well as in research, that quantifies gait deviation in 18-22-month-old children: the Toddle Temporal-spatial Deviation Index (Toddle TDI). 81 preterm children (≤32 weeks) with very low birth weights (≤1500 g) and 42 full-term TD children aged 18-22 months, adjusted for prematurity, walked on a pressure-sensitive mat. Preterm children were administered the Bayley Scales of Infant Development-3rd Edition (BSID-III). Principal component analysis of the TD children's temporal-spatial gait parameters quantified raw gait deviation from typical, normalized to an average (standard deviation) Toddle TDI score of 100 (10) and calculated for all participants. The Toddle TDI was significantly lower for preterm versus TD children (86 vs. 100, p=0.003), and lower in preterm children with <85 vs. ≥85 BSID-III motor composite scores (66 vs. 89, p=0.004). The Toddle TDI, which by design plateaus at the typical average (BSID-III gross motor 8-12), correlated with BSID-III gross motor scores (r=0.60, p<0.001) and not fine motor scores (r=0.08, p=0.65) in preterm children with gross motor scores ≤8, suggesting sensitivity to gross motor development. The Toddle TDI demonstrated sensitivity and specificity to gross motor function in very-low-birth-weight preterm children aged 18-22 months, and has potential as an easily administered, revealing clinical gait metric. Copyright © 2016 Elsevier B.V. All rights reserved.
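The index construction described above (PCA of a typically-developing reference group, deviations rescaled to a 100 (10) scale) can be sketched as follows; the feature set, component count, and log-distance rescaling are illustrative assumptions, not the published Toddle TDI recipe:

```python
import numpy as np

def deviation_index(typical, subjects, n_components=3):
    # PCA on the typically-developing (TD) reference group's features,
    # Euclidean distance of each subject from the TD mean in the retained
    # principal-component space, then a linear rescaling of log-distances
    # so the TD group scores mean 100 with SD 10 (lower = more deviant).
    mu = typical.mean(axis=0)
    _, _, Vt = np.linalg.svd(typical - mu, full_matrices=False)
    V = Vt[:n_components].T

    def dist(X):
        return np.linalg.norm((X - mu) @ V, axis=1)

    log_td = np.log(dist(typical))
    z = (np.log(dist(subjects)) - log_td.mean()) / log_td.std()
    return 100.0 - 10.0 * z

rng = np.random.default_rng(0)
td = rng.normal(0.0, 1.0, size=(40, 6))      # reference temporal-spatial features
shifted = rng.normal(5.0, 1.0, size=(5, 6))  # clearly atypical gait features
print(deviation_index(td, td).mean())        # 100 by construction
print(deviation_index(td, shifted).mean())   # well below 100
```

By construction the reference group averages 100; children whose temporal-spatial parameters sit far from the TD cloud score lower.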

  5. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and an internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions of markedly non-normal covariates. © The Author(s) 2013.
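A minimal sketch of plain regression calibration with an internal sub-study may help fix ideas; this is the generic method the abstract builds on, not the authors' efficient NBRC/LRC estimators, and all variable names and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n):
    x = rng.normal(0.0, 1.0, n)      # true exposure (unobserved in main study)
    z = rng.normal(0.0, 1.0, n)      # error-free covariate
    w = x + rng.normal(0.0, 0.8, n)  # surrogate with classical measurement error
    y = 1.0 + 0.5*x + 0.3*z + 0.4*x*z + rng.normal(0.0, 0.5, n)
    return x, z, w, y

x_s, z_s, w_s, _ = simulate(300)     # internal sub-study: X observed
_, z_m, w_m, y_m = simulate(2000)    # main study: only W observed

# Calibration model E[X | W], fitted in the sub-study (linear under normality).
slope, intercept = np.polyfit(w_s, x_s, 1)
x_hat = intercept + slope * w_m

def fit(features, y):
    X = np.column_stack([np.ones(len(y))] + features)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_naive = fit([w_m, z_m, w_m * z_m], y_m)   # ignores ME: interaction attenuated
b_rc = fit([x_hat, z_m, x_hat * z_m], y_m)  # regression calibration
print(round(b_naive[3], 3), round(b_rc[3], 3))
```

The naive fit attenuates the interaction toward zero by roughly the reliability ratio of W, while substituting the calibrated E[X | W] restores it (here the true interaction coefficient is 0.4).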

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z.; Pike, R.W.; Hertwig, T.A.

    An effective approach for source reduction in chemical plants has been demonstrated using on-line optimization with flowsheeting (ASPEN PLUS) for process optimization and parameter estimation, and the Tjoa-Biegler algorithm implemented in a mathematical programming language (GAMS/MINOS) for data reconciliation and gross error detection. Results for a Monsanto sulfuric acid plant with a Bailey distributed control system showed a 25% reduction in sulfur dioxide emissions and a 17% improvement in profit over the current operating conditions. Details of the methods used are described.
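The data reconciliation and gross-error-detection step can be illustrated with a linear least-squares sketch: the measurement-test idea underlying approaches such as the Tjoa-Biegler algorithm (which itself handles nonlinear models and biased estimation). The flow values here are toy numbers:

```python
import numpy as np

def reconcile(y, A, sigma):
    # Linear data reconciliation: minimize sum(((y - x)/sigma)^2) subject to
    # the balance constraints A x = 0. Returns reconciled values and the
    # standardized measurement adjustments used for gross-error detection.
    S = np.diag(sigma**2)
    r = A @ y                                     # constraint residuals
    K = S @ A.T @ np.linalg.inv(A @ S @ A.T)
    x_hat = y - K @ r                             # reconciled measurements
    adj_cov = K @ A @ S                           # covariance of the adjustments
    z = (y - x_hat) / np.sqrt(np.diag(adj_cov))   # measurement test statistic
    return x_hat, z

# Toy mass balance: stream 1 = stream 2 + stream 3, with a gross error on y[0].
A = np.array([[1.0, -1.0, -1.0]])
sigma = np.array([1.0, 1.0, 1.0])
y = np.array([110.0, 60.0, 40.0])                 # true flows: 100 = 60 + 40
x_hat, z = reconcile(y, A, sigma)
print(np.round(x_hat, 2), np.round(z, 2))
```

Adjustments with |z| above a normal critical value (about 1.96 at the 5% level) flag suspect measurements; with a single constraint the test cannot localize the error to one stream, which is why plant-scale reconciliation uses many redundant balances.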

  7. Comparison of analytical and predictive methods for water, protein, fat, sugar, and gross energy in marine mammal milk.

    PubMed

    Oftedal, O T; Eisert, R; Barrell, G K

    2014-01-01

    Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. 
    Some other alternative methods (low-temperature drying for water determination; the Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing-sugar methods for total sugar) appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. Coordinating robot motion, sensing, and control in plans. LDRD project final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xavier, P.G.; Brown, R.G.; Watterberg, P.A.

    1997-08-01

    The goal of this project was to develop a framework for robotic planning and execution that provides a continuum of adaptability with respect to model incompleteness, model error, and sensing error. For example, dividing robot motion into gross-motion planning, fine-motion planning, and sensor-augmented control had yielded productive research and solutions to individual problems. Unfortunately, these techniques could only be combined by hand with ad hoc methods and were restricted to systems where all kinematics are completely modeled in planning. The original intent was to develop methods for understanding and autonomously synthesizing plans that coordinate motion, sensing, and control. The project considered this problem from several perspectives. Results included (1) theoretical methods to combine and extend gross-motion and fine-motion planning; (2) preliminary work in flexible-object manipulation and an implementable algorithm for planning shortest paths through obstacles for the free end of an anchored cable; (3) development and implementation of a fast swept-body distance algorithm; and (4) integration of Sandia's C-Space Toolkit geometry engine and SANDROS motion planner and improvements, which yielded a system practical for everyday motion planning, with path-segment planning at interactive speeds. Results (3) and (4) have either led to follow-on work or are being used in current projects, and the authors believe that (2) eventually will be as well.

  9. Experimental intraperitoneal infusion of OK-432 in rats: evaluation of peritoneal complications and pathology.

    PubMed

    Kim, Dong Wook; Kim, Hak Jin; Lee, Jun Woo

    2010-06-01

    OK-432 is known to be a potent sclerosant of cystic lesions. The purpose of this study was to evaluate both its safety and its pathologic effects after infusion of OK-432 into the peritoneal cavity of rats. Twenty male rats were used in this study. Twelve rats were infused intraperitoneally with 0.2 Klinische Einheit of OK-432 dissolved in 2 mL of normal saline (group 1: the treated group); four rats each were infused intraperitoneally with 0.5 mL of 99% ethanol (group 2) and normal saline (group 3), and served as the control groups. An abdominal ultrasonographic examination was performed both before and after the infusions in all rats. Three rats in group 1 and one rat in each of groups 2 and 3 were sacrificed each week following the infusion. Gross and microscopic evaluations of the peritoneum and abdominal cavity were performed on each rat. In group 1, the abdomen was clear on gross inspection and the peritoneum was unremarkable on microscopic examination. In group 2, mild-to-moderate peritoneal adhesions were revealed grossly, and inflammation and fibrosis of the peritoneum were demonstrated microscopically. In group 3, no specific abnormalities were noted on gross or microscopic examinations. Leakage or abnormal infusion of OK-432 solution into the peritoneal cavity during sclerotherapy of intra-abdominal or retroperitoneal cystic lesions does not result in any significant complications. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  10. Gross regional domestic product estimation: Application of two-way unbalanced panel data models to economic growth in East Nusa Tenggara province

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Sinu, Elisabeth B.; Setiawan

    2017-03-01

    East Nusa Tenggara Province has recently been subdivided into new districts, which can leave the amount of information or data collected unbalanced across districts. One consequence of ignoring this incompleteness is that the estimator becomes invalid. Therefore, the analysis of unbalanced panel data is crucial. The aim of this paper is to estimate Gross Regional Domestic Product in East Nusa Tenggara Province using an unbalanced panel data regression model with two-way error components, assuming a random-effects model (REM). In this research, we employ Feasible Generalized Least Squares (FGLS) as the regression coefficient estimation method. Since the variance of the model is unknown, the ANOVA method is used to obtain the variance components needed to construct the variance-covariance matrix. The data used in this research are secondary data taken from the Central Bureau of Statistics of East Nusa Tenggara Province for 21 districts over the period 2004-2013. The predictors are the number of laborers over 15 years old (X1), the electrification ratio (X2), and local revenues (X3), while Gross Regional Domestic Product at constant 2000 prices is the response (Y). The FGLS estimation shows that R2 is 80.539% and that all the chosen predictors significantly affect (α = 5%) the Gross Regional Domestic Product in all districts of East Nusa Tenggara Province, with elasticities of 0.22986, 0.090476, and 0.14749, respectively.
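A balanced one-way random-effects FGLS sketch conveys the flavor of the estimation; the paper's setting is unbalanced with two-way error components, which complicates the ANOVA variance-component step, and all numbers below are simulated rather than the East Nusa Tenggara data:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, K = 21, 10, 2                  # districts, years, predictors
beta = np.array([2.0, 0.5])
x = rng.normal(0.0, 1.0, (N, T, K))
u = rng.normal(0.0, 1.5, (N, 1))     # district random effect
e = rng.normal(0.0, 1.0, (N, T))     # idiosyncratic error
y = x @ beta + u + e

# Step 1: within (fixed-effects) regression estimates sigma_e^2.
xw = (x - x.mean(axis=1, keepdims=True)).reshape(-1, K)
yw = (y - y.mean(axis=1, keepdims=True)).ravel()
_, rss_w = np.linalg.lstsq(xw, yw, rcond=None)[:2]
sig_e2 = rss_w[0] / (N * (T - 1) - K)

# Step 2: between regression; its residual variance is sigma_u^2 + sigma_e^2/T.
Xb = np.column_stack([np.ones(N), x.mean(axis=1)])
rss_b = np.linalg.lstsq(Xb, y.mean(axis=1), rcond=None)[1][0]
sig_u2 = max(rss_b / (N - K - 1) - sig_e2 / T, 0.0)

# Step 3: FGLS as quasi-demeaning with theta, then pooled OLS.
theta = 1.0 - np.sqrt(sig_e2 / (sig_e2 + T * sig_u2))
ys = (y - theta * y.mean(axis=1, keepdims=True)).ravel()
xs = (x - theta * x.mean(axis=1, keepdims=True)).reshape(-1, K)
Xs = np.column_stack([np.full(N * T, 1.0 - theta), xs])
b_fgls = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print(np.round(b_fgls, 3))
```

With theta estimated from the two variance components, the quasi-demeaned OLS reproduces GLS; in the unbalanced two-way case theta varies by district and the variance-component formulas change accordingly.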

  11. 49 CFR 178.915 - General Large Packaging standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... Large Packagings intended for solid hazardous materials must be sift-proof and water-resistant. (b) All... materials, the internal pressure of the contents and the stresses of normal handling and transport. A Large... without gross distortion or failure and must be positioned so as to cause no undue stress in any part of...

  12. 49 CFR 178.915 - General Large Packaging standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... intended for solid hazardous materials must be sift-proof and water-resistant. (b) All service equipment... internal pressure of the contents and the stresses of normal handling and transport. A Large Packaging... gross distortion or failure and must be positioned so as to cause no undue stress in any part of the...

  13. Generalized Motor Abilities and Timing Behavior in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Zelaznik, Howard N.; Goffman, Lisa

    2010-01-01

    Purpose: To examine whether children with specific language impairment (SLI) differ from normally developing peers in motor skills, especially those skills related to timing. Method: Standard measures of gross and fine motor development were obtained. Furthermore, finger and hand movements were recorded while children engaged in 4 different timing…

  14. 46 CFR 130.140 - Steering on OSVs of 100 or more gross tons.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... pilothouse. (11) Instantaneous protection against short circuit for electrical power, and control circuits... normal steering power. (c) For compliance with paragraph (b) of this section, a common piping system for... VESSEL CONTROL, AND MISCELLANEOUS EQUIPMENT AND SYSTEMS Vessel Control § 130.140 Steering on OSVs of 100...

  15. A Brief Hydrodynamic Investigation of a 1/24-Scale Model of the DR-77 Seaplane

    NASA Technical Reports Server (NTRS)

    Fisher, Lloyd J.; Hoffman, Edward L.

    1953-01-01

    A limited investigation of a 1/24-scale dynamically similar model of the Navy Bureau of Aeronautics DR-77 design was conducted in Langley tank no. 2 to determine the calm-water take-off and the rough-water landing characteristics of the design, with particular regard to the take-off resistance and the landing accelerations. During the take-off tests, resistance, trim, and rise were measured and photographs were taken to study spray. During the landing tests, motion-picture records and normal-acceleration records were obtained. A ratio of gross load to maximum resistance of 3.2 was obtained with a 30 deg. dead-rise hydro-ski installation. The maximum normal accelerations obtained with a 30 deg. dead-rise hydro-ski installation were of the order of 8g to 10g in waves 8 feet high (full scale). A yawing instability that occurred just prior to hydro-ski emergence was improved by adding an afterbody extension, but adding the extension reduced the ratio of gross load to maximum resistance to 2.9.

  16. Profile of refractive errors in cerebral palsy: impact of severity of motor impairment (GMFCS) and CP subtype on refractive outcome.

    PubMed

    Saunders, Kathryn J; Little, Julie-Anne; McClelland, Julie F; Jackson, A Jonathan

    2010-06-01

    To describe refractive status in children and young adults with cerebral palsy (CP) and relate refractive error to standardized measures of type and severity of CP impairment and to ocular dimensions. A population-based sample of 118 participants aged 4 to 23 years with CP (mean 11.64 +/- 4.06) and an age-appropriate control group (n = 128; age, 4-16 years; mean, 9.33 +/- 3.52) were recruited. Motor impairment was described with the Gross Motor Function Classification Scale (GMFCS), and subtype was allocated with the Surveillance of Cerebral Palsy in Europe (SCPE) classification. Measures of refractive error were obtained from all participants and ocular biometry from a subgroup with CP. A significantly higher prevalence and magnitude of refractive error was found in the CP group compared to the control group. Axial length and spherical refractive error were strongly related. This relation did not improve with inclusion of corneal data. There was no relation between the presence or magnitude of spherical refractive errors in CP and the level of motor impairment, intellectual impairment, or the presence of communication difficulties. Higher spherical refractive errors were significantly associated with the nonspastic CP subtype. The presence and magnitude of astigmatism were greater when intellectual impairment was more severe, and astigmatic errors were explained by corneal dimensions. Conclusions: High refractive errors are common in CP, pointing to impairment of the emmetropization process. Biometric data support this. In contrast to other functional vision measures, spherical refractive error is unrelated to CP severity, but those with nonspastic CP tend to demonstrate the most extreme errors in refraction.

  17. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  18. Correcting AUC for Measurement Error.

    PubMed

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
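For contrast with the proposed normality-free method, the classical normality-based correction the abstract refers to can be sketched: under a binormal model, classical measurement error attenuates the observed AUC toward 0.5 by a known factor, which can be inverted. In practice the variance components would come from replicate measurements; the values below are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def corrected_auc(auc_obs, var_true_cases, var_true_controls, var_error):
    # Binormal model with classical measurement error of variance var_error
    # added independently in cases and controls: observed AUC is attenuated
    # toward 0.5; invert the attenuation to recover the error-free AUC.
    nd = NormalDist()
    var_t = var_true_cases + var_true_controls
    z_obs = nd.inv_cdf(auc_obs)
    return nd.cdf(z_obs * sqrt((var_t + 2.0 * var_error) / var_t))

# Observed AUC 0.70 with error variance equal to half the true variance per group.
print(round(corrected_auc(0.70, 1.0, 1.0, 0.5), 3))
```

With zero error variance the correction is the identity; the larger the error variance relative to the true biomarker variance, the further the corrected AUC moves away from 0.5.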

  19. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we show that model-averaging schemes such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI), and Average Lending Rate (ALR) of Malaysia.
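The idea of averaging the weight vectors produced by different weighting schemes can be sketched as follows; inverse-MSE and equal weights are generic stand-ins, not the paper's specific variance and standard-error weighting methods:

```python
import numpy as np

def inverse_mse_weights(errors):
    # One common weighting scheme: weight each model inversely to its
    # pseudo out-of-sample mean squared forecast error, normalized to sum to 1.
    mse = np.mean(np.asarray(errors)**2, axis=1)
    w = 1.0 / mse
    return w / w.sum()

def average_weight_sets(weight_sets):
    # "Forecast weight averaging" in the abstract's sense: average the weight
    # vectors produced by different methods, then combine forecasts with the
    # averaged weights.
    return np.asarray(weight_sets).mean(axis=0)

errs = np.array([[0.1, -0.2, 0.1],     # model 1: small past forecast errors
                 [0.5, 0.4, -0.6]])    # model 2: larger errors
w_mse = inverse_mse_weights(errs)
w_equal = np.array([0.5, 0.5])
w_avg = average_weight_sets([w_mse, w_equal])
print(np.round(w_avg, 3))
```

The averaged weights still sum to one and still favor the historically better model, but less aggressively than any single data-driven scheme, which is the hedging effect the paper exploits.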

  20. Continuous estimation of evapotranspiration and gross primary productivity from an Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Bandini, F.; Jakobsen, J.; J Zarco-Tejada, P.; Liu, X.; Haugård Olesen, D.; Ibrom, A.; Bauer-Gottwein, P.; Garcia, M.

    2017-12-01

    Model prediction of evapotranspiration (ET) and gross primary productivity (GPP) using optical and thermal satellite imagery is biased towards clear-sky conditions. Unmanned Aerial Systems (UAS) can collect optical and thermal signals at very high spatial resolution (< 1 m) under both sunny and cloudy weather conditions. However, methods to obtain model outputs between image acquisitions are still needed. This study uses UAS-based optical and thermal observations to continuously estimate daily ET and GPP in a Danish willow forest for the entire 2016 growing season. A hexacopter equipped with multispectral and thermal infrared cameras and a real-time kinematic Global Navigation Satellite System was used. The Normalized Difference Vegetation Index (NDVI) and the Temperature Vegetation Dryness Index (TVDI) were used as proxies for leaf area index and soil moisture conditions, respectively. To obtain continuous daily records between UAS acquisitions, UAS surface temperature was assimilated by the ensemble Kalman filter into a prognostic land surface model (Noilhan and Planton, 1989), which relies on the force-restore method, to simulate continuous land surface temperature. NDVI was interpolated to daily time steps by the cubic spline method. Using these continuous datasets, a joint ET and GPP model, which combines the Priestley-Taylor Jet Propulsion Laboratory ET model (Fisher et al., 2008; Garcia et al., 2013) and the Light Use Efficiency GPP model (Potter et al., 1993), was applied. The simulated ET and GPP were compared with the footprint of eddy covariance observations. The simulated daily ET has a root mean square error (RMSE) of 14.41 W m⁻² and a correlation coefficient of 0.83. The simulated daily GPP has an RMSE of 1.56 g C m⁻² d⁻¹ and a correlation coefficient of 0.87. This study demonstrates the potential of UAS-based multispectral and thermal mapping to continuously estimate ET and GPP under both sunny and cloudy weather conditions.
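The daily gap-filling of NDVI by cubic splines mentioned above can be sketched with a minimal natural-spline implementation (numpy only; the acquisition days and NDVI values are hypothetical, and a production pipeline would typically use scipy.interpolate.CubicSpline instead):

```python
import numpy as np

def natural_cubic_spline(x, y, x_new):
    # Natural cubic spline through (x, y): solve a tridiagonal system for the
    # interior second derivatives M (natural boundary conditions M0 = Mn = 0),
    # then evaluate piecewise cubics at x_new.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    h = np.diff(x)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        b[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, b)
    idx = np.clip(np.searchsorted(x, x_new) - 1, 0, n - 2)
    t = x_new - x[idx]
    return (y[idx]
            + t * ((y[idx + 1] - y[idx]) / h[idx] - h[idx] * (2 * M[idx] + M[idx + 1]) / 6)
            + t**2 * M[idx] / 2
            + t**3 * (M[idx + 1] - M[idx]) / (6 * h[idx]))

# UAS NDVI on acquisition days -> daily series (hypothetical values).
days = np.array([0.0, 20.0, 45.0, 70.0])
ndvi = np.array([0.35, 0.55, 0.80, 0.75])
daily = natural_cubic_spline(days, ndvi, np.arange(0.0, 71.0))
print(daily[[0, 20, 45, 70]])
```

The spline reproduces the observed NDVI on acquisition days and provides a smooth daily series in between, which is what the joint ET/GPP model consumes.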

  1. Syntactic error modeling and scoring normalization in speech recognition

    NASA Technical Reports Server (NTRS)

    Olorenshaw, Lex

    1991-01-01

    The objective was to develop the speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Research was performed in the following areas: (1) syntactic error modeling; (2) score normalization; and (3) phoneme error modeling. The study into the types of errors that a reader makes will provide the basis for creating tests which will approximate the use of the system in the real world. NASA-Johnson will develop this technology into a 'Literacy Tutor' in order to bring innovative concepts to the task of teaching adults to read.

  2. Derivation of three closed loop kinematic velocity models using normalized quaternion feedback for an autonomous redundant manipulator with application to inverse kinematics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    1993-04-01

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three-dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven-axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.
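A generic normalized-quaternion orientation error can be sketched as below; this is the common vector-part formulation, not necessarily the exact Ickes or Luh-Walker-Paul variants the report compares:

```python
import numpy as np

def quat_mult(q, r):
    # Hamilton product of quaternions in [w, x, y, z] order.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def orientation_error(q_desired, q_current):
    # Vector part of q_err = q_desired * conj(q_current): it vanishes when the
    # orientations coincide, and its direction gives the rotation axis needed
    # to align them, so it can drive a closed-loop velocity command.
    q_err = quat_mult(q_desired, quat_conj(q_current))
    if q_err[0] < 0:  # resolve the quaternion double-cover ambiguity
        q_err = -q_err
    return q_err[1:]

q90z = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])  # 90 deg about z
print(orientation_error(q90z, q90z))                       # zero error
print(orientation_error(q90z, np.array([1.0, 0, 0, 0])))   # error along z
```

Feeding this error (scaled by a gain) into the manipulator's resolved-rate loop yields the quaternion-feedback tracking structure the report analyzes.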

  3. Novel Radiobiological Gamma Index for Evaluation of 3-Dimensional Predicted Dose Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sumida, Iori, E-mail: sumida@radonc.med.osaka-u.ac.jp; Yamaguchi, Hajime; Kizaki, Hisao

    2015-07-15

    Purpose: To propose a gamma index-based dose evaluation index that integrates the radiobiological parameters of tumor control probability (TCP) and normal tissue complication probability (NTCP). Methods and Materials: Fifteen prostate and head and neck (H&N) cancer patients received intensity modulated radiation therapy. Before treatment, patient-specific quality assurance was conducted via beam-by-beam analysis, and beam-specific dose error distributions were generated. The predicted 3-dimensional (3D) dose distribution was calculated by back-projection of the relative dose error distribution per beam. A 3D gamma analysis of different organs (prostate: clinical [CTV] and planned target volumes [PTV], rectum, bladder, femoral heads; H&N: gross tumor volume [GTV], CTV, spinal cord, brain stem, both parotids) was performed using predicted and planned dose distributions under 2%/2 mm tolerance, and the physical gamma passing rate was calculated. TCP and NTCP values were calculated for voxels with physical gamma indices (PGI) >1. We propose a new radiobiological gamma index (RGI) to quantify the radiobiological effects of TCP and NTCP and calculate radiobiological gamma passing rates. Results: The mean RGI gamma passing rates for prostate cases were significantly different compared with those of PGI (P<.03–.001). The mean RGI gamma passing rates for H&N cases (except for GTV) were significantly different compared with those of PGI (P<.001). Differences in gamma passing rates between PGI and RGI were due to dose differences between the planned and predicted dose distributions. The radiobiological gamma distribution was visualized to identify areas where the dose was radiobiologically important. Conclusions: RGI was proposed to integrate radiobiological effects into PGI. This index would assist physicians and medical physicists not only in physical evaluations of treatment delivery accuracy, but also in clinical evaluations of the predicted dose distribution.
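The physical gamma evaluation that the radiobiological index builds on can be sketched in one dimension; this is a brute-force illustration on a synthetic profile using the paper's 2%/2 mm tolerance, whereas clinical 3D tools use optimized searches:

```python
import numpy as np

def gamma_1d(ref_dose, x_ref, eval_dose, x_eval, dose_tol=0.02, dist_tol=2.0):
    # Brute-force 1-D global gamma: for each reference point, minimize the
    # combined normalized dose difference and distance-to-agreement over all
    # evaluated points. Evaluate the comparison dose on a grid finer than the
    # distance criterion, or steep gradients will fail spuriously.
    d_norm = dose_tol * ref_dose.max()
    gamma = np.empty_like(ref_dose)
    for i, (xi, di) in enumerate(zip(x_ref, ref_dose)):
        dd = (eval_dose - di) / d_norm
        dx = (x_eval - xi) / dist_tol
        gamma[i] = np.sqrt(dd**2 + dx**2).min()
    return gamma

def profile(x, shift=0.0):
    # Synthetic Gaussian dose profile (positions in mm).
    return np.exp(-((x - 5.0 - shift) ** 2) / 8.0)

x_ref = np.arange(0.0, 10.5, 1.0)         # measurement grid
x_fine = np.arange(-1.0, 11.001, 0.05)    # fine evaluation grid
g = gamma_1d(profile(x_ref), x_ref, profile(x_fine, shift=0.5), x_fine)
print(100.0 * np.mean(g <= 1.0))  # 0.5 mm shift passes 2%/2 mm everywhere
```

A point passes when gamma <= 1; the passing rate over all points is the familiar QA summary, and the paper's RGI replaces the purely physical dose term with TCP/NTCP-weighted quantities for voxels failing the physical test.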

  4. Diagnostic modeling of dimethylsulfide production in coastal water west of the Antarctic Peninsula

    NASA Technical Reports Server (NTRS)

    Hermann, Maria; Najjar, Raymond G.; Neeley, Aimee R.; Vila-Costa, Maria; Dacey, John W. H.; DiTullio, Giacomo R.; Kieber, David J.; Kiene, Ronald P.; Matrai, Patricia A.; Simo, Rafel

    2012-01-01

    The rate of gross biological dimethylsulfide (DMS) production at two coastal sites west of the Antarctic Peninsula, off Anvers Island, near Palmer Station, was estimated using a diagnostic approach that combined field measurements from 1 January 2006 through 1 March 2006 and a one-dimensional physical model of ocean mixing. The average DMS production rate in the upper water column (0-60 m) was estimated to be 3.1 +/- 0.6 nM/d at station B (closer to shore) and 2.7 +/- 0.6 nM/d at station E (further from shore). The estimated DMS replacement time was on the order of 1 d at both stations. DMS production was greater in the mixed layer than it was below the mixed layer. The average DMS production normalized to chlorophyll was 0.5 +/- (nM/d)/(mg/m3) at station B and 0.7 +/- 0.2 (nM/d)/(mg/m3) at station E. When the diagnosed production rates were normalized to the observed concentrations of total dimethylsulfoniopropionate (DMSPt, the biogenic precursor of DMS), we found a remarkable similarity between our estimates at stations B and E (0.06 +/- 0.02 and 0.04 +/- 0.01 (nM DMS/d)/(nM DMSP), respectively) and the results obtained in a previous study from a contrasting biogeochemical environment in the North Atlantic subtropical gyre (0.047 +/- 0.006 and 0.087 +/- 0.014 (nM DMS/d)/(nM DMSP) in a cyclonic and an anticyclonic eddy, respectively). We propose that gross biological DMS production normalized to DMSPt might be relatively independent of the biogeochemical environment, and place our average estimate at 0.06 +/- 0.01 (nM DMS/d)/(nM DMSPt). The significance of this finding is that it can provide a means to use DMSPt measurements to extrapolate gross biological DMS production, which is extremely difficult to measure experimentally under realistic in situ conditions.

  5. Smoothing of the bivariate LOD score for non-normal quantitative traits.

    PubMed

    Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John

    2005-12-30

    Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtotic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.

  6. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
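
    The correction step, removing integer-period jumps from the high-frequency absolute code using the low-frequency code, can be sketched as follows (the function name and the assumption that low-frequency noise stays below half a period are ours, not the paper's):

```python
def correct_period_jumps(ac_high, ac_low, period):
    """Remove a period-jump error (an unknown integer multiple of the
    fringe period) from the high-frequency absolute analog code, using
    the jump-free but noisier low-frequency code as reference.
    Valid while the low-frequency error stays below half a period."""
    k = round((ac_high - ac_low) / period)  # estimated number of jumped periods
    return ac_high - k * period

# A two-period jump (2 * 16) is removed despite noise in the
# low-frequency code; a jump-free value is left unchanged.
corrected = correct_period_jumps(ac_high=132.0, ac_low=101.5, period=16.0)
unchanged = correct_period_jumps(ac_high=100.3, ac_low=100.9, period=16.0)
```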

  7. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics are modeled using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to the original description.

  8. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
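
    The claimed decline is easy to check numerically with a normal-approximation power calculation for a one-sided z-test (the 0.5 SD effect size and alpha = 0.05 below are illustrative assumptions, not values from the paper):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def type2_error(n, delta=0.5, z_alpha=1.645):
    """Type II error (beta) of a one-sided z-test with effect size delta
    in standard-deviation units, at alpha = 0.05 (z_alpha = 1.645)."""
    return norm_cdf(z_alpha - delta * sqrt(n))

# Each doubling of the sample size shrinks beta faster and faster,
# consistent with an exponential decline in n.
betas = [type2_error(n) for n in (10, 20, 40, 80)]
```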

  9. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    ERIC Educational Resources Information Center

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…

  10. The impact of sample non-normality on ANOVA and alternative methods.

    PubMed

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.

  11. A log-sinh transformation for data normalization and variance stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
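
    Wang et al. define the transform as z = (1/b) ln(sinh(a + b y)); a minimal sketch with illustrative parameter values (a = 0.1 and b = 0.2 are our choices, not fitted values):

```python
import math

def log_sinh(y, a, b):
    """Log-sinh transform: z = (1/b) * ln(sinh(a + b*y)).
    For large y, sinh(a + b*y) ~ exp(a + b*y)/2, so z grows nearly
    linearly and the transformed error spread approaches a constant."""
    return math.log(math.sinh(a + b * y)) / b

def inv_log_sinh(z, a, b):
    """Inverse transform: y = (asinh(exp(b*z)) - a) / b."""
    return (math.asinh(math.exp(b * z)) - a) / b

# Round trip with illustrative parameters.
y = 5.0
z = log_sinh(y, a=0.1, b=0.2)
y_back = inv_log_sinh(z, a=0.1, b=0.2)
```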

  12. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    Proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a3 = mu3/sigma^3 and the kurtosis a4 = mu4/sigma^4 - 3, where mu_i is the i-th order moment and sigma is the standard deviation. It is well known that for a normal distribution a3 = a4 = 0.
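
    A sketch of computing both statistics from a sample of O-F residuals, following the moment definitions above (the sample values are illustrative):

```python
def skew_kurt(residuals):
    """Skewness a3 = mu3/sigma^3 and excess kurtosis a4 = mu4/sigma^4 - 3,
    where mu_k is the k-th central moment; both are 0 for a normal
    distribution, so large values flag non-normal O-F residuals."""
    n = len(residuals)
    mean = sum(residuals) / n
    mu = lambda k: sum((x - mean) ** k for x in residuals) / n
    sigma = mu(2) ** 0.5
    return mu(3) / sigma ** 3, mu(4) / sigma ** 4 - 3.0

# A symmetric sample has zero skewness (and, for this flat sample,
# negative excess kurtosis).
a3, a4 = skew_kurt([-2.0, -1.0, 0.0, 1.0, 2.0])
```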

  13. Hessian matrix approach for determining error field sensitivity to coil deviations

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.

  14. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  15. Effects of shape, size, and chromaticity of stimuli on estimated size in normally sighted, severely myopic, and visually impaired students.

    PubMed

    Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching

    2010-06-01

    Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.

  16. Inca - interparietal bones in neurocranium of human skulls in central India

    PubMed Central

    Marathe, RR; Yogesh, AS; Pandit, SV; Joshi, M; Trivedi, GN

    2010-01-01

    Inca bones are accessory bones found in the neurocranium of human skulls. Occurrence of Inca bones is rare compared to other intersutural bones such as wormian bones. These Inca ossicles are regarded as variants of the normal. The reporting of such occurrences is inadequate from Central India. Objectives: To find the incidence of Inca variants in Central India. Materials and Methods: In the present study, 380 dried adult human skulls were examined. All specimen samples were procured from various medical colleges of Central India. They were analyzed for gross incidence, sexual dimorphism and number of fragments of Inca bones. Results: The gross incidence of Inca bones was found to be 1.315%. The incidence rate was higher in male skulls than in female skulls (male: 1.428%; female: 1.176%). The Inca bones frequently occurred singly. Out of the five observed Inca ossicles, two were fragmented. Conclusions: These data give an idea of the gross incidence, sexual dimorphism and number of fragments of Inca bones in the neurocranium of human skulls from Central India. The knowledge of this variation is useful for neurosurgeons, anthropologists and radiologists. PMID:21799611

  17. Inca - interparietal bones in neurocranium of human skulls in central India.

    PubMed

    Marathe, Rr; Yogesh, As; Pandit, Sv; Joshi, M; Trivedi, Gn

    2010-01-01

    Inca bones are accessory bones found in the neurocranium of human skulls. Occurrence of Inca bones is rare compared to other intersutural bones such as wormian bones. These Inca ossicles are regarded as variants of the normal. The reporting of such occurrences is inadequate from Central India. The objective was to find the incidence of Inca variants in Central India. In the present study, 380 dried adult human skulls were examined. All specimen samples were procured from various medical colleges of Central India. They were analyzed for gross incidence, sexual dimorphism and number of fragments of Inca bones. The gross incidence of Inca bones was found to be 1.315%. The incidence rate was higher in male skulls than in female skulls (male: 1.428%; female: 1.176%). The Inca bones frequently occurred singly. Out of the five observed Inca ossicles, two were fragmented. These data give an idea of the gross incidence, sexual dimorphism and number of fragments of Inca bones in the neurocranium of human skulls from Central India. The knowledge of this variation is useful for neurosurgeons, anthropologists and radiologists.

  18. Problems and methods of calculating the Legendre functions of arbitrary degree and order

    NASA Astrophysics Data System (ADS)

    Novikova, Elena; Dmitrenko, Alexander

    2016-12-01

    The known standard recursion methods for computing the fully normalized associated Legendre functions do not give the necessary precision under the IEEE 754-2008 standard, which creates problems of underflow and overflow. Analysis of the problems in calculating the Legendre functions shows that underflow is not dangerous by itself. The main problem that generates gross errors in the calculations is the effect of "absolute zero". Once it appears in a forward column recursion, "absolute zero" converts to zero all values that are multiplied by it, regardless of whether a zero result of the multiplication is real or not. Three methods of calculating the Legendre functions that remove the effect of "absolute zero" from the calculations are discussed here. These methods are also of interest because they have almost no limit on the maximum degree of the Legendre functions. It is shown that the numerical accuracy of the three methods is the same, but the CPU time for calculating the Legendre functions with the Fukushima method is minimal; the Fukushima method is therefore the best. Its main advantage is computational speed, which is an important factor when calculating as many Legendre functions as the 2 401 336 required for EGM2008.
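
    The "absolute zero" effect can be reproduced with a deliberately naive seed for the forward column recursion (the loop below is an illustrative simplification, not one of the three corrected methods): once the product underflows to an exact 0.0, every later value multiplied by it is lost.

```python
import math

def sectoral_seed(m, theta):
    """Naive fully normalized sectoral seed built by repeated
    multiplication with sqrt((2i+1)/(2i)) * sin(theta); for small theta
    and large m the running product underflows to an exact 0.0."""
    p = 1.0
    s = math.sin(theta)
    for i in range(1, m + 1):
        p *= math.sqrt((2 * i + 1) / (2 * i)) * s
    return p

# Near the pole the seed underflows ("absolute zero"), so anything
# multiplied by it is zeroed; away from the pole it stays representable.
near_pole = sectoral_seed(200, 0.01)
mid_lat = sectoral_seed(200, 1.0)
```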

  19. Foot Structure in Japanese Speech Errors: Normal vs. Pathological

    ERIC Educational Resources Information Center

    Miyakoda, Haruko

    2008-01-01

    Although many studies of speech errors have been presented in the literature, most have focused on errors occurring at either the segmental or feature level. Few, if any, studies have dealt with the prosodic structure of errors. This paper aims to fill this gap by taking up the issue of prosodic structure in Japanese speech errors, with a focus on…

  20. Cache-based error recovery for shared memory multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1989-01-01

    A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.

  1. Analysis of difference between direct and geodetic mass balance measurements at South Cascade Glacier, Washington

    USGS Publications Warehouse

    Krimmel, R.M.

    1999-01-01

    Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e. area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference to only the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include under-estimation of the density of lost material, sinking stakes, or poorly represented areas.

  2. Prediction of municipal solid waste generation using nonlinear autoregressive network.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A

    2015-12-01

    Most of the developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using a feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables like population number, gross domestic product, electricity demand per capita and employment and unemployment numbers. In addition, variable selection procedures are also developed to select a significant explanatory variable. The model evaluation was performed using the coefficient of determination (R^2) and mean square error (MSE). The optimum model that produced the lowest testing MSE (2.46) and the highest R^2 (0.97) had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.

  3. Reflective-impulsive style and conceptual tempo in a gross motor task.

    PubMed

    Keller, J; Ripoll, H

    2001-06-01

    The reflective-impulsive construct refers to responses made slowly or quickly in a situation with high uncertainty. Children who are labeled "reflective" take a longer time to respond and make few errors, whereas "impulsive" children are fast and inaccurate. Although the validity of the test and the definition of reflective-impulsive style are well accepted, whether such children respond fast or slowly to all tasks is questioned. Some children do not fit the dichotomy; two other groups arise, the fast-accurate and the slow-inaccurate. The response styles of 86 boys, ages 5, 7, and 9 years, performing a gross motor task, i.e., hitting a ball with a racquet, were studied. Analysis indicated that the slowest children on the Matching Familiar Figures Test can be faster than the fastest ones and remain more accurate. As the definition of the reflective-impulsive style is based on time, the reflective ones might better be viewed as children who can adapt their response time to the context and thus be more efficient at problem-solving.

  4. An attempt to determine the effect of increase of observation correlations on detectability and identifiability of a single gross error

    NASA Astrophysics Data System (ADS)

    Prószyński, Witold; Kwaśniak, Mieczysław

    2016-12-01

    The paper presents the results of investigating the effect of increase of observation correlations on detectability and identifiability of a single gross error, the outlier test sensitivity, and also the response-based measures of internal reliability of networks. To reduce the practically incomputable number of possible test options that arises when considering all the non-diagonal elements of the correlation matrix as variables, its simplest representation was used: a matrix with all non-diagonal elements of equal value, termed uniform correlation. By raising the common correlation value incrementally, a sequence of matrix configurations could be obtained corresponding to the increasing level of observation correlations. For each of the measures characterizing the above-mentioned features of network reliability, the effect is presented in diagram form as a function of the increasing level of observation correlations. The influence of observation correlations on the sensitivity of the w-test for correlated observations (Förstner 1983, Teunissen 2006) is investigated in comparison with the original Baarda's w-test designed for uncorrelated observations, to determine the character of the expected sensitivity degradation of the latter when used for correlated observations. The correlation effects obtained for the different reliability measures are mutually consistent to a satisfactory extent. As a by-product of the analyses, a simple formula, valid for any arbitrary correlation matrix, is proposed for transforming Baarda's w-test statistics into the w-test statistics for correlated observations.
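
    A sketch of the w-test statistic for correlated observations in the Förstner/Teunissen form, applied to a toy least-squares problem of our own construction; with an identity cofactor matrix it reduces to Baarda's original w_i = v_i / sqrt((Q_vv)_ii):

```python
import numpy as np

def w_tests(A, Qll, y):
    """w-test statistics w_i = (c_i' Qll^-1 v) / sqrt(c_i' Qll^-1 Qvv Qll^-1 c_i)
    for all observations of the weighted LS model y = A x + v,
    with observation cofactor matrix Qll and c_i the i-th unit vector."""
    Qinv = np.linalg.inv(Qll)
    N = A.T @ Qinv @ A
    x = np.linalg.solve(N, A.T @ Qinv @ y)   # weighted LS estimate
    v = y - A @ x                            # LS residuals
    Qvv = Qll - A @ np.linalg.solve(N, A.T)  # residual cofactor matrix
    M = Qinv @ Qvv @ Qinv
    return (Qinv @ v) / np.sqrt(np.diag(M))

# Three repeated measurements of one unknown; a gross error in the third
# observation yields the largest |w_i|, flagging it as the outlier.
A = np.ones((3, 1))
w = w_tests(A, np.eye(3), np.array([0.0, 0.0, 5.0]))
```

    An observation is suspected of a gross error when |w_i| exceeds the critical value of the standard normal distribution (commonly about 3.29 at alpha0 = 0.001).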

  5. Health-Related Fitness, Motor Coordination, and Physical and Sedentary Activities of Urban and Rural Children in Suriname.

    PubMed

    Walhain, Fenna; van Gorp, Marloes; Lamur, Kenneth S; Veeger, Dirkjan H E J; Ledebt, Annick

    2016-10-01

    Health-related fitness (HRF) and motor coordination (MC) can be influenced by children's environment and lifestyle behavior. This study evaluates the association between living environment and HRF, MC, and physical and sedentary activities of children in Suriname. Tests were performed for HRF (morphological, muscular, and cardiorespiratory component), gross MC (Körperkoordinations Test für Kinder), fine MC (Movement Assessment Battery for Children), and self-reported activities in 79 urban and 77 rural 7-year-old Maroon children. Urban-rural differences were calculated by an independent sample t test (Mann-Whitney U test if not normally distributed) and χ² test. No difference was found in body mass index, muscle strength, and the overall score of gross and fine MC. However, urban children scored lower in HRF on the cardiorespiratory component (P ≤ .001), in gross MC on walking backward (P = .014), and jumping sideways (P = .011). They scored higher in the gross MC component moving sideways (P ≤ .001) and lower in fine MC on the trail test (P = .036) and reported significantly more sedentary and fewer physical activities than rural children. Living environment was associated with certain components of HRF, MC, and physical and sedentary activities of 7-year-old children in Suriname. Further research is needed to evaluate the development of urban children to provide information for possible intervention and prevention strategies.

  6. Macroscopic Rotator Cuff Tendinopathy and Histopathology Do Not Predict Repair Outcomes of Rotator Cuff Tears.

    PubMed

    Sethi, Paul M; Sheth, Chirag D; Pauzenberger, Leo; McCarthy, Mary Beth R; Cote, Mark P; Soneson, Emma; Miller, Seth; Mazzocca, Augustus D

    2018-03-01

    Numerous studies have identified factors that may affect the chances of rotator cuff healing after surgery. Intraoperative tendon quality may be used to predict healing and to determine type of repair and/or consideration of augmentation. There are no data that correlate how gross tendon morphology and degree of tendinopathy affect patient outcome or postoperative tendon healing. Purpose/Hypothesis: The purposes of this study were to (1) compare the gross appearance of the tendon edge during arthroscopic rotator cuff repair with its histological degree of tendinopathy and (2) determine if gross appearance correlated with postoperative repair integrity. The hypothesis was that gross (macroscopic) tendon with normal thickness, no delamination, and elastic tissue before repair would have a correlation with low Bonar scores, higher postoperative American Shoulder and Elbow Surgeons (ASES) scores, and increased rates of postoperative tendon healing on ultrasound. Cross-sectional study; Level of evidence, 3. A total of 105 patients undergoing repair of medium-size (1-3 cm) full-thickness rotator cuff tears were enrolled in the study. Intraoperatively, the supraspinatus tendon was rated on thickness, fraying, and stiffness. Tendon tissue was recovered for histological analysis based on the Bonar scoring system. Postoperative ASES and ultrasound assessment of healing were obtained 1 year after repair. Correlation between gross appearance of the tendon and rotator cuff histology was determined. Of the 105 patients, 85 completed the study. The mean age of the patients was 61.6 years; Bonar score, 7.5; preoperative ASES score, 49; and postoperative ASES score, 86. Ninety-one percent of repairs were intact on ultrasound. Gross appearance of torn rotator cuff tendon tissue did not correlate with histological appearance. 
Neither histological (Bonar) score nor gross appearance correlated with multivariate analysis of ASES score, postoperative repair status, or demographic data. The degree of tendinopathy did not correlate with morphological appearance of the tendon. Neither of these parameters correlated with healing or patient outcome. This study suggests that the degree of tendinopathy, unlike muscle atrophy, may not be predictive of outcomes and that, on appearance, poor quality tendon has adequate healing capacity. Therefore, abnormal gross tendon appearance should not affect the repair effort or technique.

  7. Hyaline cartilage thickness in radiographically normal cadaveric hips: comparison of spiral CT arthrographic and macroscopic measurements.

    PubMed

    Wyler, Annabelle; Bousson, Valérie; Bergot, Catherine; Polivka, Marc; Leveque, Eric; Vicaut, Eric; Laredo, Jean-Denis

    2007-02-01

    To assess spiral multidetector computed tomographic (CT) arthrography for the depiction of cartilage thickness in hips without cartilage loss, with evaluation of anatomic slices as the reference standard. Permission to perform imaging studies in cadaveric specimens of individuals who had willed their bodies to science was obtained from the institutional review board. Two independent observers measured the femoral and acetabular hyaline cartilage thickness of 12 radiographically normal cadaveric hips (from six women and five men; age range at death, 52-98 years; mean, 76.5 years) on spiral multidetector CT arthrographic reformations and on coronal anatomic slices. Regions of cartilage loss at gross or histologic examination were excluded. CT arthrographic and anatomic measurements in the coronal plane were compared by using Bland-Altman representation and a paired t test. Differences between mean cartilage thicknesses at the points of measurement were tested by means of analysis of variance. Interobserver and intraobserver reproducibilities were determined. At CT arthrography, mean cartilage thickness ranged from 0.32 to 2.53 mm on the femoral head and from 0.95 to 3.13 mm on the acetabulum. Observers underestimated cartilage thickness in the coronal plane by 0.30 mm +/- 0.52 (mean +/- standard error) at CT arthrography (P < .001) compared with the anatomic reference standard. Ninety-five percent of the differences between CT arthrography and anatomic values ranged from -1.34 to 0.74 mm. The difference between mean cartilage thicknesses at the different measurement points was significant for coronal spiral multidetector CT arthrography and anatomic measurement of the femoral head and acetabulum and for sagittal and transverse CT arthrography of the femoral head (P < .001). Changes in cartilage thickness from the periphery to the center of the joint ("gradients") were found by means of spiral multidetector CT arthrography and anatomic measurement. 
Spiral multidetector CT arthrography depicts cartilage thickness gradients in radiographically normal cadaveric hips. (c) RSNA, 2007.

  8. Response Errors in Females' and Males' Sentence Lipreading Necessitate Structurally Different Models for Predicting Lipreading Accuracy

    ERIC Educational Resources Information Center

    Bernstein, Lynne E.

    2018-01-01

    Lipreaders recognize words with phonetically impoverished stimuli, an ability that varies widely in normal-hearing adults. Lipreading accuracy for sentence stimuli was modeled with data from 339 normal-hearing adults. Models used measures of phonemic perceptual errors, insertion of text that was not in the stimulus, gender, and auditory speech…

  9. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS, building on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the prediction of copper concentration in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average standard error (error bar), coefficient of determination (R²), root-mean-square error of prediction (RMSEP), and average maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
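
    The baseline the authors compare against, whole-spectrum area normalization, can be illustrated with a small simulation. This is a sketch, not the authors' PLS model; the line intensities and noise levels are invented. It shows how dividing an analyte line by the total spectral area cancels the common pulse-to-pulse fluctuation and lowers the relative standard deviation (RSD):

```python
import random, statistics

random.seed(0)

# Hypothetical 5-line copper spectrum (arbitrary intensities); line 0 is
# the analyte line whose shot-to-shot precision we care about.
base_lines = [100.0, 60.0, 30.0, 15.0, 8.0]

raw, normed = [], []
for _ in range(200):
    f = random.gauss(1.0, 0.10)          # common pulse-to-pulse fluctuation
    # small independent per-line noise on top of the common factor
    shot = [x * f * random.gauss(1.0, 0.02) for x in base_lines]
    raw.append(shot[0])                  # raw analyte intensity
    normed.append(shot[0] / sum(shot))   # area-normalized intensity

def rsd(xs):
    """Relative standard deviation, in percent."""
    return 100.0 * statistics.stdev(xs) / statistics.mean(xs)

print(f"raw RSD:        {rsd(raw):.1f}%")
print(f"normalized RSD: {rsd(normed):.1f}%")
```

    The normalized RSD drops because the common multiplicative factor cancels in the ratio; the PLS approach in the abstract goes further by also modeling the plasma-parameter-driven fluctuations.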

  10. Healthcare: affordable quality coverage for all.

    PubMed

    Lee, Keat Jin

    2009-06-01

    The quality of medical care available in the United States is the best in the world. However, today's American healthcare delivery system is unacceptable. It is too expensive, disjointed, and wasteful. The amount spent on healthcare in the United States is sufficient to meet everyone's needs; the reason it does not is that the money is misspent. Healthcare makes up 16 percent of the gross domestic product, or $2.3 trillion, yet 46 million people are uninsured, the majority of people are underinsured, and even those with insurance suffer significant hassles in receiving healthcare. Medical errors occur at alarming rates. The lack of quality measures to define best practices leads to a wide variation of practices and costs. Fragmented healthcare leads to errors. The goal of this paper is to explore a set of 20 comprehensive steps to begin reform of healthcare in this country.

  11. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    PubMed

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three typical normal distribution transformation methods, the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation on normally transformed benzo(b)fluoranthene data from a large-scale coking plant-contaminated site in north China. All three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had the minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The areas with fewer sampling points and with high levels of contamination showed the largest prediction standard errors on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of remediation boundary determination.
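
    As a sketch of why such transformations help, the log transform (the Box-Cox transform with lambda = 0) applied to invented lognormal "concentration" data removes most of the skewness that hot spots introduce. This is an illustration only, not the paper's data or its Johnson transformation:

```python
import math, random, statistics

random.seed(1)

def skewness(xs):
    """Sample skewness: third central moment over cubed population std dev."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Hypothetical contaminant concentrations: lognormal, i.e. severely
# right-skewed with a few "hot spots", as is typical at contaminated sites.
conc = [math.exp(random.gauss(0.0, 1.0)) for _ in range(500)]

# The log transform is the Box-Cox transform with lambda = 0.
log_conc = [math.log(x) for x in conc]

print(f"skewness before: {skewness(conc):.2f}")
print(f"skewness after:  {skewness(log_conc):.2f}")
```

    Kriging assumes roughly Gaussian residuals, so interpolating the transformed values (and back-transforming the predictions) is more reliable than kriging the raw skewed data.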

  12. Alteration of a motor learning rule under mirror-reversal transformation does not depend on the amplitude of visual error.

    PubMed

    Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi

    2015-05-01

    Humans' sophisticated motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes performing even a simple reaching task difficult, but it is overcome by alterations in the error correction rule during the trials. To isolate the factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors while participants made arm-reaching movements with mirror-reversed visual feedback, and compared the timing of the rule alteration between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation and then suddenly decreased after 3-5 trials of increase. The increase leveled off at different amplitudes in the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule does not depend on the amplitude of the visual angular errors and is possibly determined by the number of trials over which the errors increased, or by a statistical property of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  13. Radiochemical monitoring of water after the Cannikin event, Amchitka Island, Alaska, August 1974 and chemical monitoring from July 1972 to June 1974

    USGS Publications Warehouse

    Ballance, Wilbur C.; Thordarson, William

    1976-01-01

    Radiochemical data from the Amchitka Island study area were obtained from water samples collected by the U.S. Geological Survey during August 1974. Tritium determinations were made on 18 samples, and gross alpha and gross beta/gamma determinations were made on 12 samples. No appreciable differences were found between the data obtained during August 1974 and the data obtained before the Cannikin event. Chemical analyses were made on 4 samples collected in 1971, on 15 samples in 1972, on 11 samples in 1973, and on 7 samples in 1974. Comparison of these analyses to analyses of samples collected before the Cannikin event indicates no changes outside of the seasonal range normally found at the sampling locations.

  14. Biochemical monitoring of water after the Cannikin event, Amchitka Island, Alaska, August 1974 and chemical monitoring from July 1972--June 1974

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thordarson, W.; Ballance, W.C.

    Radiochemical data from the Amchitka Island study area were obtained from water samples collected by the U.S. Geological Survey during August 1974. Tritium determinations were made on 18 samples, and gross alpha and gross beta/gamma determinations were made on 12 samples. No appreciable differences were found between the data obtained during August 1974 and the data obtained before the Cannikin event. Chemical analyses were made on 4 samples collected in 1971, on 15 samples in 1972, on 11 samples in 1973, and on 7 samples in 1974. Comparison of these analyses to analyses of samples collected before the Cannikin event indicates no changes outside of the seasonal range normally found at the sampling locations.

  15. Comparing risk in conventional and organic dairy farming in the Netherlands: an empirical analysis.

    PubMed

    Berentsen, P B M; Kovacs, K; van Asseldonk, M A P M

    2012-07-01

    This study was undertaken to contribute to the understanding of why most dairy farmers do not convert to organic farming. Therefore, the objective of this research was to assess and compare risks for conventional and organic farming in the Netherlands with respect to gross margin and the underlying price and production variables. To investigate the risk factors a farm accountancy database was used containing panel data from both conventional and organic representative Dutch dairy farms (2001-2007). Variables with regard to price and production risk were identified using a gross margin analysis scheme. Price risk variables were milk price and concentrate price. The main production risk variables were milk yield per cow, roughage yield per hectare, and veterinary costs per cow. To assess risk, an error component implicit detrending method was applied and the resulting detrended standard deviations were compared between conventional and organic farms. Results indicate that the risk included in the gross margin per cow is significantly higher in organic farming. This is caused by both higher price and production risks. Price risks are significantly higher in organic farming for both milk price and concentrate price. With regard to production risk, only milk yield per cow poses a significantly higher risk in organic farming. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. Míranos! Look at us, we are healthy! An environmental approach to early childhood obesity prevention.

    PubMed

    Yin, Zenong; Parra-Medina, Deborah; Cordova, Alberto; He, Meizi; Trummer, Virginia; Sosa, Erica; Gallion, Kipling J; Sintes-Yallen, Amanda; Huang, Yaling; Wu, Xuelian; Acosta, Desiree; Kibbe, Debra; Ramirez, Amelie

    2012-10-01

    Obesity prevention research is sparse in young children at risk for obesity. This study tested the effectiveness of a culturally tailored, multicomponent prevention intervention to promote healthy weight gain and gross motor development in low-income preschool age children. Study participants were predominantly Mexican-American children (n = 423; mean age = 4.1; 62% in normal weight range) enrolled in Head Start. The study was conducted using a quasi-experimental pretest/posttest design with two treatment groups and a comparison group. A center-based intervention included an age-appropriate gross motor program with structured outdoor play, supplemental classroom activities, and staff development. A combined center- and home-based intervention added peer-led parent education to create a broad supportive environment in the center and at home. Primary outcomes were weight-based z-scores and raw scores of gross motor skills of the Learning Achievement Profile Version 3. Favorable changes occurred in z-scores for weight (one-tailed p < 0.04) for age and gender among children in the combined center- and home-based intervention compared to comparison children at posttest. Higher gains of gross motor skills were found in children in the combined center- and home-based (p < 0.001) and the center-based intervention (p < 0.01). Children in both intervention groups showed increases in outdoor physical activity and consumption of healthy food. Process evaluation data showed high levels of protocol implementation fidelity and program participation of children, Head Start staff, and parents. The study demonstrated great promise in creating a health-conducive environment that positively impacts weight and gross motor skill development in children at risk for obesity. Program efficacy should be tested in a randomized trial.

  17. Error-Based Simulation for Error-Awareness in Learning Mechanics: An Evaluation

    ERIC Educational Resources Information Center

    Horiguchi, Tomoya; Imai, Isao; Toumoto, Takahito; Hirashima, Tsukasa

    2014-01-01

    Error-based simulation (EBS) has been developed to generate phenomena by using students' erroneous ideas and also offers promise for promoting students' awareness of errors. In this paper, we report the evaluation of EBS used in learning "normal reaction" in a junior high school. An EBS class, where students learned the concept…

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three-dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed-loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open-loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven-axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.
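
    A minimal sketch of a normalized-quaternion orientation error of the kind the report derives: the error quaternion is the desired orientation composed with the conjugate of the actual one, and it equals the identity (1, 0, 0, 0) when tracking is perfect. The rotations below are invented, and the report's exact formulations (e.g., Ickes' method) differ in detail:

```python
import math

def q_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def orientation_error(q_desired, q_actual):
    """Error quaternion q_d * conj(q_a); identity (1,0,0,0) means no error."""
    return q_mul(q_desired, q_conj(q_actual))

# Desired: 90 deg about z; actual: 80 deg about z (both unit quaternions).
qd = (math.cos(math.radians(45)), 0.0, 0.0, math.sin(math.radians(45)))
qa = (math.cos(math.radians(40)), 0.0, 0.0, math.sin(math.radians(40)))
qe = orientation_error(qd, qa)
angle = 2.0 * math.degrees(math.acos(qe[0]))   # residual rotation angle
print(f"residual rotation: {angle:.1f} deg")
```

    The time derivative of this error quaternion, together with the translational velocity error, is what closes the kinematic feedback loop described in the report.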

  19. BIOMASSCOMP: Artificial Neural Networks and Neurocomputers

    DTIC Science & Technology

    1988-09-01

    APPENDICES: A. NTSU Neuroscience Laboratory (30 pages); B. Software Listings (27 pages): 1. DTEST - Structure Function... neurocomputers normally proceeds by using the best available data from the neuroscience community to build and test computational models of the components... APPENDIX A: The North Texas State University Neuroscience Laboratory of Prof. Guenter W. Gross.

  20. Delineating Normal from Diseased Brain by Aminolevulinic Acid-Induced Fluorescence

    NASA Astrophysics Data System (ADS)

    Stepp, Herbert; Stummer, Walter

    5-Aminolevulinic acid (5-ALA) as a precursor of protoporphyrin IX (PpIX) has been established as an orally applied drug to guide surgical resection of malignant brain tumors by exciting the red fluorescence of PpIX. The accumulation of PpIX in glioblastoma multiforme (GBM) is highly selective and provides excellent contrast to normal brain when using surgical microscopes with appropriately filtered light sources and cameras. The positive predictive value of fluorescent tissue is very high, enabling safe gross total resection of GBM and other brain tumors and improving prognosis of patients. Compared to other intraoperative techniques that have been developed with the aim of increasing the rate of safe gross total resections of malignant gliomas, PpIX fluorescence is considerably simpler, more cost effective, and comparably reliable. We present the basics of 5-ALA-based fluorescence-guided resection, and discuss the clinical results obtained for GBM and the experience with the fluorescence staining of other primary brain tumors and metastases as well as the results for spinal cord tumors. The phototoxicity of PpIX, increasingly used for photodynamic therapy of brain tumors, is mentioned briefly in this chapter.

  1. Radiology curriculum topics for medical students: students' perspectives.

    PubMed

    Subramaniam, Rathan M; Beckley, Vaughan; Chan, Michael; Chou, Tina; Scally, Peter

    2006-07-01

    We sought to establish medical students' perspectives of a set of curriculum topics for radiology teaching. A multicenter study was conducted in New Zealand. A modified Delphi method was adopted. Students enrolled in two New Zealand Universities received a questionnaire. Each learning topic was graded on a scale of 1 (very strongly disagree) to 6 (very strongly agree). Students could also put forward and grade suggestions that were not on the questionnaire. Of 200 questionnaires, 107 were returned. Fifty male and 57 female students participated, with an average age of 23.7 years. The five highest ranking curriculum topics in order of importance were developing a system for viewing chest radiographs (5.77, SD 0.7), developing a system for viewing abdominal radiographs (5.66, SD 0.8), developing a system for viewing bone and joint radiographs (5.56, SD 0.8), distinguishing normal structures from abnormal in chest and abdominal radiographs (5.38, SD 0.9), and identifying gross bone or joint abnormalities in skeletal radiographs (5.29, SD 0.9). Medical students want to know how to look at radiographs, how to distinguish normal from abnormal, and how to identify gross abnormalities.

  2. Ultrasonography of the equine shoulder: technique and normal appearance.

    PubMed

    Tnibar, M A; Auer, J A; Bakkali, S

    1999-01-01

    This study was intended to document normal ultrasonographic appearance of the equine shoulder and anatomic landmarks useful in clinical imaging. Both forelimbs of five equine cadavers and both forelimbs of six live adult horses were used. To facilitate understanding of the images, a zoning system assigned to the biceps brachii and to the infraspinatus tendon was developed. Ultrasonography was performed with a real-time B-mode semiportable sector scanner using 7.5- and 5-MHz transducers. On one cadaver limb, magnetic resonance imaging (MRI) was performed using a system at 1.5 Tesla, T1-weighted spin-echo sequence. Ultrasonography images were compared to frozen specimens and MRI images to correlate the ultrasonographic findings to the gross anatomy of the shoulder. Ultrasonography allowed easy evaluation of the biceps brachii and the infraspinatus tendon and their bursae, the supraspinatus muscle and tendons, the superficial muscles of the shoulder, and the underlying humerus and scapula. Only the lateral and, partially, the caudal aspects of the humeral head could be visualized with ultrasound. Ultrasonographic appearance, orientation, and anatomic relationships of these structures are described. Ultrasonographic findings correlated well with MRI images and with gross anatomy in the cadavers' limbs.

  3. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has long been one of the most important problems in photogrammetry. It aims to recover the position and attitude of the camera at the exposure station. In some cases, however, the observations used in the computation contain gross errors. This paper presents a robust algorithm in which the RANSAC method, combined with a DLT model, effectively avoids the difficulty of determining initial values that arises when the collinearity equations are used directly. The results show that this strategy excludes gross errors and provides an accurate and efficient way to obtain the elements of exterior orientation.
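
    The abstract's strategy, drawing minimal random samples, fitting a model, and keeping the hypothesis with the most inliers, can be sketched on a toy line-fitting problem. The space resection case fits a DLT model instead of a line, and the data below are invented:

```python
import random

random.seed(2)

def line_through(p, q):
    """Slope and intercept of the line through two points with distinct x."""
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

def ransac_line(points, n_iter=200, tol=0.5):
    """Keep the model (from a random minimal sample) with the most inliers."""
    best = []
    for _ in range(n_iter):
        p, q = random.sample(points, 2)
        if p[0] == q[0]:
            continue                      # degenerate sample (vertical line)
        m, b = line_through(p, q)
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# Observations on y = 2x + 1, contaminated with three gross errors.
pts = [(float(x), 2.0 * x + 1.0) for x in range(20)]
pts += [(3.0, 40.0), (7.0, -15.0), (12.0, 90.0)]

inliers = ransac_line(pts)
print(len(inliers))   # consistent observations survive; gross errors do not
```

    Because each hypothesis comes from a minimal sample, a single gross error can spoil at most the hypotheses it participates in, which is why RANSAC tolerates contaminated observations without good initial values.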

  4. Seepage investigation and selected hydrologic data for the Escalante River drainage basin, Garfield and Kane Counties, Utah, 1909-2002

    USGS Publications Warehouse

    Wilberg, Dale E.; Stolp, Bernard J.

    2005-01-01

    This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. Summing the potential error for each measurement and dividing by the maximum of either the upstream discharge plus any tributary inflow, or the downstream discharge, determined the normalized error for a reach. This was compared to the computed loss or gain, which also was normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independently of the report; all Acrobat functions are available.
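
    The reach-significance test described above can be sketched as follows. The exact error-summing convention is our reading of the text, and the discharges below are invented:

```python
def reach_significance(q_up, q_tributary, q_down, meas_error_pct):
    """Decide whether a reach loss/gain is significant, following the
    report's method: both the discharge difference and the summed
    measurement error are normalized to the larger of (upstream plus
    tributary inflow) and downstream discharge.

    meas_error_pct: accuracy rating of each measurement, in percent.
    """
    q_max = max(q_up + q_tributary, q_down)
    diff_pct = 100.0 * (q_down - (q_up + q_tributary)) / q_max
    # Sum the potential error of each individual measurement (an assumed
    # convention: the same rating applied to every measured discharge).
    error_sum = meas_error_pct / 100.0 * (q_up + q_tributary + q_down)
    error_pct = 100.0 * error_sum / q_max
    return diff_pct, error_pct, abs(diff_pct) > error_pct

# Hypothetical reach: 10.0 ft3/s upstream, 0.5 ft3/s tributary inflow,
# 10.2 ft3/s downstream, measurements rated at 5 percent.
diff, err, significant = reach_significance(10.0, 0.5, 10.2, 5.0)
print(f"difference {diff:.1f}% vs error {err:.1f}% -> significant: {significant}")
```

    With a 5 percent rating on every measurement, an apparent 0.3 ft3/s loss is well inside the normalized error band, matching the report's finding that no reach showed a significant loss or gain.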

  5. Estimation of cortical magnification from positional error in normally sighted and amblyopic subjects

    PubMed Central

    Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.

    2015-01-01

    We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm deg−1 at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341

  6. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2015-10-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
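
    A small self-contained illustration of the rank-based alternative: one shared heavy-tailed observation inflates the Pearson estimate of otherwise uncorrelated data, while the Spearman estimate (a Pearson correlation computed on ranks) stays near zero. The data are simulated here, not the paper's:

```python
import random

random.seed(3)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def ranks(xs):
    """Integer ranks; ties get arbitrary order (fine for continuous data)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    return pearson(ranks(xs), ranks(ys))

# Uncorrelated data plus one extreme shared outlier (a heavy tail).
xs = [random.gauss(0, 1) for _ in range(100)]
ys = [random.gauss(0, 1) for _ in range(100)]
xs[0], ys[0] = 50.0, 50.0

print(f"Pearson:  {pearson(xs, ys):.3f}")
print(f"Spearman: {spearman(xs, ys):.3f}")
```

    Ranking caps the leverage of any single observation at one rank step, which is why the rank-based estimates in the paper were conservative rather than inflated under heavy tails.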

  7. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    PubMed Central

    Hittner, James B.

    2014-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box–Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated. PMID:29795841

  8. Contributors to ozone episodes in three US/Mexico border twin-cities.

    PubMed

    Shi, Chune; Fernando, H J S; Yang, Jie

    2009-09-01

    The Process Analysis tools of the Community Multiscale Air Quality (CMAQ) modeling system together with back-trajectory analysis were used to assess potential contributors to ozone episodes that occurred during June 1-4, 2006, in three populated U.S.-Mexico border twin cities: San Diego/Tijuana, Imperial/Mexicali and El Paso/Ciudad Juárez. Validation of CMAQ output against surface ozone measurements indicates that the predictions are acceptable with regard to commonly recommended statistical standards and comparable to other reported studies. The mean normalized bias test (MNBT) and mean normalized gross error (MNGE) for hourly ozone fall well within the US EPA suggested range of +/-15% and 35%, respectively, except MNBT for El Paso. The MNBTs for maximum 8-h average ozone are larger than those for hourly ozone, but all the simulated maximum 8-h average ozone are within a factor of 2 of those measured in all three regions. The process and back-trajectory analyses indicate that the main sources of daytime ground-level ozone are the local photochemical production and regional transport. By integrating the effects of each process over the depth of the daytime planetary boundary layer (PBL), it is found that in the San Diego area (SD), chemistry and vertical advection contributed about 36%/48% and 64%/52% for June 2 and 3, respectively. This confirms the previous finding that high-altitude regional transport followed by fumigation contributes significantly to ozone in SD. The back-trajectory analysis shows that this ozone was mostly transported from the coastal area of southern California. For the episodes in Imperial Valley and El Paso, respectively, ozone was transported from the coastal areas of southern California and Mexico and from northern Texas and Oklahoma.
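
    The two statistics can be written out directly. The definitions below follow common EPA usage (mean normalized bias and mean normalized gross error over paired model-monitor values); the ozone values are invented:

```python
def mean_normalized_bias(pred, obs):
    """MNB: average signed relative error of paired predictions, in percent."""
    return 100.0 * sum((p - o) / o for p, o in zip(pred, obs)) / len(obs)

def mean_normalized_gross_error(pred, obs):
    """MNGE: average absolute relative error, in percent."""
    return 100.0 * sum(abs(p - o) / o for p, o in zip(pred, obs)) / len(obs)

# Hypothetical hourly ozone (ppb): model predictions vs monitor observations.
obs  = [40.0, 55.0, 60.0, 70.0, 80.0]
pred = [44.0, 50.0, 66.0, 63.0, 84.0]

mnb = mean_normalized_bias(pred, obs)
mnge = mean_normalized_gross_error(pred, obs)
print(f"MNB  = {mnb:.1f}%")
print(f"MNGE = {mnge:.1f}%")
```

    Positive and negative errors cancel in the bias but not in the gross error, which is why MNGE is always at least as large as |MNB| and why the EPA guidance cited above pairs a ±15% bias target with a looser 35% gross-error target.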

  9. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
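
    The premise, a single flipped bit in an IEEE 754 double, can be reproduced with the struct module; which bit flips determines whether the error is tiny, a sign change, or enormous:

```python
import struct

def flip_bit(x, bit):
    """Flip one bit (0 = least significant) of a float64's IEEE 754 encoding."""
    (n,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", n ^ (1 << bit)))
    return y

x = 1.0
print(flip_bit(x, 63))   # bit 63 is the sign bit: -1.0
print(flip_bit(x, 0))    # lowest mantissa bit: error of 2**-52
print(flip_bit(x, 62))   # top exponent bit: overflows to inf
```

    That bimodality is what the paper exploits: after normalization, the absolute error a flip can introduce into a dot product is either less than one or very large, so large errors are detectable.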

  10. Hessian matrix approach for determining error field sensitivity to coil deviations.

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; ...

    2018-03-15

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
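
    The core idea, reading coil-displacement sensitivities off the eigenvalues of the Hessian of the cost function, can be sketched with a two-parameter toy cost. The quadratic below and its roughly 16x anisotropy are invented; the real cost is the field-error surface integral computed by FOCUS:

```python
import math

# Toy stand-in for the coil cost function: displacements along x degrade
# the field far more than displacements along y (an assumed anisotropy).
def cost(dx, dy):
    return 8.0 * dx * dx + 0.5 * dy * dy + 0.1 * dx * dy

def hessian2(f, h=1e-3):
    """Central-difference Hessian of f at the design point (0, 0)."""
    def e(i):
        return (h, 0.0) if i == 0 else (0.0, h)
    H = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            (ax, ay), (bx, by) = e(i), e(j)
            H[i][j] = (f(ax + bx, ay + by) - f(ax - bx, ay - by)
                       - f(bx - ax, by - ay) + f(-ax - bx, -ay - by)) / (4 * h * h)
    return H

H = hessian2(cost)
a, b, c = H[0][0], H[0][1], H[1][1]
# Eigenvalues of the symmetric 2x2 Hessian: the larger one marks the
# most damaging displacement direction.
mean, rad = (a + c) / 2.0, math.sqrt(((a - c) / 2.0) ** 2 + b * b)
lam_max, lam_min = mean + rad, mean - rad
print(f"eigenvalues: {lam_max:.3f}, {lam_min:.3f}")
```

    In the paper's setting, the eigenvectors belonging to the largest eigenvalues identify the coil misalignments that must be controlled most tightly during fabrication and assembly.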

  11. Hessian matrix approach for determining error field sensitivity to coil deviations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code [Zhu et al., Nucl. Fusion 58(1):016008 (2018)] is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.

  12. THE RESPONSE OF X-IRRADIATED LIMBS OF ADULT URODELES TO NORMAL TISSUE GRAFTS. I. EFFECTS OF AUTOGRAFTS OF SIXTY-DAY FOREARM REGENERATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinson, B.D.

    1963-06-01

    Results are reported of autoplastic transplantation of parts of nonirradiated, regenerated forelimb to the contralateral x-irradiated forelimb in adult Triturus viridescens. The right forelimbs were exposed to various doses of localized irradiation (1000 to 5000 r) followed by amputation of both left and right forelimbs through the mid forearm. Left limbs regenerated normally, but irradiated right limbs failed to exhibit any significant degree of regenerative activity over a 3-month period. Both forelimbs were reamputated through the distal humerus and observed for an additional two months. Left limbs produced normal regenerates, but irradiated right limbs gave no gross evidence of regeneration at any of the radiation dose levels. Normal left regenerates were reamputated immediately distal to the elbow on the 60th day after the second amputation; the severed forearm was trimmed with scissors along anterior and posterior borders and denuded of skin over its proximal half, leaving an essentially complete forearm region as a normal autograft. This was implanted into the irradiated right upper arm stump, after ablation of the distal half of its humerus, with normal proximodistal polarity in all cases. The irradiated stump was reamputated through the distal portion of the implanted normal autograft two weeks after implantation, and was observed for four months. Periodic gross observations showed that over 90% of irradiated upper arms formed regenerates at a rate which paralleled that of nonirradiated controls. However, regenerates formed on irradiated upper arms exhibited a restriction of morphogenetic capacity, only 60% attaining 3- and 4-digit stages. Most of the morphologically more complex regenerates which developed on the irradiated upper arm stumps manifested left limb asymmetry despite their formation on right irradiated stumps, suggesting a relation between the asymmetry of the normal graft and that of the resulting regenerate.
All regenerates which developed on irradiated upper arms showed marked deficiencies in the restoration of a complete proximodistal structural pattern appropriate to the level of amputation through the irradiated stump. However, the actual pattern produced was appropriate to the level of amputation through the implanted normal autograft. These findings support the hypothesis that normal grafts promote the formation of regenerates on irradiated limbs through the autonomous developmental activity of the transected graft. (BBB)

  13. Soil water nitrate and ammonium dynamics under a sewage effluent irrigated eucalypt plantation.

    PubMed

    Livesley, S J; Adams, M A; Grierson, P F

    2007-01-01

    Managed forests and plantations are appropriate ecosystems for land-based treatment of effluent, but concerns remain regarding nutrient contamination of ground- and surface waters. Monthly NO3-N and NH4-N concentrations in soil water, accumulated soil N, and gross ammonification and nitrification rates were measured in the second year of a second rotation of an effluent irrigated Eucalyptus globulus plantation in southern Western Australia to investigate the separate and interactive effects of drip and sprinkler irrigation, effluent and water irrigation, irrigation rate, and harvest residue retention. Nitrate concentrations of soil water were greater under effluent irrigation than water irrigation but remained <15 mg L(-1) when irrigated at the normal rate (1.5-2.0 mm d(-1)), and there was little evidence of downward movement. In contrast, NH4-N concentrations of soil water at 30 and 100 cm were generally greater under effluent irrigation than water irrigation when irrigated at the normal rate because of direct effluent NH4-N input and indirect ammonification of soil organic N. Drip irrigation of effluent approximately doubled peak NO3-N and NH4-N concentrations in soil water. Harvest residue retention reduced concentrations of soil water NO3-N at 30 cm during active sprinkler irrigation, but after 1 yr of irrigation there was no significant difference in the amount of N stored in the soil system, although harvest residue retention did enhance the "nitrate flush" in the following spring. Gross mineralization rates without irrigation increased with harvest residue retention and further increased with water irrigation. Irrigation with effluent further increased gross nitrification to 3.1 mg N kg(-1) d(-1) when harvest residues were retained but had no effect on gross ammonification, which suggested the importance of heterotrophic nitrification. The downward movement of N under effluent irrigation was dominated by NH4-N rather than NO3-N.
Improving the capacity of forest soils to store and transform N inputs through organic matter management must consider the dynamic equilibrium between N input, uptake, and immobilization according to soil C status, and the effect changing microbial processes and environmental conditions can have on this equilibrium.

  14. Sustained Attention is Associated with Error Processing Impairment: Evidence from Mental Fatigue Study in Four-Choice Reaction Time Task

    PubMed Central

    Xiao, Yi; Ma, Feng; Lv, Yixuan; Cai, Gui; Teng, Peng; Xu, FengGang; Chen, Shanguang

    2015-01-01

    Attention is important in error processing. Few studies have examined the link between sustained attention and error processing. In this study, we examined how error-related negativity (ERN) of a four-choice reaction time task was reduced in the mental fatigue condition and investigated the role of sustained attention in error processing. Forty-one recruited participants were divided into two groups. In the fatigue experiment group, 20 subjects performed a fatigue experiment and an additional continuous psychomotor vigilance test (PVT) for 1 h. In the normal experiment group, 21 subjects only performed the normal experimental procedures without the PVT test. Fatigue and sustained attention states were assessed with a questionnaire. Event-related potential results showed that ERN (p < 0.005) and peak (p < 0.05) mean amplitudes decreased in the fatigue experiment. ERN amplitudes were significantly associated with the attention and fatigue states in electrodes Fz, FC1, Cz, and FC2. These findings indicated that sustained attention was related to error processing and that decreased attention is likely the cause of error processing impairment. PMID:25756780

  15. Spatial compression impairs prism adaptation in healthy individuals.

    PubMed

    Scriven, Rachel J; Newport, Roger

    2013-01-01

    Neglect patients typically present with gross inattention to one side of space following damage to the contralateral hemisphere. While prism-adaptation (PA) is effective in ameliorating some neglect behaviors, the mechanisms involved and their relationship to neglect remain unclear. Recent studies have shown that conscious strategic control (SC) processes in PA may be impaired in neglect patients, who are also reported to show extraordinarily long aftereffects compared to healthy participants. Determining the underlying cause of these effects may be the key to understanding therapeutic benefits. Alternative accounts suggest that reduced SC might result from a failure to detect prism-induced reaching errors properly either because (a) the size of the error is underestimated in compressed visual space or (b) pathologically increased error-detection thresholds reduce the requirement for error correction. The purpose of this study was to model these two alternatives in healthy participants and to examine whether SC and subsequent aftereffects were abnormal compared to standard PA. Each participant completed three PA procedures within a MIRAGE mediated reality environment with direction errors recorded before, during and after adaptation. During PA, visual feedback of the reach could be compressed, perturbed by noise, or represented veridically. Compressed visual space significantly reduced SC and aftereffects compared to control and noise conditions. These results support recent observations in neglect patients, suggesting that a distortion of spatial representation may successfully model neglect and explain neglect performance while adapting to prisms.

  16. Orbitopterional Approach for the Resection of a Suprasellar Craniopharyngioma: Adapting the Strategy to the Microsurgical and Pathologic Anatomy.

    PubMed

    Nguyen, Vincent; Basma, Jaafar; Klimo, Paul; Sorenson, Jeffrey; Michael, L Madison

    2018-04-01

    Objectives  To describe the orbitopterional approach for the resection of a suprasellar craniopharyngioma with emphasis on the microsurgical and pathological anatomy of such lesions. Design  After completing the orbitopterional craniotomy in one piece including a supraorbital ridge osteotomy, the Sylvian fissure was split in a distal to proximal direction. The ipsilateral optic nerve and internal carotid artery were identified. Establishing a corridor to the tumor through both the opticocarotid and optic cisterns allowed for a wide angle of attack. Using both corridors, a microsurgical gross total resection was achieved. A radical resection required transection of the stalk at the level of the hypothalamus. Photographs of the region are borrowed from Dr Rhoton's laboratory to illustrate the microsurgical anatomy. Understanding the cisternal and topographic relationships of the optic nerve, optic chiasm, and internal carotid artery is critical to achieving gross total resection while preserving normal anatomy. Participants  The surgery was performed by the senior author assisted by Dr. Jaafar Basma. The video was edited by Dr. Vincent Nguyen. Outcome Measures  Outcome was assessed with extent of resection and postoperative visual function. Results  A gross total resection of the tumor was achieved. The patient had resolution of her bitemporal hemianopsia. She had diabetes insipidus with normal anterior pituitary function. Conclusions  Understanding the microsurgical anatomy of the suprasellar region and the pathological anatomy of craniopharyngiomas is necessary to achieve a good resection of these tumors. The orbitopterional approach provides the appropriate access for such an endeavor. The link to the video can be found at: https://youtu.be/Be6dtYIGqfs.

  17. Obesity and motor skills among 4 to 6-year-old children in the United States: nationally-representative surveys.

    PubMed

    Castetbon, Katia; Andreyeva, Tatiana

    2012-03-15

    Few population-based studies have assessed relationships between body weight and motor skills in young children. Our objective was to estimate the association between obesity and motor skills at 4 years and 5-6 years of age in the United States. We used repeated cross-sectional assessments of the national sample from the Early Childhood Longitudinal Survey-Birth Cohort (ECLS-B) of preschool 4-year-old children (2005-2006; n = 5 100) and 5-6-year-old kindergarteners (2006-2007; n = 4 700). Height, weight, and fine and gross motor skills were assessed objectively via direct standardized procedures. We used categorical and continuous measures of body weight status, including obesity (Body Mass Index (BMI) ≥ 95th percentile) and BMI z-scores. Multivariate logistic and linear models estimated the association between obesity and gross and fine motor skills in very young children adjusting for individual, social, and economic characteristics and parental involvement. The prevalence of obesity was about 15%. The relationship between motor skills and obesity varied across types of skills. For hopping, obese boys and girls had significantly lower scores, 20% lower in obese preschoolers and 10% lower in obese kindergarteners than normal weight counterparts, p < 0.01. Obese girls could jump 1.6-1.7 inches shorter than normal weight peers (p < 0.01). Other gross motor skills and fine motor skills of young children were not consistently related to BMI z-scores and obesity. Based on objective assessment of children's motor skills and body weight and a full adjustment for confounding covariates, we find no reduction in overall coordination and fine motor skills in obese young children. Motor skills are adversely associated with childhood obesity only for skills most directly related to body weight.

  18. Effects of oral 3% hydrogen peroxide used as an emetic on the gastroduodenal mucosa of healthy dogs.

    PubMed

    Niedzwecki, Alicia H; Book, Bradley P; Lewis, Kristin M; Estep, J Scot; Hagan, Joseph

    2017-03-01

    To characterize the extent of mucosal injury on the upper gastrointestinal tract following oral administration of 3% hydrogen peroxide (H2O2) to induce emesis in normal dogs. Prospective clinical study. Specialty referral hospital. Seven staff-owned, healthy, adult dogs. Six dogs were assigned to the H2O2 group and 1 dog was assigned as the apomorphine control. Dogs were anesthetized for gastroduodenoscopy with gross inspection and gastroduodenal biopsies at time 0 and 4 hours, 24 hours, 1 week, and 2 weeks following administration of oral 3% H2O2 or subconjunctival apomorphine. Gross esophageal, gastric, and duodenal mucosal lesion scoring was performed by 2 blinded, experienced scorers. Biopsy samples were evaluated histologically by a veterinary pathologist. Grade I esophagitis was noted in 2 dogs at 4 hours and in 1 dog at 2 weeks, while grade III esophagitis was observed in 1 dog 1 week following H2O2 administration. At 4 hours, gastric mucosal lesions were visualized in all dogs, and lesions worsened by 24 hours. Mild to moderate duodenal mucosal lesions were visualized up to 24 hours after administration. Histopathology identified the most severe gastric lesions at 4 hours as hemorrhage; at 24 hours as degeneration, necrosis, and mucosal edema; and at 1 week as inflammation. By 2 weeks, most visual and histopathologic lesions were resolved. No histopathologic lesions were identified at any time point in the dog administered apomorphine. Significant visual and histopathologic gastric lesions occurred following administration of 3% H2O2 in all dogs. Less severe visual duodenal lesions were identified. As compared to H2O2 dogs, minimal gross gastroduodenal lesions and normal histopathology were identified in the apomorphine control. © Veterinary Emergency and Critical Care Society 2016.

  19. Obesity and motor skills among 4 to 6-year-old children in the united states: nationally-representative surveys

    PubMed Central

    2012-01-01

    Background Few population-based studies have assessed relationships between body weight and motor skills in young children. Our objective was to estimate the association between obesity and motor skills at 4 years and 5-6 years of age in the United States. We used repeated cross-sectional assessments of the national sample from the Early Childhood Longitudinal Survey-Birth Cohort (ECLS-B) of preschool 4-year-old children (2005-2006; n = 5 100) and 5-6-year-old kindergarteners (2006-2007; n = 4 700). Height, weight, and fine and gross motor skills were assessed objectively via direct standardized procedures. We used categorical and continuous measures of body weight status, including obesity (Body Mass Index (BMI) ≥ 95th percentile) and BMI z-scores. Multivariate logistic and linear models estimated the association between obesity and gross and fine motor skills in very young children adjusting for individual, social, and economic characteristics and parental involvement. Results The prevalence of obesity was about 15%. The relationship between motor skills and obesity varied across types of skills. For hopping, obese boys and girls had significantly lower scores, 20% lower in obese preschoolers and 10% lower in obese kindergarteners than normal weight counterparts, p < 0.01. Obese girls could jump 1.6-1.7 inches shorter than normal weight peers (p < 0.01). Other gross motor skills and fine motor skills of young children were not consistently related to BMI z-scores and obesity. Conclusions Based on objective assessment of children's motor skills and body weight and a full adjustment for confounding covariates, we find no reduction in overall coordination and fine motor skills in obese young children. Motor skills are adversely associated with childhood obesity only for skills most directly related to body weight. PMID:22420636

  20. Systematic bias in genomic classification due to contaminating non-neoplastic tissue in breast tumor samples.

    PubMed

    Elloumi, Fathi; Hu, Zhiyuan; Li, Yan; Parker, Joel S; Gulley, Margaret L; Amos, Keith D; Troester, Melissa A

    2011-06-30

    Genomic tests are available to predict breast cancer recurrence and to guide clinical decision making. These predictors provide recurrence risk scores along with a measure of uncertainty, usually a confidence interval. The confidence interval conveys random error and not systematic bias. Standard tumor sampling methods make this problematic, as it is common to have a substantial proportion (typically 30-50%) of a tumor sample comprised of histologically benign tissue. This "normal" tissue could represent a source of non-random error or systematic bias in genomic classification. To assess the vulnerability of genomic classification to systematic error from normal contamination, we collected 55 tumor samples and paired tumor-adjacent normal tissue. Using genomic signatures from the tumor and paired normal, we evaluated how increasing normal contamination altered recurrence risk scores for various genomic predictors. Simulations of normal tissue contamination caused misclassification of tumors in all predictors evaluated, but different breast cancer predictors showed different types of vulnerability to normal tissue bias. While two predictors had unpredictable direction of bias (either higher or lower risk of relapse resulted from normal contamination), one signature showed predictable direction of normal tissue effects. Due to this predictable direction of effect, this signature (the PAM50) was adjusted for normal tissue contamination and these corrections improved sensitivity and negative predictive value. For all three assays quality control standards and/or appropriate bias adjustment strategies can be used to improve assay reliability. Normal tissue sampled concurrently with tumor is an important source of bias in breast genomic predictors.
All genomic predictors show some sensitivity to normal tissue contamination and ideal strategies for mitigating this bias vary depending upon the particular genes and computational methods used in the predictor.
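
    The predictable-direction case lends itself to a simple illustration: if a bulk sample is modeled as a convex mixture of tumor and normal expression profiles, and the risk score is linear in expression, then the contamination bias is linear in the normal-tissue fraction and can be inverted. A hypothetical sketch (the weights and profiles below are simulated, not any published signature such as PAM50):

```python
import numpy as np

rng = np.random.default_rng(1)
g = 50                                 # genes in a hypothetical signature
w = rng.standard_normal(g)             # assumed linear risk-score weights
tumor = rng.normal(1.0, 0.5, g)        # simulated tumor expression profile
normal = rng.normal(0.0, 0.5, g)       # simulated adjacent-normal profile

def risk_score(profile):
    """Linear recurrence-risk score for an expression profile."""
    return float(w @ profile)

# Model the bulk sample as a convex mixture: alpha = normal-tissue fraction.
for alpha in (0.0, 0.3, 0.5):
    mixed = (1 - alpha) * tumor + alpha * normal
    print(alpha, round(risk_score(mixed), 3))

# For a linear score the bias is predictable:
#   score(mixed) = (1 - alpha) * score(tumor) + alpha * score(normal),
# so with an estimate of alpha (e.g. from pathology review) the pure-tumor
# score can be recovered as (score(mixed) - alpha * score(normal)) / (1 - alpha).
```

    This linearity is the property that makes a predictable-direction signature correctable, whereas a nonlinear classifier can shift in either direction as contamination grows.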

  1. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    T-Method is one of the techniques governed under the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction using T-Method is possible even with a very limited sample size. Users of T-Method must clearly understand the population data trend, since the method does not account for the effect of outliers. Outliers may cause apparent non-normality, under which classical methods break down. Robust parameter estimates exist that provide satisfactory results both when the data contain outliers and when they are free of them; among these are the robust location and scale measures known as Shamos-Bickel (SB) and Hodges-Lehmann (HL), which serve as counterparts to the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of T-Method and allows the robustness of T-Method itself to be analysed. However, the higher-sample-size case study shows that T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with a minimal error difference compared to T-Method. The trend in prediction error percentages is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where the outlier risk is always low, T-Method performs better, while for a higher sample size with extreme outliers T-Method likewise shows better prediction than the alternatives. For the case studies conducted in this research, normalization using T-Method shows satisfactory results, and adapting HL and SB (or the normal mean and standard deviation) into it is not worthwhile, since doing so changes the percentage errors only minimally. Normalization using T-Method is still considered to carry a lower risk from outlier effects.
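
    For reference, the two robust estimators named above have simple definitions: the Hodges-Lehmann location estimate is the median of all pairwise averages, and the Shamos scale estimate is the median of all pairwise absolute differences. A minimal Python sketch (the consistency constant 1.048358 is the usual factor for normally distributed data; the sample values are illustrative only):

```python
import itertools
import statistics

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of all pairwise (Walsh) averages."""
    avgs = [(a + b) / 2 for a, b in itertools.combinations_with_replacement(x, 2)]
    return statistics.median(avgs)

def shamos(x):
    """Shamos scale estimate: median of pairwise absolute differences,
    scaled by ~1.0483 to be consistent for sigma under normality."""
    diffs = [abs(a - b) for a, b in itertools.combinations(x, 2)]
    return 1.048358 * statistics.median(diffs)

data = [9.8, 10.1, 10.0, 9.9, 10.2, 30.0]   # one gross outlier
print(statistics.mean(data))    # ~13.33, pulled toward the outlier
print(hodges_lehmann(data))     # ~10.05, largely unaffected
print(statistics.stdev(data))   # ~8.2, inflated by the outlier
print(shamos(data))             # ~0.31, stays near the bulk's spread
```

    Substituting these for the classical mean and standard deviation in the normalization stage is the replacement the study evaluates.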

  2. Wave aberrations in rhesus monkeys with vision-induced ametropias

    PubMed Central

    Ramamirtham, Ramkumar; Kee, Chea-su; Hung, Li-Fang; Qiao-Grider, Ying; Huang, Juan; Roorda, Austin; Smith, Earl L.

    2007-01-01

    The purpose of this study was to investigate the relationship between refractive errors and high-order aberrations in infant rhesus monkeys. Specifically, we compared the monochromatic wave aberrations measured with a Shack-Hartman wavefront sensor between normal monkeys and monkeys with vision-induced refractive errors. Shortly after birth, both normal monkeys and treated monkeys reared with optically induced defocus or form deprivation showed a decrease in the magnitude of high-order aberrations with age. However, the decrease in aberrations was typically smaller in the treated animals. Thus, at the end of the lens-rearing period, higher than normal amounts of aberrations were observed in treated eyes, both hyperopic and myopic eyes and treated eyes that developed astigmatism, but not spherical ametropias. The total RMS wavefront error increased with the degree of spherical refractive error, but was not correlated with the degree of astigmatism. Both myopic and hyperopic treated eyes showed elevated amounts of coma and trefoil and the degree of trefoil increased with the degree of spherical ametropia. Myopic eyes also exhibited a much higher prevalence of positive spherical aberration than normal or treated hyperopic eyes. Following the onset of unrestricted vision, the amount of high-order aberrations decreased in the treated monkeys that also recovered from the experimentally induced refractive errors. Our results demonstrate that high-order aberrations are influenced by visual experience in young primates and that the increase in high-order aberrations in our treated monkeys appears to be an optical byproduct of the vision-induced alterations in ocular growth that underlie changes in refractive error. The results from our study suggest that the higher amounts of wave aberrations observed in ametropic humans are likely to be a consequence, rather than a cause, of abnormal refractive development. PMID:17825347

  3. Comparison of Pre-Attentive Auditory Discrimination at Gross and Fine Difference between Auditory Stimuli.

    PubMed

    Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

    Introduction  Mismatch Negativity is a negative component of the event-related potential (ERP) elicited by any discriminable changes in auditory stimulation. Objective  The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross difference between auditory stimuli. Method  Seventeen normal-hearing individuals participated in the study. To assess pre-attentive auditory discrimination skill with fine difference between auditory stimuli, we recorded mismatch negativity (MMN) with a pair of stimuli (pure tones), using /1000 Hz/ and /1010 Hz/ with /1000 Hz/ as frequent stimulus and /1010 Hz/ as infrequent stimulus. Similarly, we used /1000 Hz/ and /1100 Hz/ with /1000 Hz/ as frequent stimulus and /1100 Hz/ as infrequent stimulus to assess pre-attentive auditory discrimination skill with gross difference between auditory stimuli. The study included 17 subjects with informed consent. We analyzed MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve parameters. Result  Results revealed that MMN was present only in 64% of the individuals in both conditions. Further Multivariate Analysis of Variance (MANOVA) showed no significant difference in all measures of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) in both conditions. Conclusion  The present study showed similar pre-attentive skills for both conditions: fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) difference in auditory stimuli at a higher level (endogenous) of the auditory system.

  4. Gross, histologic, and computed tomographic characterization of nonpathological intrascleral cartilage and bone in the domestic goat (Capra aegagrus hircus).

    PubMed

    Tusler, Charlotte A; Good, Kathryn L; Maggs, David J; Zwingenberger, Allison L; Reilly, Christopher M

    2017-05-01

    To characterize grossly, histologically, and via computed tomography (CT) the appearance of intrascleral cartilage, bone, or both in domestic goats with otherwise normal eyes and to correlate this with age, sex, and breed. Sixty-eight domestic goats (89 eyes). Forty-nine formalin-fixed globes from 38 goats underwent high-resolution CT, and gross and light microscopic examination. An additional 40 eyes from 30 goats underwent light microscopy only. Age, breed, and sex of affected goats were retrieved from medical records. Considering all methods of evaluation collectively, cartilage was detected in 42% of eyes (44% of goats) and bone in 11% of eyes (12% of goats); bone was never seen without cartilage. Goats in which bone, cartilage, or both were detected ranged from 0.25 to 13 (median = 3.5) years of age, represented 11 of 12 breeds of the study population, and had a male:female ratio of 11:19. Bone was detected in the eyes of significantly more males (n = 8) than females (n = 2). No sex predilection was noted for cartilage alone. Histology revealed intrascleral chondrocyte-like cells, hyaline cartilage, and islands of lamellar bone. Some regions of bone had central, adipose-rich, marrow-like cavities. CT localized mineralized tissue as adjacent to or partially surrounding the optic nerve head. This is the first report of intrascleral bone or cartilage in a normal goat and of intrascleral bone in an otherwise normal mammal. The high prevalence of intrascleral cartilage and bone in this study suggests that this finding is normal and likely represents an adaptation in goats. © 2016 American College of Veterinary Ophthalmologists.

  5. Growth models and the expected distribution of fluctuating asymmetry

    USGS Publications Warehouse

    Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John

    2003-01-01

    Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
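
    The abstract's central claims can be checked numerically: differences of correlated lognormal (multiplicative-error) variates are leptokurtic, while the recommended log transformation turns them into a difference of normals with near-zero excess kurtosis. A small simulation sketch (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for a normal distribution, > 0 if leptokurtic."""
    z = (x - x.mean()) / x.std()
    return float((z**4).mean() - 3.0)

n = 200_000
# Correlated multiplicative (lognormal) growth errors for left and right sides
mu, sigma, rho = 2.0, 0.25, 0.8
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
logs = rng.multivariate_normal([mu, mu], cov, size=n)
l, r = np.exp(logs[:, 0]), np.exp(logs[:, 1])

print(excess_kurtosis(l - r))                   # leptokurtic: noticeably > 0
print(excess_kurtosis(np.log(l) - np.log(r)))   # ~0: difference of two normals
```

    The log transform also removes the size-scaling of the asymmetry variance, which is why the authors recommend working with log E(l) − log E(r) when errors are multiplicative.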

  6. Assessing Error Awareness as a Mediator of the Relationship between Subjective Concerns and Cognitive Performance in Older Adults

    PubMed Central

    Buckley, Rachel F.; Laming, Gemma; Chen, Li Peng Evelyn; Crole, Alice; Hester, Robert

    2016-01-01

    Objectives Subjective concerns of cognitive decline (SCD) often manifest in older adults who exhibit objectively normal cognitive functioning. This subjective-objective discrepancy is counter-intuitive when mounting evidence suggests that subjective concerns relate to future clinical progression to Alzheimer’s disease, and so possess the potential to be a sensitive early behavioural marker of disease. In the current study, we aimed to determine whether individual variability in conscious awareness of errors in daily life might mediate this subjective-objective relationship. Methods 67 cognitively-normal older adults underwent cognitive, SCD and mood tests, and an error awareness task. Results Poorer error awareness was not found to mediate a relationship between SCD and objective performance. Furthermore, non-clinical levels of depressive symptomatology were a primary driving factor of SCD and error awareness, and significantly mediated a relationship between the two. Discussion We were unable to show that poorer error awareness mediates SCD and cognitive performance in older adults. Our study does suggest, however, that underlying depressive symptoms influence both poorer error awareness and greater SCD severity. Error awareness is thus not recommended as a proxy for SCD, as reduced levels of error awareness do not seem to be reflected by greater SCD. PMID:27832173

  7. Error Characterization of Vision-Aided Navigation Systems

    DTIC Science & Technology

    2013-03-01

    [Figure residue from OCR; recoverable content:] Normalized histograms of navigation position error with Gaussian fits, e.g. "Normalized Histogram and Gaussian fit, E Pos Err, i = 560" (Gaussian fit parameters approximately 0.45247 and 1.0686); Figure 4.18: Normalized Down Position Error.

  8. Craniopagus twins: surgical anatomy and embryology and their implications.

    PubMed Central

    O'Connell, J E

    1976-01-01

    Craniopagus is of two types, partial and total. In the partial form the union is of limited extent, particularly as regards its depth, and separation can be expected to be followed by the survival of both children to lead normal lives. In the total form, of which three varieties can be recognized, the two brains can be regarded as lying within a single cranium and a series of gross intracranial abnormalities develops. These include deformity of the skull base, deformity and displacement of the cerebrum, and a gross circulatory abnormality. It is considered that these and other abnormalities, unlike the primary defect, which is defined, are secondary ones; explanations for them, based on anatomy and embryology, are put forward. The implications of the various anomalies are discussed and the ethical aspects of attempted separation in these major unions considered. Images PMID:1255206

  9. The Small Intestine in Experimental Acute Iron Poisoning

    PubMed Central

    Hosking, C. S.

    1971-01-01

    A histological examination of the small intestine of rats following acute iron poisoning by ingestion of ferrous sulphate solution is presented. The changes that occur depend on the dose and can be broadly divided into 2 classes. When a very large dose is given (greater than 0·3 mg. Fe/g.), there is gross shrinkage of the villi, sub-epithelial oedema and eventual loss of epithelium. With doses less than 0·2 mg. Fe/g., contraction of villi was not so obvious, but as the animals survive longer with the lower dose, the changes often progressed to gross destruction of the villous stalk. Some of the animals given smaller doses survived and in those that were killed after 24 hr, the mucosa of the small intestine was essentially normal. ImagesFigs. 1-4Figs. 5-7Figs. 8-11 PMID:5547659

  10. Peliosis hepatis in a dog infected with Bartonella henselae.

    PubMed

    Kitchell, B E; Fan, T M; Kordick, D; Breitschwerdt, E B; Wollenberg, G; Lichtensteiger, C A

    2000-02-15

    A 6-year-old spayed female Golden Retriever was examined because of generalized weakness and abdominal distention. Abdominal ultrasonography revealed a large quantity of peritoneal fluid. In addition, the liver appeared larger than normal and contained multiple, small, nodular masses and cyst-like structures. Abdominal exploratory surgery was performed, and 5 L of serosanguineous peritoneal fluid was removed. Gross lesions were not found in the stomach, kidneys, intestines, adrenal glands, or urinary bladder. There were diffuse cystic nodules in all liver lobes. The dog did not recover from anesthesia. A diagnosis of peliosis hepatis was made on the basis of gross and histologic appearance of the liver. A polymerase chain reaction assay revealed Bartonella henselae DNA in liver specimens. To our knowledge, this is the first report of molecular evidence of B henselae infection in a dog with peliosis hepatis.

  11. Advancement of proprotor technology. Task 1: Design study summary. [aerodynamic concept of minimum size tilt proprotor research aircraft

    NASA Technical Reports Server (NTRS)

    1969-01-01

    A tilt-proprotor proof-of-concept aircraft design study has been conducted. The results are presented. The objective of the contract is to advance the state of proprotor technology through design studies and full-scale wind-tunnel tests. The specific objective is to conduct preliminary design studies to define a minimum-size tilt-proprotor research aircraft that can perform proof-of-concept flight research. The aircraft that results from these studies is a twin-engine, high-wing aircraft with 25-foot, three-bladed tilt proprotors mounted on pylons at the wingtips. Each pylon houses a Pratt and Whitney PT6C-40 engine with a takeoff rating of 1150 horsepower. Empty weight is estimated at 6876 pounds. The normal gross weight is 9500 pounds, and the maximum gross weight is 12,400 pounds.

  12. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
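    The normalization step can be illustrated in miniature (a sketch only, not the paper's full classifier; the scattering vector and its values are hypothetical). Euclidean-norm normalization discards overall magnitude, keeping only the relative magnitudes and phases the NPC operates on:

```python
import numpy as np

def normalize(x):
    """Euclidean-norm normalization of a complex feature vector."""
    return x / np.linalg.norm(x)

# A hypothetical polarimetric scattering vector [HH, HV, VV] and a scaled
# copy (the same terrain element seen with a different overall gain).
x = np.array([1.0 + 0.5j, 0.2 - 0.1j, 0.8 + 0.3j])
scaled = 5.0 * x

# Any positive scale factor leaves the normalized features unchanged, so a
# classifier built on them is insensitive to absolute calibration.
assert np.allclose(normalize(x), normalize(scaled))
```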

  13. Intensity-modulated proton therapy further reduces normal tissue exposure during definitive therapy for locally advanced distal esophageal tumors: a dosimetric study.

    PubMed

    Welsh, James; Gomez, Daniel; Palmer, Matthew B; Riley, Beverly A; Mayankkumar, Amin V; Komaki, Ritsuko; Dong, Lei; Zhu, X Ronald; Likhacheva, Anna; Liao, Zhongxing; Hofstetter, Wayne L; Ajani, Jaffer A; Cox, James D

    2011-12-01

    We have previously found that ≤ 75% of treatment failures after chemoradiotherapy for unresectable esophageal cancer appear within the gross tumor volume and that intensity-modulated (photon) radiotherapy (IMRT) might allow dose escalation to the tumor without increasing normal tissue toxicity. Proton therapy might allow additional dose escalation, with even lower normal tissue toxicity. In the present study, we compared the dosimetric parameters for photon IMRT with that for intensity-modulated proton therapy (IMPT) for unresectable, locally advanced, distal esophageal cancer. Four plans were created for each of 10 patients. IMPT was delivered using anteroposterior (AP)/posteroanterior beams, left posterior oblique/right posterior oblique (LPO/RPO) beams, or AP/LPO/RPO beams. IMRT was delivered with a concomitant boost to the gross tumor volume. The dose was 65.8 Gy to the gross tumor volume and 50.4 Gy to the planning target volume in 28 fractions. Relative to IMRT, the IMPT (AP/posteroanterior) plan led to considerable reductions in the mean lung dose (3.18 vs. 8.27 Gy, p<.0001) and the percentage of lung volume receiving 5, 10, and 20 Gy (p≤.0006) but did not reduce the cardiac dose. The IMPT LPO/RPO plan also reduced the mean lung dose (4.9 Gy vs. 8.2 Gy, p<.001), the heart dose (mean cardiac dose and percentage of the cardiac volume receiving 10, 20, and 30 Gy, p≤.02), and the liver dose (mean hepatic dose 5 Gy vs. 14.9 Gy, p<.0001). The IMPT AP/LPO/RPO plan led to considerable reductions in the dose to the lung (p≤.005), heart (p≤.003), and liver (p≤.04). Compared with IMRT, IMPT for distal esophageal cancer lowered the dose to the heart, lung, and liver. The AP/LPO/RPO beam arrangement was optimal for sparing all three organs. The dosimetric benefits of protons will need to be tailored to each patient according to their specific cardiac and pulmonary risks. 
IMPT for esophageal cancer will soon be investigated further in a prospective trial at our institution. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Background Error Correlation Modeling with Diffusion Operators

    DTIC Science & Technology

    2013-01-01

    Chapter 8: Background error correlation modeling with diffusion operators ... normalization ... field, then a structure like this simulates enhanced diffusive transport of model errors in the regions of strong currents on the background of ...

  15. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  16. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
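    The contrast the abstract draws can be sketched with a toy integral (assumptions: a base-2 van der Corput sequence stands in for the paper's quasi-random point sets, and f(x) = x^2 is an arbitrary test integrand, not one from the paper):

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-`base` radical-inverse sequence."""
    seq = np.empty(n)
    for i in range(n):
        q, denom, x = i, 1.0, 0.0
        while q > 0:
            denom *= base
            q, r = divmod(q, base)
            x += r / denom
        seq[i] = x
    return seq

f = lambda x: x**2          # true integral over [0, 1] is 1/3
n = 1024

qmc_err = abs(f(van_der_corput(n)).mean() - 1/3)

# Average the plain Monte Carlo error over many replications for a stable
# comparison against the single deterministic quasi-random point set.
rng = np.random.default_rng(0)
mc_errs = [abs(f(rng.random(n)).mean() - 1/3) for _ in range(100)]

# The quasi-random error sits well below the typical O(n**-0.5) MC error.
assert qmc_err < 1e-3 < np.mean(mc_errs)
```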

  17. Refraction test

    MedlinePlus

    ... purpose is to determine whether you have a refractive error (a need for glasses or contact lenses). For ... glasses or contact lenses) is normal, then the refractive error is zero (plano) and your vision should be ...

  18. Spectrum of gross motor function in extremely low birth weight children with cerebral palsy at 18 months of age.

    PubMed

    Vohr, Betty R; Msall, Michael E; Wilson, Dee; Wright, Linda L; McDonald, Scott; Poole, W Kenneth

    2005-07-01

    The purpose of this study was to evaluate the relationship between cerebral palsy (CP) diagnoses, as measured by the topographic distribution of the tone abnormality, and level of function on the Gross Motor Function Classification System (GMFCS) and developmental performance on the Bayley Scales of Infant Development II (BSID-II). It was hypothesized that (1) the greater the number of limbs involved, the higher the GMFCS level and the lower the BSID-II Motor Scores and (2) there would be a spectrum of function and skill achievement on the GMFCS and BSID-II Motor Scores for children in each of the CP categories. A multicenter, longitudinal cohort study was conducted of 1860 extremely low birth weight (ELBW) infants who were born between August 1, 1995 and February 1, 1998, and evaluated at 18 to 22 months' corrected age. Children were categorized into impairment groups on the basis of the topography of neurologic findings: spastic quadriplegia, triplegia, diplegia, hemiplegia, monoplegia, hypotonic and/or athetotic CP, other abnormal neurologic findings, and normal. The neurologic category then was compared with GMFCS level and BSID-II Motor Scores. A total of 282 (15.2%) of the 1860 children evaluated had CP. Children with more limbs involved had more abnormal GMFCS levels and lower BSID-II scores, reflecting more severe functional limitations. However, for each CP diagnostic category, there was a spectrum of gross motor functional levels and BSID-II scores. Although more than 1 in 4 (26.6%) of the children with CP had moderate to severe gross motor functional impairment, more than 1 in 4 (27.6%) had motor functional skills that allowed for ambulation. Given the range of gross motor skill outcomes for specific types of CP, the GMFCS is a better indicator of gross motor functional impairment than the traditional categorization of CP that specifies the number of limbs with neurologic impairment.
The neurodevelopmental assessment of young children is optimized by combining a standard neurologic examination with measures of gross and fine motor function (GMFCS and Bayley Psychomotor Developmental Index). Additional studies to examine longer term functional motor and adaptive-functional developmental skills are required to devise strategies that delineate therapies to optimize functional performance.

  19. Air Force Operational Test and Evaluation Center, Volume 2, Number 2

    DTIC Science & Technology

    1988-01-01

    the special class of attributes are recorded, cost or benefit. In place of the normalization (1), we propose the following normalization NUMERICAL ... comprehensive set of modular basic test tools designed to provide flexible data reduction ... data flow to meet requirements at start, then building to ... possible, a combination of the two position error measurement techniques are used. SLR is a method of fitting a linear model to accumulate a position error

  20. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    PubMed

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (HE) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in HE tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three HE tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type I error with respect to a set alpha level of 0.01 were 0.80, 4.35 and 7.33 for the original HE test, LRT and Wald test, respectively. For the same alpha level of 0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8).
    Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
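    The Box-Cox step can be sketched as follows (a minimal illustration with simulated chi2(2) phenotypes; `scipy.stats.boxcox` fits the transformation parameter by maximum likelihood, which is one common choice and not necessarily the authors' exact procedure):

```python
import numpy as np
from scipy import stats

# Simulated skewed phenotype: chi2 with 2 degrees of freedom, as in the
# study's skewed scenario. Box-Cox requires strictly positive data.
rng = np.random.default_rng(42)
pheno = rng.chisquare(df=2, size=5000)

transformed, lam = stats.boxcox(pheno)   # lambda fitted by max likelihood

skew_before = stats.skew(pheno)          # chi2(2) has skewness 2
skew_after = stats.skew(transformed)     # should be near zero after Box-Cox
```

After the transformation the phenotype is far closer to symmetric, which is the mechanism by which type I error control and power improve in the abstract's simulations.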

  1. Technetium-99m mercaptoacetyltriglycine clearance: reference values for infants and children.

    PubMed

    Schofer, O; König, G; Bartels, U; Bockisch, A; Piepenburg, R; Beetz, R; Meyer, G; Hahn, K

    1995-11-01

    Six hundred and thirty-nine clearance studies performed in children aged 7 days to 19 years utilizing technetium-99m mercaptoacetyltriglycine (MAG 3) were retrospectively analysed. Standardized conditions for the investigation included: parenteral hydration (60 ml/hxm2 body surface) in addition to normal oral fluid intake, weight-related dose of 99mTc-MAG 3 (1 MBq/kg body weight, minimum 15 MBq) and calculation of clearance according to Bubeck et al. Of the 513 children, 169 included in this analysis could be classified as "normal" with regard to their renal function. Normal kidney function was judged by the following criteria: normal GFR for age, normal tubular function (absence of proteinuria and glucosuria), normal renal parenchyma (on ultrasonography, MAG 3 scan and intravenous pyelography), absence of significant obstruction and gross reflux (>grade I), no single kidney and no difference in split renal function >20%. Results showed increasing MAG 3 clearance values for infants during the first months of life, reaching the normal range for older children and adults between 7 and 12 months.

  2. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration

    PubMed Central

    Doss, Hani; Tan, Aixin

    2017-01-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case. PMID:28706463

  3. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    PubMed

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.
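    The estimand can be illustrated in the simplest iid special case (a sketch only; the paper's contribution concerns Markov-chain samples and regeneration-based standard errors, which this omits). With Gaussian kernels nu1 and nu2 whose constants are known, the importance-sampling identity m1/m2 = E_pi2[nu1(X)/nu2(X)] gives a checkable true ratio of 0.5:

```python
import numpy as np

nu1 = lambda x: np.exp(-x**2 / 2)       # N(0, 1) kernel, m1 = sqrt(2*pi)
nu2 = lambda x: np.exp(-x**2 / 8)       # N(0, 4) kernel, m2 = 2*sqrt(2*pi)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=100_000)  # iid sample from pi2 = N(0, 4)

weights = nu1(x) / nu2(x)
ratio_hat = weights.mean()              # estimates m1/m2 = 0.5
se_hat = weights.std(ddof=1) / np.sqrt(x.size)   # iid standard error
```

When the draws come from a Markov chain rather than an iid sample, this naive standard error is invalid, which is exactly the gap the regenerative approach in the paper addresses.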

  4. Syntactic error modeling and scoring normalization in speech recognition: Error modeling and scoring normalization in the speech recognition task for adult literacy training

    NASA Technical Reports Server (NTRS)

    Olorenshaw, Lex; Trawick, David

    1991-01-01

    The purpose was to develop a speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Better mechanisms are provided for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was provided. In continuous speech, the system was tested to be able to provide above 80 pct. correct acceptance of words, while correctly rejecting over 80 pct. of incorrectly pronounced words.

  5. A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test.

    PubMed

    Rochon, Justine; Kieser, Meinhard

    2011-11-01

    Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
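    The conditional Type I error rate studied here can be estimated with a short simulation (illustrative sample size and replication count; the Shapiro-Wilk test stands in for the generic goodness-of-fit pretest):

```python
import numpy as np
from scipy import stats

# Screen exponential samples with a normality pretest, then run the
# one-sample t-test of the true mean only on samples that passed.
rng = np.random.default_rng(0)
reps, n, alpha = 5000, 10, 0.05
passed = rejected = 0

for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n)       # true mean is 1.0
    if stats.shapiro(x).pvalue > alpha:          # pretest: "looks normal"
        passed += 1
        if stats.ttest_1samp(x, popmean=1.0).pvalue < alpha:
            rejected += 1                        # false rejection of H0

conditional_rate = rejected / passed
```

Per the abstract, for exponential data this conditional rate comes out above the nominal 5%, i.e. screening by a normality pretest does not rescue the t-test and can make its level worse.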

  6. The distribution of seismic velocities and attenuation in the earth. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hart, R. S.

    1977-01-01

    Estimates of the radial distribution of seismic velocities and density and of seismic attenuation within the earth are obtained through inversion of body wave, surface wave, and normal mode data. The effect of attenuation related dispersion on gross earth structure, and on the reliability of eigenperiod identifications is discussed. The travel time baseline discrepancies between body waves and free oscillation models are examined and largely resolved.

  7. Error recovery in shared memory multiprocessors using private caches

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1990-01-01

    The problem of recovering from processor transient faults in shared memory multiprocesses systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
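    The checkpoint-and-rollback idea can be sketched in miniature (a toy state container, not the cache-based protocol or recovery-stack mechanism of the paper):

```python
import copy

class Checkpointed:
    """Minimal checkpoint/rollback wrapper around a mutable state."""

    def __init__(self, state):
        self.state = state
        self._saved = copy.deepcopy(state)

    def checkpoint(self):
        """Commit the current state as the new recovery point."""
        self._saved = copy.deepcopy(self.state)

    def rollback(self):
        """Restart the computation from the last committed checkpoint."""
        self.state = copy.deepcopy(self._saved)

proc = Checkpointed({"counter": 0})
proc.state["counter"] = 41
proc.checkpoint()                 # recovery point: counter == 41
proc.state["counter"] = 999       # work corrupted by a transient fault
proc.rollback()                   # restore the checkpointed state
assert proc.state["counter"] == 41
```

The paper's contribution is doing this transparently in hardware via private caches, so that rollback does not propagate between processors.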

  8. Long Wavelength Ripples in the Nearshore

    NASA Astrophysics Data System (ADS)

    Alcinov, T.; Hay, A. E.

    2008-12-01

    Sediment bedforms are ubiquitous in the nearshore environment, and their characteristics and evolution have a direct effect on the hydrodynamics and the rate of sediment transport. The focus of this study is long wavelength ripples (LWR) observed at two locations in the nearshore at roughly 3 m water depth under combined current and wave conditions in Duck, North Carolina. LWR are straight-crested bedforms with wavelengths in the range of 20-200 cm and steepness of about 0.1. They occur in the build-up and decay of storms, in a broader range of values of the flow parameters compared to other ripple types. The main goal of the study is to test the maximum gross bedform-normal transport (mGBNT) hypothesis, which states that the orientation of ripples in directionally varying flows is such that the gross sediment transport normal to the ripple crest is maximized. Ripple wavelengths and orientation are measured from rotary fanbeam images, and current and wave conditions are obtained from electromagnetic (EM) flowmeters and an offshore pressure gauge array. Preliminary tests in which transport direction is estimated from the combined flow velocity vectors indicate that the mGBNT is not a good predictor of LWR orientation. Results from tests of the mGBNT hypothesis using a sediment transport model will be presented.
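    The mGBNT prediction can be computed by brute force (an illustrative sketch; the angle grid and the example transport vectors are made up): sweep candidate crest-normal directions and keep the one maximizing the summed unsigned transport component along the normal.

```python
import numpy as np

def mgbnt_normal(transport_vectors):
    """Crest-normal angle maximizing sum_i |q_i . n(theta)| over [0, pi)."""
    thetas = np.linspace(0.0, np.pi, 1800, endpoint=False)
    normals = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    gross = np.abs(transport_vectors @ normals.T).sum(axis=0)
    return thetas[np.argmax(gross)]

# Sanity check: with all transport along +x, the predicted bedform normal
# should point along the flow (crests perpendicular to it).
q = np.array([[1.0, 0.0], [2.0, 0.0]])
theta = mgbnt_normal(q)
assert abs(theta) < 0.01 or abs(theta - np.pi) < 0.01
```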

  9. Increased collagen-linked pentosidine levels and advanced glycosylation end products in early diabetic nephropathy.

    PubMed Central

    Beisswenger, P J; Moore, L L; Brinck-Johnsen, T; Curphey, T J

    1993-01-01

    RATIONALE: Advanced glycosylation end products (AGEs) may play an important role in the development of diabetic vascular sequelae. An AGE cross-link, pentosidine, is a sensitive and specific marker for tissue levels of AGEs. OBJECTIVES: To evaluate the role of AGEs in the development of diabetic nephropathy and retinopathy, we studied pentosidine levels and the clinical characteristics of 48 subjects with insulin-dependent diabetes mellitus. Diabetic nephropathy was classified as normal, microalbuminuria, or gross proteinuria, and retinopathy was graded as none, background, or proliferative. NEWLY OBSERVED FINDINGS: Significant elevation of pentosidine (P = 0.025) was found in subjects with microalbuminuria or gross proteinuria (73.03 +/- 9.47 vs 76.46 +/- 6.37 pmol/mg col) when compared with normal (56.96 +/- 3.26 pmol/mg col). Multivariate analysis to correct for age, duration of diabetes, and gender did not modify the results. Elevated pentosidine levels were also found in those with proliferative when compared with those with background retinopathy (75.86 +/- 5.66 vs 60.42 +/- 5.98 pmol/mg col) (P < 0.05). CONCLUSIONS: Microalbuminuria is associated with elevated levels of pentosidine similar to those found in overt diabetic nephropathy, suggesting that elevated AGE levels are already present during the earliest detectable phase of diabetic nephropathy. PMID:8325987

  10. Integrating gross pathology into teaching of undergraduate medical science students using human cadavers.

    PubMed

    Gopalan, Vinod; Dissabandara, Lakal; Nirthanan, Selvanayagam; Forwood, Mark R; Lam, Alfred King-Yin

    2016-09-01

    Human cadavers offer a great opportunity for histopathology students for the learning and teaching of tissue pathology. In this study, we aimed to implement an integrated learning approach by using cadavers to enhance students' knowledge and to develop their skills in gross tissue identification, handling and dissection techniques. A total of 35 students enrolled in the undergraduate medical science program participated in this study. A 3-hour laboratory session was conducted that included an active exploration of cadaveric specimens to identify normal and pathological tissues as well as tissue dissection. The majority of the students strongly agreed that the integration of normal and morbid anatomy improved their understanding of tissue pathology. All the students either agreed or strongly agreed that this laboratory session was useful to improve their tissue dissection and instrument handling skills. Furthermore, students from both cohorts rated the session as very relevant to their learning and recommended that this approach be added to the existing histopathology curriculum. To conclude, an integrated cadaver-based practical session can be used effectively to enhance the learning experience of histopathology science students, as well as improving their manual skills of tissue treatment, instrument handling and dissection. © 2016 Japanese Society of Pathology and John Wiley & Sons Australia, Ltd.

  11. The Dynamics of Catastrophic and Impoverishing Health Spending in Indonesia: How Well Does the Indonesian Health Care Financing System Perform?

    PubMed

    Aji, Budi; Mohammed, Shafiu; Haque, Md Aminul; Allegri, Manuela De

    2017-09-01

    Our study examines the incidence and intensity of catastrophic and impoverishing health spending in Indonesia. A panel data set was used from 4 waves of the Indonesian Family Life Surveys 1993, 1997, 2000, and 2007. Catastrophic health expenditure was measured by calculating the ratio of out-of-pocket payments to household income. Then, we calculated poverty indicators as a measure of impoverishing spending in the health care financing system. Headcount, overshoot, and mean positive overshoot at each given threshold were lower in 2000 than in the other surveyed periods, whereas the headcount fraction of households was highest in 2007. Between 1993 and 2007, the percentage of households in poverty decreased, both gross and net of health payments. However, in each year, the percentage of households in poverty was higher net of health payments than gross. The estimates of the poverty gap, normalized poverty gap, and normalized mean positive gap decreased across the survey periods. The performance of the health care financing system shows positive evidence of financial protection. Improvements in health care financing performance and the existing health reform were mutually reinforcing; this should be maintained to promote equity and fairness in health care financing in Indonesia.
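    The indices named above follow the standard catastrophic-payment definitions (out-of-pocket share of income against a threshold z); a sketch with made-up household data:

```python
import numpy as np

def catastrophic_indices(oop, income, threshold):
    """Headcount, overshoot, and mean positive overshoot for threshold z."""
    ratio = oop / income
    excess = np.maximum(ratio - threshold, 0.0)
    headcount = np.mean(ratio > threshold)   # share of households over z
    overshoot = np.mean(excess)              # average excess over everyone
    mpo = overshoot / headcount if headcount > 0 else 0.0
    return headcount, overshoot, mpo

# Hypothetical out-of-pocket spending and incomes for four households.
oop = np.array([5.0, 40.0, 10.0, 80.0])
income = np.array([100.0, 100.0, 100.0, 100.0])
h, o, m = catastrophic_indices(oop, income, threshold=0.10)
```

On this toy data the headcount is 0.5 (two of four households exceed 10% of income), the overshoot is 0.25, and the mean positive overshoot is 0.5.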

  12. Culture appropriate indicators for monitoring growth and development of urban and rural children below 6 years.

    PubMed

    Dixit, A; Govil, S; Patel, N V

    1992-03-01

    In this cross-sectional study, 2000 apparently normal children aged 0-6 years (1200 urban and 800 rural) were nutritionally and developmentally assessed and their environment scrutinized for possible risk factors. Measurement of mid upper arm circumference (MUAC) using standard techniques revealed malnutrition in 44% of the rural and 24% of the urban children, especially in the 2-6 years age group. Culture-appropriate indicators of psycho-social development picked up gross delays in gross motor (GM), vision and fine motor (V&FM), and language skills. Self-help and concept hearing (SHCH) skills were recorded as normal, while social skills were advanced, particularly in the 0-2 years old urban group. By the use of the family protocols, low socio-economic status, malnutrition and 9 other risk factors were identified for the urban group. No risk factor could be identified for the rural group. Better income emerged as the only real protective factor for the sample, showing a direct positive relationship with the 45 skills tested, especially in the 2-6 years age group. Nineteen developmental skills were identified as powerful predictors of development. A prototype home-based screening record was constructed for monitoring of growth and development, which can even be used by a minimally trained primary care worker.

  13. Economic values under inappropriate normal distribution assumptions.

    PubMed

    Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R

    2012-08-01

    The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution is subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. In order to evaluate the impacts of skewness, positive and negative excess kurtosis, standard skew normal, Pearson and the raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations, the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or consider alternative methods that are less sensitive to non-normality.
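    A small numerical sketch of this sensitivity (illustrative distributions, not the study's data): under a skewed lognormal trait, the fraction exceeding a price threshold two standard deviations above the mean differs markedly from what the normal assumption predicts, which is the mechanism behind the misestimated economic values.

```python
import math
import numpy as np

# Skewed trait with the threshold set at mean + 2 sd of the sample.
rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)
thr = x.mean() + 2.0 * x.std()

tail_skewed = np.mean(x > thr)                       # skewed-trait fraction
tail_normal = 0.5 * math.erfc(2.0 / math.sqrt(2.0))  # exact N(0,1) tail

# Assuming normality here substantially misstates the premium-qualifying
# fraction of animals, hence the economic value at this threshold.
assert tail_skewed > 1.3 * tail_normal
```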

  14. The formulation and estimation of a spatial skew-normal generalized ordered-response model.

    DOT National Transportation Integrated Search

    2016-06-01

    This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an : associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal : distribut...

  15. Estimation of Crop Gross Primary Production (GPP). 2; Do Scaled (MODIS) Vegetation Indices Improve Performance?

    NASA Technical Reports Server (NTRS)

    Zhang, Qingyuan; Cheng, Yen-Ben; Lyapustin, Alexei I.; Wang, Yujie; Zhang, Xiaoyang; Suyker, Andrew; Verma, Shashi; Shuai, Yanmin; Middleton, Elizabeth M.

    2015-01-01

    Satellite remote sensing estimates of Gross Primary Production (GPP) have routinely been made using spectral Vegetation Indices (VIs) over the past two decades. The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the green band Wide Dynamic Range Vegetation Index (WDRVIgreen), and the green band Chlorophyll Index (CIgreen) have been employed to estimate GPP under the assumption that GPP is proportional to the product of VI and photosynthetically active radiation (PAR) (where VI is one of four VIs: NDVI, EVI, WDRVIgreen, or CIgreen). However, the empirical regressions between VI*PAR and GPP measured locally at flux towers do not pass through the origin (i.e., the zero X-Y value for regressions). Therefore they are somewhat difficult to interpret and apply. This study investigates (1) what are the scaling factors and offsets (i.e., regression slopes and intercepts) between the fraction of PAR absorbed by chlorophyll of a canopy (fAPARchl) and the VIs, and (2) whether the scaled VIs developed in (1) can eliminate the deficiency and improve the accuracy of GPP estimates. Three AmeriFlux maize and soybean fields were selected for this study, two of which are irrigated and one is rainfed. The four VIs and fAPARchl of the fields were computed with the MODerate resolution Imaging Spectroradiometer (MODIS) satellite images. The GPP estimation performance for the scaled VIs was compared to results obtained with the original VIs and evaluated with standard statistics: the coefficient of determination (R2), the root mean square error (RMSE), and the coefficient of variation (CV). Overall, the scaled EVI obtained the best performance. The performance of the scaled NDVI, EVI and WDRVIgreen was improved across sites, crop types and soil/background wetness conditions. The scaled CIgreen did not improve results, compared to the original CIgreen. 
The scaled green band indices (WDRVIgreen, CIgreen) did not exhibit superior performance to either the scaled EVI or NDVI in estimating crop daily GPP at these agricultural fields. The scaled VIs are more physiologically meaningful than original un-scaled VIs, but scaling factors and offsets may vary across crop types and surface conditions.
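The scaling idea can be sketched with synthetic data (hypothetical parameter values; the actual MODIS-derived slopes and offsets are site-specific): regressing fAPARchl on a VI yields a scaling factor and offset, and the rescaled VI*PAR regression then passes approximately through the origin, reducing the fit error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tower data: a VI that tracks fAPARchl up to an affine offset.
fapar_chl = rng.uniform(0.1, 0.9, 200)                    # fraction of PAR absorbed by chlorophyll
vi = 0.15 + 1.1 * fapar_chl + rng.normal(0, 0.02, 200)    # EVI-like index (assumed relation)
par = rng.uniform(20, 60, 200)                            # incident PAR (illustrative units)
lue = 1.8                                                 # assumed light-use efficiency
gpp = lue * fapar_chl * par                               # "true" GPP for the sketch

# Step 1: regress fAPARchl on VI to obtain the scaling factor and offset.
slope, intercept = np.polyfit(vi, fapar_chl, 1)
vi_scaled = slope * vi + intercept                        # scaled VI ~ fAPARchl

# Step 2: fit GPP ~ VI*PAR through the origin, raw versus scaled VI.
def origin_fit(x, y):
    b = (x @ y) / (x @ x)                                 # least-squares slope, no intercept
    return b, np.sqrt(np.mean((y - b * x)**2))            # slope, RMSE

b_raw, rmse_raw = origin_fit(vi * par, gpp)
b_scl, rmse_scl = origin_fit(vi_scaled * par, gpp)
print(f"raw VI*PAR: RMSE {rmse_raw:.2f};  scaled VI*PAR: RMSE {rmse_scl:.2f}")
```

The raw VI carries an offset that the through-origin regression cannot absorb; scaling removes it, which is the deficiency the study addresses.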

  16. Pelvic fracture and injury to the lower urinary tract.

    PubMed

    Spirnak, J P

    1988-10-01

    The presence of a urologic injury must be considered in all patients with pelvic fracture. Uroradiographic evaluation starting with retrograde urethrography is indicated in all male patients with concomitant gross hematuria, bloody urethral discharge, scrotal or perineal ecchymosis, a nonpalpable prostate on rectal examination, or an inability to urinate. If the urethra is normal, a catheter may be passed, and in the presence of gross hematuria, a cystogram must be performed. Female patients rarely suffer urethral lacerations. The urethra is examined, and a Foley catheter may be passed without a urethrogram. The immediate management of associated urologic injuries continues to evolve and evoke controversy. Selected cases of extraperitoneal bladder perforation may be safely managed solely by catheter drainage. Intraperitoneal perforations require surgical exploration and repair. Urethral disruption (partial or complete) may be safely managed by primary cystostomy drainage with management of potential complications (stricture, impotence, incontinence) in 4 to 6 months.

  17. Phonological learning in semantic dementia.

    PubMed

    Jefferies, Elizabeth; Bott, Samantha; Ehsan, Sheeba; Lambon Ralph, Matthew A

    2011-04-01

    Patients with semantic dementia (SD) have anterior temporal lobe (ATL) atrophy that gives rise to a highly selective deterioration of semantic knowledge. Despite pronounced anomia and poor comprehension of words and pictures, SD patients have well-formed, fluent speech and normal digit span. Given the intimate connection between phonological STM and word learning revealed by both neuropsychological and developmental studies, SD patients might be expected to show good acquisition of new phonological forms, even though their ability to map these onto meanings is impaired. Contrary to these predictions, the limited previous research has found poor learning of new phonological forms in SD. In a series of experiments, we examined whether SD patient GE could learn novel phonological sequences and, if so, under which circumstances. GE showed normal benefits of phonological knowledge in STM (i.e., normal phonotactic frequency and phonological similarity effects) but reduced support from semantic memory (i.e., poor immediate serial recall for semantically degraded words, characterised by frequent item errors). Next, we demonstrated normal learning of serial order information for repeated lists of single-digit number words using the Hebb paradigm: these items were well understood, allowing them to be repeated without frequent item errors. In contrast, patient GE showed little learning of nonsense syllable sequences using the same Hebb paradigm. Detailed analysis revealed that both GE and the controls showed a tendency to learn their own errors as opposed to the target items. Finally, we showed normal learning of phonological sequences for GE when he was prevented from repeating his errors. These findings confirm that the ATL atrophy in SD disrupts phonological processing for semantically degraded words but leaves the phonological architecture intact. 
Consequently, when item errors are minimised, phonological STM can support the acquisition of new phoneme sequences in patients with SD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Wavefront Sensing Analysis of Grazing Incidence Optical Systems

    NASA Technical Reports Server (NTRS)

    Rohrbach, Scott; Saha, Timo

    2012-01-01

    Wavefront sensing is a process by which optical system errors are deduced from the aberrations in the image of an ideal source. The method has been used successfully in near-normal incidence, but not for grazing incidence systems. This innovation highlights the ability to examine out-of-focus images from grazing incidence telescopes (typically operating in the x-ray wavelengths, but integrated using optical wavelengths) and determine the lower-order deformations. This is important because as a metrology tool, this method would allow the integration of high angular resolution optics without the use of normal incidence interferometry, which requires direct access to the front surface of each mirror. Measuring the surface figure of mirror segments in a highly nested x-ray telescope mirror assembly is difficult due to the tight packing of elements and blockage of all but the innermost elements to normal incidence light. While this can be done on an individual basis in a metrology mount, once the element is installed and permanently bonded into the assembly, it is impossible to verify the figure of each element and ensure that the necessary imaging quality will be maintained. By examining on-axis images of an ideal point source, one can gauge the low-order figure errors of individual elements, even when integrated into an assembly. This technique is known as wavefront sensing (WFS). By shining collimated light down the optical axis of the telescope and looking at out-of-focus images, the blur due to low-order figure errors of individual elements can be seen, and the figure error necessary to produce that blur can be calculated. The method avoids the problem of requiring normal incidence access to the surface of each mirror segment. Mirror figure errors span a wide range of spatial frequencies, from the lowest-order bending to the highest order micro-roughness. 
While all of these can be measured in normal incidence, only the lowest-order contributors can be determined through this WFS technique.

  19. Improved understanding of human anatomy through self-guided radiological anatomy modules.

    PubMed

    Phillips, Andrew W; Smith, Sandy G; Ross, Callum F; Straus, Christopher M

    2012-07-01

    To quantifiably measure the impact of self-instructed radiological anatomy modules on anatomy comprehension, demonstrated by radiology, gross, and written exams. Study guides for independent use that emphasized structural relationships were created for use with two online radiology atlases. A guide was created for each module of the first year medical anatomy course and incorporated as an optional course component. A total of 93 of 96 eligible students participated. All exams were normalized to control for variances in exam difficulty and body region tested. An independent t-test was used to compare overall exam scores with respect to guide completion or incompletion. To account for aptitude differences between students, a paired t-test of each student's exam scores with and without completion of the associated guide was performed, thus allowing students to serve as their own controls. Twenty-one students completed no study guides; 22 completed all six guides; and 50 students completed between one and five guides. Aggregate comparisons of all students' exam scores showed significantly improved mean performance when guides were used (radiology, 57.8% [percentile] vs. 45.1%, P < .001; gross, 56.9% vs. 46.5%, P = .001; written, 57.8% vs. 50.2%, P = .011). Paired comparisons among students who completed between one and five guides demonstrated significantly higher mean practical exam scores when guides were used (radiology, 49.3% [percentile] vs. 36.0%, P = .001; gross, 51.5% vs. 40.4%, P = .005), but not higher written scores. Radiological anatomy study guides significantly improved anatomy comprehension on radiology, gross, and written exams. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.

  20. The impact of diurnal sleep on the consolidation of a complex gross motor adaptation task

    PubMed Central

    Hoedlmoser, Kerstin; Birklbauer, Juergen; Schabus, Manuel; Eibenberger, Patrick; Rigler, Sandra; Mueller, Erich

    2015-01-01

    Diurnal sleep effects on the consolidation of a complex, ecologically valid gross motor adaptation task were examined using a bicycle with an inverse steering device. We tested 24 male subjects aged between 20 and 29 years using a between-subjects design. Participants were trained to adapt to the inverse steering bicycle during 45 min. Performance was tested before (TEST1) and after (TEST2) training, as well as after a 2 h retention interval (TEST3). During retention, participants either slept or remained awake. To assess gross motor performance, subjects had to ride the inverse steering bicycle 3 × 30 m straight-line and 3 × 30 m through a slalom. In addition to riding time, we measured performance accuracy (standard deviation of the steering angle) in both conditions using a rotary potentiometer. A significant decrease of accuracy during straight-line riding after nap and wakefulness was shown. Accuracy during slalom riding remained stable after wakefulness but was reduced after sleep. We found that the duration of rapid eye movement sleep as well as sleep spindle activity were negatively related to gross motor performance changes over sleep. Together these findings suggest that the consolidation of adaptation to a new steering device does not benefit from a 2 h midday nap. We speculate that in the case of strongly overlearned motor patterns such as normal cycling, diurnal sleep spindles and rapid eye movement sleep might even help to protect everyday skills, and to rapidly forget newly acquired, interfering and irrelevant material. PMID:25256866

  1. The study on achievement of motor milestones and associated factors among children in rural North India

    PubMed Central

    Gupta, Arti; Kalaivani, Mani; Gupta, Sanjeev Kumar; Rai, Sanjay K.; Nongkynrih, Baridalyne

    2016-01-01

    Background: Nearly 14% of children worldwide do not reach their developmental potential in early childhood. The early identification of delays in achieving milestones is critical. The World Health Organization (WHO) has developed normal age ranges for the achievement of motor milestones by healthy children. This study aimed to assess the gross motor developmental achievements and associated factors among children in rural India. Materials and Methods: A cross-sectional study was conducted with rural children in North India. A pretested questionnaire was used to collect the data. The median age at the time of the highest observed milestone was calculated and compared with the WHO windows of achievement. Results: Overall, 221 children aged 4–18 months were included in the study. The median age of motor development exhibited a 0.1–2.1-month delay compared to the WHO median age of motor milestone achievement. The prevalence of the gross motor milestone achievements for each of the six milestones ranged from 91.6% to 98.4%. Developmental delay was observed in 6.3% of the children. After adjusting for different variables, a birth order of second or higher was found to be significantly associated with the timely achievement of gross motor milestones. Conclusion: The apparently healthy children of the rural area of Haryana achieved gross motor milestones with some delay with respect to the WHO windows of achievement. Although the median value of this delay was low, awareness campaigns should be implemented to promote timely identification of children with development delays. PMID:27843845

  2. Influence of incident angle on the decoding in laser polarization encoding guidance

    NASA Astrophysics Data System (ADS)

    Zhou, Muchun; Chen, Yanru; Zhao, Qi; Xin, Yu; Wen, Hongyuan

    2009-07-01

    Dynamic detection of polarization states is very important for laser polarization coding guidance systems. In this paper, a dynamic polarization decoding and detection system for laser polarization coding guidance was designed. The detection process for normally incident polarized light is analyzed with Jones matrices; the system can effectively detect changes in polarization. The influence of non-normally incident light on the performance of the decoding and detection system is also studied; the analysis shows that changes in incident angle degrade the measurement results, mainly through second-order birefringence and polarization sensitivity effects arising in the phase retarder and the beam splitter prism. Combined with the Fresnel formulas, the decoding errors of linearly, elliptically and circularly polarized light entering the detector at different incident angles are calculated; the results show that decoding errors increase with incident angle. Decoding errors depend on the geometry and the refractive indices of the wave plate and the polarizing beam splitter prism, and can be reduced by using a thin, low-order wave plate. Simulations of the detection of polarized light at different incident angles confirm these conclusions.
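A minimal Jones-matrix sketch illustrates the effect described above. The quadratic drift of the wave plate's retardance with tilt angle is a hypothetical model (not the authors' optical layout), chosen only to show that the decoded intensity error grows with incident angle.

```python
import numpy as np

def waveplate(delta):
    """Jones matrix of a retarder with retardance delta, fast axis horizontal."""
    return np.array([[np.exp(-1j * delta / 2), 0],
                     [0, np.exp(+1j * delta / 2)]])

def polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def detected_intensity(jones_in, delta, pol_angle):
    out = polarizer(pol_angle) @ waveplate(delta) @ jones_in
    return float(np.real(out.conj().T @ out))

# Circularly polarized input (one handedness; sign convention-dependent).
circular = np.array([1, -1j]) / np.sqrt(2)

# Hypothetical model: effective retardance of the quarter-wave plate drifts
# quadratically with the tilt (incidence) angle in radians.
for tilt_deg in (0, 5, 10, 15):
    tilt = np.deg2rad(tilt_deg)
    delta = np.pi / 2 * (1 + 0.5 * tilt**2)
    # Ideal decoding: the quarter-wave plate converts this circular state to
    # linear at 45 deg, so a polarizer at 45 deg should pass intensity 1.0.
    i45 = detected_intensity(circular, delta, np.deg2rad(45))
    print(f"tilt {tilt_deg:2d} deg: passed intensity {i45:.6f} (ideal 1.0)")
```

The passed intensity equals (1 + sin delta)/2 here, so any tilt-induced deviation of delta from a quarter wave shows up as a decoding error that increases monotonically with the angle.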

  3. A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.

    PubMed

    Thalhammer, Mechthild; Abhau, Jochen

    2012-08-15

    As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method, constrained to uniform meshes, versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter [Formula: see text], especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space restricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose the time stepsizes small enough that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross-Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study.
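The time-splitting Fourier pseudo-spectral approach discussed above can be sketched for the 1D Gross-Pitaevskii equation as follows (illustrative parameters; the paper's adaptive stepsize control and error estimators are omitted). Each Strang step applies a half-step of the potential/nonlinear flow in physical space, a full kinetic step in Fourier space, and another half-step of the potential/nonlinear flow; every substep is a pointwise phase multiplication, so the L2 norm is conserved.

```python
import numpy as np

# Strang splitting for the 1D GPE: i psi_t = -(1/2) psi_xx + V(x) psi + g |psi|^2 psi
# (arbitrary units; harmonic trap and defocusing nonlinearity as in the abstract)
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # spectral wavenumbers
V = 0.5 * x**2                                 # harmonic trap potential
g = 1.0                                        # defocusing nonlinearity strength

def strang_step(psi, dt):
    # Half-step of potential + nonlinear flow (exact: |psi| is constant here).
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
    # Full kinetic step, diagonal in Fourier space.
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    # Second half-step of potential + nonlinear flow.
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
    return psi

# Gaussian initial state, normalised so that sum |psi|^2 dx = 1.
psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / n))

for _ in range(1000):
    psi = strang_step(psi, dt=1e-3)

norm = np.sum(np.abs(psi)**2) * (L / n)
print(f"norm after 1000 steps: {norm:.12f}")
```

Norm conservation to machine precision is a useful sanity check; accuracy in the semi-classical regime additionally requires the fine space-time resolution the abstract emphasises.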

  4. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  5. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  7. Photorespiration and carbon limitation determine productivity in temperate seagrasses.

    PubMed

    Buapet, Pimchanok; Rasmusson, Lina M; Gullström, Martin; Björk, Mats

    2013-01-01

    The gross primary productivity of two seagrasses, Zostera marina and Ruppia maritima, and one green macroalga, Ulva intestinalis, was assessed in laboratory and field experiments to determine whether the photorespiratory pathway operates at a substantial level in these macrophytes and to what extent it is enhanced by naturally occurring shifts in dissolved inorganic carbon (DIC) and O2 in dense vegetation. To achieve these conditions in laboratory experiments, seawater was incubated with U. intestinalis in light to obtain a range of higher pH and O2 levels and lower DIC levels. Gross photosynthetic O2 evolution was then measured in this pretreated seawater (pH, 7.8-9.8; high to low DIC:O2 ratio) at both natural and low O2 concentrations (adjusted by N2 bubbling). The presence of photorespiration was indicated by a lower gross O2 evolution rate under natural O2 conditions than when O2 was reduced. In all three macrophytes, gross photosynthetic rates were negatively affected by higher pH and lower DIC. However, while both seagrasses exhibited significant photorespiratory activity at increasing pH values, the macroalga U. intestinalis exhibited no such activity. Rates of seagrass photosynthesis were then assessed in seawater collected from the natural habitats (i.e., shallow bays characterized by high macrophyte cover and by low DIC and high pH during daytime) and compared with open baymouth water conditions (where seawater DIC is in equilibrium with air, normal DIC, and pH). The gross photosynthetic rates of both seagrasses were significantly higher when incubated in the baymouth water, indicating that these grasses can be significantly carbon limited in shallow bays. Photorespiration was also detected in both seagrasses under shallow bay water conditions. Our findings indicate that natural carbon limitations caused by high community photosynthesis can enhance photorespiration and cause a significant decline in seagrass primary production in shallow waters.

  8. An extension of the receiver operating characteristic curve and AUC-optimal classification.

    PubMed

    Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto

    2012-10-01

    While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate to the fixed lower false-positive rate is preferable and thus the partial AUC corresponding to lower false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
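The ROC quantities discussed above can be computed directly. The following sketch (toy data, not the paper's boosting algorithm) computes the full AUC and a partial AUC restricted to low false-positive rates, the regime the abstract highlights for medical screening:

```python
import numpy as np

def roc_points(scores, labels):
    """ROC curve points (FPR, TPR) from scores and binary labels.
    Assumes no tied scores for simplicity."""
    order = np.argsort(-np.asarray(scores, float))
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def auc(scores, labels, max_fpr=1.0):
    """(Partial) area under the ROC curve up to max_fpr, trapezoid rule."""
    fpr, tpr = roc_points(scores, labels)
    keep = fpr <= max_fpr
    f, t = fpr[keep], tpr[keep]
    return float(np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(f)))

# Toy example: higher scores should indicate the positive class.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1,   1,   0,   1,   0,   1,   0,   0])
print(f"AUC = {auc(scores, labels):.4f}")
print(f"partial AUC (FPR <= 0.5) = {auc(scores, labels, max_fpr=0.5):.4f}")
```

With no ties, the full AUC equals the fraction of positive-negative pairs ranked correctly (the Mann-Whitney statistic); the partial AUC simply truncates the integral at the chosen false-positive rate.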

  9. Model assessment using a multi-metric ranking technique

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work proposes a weighted tally and consolidation technique that ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when the information was found to be generally duplicative of other metrics. While equal weights are applied, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process but were found useful in an independent context, and will be briefly reported.
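A weighted tally of per-metric ranks, as postulated above, might look like the following sketch (hypothetical models, metric values, and weights; the abstract's full metric list is abbreviated). Each model is ranked on each metric, the ranks are weighted and summed, and the lowest total wins.

```python
import numpy as np

# metric name -> (values per model, lower_is_better) -- illustrative numbers
metrics = {
    "abs_error": (np.array([1.2, 0.9, 1.0]), True),
    "bias":      (np.abs(np.array([0.3, -0.5, 0.1])), True),   # rank |bias|
    "pearson_r": (np.array([0.80, 0.85, 0.88]), False),
}
weights = {"abs_error": 1.0, "bias": 1.0, "pearson_r": 1.0}    # equal weights
models = ["modelA", "modelB", "modelC"]

totals = np.zeros(len(models))
for name, (vals, lower_better) in metrics.items():
    order = vals if lower_better else -vals
    ranks = order.argsort().argsort() + 1       # rank 1 = best on this metric
    totals += weights[name] * ranks

best = models[int(totals.argmin())]
for m, t in zip(models, totals):
    print(f"{m}: weighted rank total {t:.1f}")
print(f"best model: {best}")
```

Consolidating ranks rather than raw metric values keeps differently scaled metrics commensurable, which is what makes the tally simple enough for management decisions.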

  10. Semiautomatic segmentation and follow-up of multicomponent low-grade tumors in longitudinal brain MRI studies

    PubMed Central

    Weizman, Lior; Sira, Liat Ben; Joskowicz, Leo; Rubin, Daniel L.; Yeom, Kristen W.; Constantini, Shlomi; Shofty, Ben; Bashat, Dafna Ben

    2014-01-01

    Purpose: Tracking the progression of low-grade tumors (LGTs) is a challenging task due to their slow growth rate and associated complex internal tumor components, such as heterogeneous enhancement, hemorrhage, and cysts. In this paper, the authors present a semiautomatic method to reliably track the volume of LGTs and the evolution of their internal components in longitudinal MRI scans. Methods: The authors' method utilizes spatiotemporal evolution modeling of the tumor and its internal components. Tumor component gray-level parameters are estimated from the follow-up scan itself, obviating temporal normalization of gray levels. The tumor delineation procedure effectively incorporates internal classification of the baseline scan in the time-series as prior data to segment and classify a series of follow-up scans. The authors applied their method to 40 MRI scans of ten patients, acquired at two different institutions. Two types of LGTs were included: optic pathway gliomas and thalamic astrocytomas. For each scan, a “gold standard” was obtained manually by experienced radiologists. The method is evaluated versus the gold standard with three measures: gross total volume error, total surface distance, and reliability of tracking tumor component evolution. Results: Compared to the gold standard, the authors' method exhibits a mean Dice similarity volumetric measure of 86.58% and a mean surface distance error of 0.25 mm. In terms of its reliability in tracking the evolution of the internal components, the method exhibits strong positive correlation with the gold standard. Conclusions: The authors' method provides accurate and repeatable delineation of the tumor and its internal components, which is essential for therapy assessment of LGTs. Reliable tracking of internal tumor components over time is novel and potentially will be useful to streamline and improve follow-up of brain tumors with indolent growth and behavior. PMID:24784396
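The Dice similarity measure used in the evaluation above can be sketched as follows (toy 2D binary masks standing in for a tumor delineation and its gold standard):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: a 4x4 "tumor" and an automatic delineation shifted by one row.
gold = np.zeros((8, 8), bool); gold[2:6, 2:6] = True
auto = np.zeros((8, 8), bool); auto[3:7, 2:6] = True
print(f"Dice = {dice(auto, gold):.3f}")
```

A Dice value of 86.58%, as reported in the abstract, means the overlap is large relative to the combined mask sizes; the surface-distance measure complements it by penalizing boundary deviations directly.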

  11. Influence of large intrahepatic blood vessels on the gross and histological characteristics of lesions produced by radiofrequency ablation in a pig liver model.

    PubMed

    Tamaki, Katsuyoshi; Shimizu, Ichiro; Oshio, Atsuo; Fukuno, Hiroshi; Inoue, Hiroshi; Tsutsui, Akemi; Shibata, Hiroshi; Sano, Nobuya; Ito, Susumu

    2004-12-01

    To determine whether the presence of large intrahepatic blood vessels (≥3 mm) affects radiofrequency (RF)-induced coagulation necrosis, the gross and histological characteristics of RF-ablated areas proximal to or around vessels were examined in normal pig livers. An RF ablation treatment using a two-stepwise extension technique produced 12 lesions: six contained vessels (Group A), and the other six were localized around vessels (Group B). Gross examination revealed that the longest and shortest diameters of the ablated lesions were significantly larger in Group B than in Group A. In Group A, patent vessels contiguous to the lesion were present in a tongue-shaped area, whereas the lesions in Group B were spherical. Staining with nicotinamide adenine dinucleotide diaphorase was negative within the ablated area; but, if vessels were present in the ablated area, the cells around the vessels in an opposite direction to the ablation were stained blue. Roll-off can be achieved with 100% cellular destruction within a lesion that does not contain large vessels. The ablated area was decreased in lesions that contained large vessels, suggesting that the presence of large vessels in the ablated area further increases the cooling effect and may require repeated RF ablation treatment to achieve complete coagulation necrosis.

  12. [Effect of Mn(II) on the error-prone DNA polymerase iota activity in extracts from human normal and tumor cells].

    PubMed

    Lakhin, A V; Efremova, A S; Makarova, I V; Grishina, E E; Shram, S I; Tarantul, V Z; Gening, L V

    2013-01-01

    The DNA polymerase iota (Pol iota), which has some peculiar features and is characterized by an extremely error-prone DNA synthesis, belongs to the group of enzymes preferentially activated by Mn2+ instead of Mg2+. In this work, the effect of Mn2+ on DNA synthesis was tested in cell extracts from a) normal human and murine tissues, b) a human tumor (uveal melanoma), and c) cultured human tumor cell lines SKOV-3 and HL-60. Each group displayed characteristic features of Mn-dependent DNA synthesis. The changes in Mn-dependent DNA synthesis caused by malignant transformation of normal tissues are described. It was also shown that the error-prone DNA synthesis catalyzed by Pol iota in extracts of all cell types was efficiently suppressed by an RNA aptamer (IKL5) against Pol iota obtained in our earlier work. These results suggest that IKL5 might be used to suppress the enhanced activity of Pol iota in tumor cells.

  13. Speed and Accuracy of Rapid Speech Output by Adolescents with Residual Speech Sound Errors Including Rhotics

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Edwards, Mary Louise

    2009-01-01

    Children with residual speech sound errors are often underserved clinically, yet there has been a lack of recent research elucidating the specific deficits in this population. Adolescents aged 10-14 with residual speech sound errors (RE) that included rhotics were compared to normally speaking peers on tasks assessing speed and accuracy of speech…

  14. Interferometry On Grazing Incidence Optics

    NASA Astrophysics Data System (ADS)

    Geary, Joseph; Maeda, Riki

    1988-08-01

    A preliminary interferometric procedure is described showing potential for obtaining surface figure error maps of grazing incidence optics at normal incidence. The latter are found in some laser resonator configurations, and in Wolter type X-ray optics. The procedure makes use of cylindrical wavefronts and error subtraction techniques over subapertures. The surface error maps obtained will provide critical information to opticians in the fabrication process.

  15. Interferometry on grazing incidence optics

    NASA Astrophysics Data System (ADS)

    Geary, Joseph M.; Maeda, Riki

    1987-12-01

    An interferometric procedure is described that shows potential for obtaining surface figure error maps of grazing incidence optics at normal incidence. Such optics are found in some laser resonator configurations and in Wolter-type X-ray optics. The procedure makes use of cylindrical wavefronts and error subtraction techniques over subapertures. The surface error maps obtained will provide critical information to opticians for the fabrication process.

  16. The estimation of pointing angle and normalized surface scattering cross section from GEOS-3 radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Brown, G. S.; Curry, W. J.

    1977-01-01

    The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal-to-noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data, and on the waveform attitude-induced altitude bias, is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive-mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.

  17. Association between macroscopic appearance of liver lesions and liver histology in dogs with splenic hemangiosarcoma: 79 cases (2004-2009).

    PubMed

    Clendaniel, Daphne C; Sivacolundhu, Ramesh K; Sorenmo, Karin U; Donovan, Taryn A; Turner, Avenelle; Arteaga, Theresa; Bergman, Philip J

    2014-01-01

    Medical records for 79 dogs with confirmed splenic hemangiosarcoma (HSA) following splenectomy were reviewed for information regarding the presence or absence of macroscopic liver lesions and the histopathological characteristics of the liver. Only 29 of 58 dogs (50%) with grossly abnormal livers had HSA metastasis. No dogs with grossly normal livers had metastasis detected on liver histopathology. Gross lesions in the liver such as multiple nodules, dark-colored nodules, and actively bleeding nodules were highly associated with malignancy. For the dogs in this study, biopsy of a grossly normal liver was a low-yield procedure.

  18. Familial microscopic hematuria caused by hypercalciuria and hyperuricosuria.

    PubMed

    Praga, M; Alegre, R; Hernández, E; Morales, E; Domínguez-Gil, B; Carreño, A; Andrés, A

    2000-01-01

    We report 12 patients belonging to five different families in whom persistent isolated microhematuria was associated with hypercalciuria and/or hyperuricosuria. Four patients had episodes of gross hematuria, three patients had passed renal stones, and a history of nephrolithiasis was obtained in four of the families (80%). Calcium oxalate and uric acid crystals were commonly observed in the urine sediments. Urinary erythrocytes had a normal appearance on phase-microscopic examination. Reduction of calciuria and uricosuria by thiazide diuretics, allopurinol, forced fluid intake, and dietetic measures led to a persistent normalization of urine sediment with complete disappearance of hematuria. Determination of calcium and uric acid urinary excretions should be included in the study of familial hematuria.

  19. Acute Esophageal Necrosis: “Black Esophagus”

    PubMed Central

    Weigel, Tracey L.

    2007-01-01

    Acute esophageal necrosis (AEN) is an uncommon event. We report a case of an 84-year-old female with a giant paraesophageal hernia who presented with coffee ground emesis and on esophagogastroduodenoscopy (EGD) demonstrated findings consistent with acute esophageal necrosis and a giant paraesophageal hernia with normal-appearing gastric mucosa. She was managed conservatively with bowel rest, parenteral nutrition, and continuous intravenous proton pump inhibitor (PPI). After significant improvement in the gross appearance of her esophageal mucosa, surgery was performed to reduce her giant paraesophageal hernia. The patient's postoperative course was uneventful, and she was discharged home on postoperative day 6, tolerating a normal diet. The percutaneous endoscopic gastrostomy (PEG) tube was removed in clinic 2 months postoperatively. PMID:17651583

  20. The albino chick as a model for studying ocular developmental anomalies, including refractive errors, associated with albinism.

    PubMed

    Rymer, Jodi; Choh, Vivian; Bharadwaj, Shrikant; Padmanabhan, Varuna; Modilevsky, Laura; Jovanovich, Elizabeth; Yeh, Brenda; Zhang, Zhan; Guan, Huanxian; Payne, W; Wildsoet, Christine F

    2007-10-01

    Albinism is associated with a variety of ocular anomalies including refractive errors. The purpose of this study was to investigate the ocular development of an albino chick line. The ocular development of both albino and normally pigmented chicks was monitored using retinoscopy to measure refractive errors and high frequency A-scan ultrasonography to measure axial ocular dimensions. Functional tests included an optokinetic nystagmus paradigm to assess visual acuity, and flash ERGs to assess retinal function. The underlying genetic abnormality was characterized using a gene microarray, PCR and a tyrosinase assay. The ultrastructure of the retinal pigment epithelium (RPE) was examined using transmission electron microscopy. PCR confirmed that the genetic abnormality in this line is a deletion in exon 1 of the tyrosinase gene. Tyrosinase gene expression in isolated RPE cells was minimally detectable, and there was minimal enzyme activity in albino feather bulbs. The albino chicks had pink eyes and their eyes transilluminated, reflecting the lack of melanin in all ocular tissues. All three main components, anterior chamber, crystalline lens and vitreous chamber, showed axial expansion over time in both normal and albino animals, but the anterior chambers of albino chicks were consistently shallower than those of normal chicks, while in contrast, their vitreous chambers were longer. Albino chicks remained relatively myopic, with higher astigmatism than the normally pigmented chicks, even though both groups underwent developmental emmetropization. Albino chicks had reduced visual acuity yet the ERG a- and b-wave components had larger amplitudes and shorter than normal implicit times. Developmental emmetropization occurs in the albino chick but is impaired, likely because of functional abnormalities in the RPE and/or retina as well as optical factors. 
In very young chicks the underlying genetic mutation may also contribute to refractive error and eye shape abnormalities.

  1. Bayesian inversions of a dynamic vegetation model at four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; Francois, L.

    2015-05-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB (CARbon Assimilation In the Biosphere) dynamic vegetation model (DVM) with 10 unknown parameters, using the DREAM(ZS) (DiffeRential Evolution Adaptive Metropolis) Markov chain Monte Carlo (MCMC) sampler. We focus on comparing model inversions, considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred together with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root mean square errors (RMSEs) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19, 1.04 to 1.56 g C m-2 day-1 and 0.50 to 1.28 mm day-1, respectively. For the calibration period, using a homoscedastic eddy covariance residual error model resulted in a better agreement between measured and modelled data than using a heteroscedastic residual error model. However, a model validation experiment showed that CARAIB models calibrated considering heteroscedastic residual errors perform better. Posterior parameter distributions derived from using a heteroscedastic model of the residuals thus appear to be more robust. This is the case even though the classical linear heteroscedastic error model assumed herein did not fully remove heteroscedasticity of the GPP residuals. Despite the fact that the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies, such as shortcomings in the photosynthesis modelling. Besides the residual error treatment, differences between model parameter posterior distributions among the four grassland sites are also investigated. 
It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics.
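The homoscedastic and heteroscedastic residual treatments compared above differ only in how the per-point residual variance enters the Gaussian likelihood that the MCMC sampler evaluates; a minimal sketch with toy numbers (the coefficients `a`, `b` and the linear form sigma_i = a + b * sim_i are illustrative assumptions, not CARAIB's actual error model):

```python
import math

def gaussian_loglik(obs, sim, sigmas):
    """Gaussian log-likelihood of the residuals, one sigma per data point."""
    ll = 0.0
    for y, m, s in zip(obs, sim, sigmas):
        r = y - m
        ll += -0.5 * math.log(2.0 * math.pi * s * s) - 0.5 * (r / s) ** 2
    return ll

# Toy daily GPP values (g C m-2 day-1); numbers are illustrative only.
obs = [2.1, 4.0, 7.9, 16.2]
sim = [2.0, 4.2, 8.1, 15.8]

# Homoscedastic: a single residual standard deviation for every observation.
ll_homo = gaussian_loglik(obs, sim, [0.5] * len(obs))

# Linear heteroscedastic model: sigma_i = a + b * sim_i, so larger fluxes get
# larger error bars; a and b would be inferred jointly with model parameters.
a, b = 0.1, 0.05
ll_hetero = gaussian_loglik(obs, sim, [a + b * m for m in sim])
```

Jointly inferring `a` and `b` with the model parameters, as the abstract describes, simply means adding them to the vector the MCMC sampler explores.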

  2. A Skew-Normal Mixture Regression Model

    ERIC Educational Resources Information Center

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  3. Wind Power Forecasting Error Distributions over Multiple Timescales: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, B. M.; Milligan, M.

    2011-03-01

    In this paper, we examine the shape of the persistence model error distribution for ten different wind plants in the ERCOT system over multiple timescales. Comparisons are made between the experimental distribution shape and that of the normal distribution.
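The persistence model benchmarked here simply forecasts that the current value persists; its error series at a given horizon, whose distribution shape is then compared with a normal fit, can be sketched as follows (toy data, not ERCOT measurements):

```python
import statistics

def persistence_errors(series, horizon):
    """Errors of the persistence forecast x_hat[t + h] = x[t]."""
    return [series[t + horizon] - series[t] for t in range(len(series) - horizon)]

# Toy hourly wind-power series (MW).
power = [10.0, 12.0, 11.5, 9.0, 8.5, 13.0, 14.0, 12.5]
err_1h = persistence_errors(power, 1)
print(len(err_1h), round(statistics.mean(err_1h), 3))  # 7 0.357
```

Repeating this for several horizons and inspecting the tails of each error sample is the essence of the multi-timescale comparison.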

  4. Comparison of visual and emotional continuous performance test related to sequence of presentation, gender and age.

    PubMed

    Markovska-Simoska, S; Pop-Jordanova, N

    2009-07-01

    (Full text is available at http://www.manu.edu.mk/prilozi). Continuous Performance Tests (CPTs) form a group of paradigms for the evaluation of attention and, to a lesser degree, the response inhibition (or disinhibition) component of executive control. The object of this study was to compare performance on a CPT using both visual and emotional tasks in 46 normal adult subjects. In particular, it was to examine the effects of the type of task (VCPT or ECPT), sequence of presentation, and gender/age on performance as measured by errors of omission, errors of commission, reaction time, and variation of reaction time. The results indicate significantly worse performance on the ECPT than on the VCPT, probably reflecting the influence of emotional stimuli on attention and information processing, with no significant effect of order of presentation or gender on performance. Significantly more omission errors were obtained for the older groups, indicating better attention in younger subjects. Key words: VCPT, ECPT, omission errors, commission errors, reaction time, variation of reaction time, normal adults.

  5. Truths, errors, and lies around "reflex sympathetic dystrophy" and "complex regional pain syndrome".

    PubMed

    Ochoa, J L

    1999-10-01

    The shifting paradigm of reflex sympathetic dystrophy-sympathetically maintained pains-complex regional pain syndrome is characterized by vestigial truths and understandable errors, but also unjustifiable lies. It is true that patients with organically based neuropathic pain harbor unquestionable and physiologically demonstrable evidence of nerve fiber dysfunction leading to a predictable clinical profile with stereotyped temporal evolution. In turn, patients with psychogenic pseudoneuropathy, sustained by conversion-somatization-malingering, not only lack physiological evidence of structural nerve fiber disease but display a characteristically atypical, half-subjective, psychophysical sensory-motor profile. The objective vasomotor signs may have any variety of neurogenic, vasogenic, and psychogenic origins. Neurological differential diagnosis of "neuropathic pain" versus pseudoneuropathy is straightforward, provided that stringent requirements of neurological semeiology are not bypassed. Embarrassing conceptual errors explain the assumption that there exists a clinically relevant "sympathetically maintained pain" status. Errors include historical misinterpretation of vasomotor signs in symptomatic body parts, and misconstruing symptomatic relief after "diagnostic" sympathetic blocks, due to lack of consideration of the placebo effect which explains the outcome. It is a lie that sympatholysis may specifically cure patients with unqualified "reflex sympathetic dystrophy." This was already stated by the father of sympathectomy, René Leriche, more than half a century ago. As extrapolated from observations in animals with gross experimental nerve injury, adducing hypothetical, untestable, secondary central neuron sensitization to explain psychophysical sensory-motor complaints displayed by patients with blatantly absent nerve fiber injury, is not an error, but a lie. 
While conceptual errors are not only forgivable, but natural to inexact medical science, lies, particularly when entrepreneurially inspired, are condemnable and call for peer intervention.

  6. Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

    ERIC Educational Resources Information Center

    Sass, D. A.; Schmitt, T. A.; Walker, C. M.

    2008-01-01

    Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…

  7. Absolute color scale for improved diagnostics with wavefront error mapping.

    PubMed

    Smolek, Michael K; Klyce, Stephen D

    2007-11-01

    Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of ±6.5 µm and a contour interval of 0.5 µm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. 
When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
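The fixed ±6.5 µm range with a 0.5 µm contour interval amounts to clamping and quantizing each wavefront value before color lookup, so every map shares the same color boundaries; a minimal sketch (the function and the sample values are illustrative, not the published palette):

```python
def contour_bin(value_um, limit=6.5, step=0.5):
    """Clamp a wavefront error (in micrometers) to the absolute scale's range,
    then snap it to the nearest contour so all maps share color boundaries."""
    clamped = max(-limit, min(limit, value_um))
    return round(clamped / step) * step

print(contour_bin(0.74))   # 0.5
print(contour_bin(-9.3))   # -6.5  (out-of-range values saturate but stay legible)
```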

  8. Ecological footprint model using the support vector machine technique.

    PubMed

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports in total GDP), and service intensity (measured by the percentage of services in total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
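The two accuracy measures quoted, average absolute error and average relative error, are simple to reproduce; a minimal sketch with made-up per capita EF values (not the study's data):

```python
def average_absolute_error(pred, actual):
    """Mean of the absolute differences between predictions and observations."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def average_relative_error_pct(pred, actual):
    """Average relative error, expressed as a percentage of each observation."""
    return 100.0 * sum(abs(p - a) / a for p, a in zip(pred, actual)) / len(actual)

# Made-up per capita EF values (global hectares); not the study's data.
pred = [2.05, 1.48, 3.9]
actual = [2.0, 1.5, 4.0]
print(round(average_absolute_error(pred, actual), 4))      # 0.0567
print(round(average_relative_error_pct(pred, actual), 2))  # 2.11
```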

  9. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
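The core idea, inferring the modeled gross weight from observed climb performance, can be caricatured with a point-mass relation in which rate of climb scales as excess power over weight; this is a toy illustration of the adaptation loop, not the published algorithm, and the numbers and gain are invented:

```python
def adapt_weight(modeled_w, predicted_roc, observed_roc, gain=0.5):
    """One adaptation step: move the modeled gross weight toward the value
    that reconciles predicted and observed rate of climb. In the simplified
    point-mass relation roc ~ excess_power / weight, that value is
    modeled_w * predicted_roc / observed_roc."""
    target = modeled_w * predicted_roc / observed_roc
    return modeled_w + gain * (target - modeled_w)

EXCESS_POWER = 1.0e7   # W, held fixed in this toy example
true_w = 66000.0       # kg, actual gross weight (unknown to the predictor)
w = 70000.0            # kg, initial modeled gross weight

for _ in range(10):    # ten track updates
    predicted_roc = EXCESS_POWER / w
    observed_roc = EXCESS_POWER / true_w
    w = adapt_weight(w, predicted_roc, observed_roc)

print(round(w))  # 66004 -- within 0.01% of the true weight
```

With a gain of 0.5, the modeled weight halves its remaining error on each update, which loosely mirrors the abstract's claim of converging to within a few percent of the true weight in a handful of track updates.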

  10. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

    We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which allows reliable segmentation of the femur, patella, and tibia by iterative adaptation of the model according to image gradients. Thin-plate-spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-tuned by automatic iterative adaptation to the image data based on gray-value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as the secondary endpoint. Because cartilage is a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.
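The two endpoints reported, per-voxel sensitivity and gross volume error, can be written down directly for binary masks; a toy sketch with synthetic masks (not the study's data):

```python
def per_voxel_sensitivity(auto, manual):
    """Fraction of manually labeled voxels that the automatic mask recovers."""
    return len(auto & manual) / len(manual)

def gross_volume_error_pct(auto, manual):
    """Absolute volume difference relative to the manual (reference) volume."""
    return 100.0 * abs(len(auto) - len(manual)) / len(manual)

manual = {(x, y) for x in range(10) for y in range(10)}  # 100 reference voxels
auto = {(x, y) for x in range(10) for y in range(9)}     # 90 voxels, one row short

print(per_voxel_sensitivity(auto, manual))   # 0.9
print(gross_volume_error_pct(auto, manual))  # 10.0
```

The example also shows why thin structures make the per-voxel endpoint the harder one: missing a single one-voxel-thick layer costs 10 percentage points of sensitivity here.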

  11. URANS simulations of the tip-leakage cavitating flow with verification and validation procedures

    NASA Astrophysics Data System (ADS)

    Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin

    2018-04-01

    In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. To quantitatively evaluate the reliability of the numerical data, verification and validation (V&V) procedures are applied. Errors of the numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that, though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when cavitation occurs, is close to the side wall.

  12. The growth pattern and fuel life cycle analysis of the electricity consumption of Hong Kong.

    PubMed

    To, W M; Lai, T M; Lo, W C; Lam, K H; Chung, W L

    2012-06-01

    As the consumption of electricity increases, air pollutants from power generation increase. In metropolises such as Hong Kong and other Asian cities, the surge of electricity consumption has been phenomenal over the past decades. This paper presents a historical review of electricity consumption, population, and change in economic structure in Hong Kong. It is hypothesized that the growth of electricity consumption and change in gross domestic product can be modeled by 4-parameter logistic functions. The accuracy of the functions was assessed by Pearson's correlation coefficient, mean absolute percent error, and root mean squared percent error. The paper also applies the life cycle approach to determine carbon dioxide, methane, nitrous oxide, sulfur dioxide, and nitrogen oxide emissions for the electricity consumption of Hong Kong. Monte Carlo simulations were applied to determine the confidence intervals of pollutant emissions. The implications of importing more nuclear power are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
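A four-parameter logistic of the kind hypothesized has a lower and an upper asymptote, a growth rate, and a midpoint year; a minimal sketch with illustrative (not fitted) parameters:

```python
import math

def logistic4(t, lower, upper, rate, midpoint):
    """Four-parameter logistic: saturating growth between two asymptotes."""
    return lower + (upper - lower) / (1.0 + math.exp(-rate * (t - midpoint)))

# Illustrative (not fitted) parameters for annual electricity use, TWh:
# lower asymptote 5, upper asymptote 45, rate 0.12 /yr, midpoint 1990.
for year in range(1970, 2021, 10):
    print(year, round(logistic4(year, 5.0, 45.0, 0.12, 1990.0), 1))
```

Fitting such a curve to historical consumption data and scoring it with mean absolute percent error is the procedure the abstract describes.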

  13. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    ERIC Educational Resources Information Center

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
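The Monte Carlo approach can be illustrated for the simplest case of p = 2 variables, where the determinant of the sample correlation matrix is 1 − r²; a sketch using Bartlett's sphericity statistic and its chi-square(1 df) 5% critical value, which is in the spirit of, but not identical to, the test studied in the paper:

```python
import math
import random

CHI2_1_CRIT_05 = 3.841  # 5% critical value of chi-square with 1 df

def rejects_sphericity(x, y):
    """Bartlett's sphericity test for p = 2 variables, where the determinant
    of the sample correlation matrix is det(R) = 1 - r**2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2 = sxy * sxy / (sxx * syy)
    stat = -(n - 1 - (2 * 2 + 5) / 6) * math.log(1.0 - r2)
    return stat > CHI2_1_CRIT_05

random.seed(1)  # reproducible
n, trials = 30, 2000
rejections = sum(
    rejects_sphericity([random.gauss(0.0, 1.0) for _ in range(n)],
                       [random.gauss(0.0, 1.0) for _ in range(n)])
    for _ in range(trials)
)
print(rejections / trials)  # empirical Type I rate; should sit near 0.05
```

Sampling from a spherical (independent) population and recording how often the test rejects is exactly how an empirical Type I error rate is estimated.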

  14. On the use of the covariance matrix to fit correlated data

    NASA Astrophysics Data System (ADS)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ2. This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
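The mechanism can be made explicit; a sketch of the setup under linearized error propagation, for N points y_i with independent errors σ_i and a common fractional normalization error σ_f:

```latex
% Linearized covariance with a common fractional normalization error \sigma_f:
V_{ij} = \sigma_i^2\,\delta_{ij} + \sigma_f^2\, y_i y_j .
% Minimizing
\chi^2 = \sum_{i,j}\,(y_i - \mu)\,(V^{-1})_{ij}\,(y_j - \mu)
% for a constant \mu and two points with equal independent errors \sigma gives
\hat\mu = \frac{y_1 + y_2}{\,2 + \sigma_f^2\,(y_1 - y_2)^2/\sigma^2\,},
% which falls below the naive average (y_1 + y_2)/2 whenever y_1 \neq y_2,
% and the bias grows with \sigma_f -- the effect described above.
```

Numerically, with y₁ = 8.0 and y₂ = 8.5, each with a 2% independent error and a 10% normalization error, the fitted constant lands near 7.87, below both measurements.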

  15. Evaluation of speech after correction of rhinophonia with pushback palatoplasty combined with pharyngeal flap.

    PubMed

    Dixon, V L; Bzoch, K R; Habal, M B

    1979-07-01

    A comparison is made of the preoperative and postoperative speech evaluations of 15 selected subjects who had pharyngeal flap operations combined with palatal pushback. Postoperatively, 13 of the 15 patients (86 percent) showed no abnormal nasal emission and no evidence of significant hypernasality during word production. Gross substitution errors were also corrected by the surgical repair. While the number of patients is small, this study indicates equal effectiveness of the surgical technique described--regardless of sex, medical diagnosis, whether the procedure was primary or secondary, or the amount of postoperative time--provided there is good function of the muscles of the soft palate.

  16. The impact of diurnal sleep on the consolidation of a complex gross motor adaptation task.

    PubMed

    Hoedlmoser, Kerstin; Birklbauer, Juergen; Schabus, Manuel; Eibenberger, Patrick; Rigler, Sandra; Mueller, Erich

    2015-02-01

    Diurnal sleep effects on the consolidation of a complex, ecologically valid gross motor adaptation task were examined using a bicycle with an inverse steering device. We tested 24 male subjects aged between 20 and 29 years using a between-subjects design. Participants were trained to adapt to the inverse steering bicycle during a 45 min session. Performance was tested before (TEST1) and after (TEST2) training, as well as after a 2 h retention interval (TEST3). During retention, participants either slept or remained awake. To assess gross motor performance, subjects had to ride the inverse steering bicycle 3 × 30 m in a straight line and 3 × 30 m through a slalom. In addition to riding time, we measured performance accuracy (standard deviation of the steering angle) in both conditions using a rotary potentiometer. A significant decrease of accuracy during straight-line riding was shown after both nap and wakefulness. Accuracy during slalom riding remained stable after wakefulness but was reduced after sleep. We found that the duration of rapid eye movement sleep as well as sleep spindle activity are negatively related to gross motor performance changes over sleep. Together these findings suggest that the consolidation of adaptation to a new steering device does not benefit from a 2 h midday nap. We speculate that in the case of strongly overlearned motor patterns such as normal cycling, diurnal sleep spindles and rapid eye movement sleep might even help to protect everyday skills and to rapidly forget newly acquired, interfering, and irrelevant material. © 2014 The Authors. Journal of Sleep Research published by John Wiley & Sons Ltd on behalf of European Sleep Research Society.

  17. Childhood clumsiness and peer victimization: a case–control study of psychiatric patients

    PubMed Central

    2013-01-01

    Background: Poor motor and social skills as well as peer victimization are commonly reported in both ADHD and autism spectrum disorder. Positive relationships between poor motor and poor social skills, and between poor social skills and peer victimization, are well documented, but the relationship between poor motor skills and peer victimization has not been studied in psychiatric populations. Method: 277 patients (133 males, 144 females), mean age 31 years, investigated for ADHD or autism spectrum disorder in adulthood and with normal intelligence, were interviewed about childhood peer victimization and examined for gross motor skills. The parents completed a comprehensive questionnaire on childhood problems, the Five to Fifteen. The Five to Fifteen is a validated questionnaire with 181 statements that covers various symptoms in childhood across eight different domains, one of them targeting motor skills. Regression models were used to evaluate the relationship between motor skills and the risk and duration of peer victimization, adjusted for sex and diagnosis. Results: Victims were described as more clumsy in childhood than their non-victimized counterparts. A significant independent association was found between reportedly poor childhood gross motor skills and peer victimization (adjusted odds ratio: 2.97 [95% confidence interval: 1.46-6.07], n = 235, p = 0.003). In adulthood, the victimized group performed worse on vertical jumps, a gross motor task, and were lonelier. Other factors that were expected to be associated with peer victimization were not found in this highly selected group. Conclusion: Poor gross motor skills constitute a strong and independent risk factor for peer victimization in childhood, regardless of sex, childhood psychiatric care and diagnosis. PMID:23442984

  18. Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings.

    PubMed

    Wu, Zhijin; Liu, Dongmei; Sui, Yunxia

    2008-02-01

    The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normal distribution of the noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated under the normal assumption do not agree with actual error rates, because the tails of the noise distribution deviate from normality. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
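One common way to make the positional or plate adjustment robust, so that genuine hits do not inflate the noise estimate and cause the overadjustment mentioned above, is a median/MAD score; this is a generic sketch of that idea, not the authors' proposed summary score:

```python
import statistics

def robust_zscores(plate_values):
    """Median/MAD z-scores: a few genuine hits barely move the center or the
    scale estimate, unlike the mean and standard deviation. The 1.4826 factor
    makes the MAD consistent with the standard deviation for normal noise."""
    med = statistics.median(plate_values)
    mad = 1.4826 * statistics.median(abs(v - med) for v in plate_values)
    return [(v - med) / mad for v in plate_values]

# Toy plate: five inactive wells plus one strong hit.
plate = [1.0, 1.1, 0.9, 1.05, 0.95, 3.0]
z = robust_zscores(plate)
print([round(v, 1) for v in z])  # the last well stands out far beyond +/-3
```

A cutoff on such a score, calibrated against an empirically estimated null distribution rather than a normal assumption, is the spirit of the false-discovery-rate result reported above.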

  19. Effects of Head-Mounted Display on the Oculomotor System and Refractive Error in Normal Adolescents.

    PubMed

    Ha, Suk-Gyu; Na, Kun-Hoo; Kweon, Il-Joo; Suh, Young-Woo; Kim, Seung-Hyun

    2016-07-01

To investigate the clinical effects of head-mounted display on the refractive error and oculomotor system in normal adolescents. Sixty volunteers (age: 13 to 18 years) watched either a three-dimensional movie or a virtual reality application on a head-mounted display for 30 minutes. The refractive error (diopters [D]), angle of deviation (prism diopters [PD]) at distance (6 m) and near (33 cm), near point of accommodation, and stereoacuity were measured before, immediately after, and 10 minutes after watching the head-mounted display. The refractive error was presented as spherical equivalent (SE). Refractive error was measured repeatedly after every 10 minutes when a myopic shift greater than 0.15 D was observed after watching the head-mounted display. The mean age of the participants was 14.7 ± 1.3 years and the mean SE before watching head-mounted display was -3.1 ± 2.6 D. One participant in the virtual reality application group was excluded due to motion sickness and nausea. After 30 minutes of watching the head-mounted display, the SE, near point of accommodation, and stereoacuity in both eyes did not change significantly (all P > .05). Immediately after watching the head-mounted display, an esophoric shift was observed (0.6 ± 1.5 to 0.2 ± 1.5 PD), although it was not significant (P = .06). Transient myopic shifts of 17.2% to 30% were observed immediately after watching the head-mounted display in both groups, but recovered fully within 40 minutes after watching the head-mounted display. There were no significant clinical effects of watching head-mounted display for 30 minutes on the normal adolescent eye. Transient changes in refractive error and binocular alignment were noted, but were not significant. [J Pediatr Ophthalmol Strabismus. 2016;53(4):238-245.]. Copyright 2016, SLACK Incorporated.

  20. Multilevel Sequential Monte Carlo Samplers for Normalizing Constants

    DOE PAGES

    Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...

    2017-08-24

This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated with posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
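The cost-balancing idea can be illustrated with a plain multilevel Monte Carlo toy; this is an assumed simplification (the paper uses SMC for normalizing-constant ratios, while this sketch estimates a simple expectation with a telescoping sum over grid levels).

```python
# Sketch (assumed illustration): multilevel estimation balances discretization
# error against sampling error by taking many cheap samples at the coarsest
# level and only a few samples of the small corrections at finer levels.
import random

def f_level(x, n):
    """Level-n approximation of f(x) = x*x: evaluate x snapped to a grid of
    spacing 1/n, standing in for an expensive solve on a level-n mesh."""
    return (round(x * n) / n) ** 2

def multilevel_estimate(levels, samples, rng):
    """Telescoping sum: E[f_L] = E[f_{l0}] + sum over l of E[f_l - f_{l-1}]."""
    est = sum(f_level(rng.random(), levels[0]) for _ in range(samples[0])) / samples[0]
    for lvl in range(1, len(levels)):
        total = 0.0
        for _ in range(samples[lvl]):
            x = rng.random()
            total += f_level(x, levels[lvl]) - f_level(x, levels[lvl - 1])
        est += total / samples[lvl]
    return est

# E[X^2] for X ~ Uniform(0, 1) is 1/3; the estimate should land nearby.
estimate = multilevel_estimate([4, 16, 64], [4000, 400, 40], random.Random(1))
```

Note how the sample counts shrink as the levels refine: the level differences have small variance, so few samples suffice there, which is the source of the cost reduction the abstract describes.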

  1. Sunrise/sunset thermal shock disturbance analysis and simulation for the TOPEX satellite

    NASA Technical Reports Server (NTRS)

    Dennehy, C. J.; Welch, R. V.; Zimbelman, D. F.

    1990-01-01

    It is shown here that during normal on-orbit operations the TOPEX low-earth orbiting satellite is subjected to an impulsive disturbance torque caused by rapid heating of its solar array when entering and exiting the earth's shadow. Error budgets and simulation results are used to demonstrate that this sunrise/sunset torque disturbance is the dominant Normal Mission Mode (NMM) attitude error source. The detailed thermomechanical modeling, analysis, and simulation of this torque is described, and the predicted on-orbit performance of the NMM attitude control system in the face of the sunrise/sunset disturbance is presented. The disturbance results in temporary attitude perturbations that exceed NMM pointing requirements. However, they are below the maximum allowable pointing error which would cause the radar altimeter to break lock.

  2. Magnetic resonance imaging, computed tomography, and gross anatomy of the canine tarsus.

    PubMed

    Deruddere, Kirsten J; Milne, Marjorie E; Wilson, Kane M; Snelling, Sam R

    2014-11-01

    To describe the normal anatomy of the soft tissues of the canine tarsus as identified on computed tomography (CT) and magnetic resonance imaging (MRI) and to evaluate specific MRI sequences and planes for observing structures of diagnostic interest. Prospective descriptive study. Canine cadavers (n = 3). A frozen cadaver pelvic limb was used to trial multiple MRI sequences using a 1.5 T superconducting magnet and preferred sequences were selected. Radiographs of 6 canine cadaver pelvic limbs confirmed the tarsi were radiographically normal. A 16-slice CT scanner was used to obtain 1 mm contiguous slices through the tarsi. T1-weighted, proton density with fat suppression (PD FS) and T2-weighted MRI sequences were obtained in the sagittal plane, T1-weighted, and PD FS sequences in the dorsal plane and PD FS sequences in the transverse plane. The limbs were frozen for one month and sliced into 4-5 mm thick frozen sections. Anatomic sections were photographed and visually correlated to CT and MR images. Most soft tissue structures were easiest to identify on the transverse MRI sections with cross reference to either the sagittal or dorsal plane. Bony structures were easily identified on all CT, MR, and gross sections. The anatomy of the canine tarsus can be readily identified on MR imaging. © Copyright 2014 by The American College of Veterinary Surgeons.

  3. Improved Algorithm For Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1989-01-01

    Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.

  4. 31 CFR 594.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this....505 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and...

  5. 31 CFR 543.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this....505 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... charges to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary...

  6. Mimicking aphasic semantic errors in normal speech production: evidence from a novel experimental paradigm.

    PubMed

    Hodgson, Catherine; Lambon Ralph, Matthew A

    2008-01-01

Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced to living items than non-living items, a pattern seen in both semantic dementia and semantically-impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.

  7. Effects of a cochlear implant simulation on immediate memory in normal-hearing adults

    PubMed Central

    Burkholder, Rose A.; Pisoni, David B.; Svirsky, Mario A.

    2012-01-01

This study assessed the effects of stimulus misidentification and memory processing errors on immediate memory span in 25 normal-hearing adults exposed to degraded auditory input simulating signals provided by a cochlear implant. The identification accuracy of degraded digits in isolation was measured before digit span testing. Forward and backward digit spans were shorter when digits were degraded than when they were normal. Participants’ normal digit spans and their accuracy in identifying isolated digits were used to predict digit spans in the degraded speech condition. The observed digit spans in degraded conditions did not differ significantly from predicted digit spans. This suggests that the decrease in memory span is related primarily to misidentification of digits rather than to memory processing errors related to cognitive load. These findings provide complementary information to earlier research on the auditory memory span of listeners exposed to degraded speech either experimentally or as a consequence of hearing impairment. PMID:16317807
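One simple way to derive such a prediction is an independence model: recalling a length-L list requires identifying all L degraded digits. This is an assumption for illustration only; the abstract does not give the authors' exact formula, and `predicted_degraded_span` is a hypothetical helper.

```python
# Sketch (assumed model): predict the degraded-condition span from the
# normal-condition span and the per-digit identification probability.

def predicted_degraded_span(normal_span, p_identify, criterion=0.5):
    """Largest list length, capped at the normal span, for which the chance
    of identifying every digit (p_identify ** length) stays >= criterion."""
    span = 0
    for length in range(1, int(normal_span) + 1):
        if p_identify ** length >= criterion:
            span = length
        else:
            break
    return span

predicted_degraded_span(7, 0.80)  # -> 3: misidentification alone shortens the span
```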

  8. Checklists and Monitoring in the Cockpit: Why Crucial Defenses Sometimes Fail

    NASA Technical Reports Server (NTRS)

    Dismukes, R. Key; Berman, Ben

    2010-01-01

Checklists and monitoring are two essential defenses against equipment failures and pilot errors. Problems with checklist use and pilots' failures to monitor adequately have a long history in aviation accidents. This study was conducted to explore why checklists and monitoring sometimes fail to catch errors and equipment malfunctions as intended. Flight crew procedures were observed from the cockpit jumpseat during normal airline operations in order to: 1) collect data on monitoring and checklist use in cockpit operations in typical flight conditions; 2) provide a plausible cognitive account of why deviations from formal checklist and monitoring procedures sometimes occur; 3) lay a foundation for identifying ways to reduce vulnerability to inadvertent checklist and monitoring errors; 4) compare checklist and monitoring execution in normal flights with performance issues uncovered in accident investigations; and 5) suggest ways to improve the effectiveness of checklists and monitoring. Cognitive explanations for deviations from prescribed procedures are provided, along with suggestions for countermeasures for vulnerability to error.

  9. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor’s position by means of a joystick. The cursor’s position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor’s trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification.
The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
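The reported metrics can be reproduced from a 2 × 2 confusion matrix; the counts in the sketch below are illustrative, not the study's data.

```python
# Sketch (assumed illustration): TPR, TNR, and Cohen's kappa from the four
# cells of a binary confusion matrix (error vs. correct trials).

def binary_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, Cohen's kappa) from raw counts."""
    total = tp + fn + tn + fp
    tpr = tp / (tp + fn)  # true positive rate (sensitivity)
    tnr = tn / (tn + fp)  # true negative rate (specificity)
    p_observed = (tp + tn) / total
    # Chance agreement for Cohen's kappa from the marginal frequencies.
    p_yes = ((tp + fn) / total) * ((tp + fp) / total)
    p_no = ((tn + fp) / total) * ((tn + fn) / total)
    p_chance = p_yes + p_no
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return tpr, tnr, kappa

tpr, tnr, kappa = binary_metrics(tp=40, fn=10, tn=45, fp=5)  # -> 0.8, 0.9, 0.7
```

Kappa corrects raw accuracy for agreement expected by chance, which is why it is the preferred single-number summary when the error and correct classes are imbalanced.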

  10. FLIR Common Module Design Manual. Revision 1

    DTIC Science & Technology

    1978-03-01

degrade off-axis. The afocal assembly is very critical to system performance and normally constitutes a significant portion of the system...not significantly degrade the performance at 10 lp/mm because chromatic errors are about 1/2 of the diffraction error. The chromatic errors are... degradation, though only 3 percent, is unavoidable. It is caused by field curvature in the galilean afocal assembly. This field curvature is

  11. Hemispheric dominance during the mental rotation task in patients with schizophrenia.

    PubMed

    Chen, Jiu; Yang, Laiqi; Zhao, Jin; Li, Lanlan; Liu, Guangxiong; Ma, Wentao; Zhang, Yan; Wu, Xingqu; Deng, Zihe; Tuo, Ran

    2012-04-01

    Mental rotation is a spatial representation conversion capability using an imagined object and either object or self-rotation. This capability is impaired in schizophrenia. To provide a more detailed assessment of impaired cognitive functioning in schizophrenia by comparing the electrophysiological profiles of patients with schizophrenia and controls while completing a mental rotation task using both normally-oriented images and mirror images. This electroencephalographic study compared error rates, reaction times and the topographic map of event-related potentials in 32 participants with schizophrenia and 29 healthy controls during mental rotation tasks involving both normal images and mirror images. Among controls the mean error rate and the mean reaction time for normal images and mirror images were not significantly different but in the patient group the mean (sd) error rate was higher for mirror images than for normal images (42% [6%] vs. 32% [9%], t=2.64, p=0.031) and the mean reaction time was longer for mirror images than for normal images (587 [11] ms vs. 571 [18] ms, t=2.83, p=0.028). The amplitude of the P500 component at Pz (parietal area), Cz (central area), P3 (left parietal area) and P4 (right parietal area) were significantly lower in the patient group than in the control group for both normal images and mirror images. In both groups the P500 for both the normal and mirror images was significantly higher in the right parietal area (P4) compared with left parietal area (P3). The mental rotation abilities of patients with schizophrenia for both normally-oriented images and mirror images are impaired. Patients with schizophrenia show a diminished left cerebral contribution to the mental rotation task, a more rapid response time, and a differential response to normal images versus mirror images not seen in healthy controls. Specific topographic characteristics of the EEG during mental rotation tasks are potential biomarkers for schizophrenia.

  12. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, S

    2015-06-15

Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
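The process-control quantities named in the abstract reduce to a few formulas; this sketch (with made-up readings, not the study's data) shows individuals-chart control limits and the Cp capability ratio.

```python
# Sketch (assumed illustration): control limits and process capability for a
# series of constancy-check readings.
from statistics import mean, stdev

def control_limits(readings, k=3.0):
    """Individuals-chart limits: mean +/- k sample standard deviations."""
    m, s = mean(readings), stdev(readings)
    return m - k * s, m + k * s

def process_capability(readings, lower_spec, upper_spec):
    """Cp ratio: > 1 means the 6-sigma process spread fits the specification."""
    return (upper_spec - lower_spec) / (6.0 * stdev(readings))

readings = [1.00, 1.01, 0.99, 1.00, 1.02, 0.98, 1.00, 1.01]
lcl, ucl = control_limits(readings)
cp = process_capability(readings, lower_spec=0.95, upper_spec=1.05)
```

A point outside `(lcl, ucl)` flags a potential systematic error even while every reading still sits inside the wider TG-142-style specification limits, which is the central finding of the abstract.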

  13. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. 
Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Dosimetric consequences of translational and rotational errors in frame-less image-guided radiosurgery

    PubMed Central

    2012-01-01

Background To investigate geometric and dosimetric accuracy of frame-less image-guided radiosurgery (IG-RS) for brain metastases. Methods and materials Single fraction IG-RS was practiced in 72 patients with 98 brain metastases. Patient positioning and immobilization used either double- (n = 71) or single-layer (n = 27) thermoplastic masks. Pre-treatment set-up errors (n = 98) were evaluated with cone-beam CT (CBCT) based image-guidance (IG) and were corrected in six degrees of freedom without an action level. CBCT imaging after treatment measured intra-fractional errors (n = 64). Pre- and post-treatment errors were simulated in the treatment planning system and target coverage and dose conformity were evaluated. Three scenarios of 0 mm, 1 mm and 2 mm GTV-to-PTV (gross tumor volume, planning target volume) safety margins (SM) were simulated. Results Errors prior to IG were 3.9 mm ± 1.7 mm (3D vector) and the maximum rotational error was 1.7° ± 0.8° on average. The post-treatment 3D error was 0.9 mm ± 0.6 mm. No differences between double- and single-layer masks were observed. Intra-fractional errors were significantly correlated with the total treatment time with 0.7 mm ± 0.5 mm and 1.2 mm ± 0.7 mm for treatment times ≤23 minutes and >23 minutes (p < 0.01), respectively. Simulation of RS without image-guidance reduced target coverage and conformity to 75% ± 19% and 60% ± 25% of planned values. Each 3D set-up error of 1 mm decreased target coverage and dose conformity by 6% and 10% on average, respectively, with a large inter-patient variability. Pre-treatment correction of translations only but not rotations did not affect target coverage and conformity. Post-treatment errors reduced target coverage by >5% in 14% of the patients. A 1 mm safety margin fully compensated intra-fractional patient motion. Conclusions IG-RS with online correction of translational errors achieves high geometric and dosimetric accuracy.
Intra-fractional errors decrease target coverage and conformity unless compensated with appropriate safety margins. PMID:22531060
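The 3D vector error quoted in the abstract is the Euclidean norm of the translational shifts; a minimal sketch, with illustrative shift values:

```python
# Sketch (assumed illustration): 3D set-up error magnitude and a simple
# check of whether a safety margin absorbs the residual shift.
import math

def vector_error_3d(dx, dy, dz):
    """Length of the translational set-up error vector (same units as inputs)."""
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def within_margin(dx, dy, dz, margin):
    """True if the residual shift is fully absorbed by a GTV-to-PTV margin."""
    return vector_error_3d(dx, dy, dz) <= margin

residual = vector_error_3d(0.5, 0.5, 0.5)  # mm; about 0.87 mm, inside a 1 mm margin
```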

  15. Impairment of perception and recognition of faces, mimic expression and gestures in schizophrenic patients.

    PubMed

    Berndl, K; von Cranach, M; Grüsser, O J

    1986-01-01

    The perception and recognition of faces, mimic expression and gestures were investigated in normal subjects and schizophrenic patients by means of a movie test described in a previous report (Berndl et al. 1986). The error scores were compared with results from a semi-quantitative evaluation of psychopathological symptoms and with some data from the case histories. The overall error scores found in the three groups of schizophrenic patients (paranoic, hebephrenic, schizo-affective) were significantly increased (7-fold) over those of normals. No significant difference in the distribution of the error scores in the three different patient groups was found. In 10 different sub-tests following the movie the deficiencies found in the schizophrenic patients were analysed in detail. The error score for the averbal test was on average higher in paranoic patients than in the two other groups of patients, while the opposite was true for the error scores found in the verbal tests. Age and sex had some impact on the test results. In normals, female subjects were somewhat better than male. In schizophrenic patients the reverse was true. Thus female patients were more affected by the disease than male patients with respect to the task performance. The correlation between duration of the disease and error score was small; less than 10% of the error scores could be attributed to factors related to the duration of illness. Evaluation of psychopathological symptoms indicated that the stronger the schizophrenic defect, the higher the error score, but again this relationship was responsible for not more than 10% of the errors. The estimated degree of acute psychosis and overall sum of psychopathological abnormalities as scored in a semi-quantitative exploration did not correlate with the error score, but with each other. Similarly, treatment with psychopharmaceuticals, previous misuse of drugs or of alcohol had practically no effect on the outcome of the test data. 
The analysis of performance and test data of schizophrenic patients indicated that our findings are most likely not due to a "non-specific" impairment of cognitive function in schizophrenia, but point to a fairly selective defect in elementary cognitive visual functions necessary for averbal social communication. Some possible explanations of the data are discussed in relation to neuropsychological and neurophysiological findings on "face-specific" cortical areas located in the primate temporal lobe.

  16. Confronting Passive and Active Sensors with Non-Gaussian Statistics

    PubMed Central

    Rodríguez-Gonzálvez, Pablo.; Garcia-Gago, Jesús.; Gomez-Lahoz, Javier.; González-Aguilera, Diego.

    2014-01-01

    This paper has two motivations: firstly, to compare the Digital Surface Models (DSM) derived by passive (digital camera) and by active (terrestrial laser scanner) remote sensing systems when applied to specific architectural objects, and secondly, to test how well the Gaussian classic statistics, with its Least Squares principle, adapts to data sets where asymmetrical gross errors may appear and whether this approach should be changed for a non-parametric one. The field of geomatic technology automation is immersed in a high demanding competition in which any innovation by one of the contenders immediately challenges the opponents to propose a better improvement. Nowadays, we seem to be witnessing an improvement of terrestrial photogrammetry and its integration with computer vision to overcome the performance limitations of laser scanning methods. Through this contribution some of the issues of this “technological race” are examined from the point of view of photogrammetry. A new software is introduced and an experimental test is designed, performed and assessed to try to cast some light on this thrilling match. For the case considered in this study, the results show good agreement between both sensors, despite considerable asymmetry. This asymmetry suggests that the standard Normal parameters are not adequate to assess this type of data, especially when accuracy is of importance. In this case, standard deviation fails to provide a good estimation of the results, whereas the results obtained for the Median Absolute Deviation and for the Biweight Midvariance are more appropriate measures. PMID:25196104
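The two robust scale estimators the authors recommend can be computed as follows; this is a generic sketch (MAD unscaled, biweight midvariance in Wilcox's formulation), not the paper's software.

```python
# Sketch (assumed illustration): robust scale estimators that resist the
# asymmetrical gross errors described in the abstract.
from statistics import median

def mad(xs):
    """Median absolute deviation from the median (unscaled)."""
    m = median(xs)
    return median(abs(x - m) for x in xs)

def biweight_midvariance(xs, c=9.0):
    """Biweight midvariance: points more than c MADs from the median get
    zero weight, so a few gross errors barely influence the estimate."""
    m = median(xs)
    s = mad(xs)
    if s == 0:
        return 0.0
    num = den = 0.0
    for x in xs:
        u = (x - m) / (c * s)
        if abs(u) < 1.0:
            num += (x - m) ** 2 * (1.0 - u * u) ** 4
            den += (1.0 - u * u) * (1.0 - 5.0 * u * u)
    return len(xs) * num / (den * den)

# A single gross error inflates the classical variance but not the robust scale.
data = [float(x) for x in range(1, 21)] + [1000.0]
robust = biweight_midvariance(data)
```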

  17. Causality and cointegration analysis between macroeconomic variables and the Bovespa.

    PubMed

    da Silva, Fabiano Mello; Coronel, Daniel Arruda; Vieira, Kelmara Mendes

    2014-01-01

    The aim of this study is to analyze the causality relationship among a set of macroeconomic variables, represented by the exchange rate, interest rate, inflation (CPI), industrial production index as a proxy for gross domestic product in relation to the index of the São Paulo Stock Exchange (Bovespa). The period of analysis corresponded to the months from January 1995 to December 2010, making a total of 192 observations for each variable. Johansen tests, through the statistics of the trace and of the maximum eigenvalue, indicated the existence of at least one cointegration vector. In the analysis of Granger (1988) causality tests via error correction, it was found that a short-term causality existed between the CPI and the Bovespa. Regarding the Granger (1988) long-term causality, the results indicated a long-term behaviour among the macroeconomic variables with the BOVESPA. The results of the long-term normalized vector for the Bovespa variable showed that most signals of the cointegration equation parameters are in accordance with what is suggested by the economic theory. In other words, there was a positive behaviour of the GDP and a negative behaviour of the inflation and of the exchange rate (expected to be a positive relationship) in relation to the Bovespa, with the exception of the Selic rate, which was not significant with that index. The variance of the Bovespa was explained by itself in over 90% at the twelfth month, followed by the country risk, with less than 5%.


  19. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observed equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
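The core computation, forming and solving the weighted normal equations (A^T W A) x = A^T W b, can be sketched for a tiny dense system; this toy solver is illustrative and is not the USGS program.

```python
# Sketch (assumed illustration): weighted least-squares via normal equations.
# A: list of m observation rows (each of length n), w: m weights, b: m values.

def solve_weighted_normal_equations(A, w, b):
    m, n = len(A), len(A[0])
    # Form N = A^T W A and t = A^T W b.
    N = [[sum(w[k] * A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    t = [sum(w[k] * A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Solve N x = t by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(N[r][col]))
        N[col], N[piv] = N[piv], N[col]
        t[col], t[piv] = t[piv], t[col]
        for r in range(col + 1, n):
            f = N[r][col] / N[col][col]
            for c in range(col, n):
                N[r][c] -= f * N[col][c]
            t[r] -= f * t[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (t[i] - sum(N[i][j] * x[j] for j in range(i + 1, n))) / N[i][i]
    return x

# Equally weighted line fit y = a + b*x through (0,1), (1,3), (2,5) -> a=1, b=2.
coeffs = solve_weighted_normal_equations([[1, 0], [1, 1], [1, 2]], [1, 1, 1], [1, 3, 5])
```

In the survey-adjustment setting the solution vector plays the role of the coordinate shifts at adjustable stations, and the weights encode the relative precision of direction, azimuth, and distance observations.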

  20. An Ensemble System Based on Hybrid EGARCH-ANN with Different Distributional Assumptions to Predict S&P 500 Intraday Volatility

    NASA Astrophysics Data System (ADS)

    Lahmiri, S.; Boukadoum, M.

    2015-10-01

    Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility forecasting is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN), forecasting stock market volatility under normal, Student-t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, so that the models complement one another. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on the mean absolute error and the mean squared error, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intraday volatility over one- and five-minute time horizons.
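    A minimal sketch of the ensemble step, combining three forecast streams and scoring them against realized volatility by mean absolute error (the three forecasts below are synthetic stand-ins for the EGARCH-BPNN outputs, not fitted models):

```python
import numpy as np

rng = np.random.default_rng(1)
realized = np.abs(rng.normal(1.0, 0.2, size=1000))    # stand-in realized volatility

# Stand-ins for the three hybrid forecasts (normal, Student-t, GED),
# each tracking the target with its own independent error pattern.
f_normal = realized + rng.normal(0.0, 0.10, size=1000)
f_student = realized + rng.normal(0.0, 0.12, size=1000)
f_ged = realized + rng.normal(0.0, 0.11, size=1000)

ensemble = (f_normal + f_student + f_ged) / 3.0       # simple average combiner

def mae(forecast):
    """Mean absolute error against realized volatility."""
    return np.mean(np.abs(forecast - realized))

print(round(mae(f_normal), 4), round(mae(ensemble), 4))
```

    Averaging the three streams cancels part of their independent errors, which is the complementarity argument the paper makes for the ensemble.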

  1. Calibration and temperature correction of heat dissipation matric potential sensors

    USGS Publications Warehouse

    Flint, A.L.; Campbell, G.S.; Ellett, K.M.; Calissendorff, C.

    2002-01-01

    This paper describes how heat dissipation sensors, used to measure soil water matric potential, were analyzed to develop a normalized calibration equation and a temperature correction method. Inference of soil matric potential depends on a correlation between the variable thermal conductance of the sensor's porous ceramic and matric potential. Although this correlation varies among sensors, we demonstrate a normalizing procedure that produces a single calibration relationship. Using sensors from three sources and different calibration methods, the normalized calibration resulted in a mean absolute error of 23% over a matric potential range of -0.01 to -35 MPa. Because the thermal conductivity of variably saturated porous media is temperature dependent, a temperature correction is required for application of heat dissipation sensors in field soils. A temperature correction procedure is outlined that reduces temperature-dependent errors tenfold, which reduces the matric potential measurement errors by more than 30%. The temperature dependence is well described by a thermal conductivity model that allows measurements at any temperature to be corrected to the calibration temperature.

  2. Collaborative Localization Algorithms for Wireless Sensor Networks with Reduced Localization Error

    PubMed Central

    Sahoo, Prasan Kumar; Hwang, I-Shyan

    2011-01-01

    Localization is an important research issue in Wireless Sensor Networks (WSNs). Though the Global Positioning System (GPS) can be used to locate the positions of the sensors, it is unfortunately limited to outdoor applications and is costly and power consuming. In order to locate sensor nodes without the help of GPS, collaboration among nodes is essential so that localization can be accomplished efficiently. In this paper, novel localization algorithms are proposed to find the possible locations of normal nodes in a collaborative manner for an outdoor environment, with the help of a few beacon and anchor nodes. In our localization scheme, at most three beacon nodes need to collaborate to determine the accurate location of any normal node. Besides, analytical methods are designed to calculate and reduce the localization error using a probability distribution function. Performance evaluation of our algorithm shows that there is a tradeoff between the number of deployed beacon nodes and the localization error, and that the average localization time of the network increases with the number of normal nodes deployed over a region. PMID:22163738
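    The three-beacon collaboration can be sketched as standard trilateration solved by linear least squares (beacon positions and ranges below are hypothetical, and real range measurements would carry noise the paper's analytical methods account for):

```python
import numpy as np

# Hypothetical beacon positions (known) and the true node position to recover.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
node = np.array([3.0, 4.0])
d = np.linalg.norm(beacons - node, axis=1)     # ranges measured to each beacon

# Subtracting the first range equation from the others linearizes the system:
#   2 (b_i - b_0) . p = |b_i|^2 - |b_0|^2 - d_i^2 + d_0^2
A = 2.0 * (beacons[1:] - beacons[0])
b = (np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2)
     - d[1:] ** 2 + d[0] ** 2)
estimate = np.linalg.lstsq(A, b, rcond=None)[0]
print(estimate)   # → close to [3. 4.]
```

    With exact ranges the estimate matches the true position; with noisy ranges the least-squares formulation spreads the error across the beacon constraints.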

  3. Proprioception in patients with posterior cruciate ligament tears: A meta-analysis comparison of reconstructed and contralateral normal knees

    PubMed Central

    Ko, Seung-Nam

    2017-01-01

    Posterior cruciate ligament (PCL) reconstruction for patients with PCL insufficiency has been associated with postoperative improvements in proprioceptive function due to mechanoreceptor regeneration. However, it is unclear whether reconstructed PCL knees or contralateral normal knees have better proprioceptive outcomes. This meta-analysis was designed to compare the proprioceptive function of reconstructed PCL and contralateral normal knees in patients with PCL insufficiency. All studies that compared proprioceptive function, as assessed by the threshold to detect passive movement (TTDPM) or joint position sense (JPS), in PCL reconstructed and contralateral normal knees were included. JPS was calculated by reproducing passive positioning (RPP). Five studies met the inclusion/exclusion criteria for the meta-analysis. Proprioceptive function, defined by TTDPM (95% CI: 0.25 to 0.51°; P<0.00001) and RPP (95% CI: 0.19 to 0.45°; P<0.00001), was significantly different between the reconstructed PCL and contralateral normal knees. The mean difference in angle of error between the reconstructed PCL and contralateral normal knees was 0.06° greater by TTDPM than by RPP. In addition, results from subgroup analyses, based on the starting angles and the moving directions of the knee, that evaluated TTDPM at 15° flexion to 45° extension, TTDPM at 45° flexion to 110° flexion, RPP in flexion, and RPP in extension demonstrated that mean angles of error were significantly greater, by 0.38° (P = 0.0001), 0.36° (P = 0.02), 0.36° (P<0.00001), and 0.23° (P = 0.04), respectively, in reconstructed PCL than in contralateral normal knees. The proprioceptive function of PCL reconstructed knees was decreased compared with contralateral normal knees, as determined by both TTDPM and RPP. In addition, the loss of proprioception was greater by TTDPM than by RPP, even though the differences were minute. Results from the subgroup analysis that evaluated the mean angles of error in moving directions through RPP suggested that the moving direction of flexion has a significantly greater mean angle of error than the moving direction of extension. Although the differences between the various parameters were statistically significant, further studies are needed to determine whether such small differences (<1°) in the loss of proprioception are clinically relevant. PMID:28922423

  4. WE-D-BRA-04: Online 3D EPID-Based Dose Verification for Optimum Patient Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spreeuw, H; Rozendaal, R; Olaciregui-Ruiz, I

    2015-06-15

    Purpose: To develop an online 3D dose verification tool based on EPID transit dosimetry to ensure optimum patient safety in radiotherapy treatments. Methods: A new software package was developed which processes EPID portal images online using a back-projection algorithm for the 3D dose reconstruction. The package processes portal images faster than the acquisition rate of the portal imager (∼ 2.5 fps). After a portal image is acquired, the software searches for “hot spots” in the reconstructed 3D dose distribution. A hot spot is defined in this study as a 4 cm³ cube where the average cumulative reconstructed dose exceeds the average total planned dose by at least 20% and 50 cGy. If a hot spot is detected, an alert is generated resulting in a linac halt. The software has been tested by irradiating an Alderson phantom after introducing various types of serious delivery errors. Results: In our first experiment the Alderson phantom was irradiated with two arcs from a 6 MV VMAT H&N treatment having a large leaf position error or a large monitor unit error. For both arcs and both errors the linac was halted before dose delivery was completed. When no error was introduced, the linac was not halted. The complete processing of a single portal frame, including hot spot detection, takes about 220 ms on a dual hexa-core Intel Xeon X5650 CPU at 2.66 GHz. Conclusion: A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for various kinds of gross delivery errors. The detection of hot spots was proven to be effective for the timely detection of these errors. Current work is focused on hot spot detection criteria for various treatment sites and the introduction of a clinical pilot program with online verification of hypo-fractionated (lung) treatments.
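    The hot-spot criterion can be sketched as a brute-force scan over a reconstructed 3D dose grid (grid size, voxel spacing, block shape, and dose values below are illustrative only; the paper's back-projection reconstruction is not reproduced):

```python
import numpy as np

# Hypothetical 3D dose grids; a 4 cm^3 hot-spot volume is approximated
# here by a 2x2x1 voxel block on a made-up grid spacing.
planned = np.full((20, 20, 20), 200.0)        # planned dose, cGy
reconstructed = planned.copy()
reconstructed[5:7, 5:7, 5] += 60.0            # inject a localized overdose

def has_hot_spot(recon, plan, rel=0.20, abs_cgy=50.0, block=(2, 2, 1)):
    """Scan all blocks; flag if the mean reconstructed dose exceeds the
    mean planned dose by both `rel` (fraction) and `abs_cgy` (cGy)."""
    bx, by, bz = block
    nx, ny, nz = recon.shape
    for i in range(nx - bx + 1):
        for j in range(ny - by + 1):
            for k in range(nz - bz + 1):
                excess = (recon[i:i+bx, j:j+by, k:k+bz].mean()
                          - plan[i:i+bx, j:j+by, k:k+bz].mean())
                if excess > abs_cgy and excess > rel * plan[i:i+bx, j:j+by, k:k+bz].mean():
                    return True
    return False

print(has_hot_spot(reconstructed, planned))   # → True
print(has_hot_spot(planned, planned))         # → False
```

    In an online setting this check would run once per acquired frame against the cumulative reconstructed dose, halting the linac on the first True.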

  5. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    PubMed

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on the correlation and covariance between features in a data set, is proposed to provide predictive guidance for the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of the normalized values of the anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line-base normalization. To evaluate the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, together with the statistical relation factor R2 and the average deviation. The results show that the CCSNM outperformed the other normalization methods in estimating the effect of the trainer.
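    The CCSNM formula itself is not given in the abstract, but three of the baseline methods it was compared against are standard and easy to sketch (the feature vector below is made up, not EMG data):

```python
import numpy as np

# Illustrative feature vector (values are made up).
x = np.array([12.0, 15.0, 11.0, 30.0, 14.0, 13.0])

minmax = (x - x.min()) / (x.max() - x.min())             # min-max: maps to [0, 1]
zscore = (x - x.mean()) / x.std()                        # z-score: zero mean, unit std
decimal = x / 10 ** np.ceil(np.log10(np.abs(x).max()))   # decimal scaling: |values| < 1

print(minmax.min(), minmax.max())   # → 0.0 1.0
print(np.abs(decimal).max())
```

    Each baseline rescales features independently; the paper's contribution is to let the correlation and covariance between features shape the normalization instead.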

  6. Bayesian inversions of a dynamic vegetation model in four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; François, L.

    2015-01-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreement between measured and simulated data during calibration is comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but Nash-Sutcliffe efficiency scores remain above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we showed that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared with inversions that assume a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in model parameter posterior distributions that are overall similar to those obtained by fixing these variances beforehand, while slightly improving model performance. Although the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides model behaviour, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
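    The two goodness-of-fit scores used above are straightforward to compute; a short sketch with made-up daily GPP values (g C m-2 day-1), not the study's data:

```python
import numpy as np

def rmse(obs, sim):
    """Root-mean-square error between simulated and observed series."""
    return np.sqrt(np.mean((sim - obs) ** 2))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 means no better than
    predicting the observed mean everywhere."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Illustrative observed and simulated daily GPP (values are made up).
obs = np.array([2.0, 3.5, 5.0, 6.5, 4.0, 3.0, 2.5])
sim = np.array([2.2, 3.0, 5.5, 6.0, 4.5, 2.8, 2.9])

print(round(rmse(obs, sim), 3))
print(round(nse(obs, sim), 3))
```

    An NSE above 0.5, the threshold cited in the abstract, indicates the simulation explains more than half of the observed variance.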

  7. Effect of retinol on the hyperthermal response of normal tissue in vivo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, M.A.; Marigold, J.C.L.; Hume, S.P.

    The effect of prior administration of retinol, a membrane labilizer, on the in vivo hyperthermal response of lysosomes was investigated in the mouse spleen using a quantitative histochemical assay for the lysosomal enzyme acid phosphatase. A dose of retinol which had no effect when given alone enhanced the thermal response of the lysosome, causing an increase in lysosomal membrane permeability. In contrast, the same dose of retinol had no effect on the gross hyperthermal response of mouse intestine, a tissue which is relatively susceptible to hyperthermia. Thermal damage to intestine was assayed directly by crypt loss 1 day after treatment, or assessed as thermal enhancement of x-ray damage by counting crypt microcolonies 4 days after a combined heat and x-ray treatment. Thus, although the hyperthermal response of the lysosome could be enhanced by the administration of retinol, thermal damage at a gross tissue level appeared to be unaffected, suggesting that lysosomal membrane injury is unlikely to be a primary event in hyperthermal cell killing.

  8. Seismic Excitation of the Polar Motion, 1977-1993

    NASA Technical Reports Server (NTRS)

    Chao, Benjamin Fong; Gross, Richard S.; Han, Yan-Ben

    1996-01-01

    The mass redistribution in the earth as a result of an earthquake faulting changes the earth's inertia tensor, and hence its rotation. Using the complete formulae developed by Chao and Gross (1987) based on the normal mode theory, we calculated the earthquake-induced polar motion excitation for the largest 11,015 earthquakes that occurred during 1977.0-1993.6. The seismic excitations in this period are found to be two orders of magnitude below the detection threshold even with today's high precision earth rotation measurements. However, it was calculated that an earthquake of only one tenth the size of the great 1960 Chile event, if it happened today, could be comfortably detected in polar motion observations. Furthermore, collectively these seismic excitations have a strong statistical tendency to nudge the pole towards approximately 140° E, away from the actually observed polar drift direction. This non-random behavior, similarly found in other earthquake-induced changes in earth rotation and low-degree gravitational field by Chao and Gross (1987), manifests some geodynamic behavior yet to be explored.

  9. Seismic Excitation of the Polar Motion

    NASA Technical Reports Server (NTRS)

    Chao, Benjamin Fong; Gross, Richard S.; Han, Yan-Ben

    1996-01-01

    The mass redistribution in the earth as a result of an earthquake faulting changes the earth's inertia tensor, and hence its rotation. Using the complete formulae developed by Chao and Gross (1987) based on the normal mode theory, we calculated the earthquake-induced polar motion excitation for the largest 11,015 earthquakes that occurred during 1977.0-1993.6. The seismic excitations in this period are found to be two orders of magnitude below the detection threshold even with today's high precision earth rotation measurements. However, it was calculated that an earthquake of only one tenth the size of the great 1960 Chile event, if it happened today, could be comfortably detected in polar motion observations. Furthermore, collectively these seismic excitations have a strong statistical tendency to nudge the pole towards approx. 140° E, away from the actually observed polar drift direction. This non-random behavior, similarly found in other earthquake-induced changes in earth rotation and low-degree gravitational field by Chao and Gross (1987), manifests some geodynamic behavior yet to be explored.

  10. Prevention of anemia alleviates heart hypertrophy in copper deficient rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lure, M.D.; Fields, M.; Lewis, C.G.

    1991-03-11

    The present investigation was designed to examine the role of anemia in the cardiomegaly and myocardial pathology of copper deficiency. Weanling rats were fed a copper deficient diet containing either starch (ST) or fructose (FRU) for five weeks. Six rats consuming the FRU diet were intraperitoneally injected once a week with 1.0 ml/100 g bw of packed red blood cells (RBC) obtained from copper deficient rats fed ST. FRU rats injected with RBC did not develop anemia. Additionally, none of the injected rats exhibited heart hypertrophy or gross pathology, and all survived. In contrast, non-injected FRU rats were anemic, exhibited severe signs of copper deficiency including heart hypertrophy with gross pathology, and 44% died. Maintaining the hematocrit with RBC injections resulted in normal heart histology and prevented the mortality associated with the fructose x copper interaction. The findings suggest that the anemia associated with copper deficiency contributes to heart pathology.

  11. Seismic excitation of the polar motion, 1977-1993

    NASA Astrophysics Data System (ADS)

    Chao, Benjamin Fong; Gross, Richard S.; Han, Yan-Ben

    1996-09-01

    The mass redistribution in the earth as a result of an earthquake faulting changes the earth's inertia tensor, and hence its rotation. Using the complete formulae developed by Chao and Gross (1987) based on the normal mode theory, we calculated the earthquake-induced polar motion excitation for the largest 11,015 earthquakes that occurred during 1977.0-1993.6. The seismic excitations in this period are found to be two orders of magnitude below the detection threshold even with today's high precision earth rotation measurements. However, it was calculated that an earthquake of only one tenth the size of the great 1960 Chile event, if it happened today, could be comfortably detected in polar motion observations. Furthermore, collectively these seismic excitations have a strong statistical tendency to nudge the pole towards ˜140°E, away from the actually observed polar drift direction. This non-random behavior, similarly found in other earthquake-induced changes in earth rotation and low-degree gravitational field by Chao and Gross (1987), manifests some geodynamic behavior yet to be explored.

  12. 31 CFR 547.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this... Policy § 547.505 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... charges to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary...

  13. 31 CFR 544.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... payment or reimbursement for normal service charges owed it by the owner of that blocked account. (b) As... Licensing Policy § 544.505 Entries in certain accounts for normal service charges authorized. (a) A U.S... adjustment charges to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges...

  14. 31 CFR 542.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  15. 31 CFR 546.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  16. 31 CFR 588.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this... § 588.505 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and...

  17. 31 CFR 548.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  18. 31 CFR 545.504 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reimbursement for normal service charges owed it by the owner of that blocked account. (b) As used in this... § 545.504 Entries in certain accounts for normal service charges authorized. (a) A U.S. financial... to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and...

  19. 31 CFR 541.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  20. 31 CFR 551.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  1. 31 CFR 593.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... payment or reimbursement for normal service charges owed it by the owner of that blocked account. (b) As... Licensing Policy § 593.505 Entries in certain accounts for normal service charges authorized. (a) A U.S... adjustment charges to correct bookkeeping errors; and, but not by way of limitation, minimum balance charges...

  2. 31 CFR 537.505 - Entries in certain accounts for normal service charges authorized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... normal service charges owed it by the owner of that blocked account. (b) As used in this section, the... Entries in certain accounts for normal service charges authorized. (a) A U.S. financial institution is... bookkeeping errors; and, but not by way of limitation, minimum balance charges, notary and protest fees, and...

  3. The comparison of cervical repositioning errors according to smartphone addiction grades.

    PubMed

    Lee, Jeonhyeong; Seo, Kyochul

    2014-04-01

    [Purpose] The purpose of this study was to compare cervical repositioning errors according to the smartphone addiction grades of adults in their 20s. [Subjects and Methods] A survey of smartphone addiction was conducted of 200 adults. Based on the survey results, 30 subjects were chosen to participate in this study, and they were divided into three groups of 10: a Normal Group, a Moderate Addiction Group, and a Severe Addiction Group. After attaching a C-ROM, we measured the cervical repositioning errors of flexion, extension, right lateral flexion, and left lateral flexion. [Results] Significant differences in the cervical repositioning errors of flexion, extension, and right and left lateral flexion were found among the Normal Group, Moderate Addiction Group, and Severe Addiction Group. In particular, the Severe Addiction Group showed the largest errors. [Conclusion] The results indicate that as smartphone addiction becomes more severe, a person is more likely to show impaired proprioception, as well as an impaired ability to recognize correct posture. Thus, musculoskeletal problems due to smartphone addiction should be addressed through social cognition and intervention, and through physical therapeutic education and intervention to teach people correct postures.

  4. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve upon a linear equalizer based on minimizing the minimum mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared with traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and an alternative one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has characteristics similar to those of the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
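    For contrast with the negentropy criterion, the MMSE baseline it extends can be sketched as a standard LMS-adapted linear equalizer (channel taps, equalizer length, step size, and noise level below are all made up; the paper's NEGMIN update is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
symbols = rng.choice([-1.0, 1.0], size=n)        # BPSK training sequence

# Hypothetical dispersive channel plus additive noise.
channel = np.array([1.0, 0.4, 0.2])
received = np.convolve(symbols, channel)[:n] + rng.normal(0.0, 0.05, n)

# LMS adaptation of a 7-tap equalizer toward the MMSE solution; the
# NEGMIN criterion would replace this squared-error cost.
taps, mu, delay = np.zeros(7), 0.01, 3
for i in range(7, n):
    window = received[i-7:i][::-1]               # most recent sample first
    err = symbols[i - delay] - taps @ window     # training error
    taps += mu * err * window                    # stochastic-gradient update

# Bit error rate of hard decisions over the last 1000 symbols.
decisions = np.array([np.sign(taps @ received[i-7:i][::-1])
                      for i in range(n - 1000, n)])
ber = np.mean(decisions != symbols[n - 1000 - delay:n - delay])
print(ber)
```

    After training, the hard decisions recover the delayed symbol stream; swapping the squared-error cost for a negentropy-based one is the paper's proposal.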

  5. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, the sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
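    The Monte Carlo idea can be sketched in a few lines: simulate an intermittent, skewed rain process, sample it at satellite revisit times, and look at the spread of the resulting monthly-mean errors (all parameters below are made up, not GATE statistics):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_month(rng, hours=30 * 24, p_rain=0.1):
    """Hourly area-averaged rain rate for one month: mostly zero,
    with skewed gamma-distributed rates when it rains."""
    raining = rng.random(hours) < p_rain
    return np.where(raining, rng.gamma(2.0, 2.0, hours), 0.0)

# Sampling error: monthly mean from 12-hourly overpasses minus the
# true monthly mean, over many simulated months.
errors = []
for _ in range(500):
    rate = simulate_month(rng)
    errors.append(rate[::12].mean() - rate.mean())   # overpass every 12 h
errors = np.array(errors)

print(round(errors.mean(), 3), round(errors.std(), 3))
```

    The error distribution is centred near zero and, as the paper finds for the full simulation, its spread (not the skewed rainfall itself) is what sets the sampling uncertainty.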

  6. The paramedian diencephalic syndrome: a dynamic phenomenon.

    PubMed

    Meissner, I; Sapir, S; Kokmen, E; Stein, S D

    1987-01-01

    The paramedian diencephalic syndrome is characterized by a clinical triad: hypersomnolent apathy, amnesic syndrome, and impaired vertical gaze. We studied 4 cases with computed tomography evidence of bilateral diencephalic infarctions. Each case began abruptly with hypersomnolent apathy followed by fluctuations from appropriate affect, full orientation, and alertness to labile mood, confabulation, and apathy. Speech varied from hypophonia to normal; handwriting varied from legible script to gross scrawl. Psychological testing revealed poor learning and recall, with low performance scores. In 3 patients the predominant abnormality was in downward gaze.

  7. Problem of Single Cell Versus Multicell Origin of a Tumor

    DTIC Science & Technology

    1967-01-01

    variant of glucose-6-phosphate dehydrogenase (G6PD) to study the cell population of leiomyomas of the uterus. G6PD is an enzyme whose gene locus in man...genotype (GdA+) has normal enzyme activity [5]. We have studied leiomyomas of the uterus from females heterozygous for the electrophoretic variant of...G6PD. Leiomyomas are tumors made up of smooth muscle fibers. They are discrete, easy to diagnose on gross examination, available for biochemical analysis

  8. Patterned androgenic alopecia in women.

    PubMed

    Venning, V A; Dawber, R P

    1988-05-01

    Recession of the frontal and frontoparietal hair line in women has been regarded as a marker for pathologic virilization. In a clinical survey of 564 normal women in the population, frontal and frontoparietal recessions were found in 13% of premenopausal and in 37% of postmenopausal women. Patterned hair loss in women is commoner than hitherto described, particularly after the menopause. In the absence of other signs of virilization, "male-pattern" hair loss would therefore appear to be a poor indicator of gross abnormality of androgen metabolism.

  9. Central amaurosis induced by an intraocular, posttraumatic fibrosarcoma in a cat.

    PubMed

    Barrett, P M; Merideth, R E; Alarcon, F L

    1995-01-01

    A 12-year-old, castrated male, domestic shorthair cat with a previous penetrating trauma to the left globe which progressed to a phthisical eye presented for acute blindness. Ophthalmic examination and electroretinography of the right eye were found to be normal. Following euthanasia, gross and microscopic examinations were completed. A left intraocular, posttraumatic fibrosarcoma with extension to the optic nerve and chiasm and induced right optic nerve fiber degeneration at the optic chiasm with necrosis leading to central amaurosis were diagnosed.

  10. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

    2010-01-01

    This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

  11. A Phonological Exploration of Oral Reading Errors.

    ERIC Educational Resources Information Center

    Moscicki, Eve K.; Tallal, Paula

    1981-01-01

    Presents study exploring oral reading errors of normally developing readers to determine any developmental differences in learning phoneme-grapheme units; to discover if the grapheme representations of some phonemes are more difficult to read than others; and to replicate results reported by Fowler et al. Findings show most oral reading errors…

  12. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subject experimental design with 32 cells and 33 participants.
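    The naive percentile bootstrap mentioned above can be sketched in a few lines (a generic illustration of the technique, not the authors' estimator; the data vector and the statistic are hypothetical placeholders):

```python
import numpy as np

def percentile_bootstrap_ci(data, stat=np.mean, n_boot=2000,
                            alpha=0.05, seed=0):
    """Naive percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic, take quantiles."""
    rng = np.random.default_rng(seed)
    n = len(data)
    boot = np.array([stat(rng.choice(data, size=n, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [alpha / 2.0, 1.0 - alpha / 2.0])
    return float(lo), float(hi)

# hypothetical per-cell measurements (arbitrary units)
data = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4])
lo, hi = percentile_bootstrap_ci(data)  # interval brackets the sample mean
```

    The bias-corrected and accelerated (BCa) variant additionally adjusts the two quantile levels for bias and skewness of the bootstrap distribution.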

  13. Screening for refractive error among primary school children in Bayelsa state, Nigeria

    PubMed Central

    Opubiri, Ibeinmo; Pedro-Egbe, Chinyere

    2013-01-01

    Introduction Vision screening study in primary school children has not been done in Bayelsa State. This study aims to screen for refractive error among primary school children in Bayelsa State and use the data to plan for school Eye Health Program. Methods A cross sectional study on screening for refractive error in school children was carried out in Yenagoa Local Government Area of Bayelsa State in June 2009. A multistage sampling technique was used to select the study population (pupils aged between 5-15 years). Visual acuity (VA) for each eye was assessed outside the classroom at a distance of 6 meters. Those with VA ≤ 6/9 were presented with a pinhole and the test repeated. Funduscopy was done inside a poorly lit classroom. An improvement of the VA with pinhole was considered to indicate refractive error. Data was analyzed with EPI INFO version 6. Results A total of 1,242 school children consisting of 658 females and 584 males were examined. About 97.7% of pupils had normal VA (VA of 6/6) while 56 eyes had VAs ≤ 6/9. Of these 56 eyes, the visual acuity in 49 eyes (87.5%) improved with pinhole. Twenty-seven pupils had refractive error, giving a prevalence of 2.2%. Refractive error involved both eyes in 22 pupils (81.5%) and the 8-10 years age range had the highest proportion (40.7%) of cases of refractive error followed by the 9-13 year-old age range (37%). Conclusion The prevalence of refractive error was 2.2% and most eyes (97.7%) had normal vision. PMID:23646210
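    As a quick arithmetic check, the reported prevalence follows directly from the counts given in the abstract:

```python
# Figures taken from the abstract above.
examined = 1242              # pupils screened
with_refractive_error = 27   # pupils whose VA improved with pinhole
prevalence_pct = 100 * with_refractive_error / examined
print(round(prevalence_pct, 1))  # → 2.2
```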

  14. Effects of Blood Flow and/or Ventilation Restriction on Radiofrequency Coagulation Size in the Lung: An Experimental Study in Swine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anai, Hiroshi; Uchida, Barry T.; Pavcnik, Dusan, E-mail: pavcnikd@ohsu.edu

    2006-10-15

    The purpose of this study was to investigate how the restriction of blood flow and/or ventilation affects the radiofrequency (RF) ablation coagulation size in lung parenchyma. Thirty-one RF ablations were done in 16 normal lungs of 8 living swine with 2-cm LeVeen needles. Eight RF ablations were performed as a control (group G1), eight with balloon occlusion of the ipsilateral mainstem bronchus (G2), eight with occlusion of the ipsilateral pulmonary artery (G3), and seven with occlusion of both the ipsilateral bronchus and pulmonary artery (G4). Coagulation diameters and volumes of each ablation zone were compared on computed tomography (CT) and gross specimen examinations. Twenty-six coagulation zones were suitable for evaluation: eight in G1, five in G2, seven in G3, and six in G4. In G1, the mean coagulation diameter was 21.5 ± 3.5 mm on CT and 19.5 ± 1.78 mm on gross specimen examination. In G2, the mean diameters were 26.5 ± 5.1 mm and 23.0 ± 2.7 mm on CT and gross specimen examination, respectively. In G3, the mean diameters were 29.4 ± 2.2 mm and 27.4 ± 2.9 mm on CT and gross specimen examination, respectively, and in G4, they were 32.6 ± 3.33 mm and 28.8 ± 2.6 mm, respectively. The mean coagulation volumes were 3.39 ± 1.52 cm³ on CT and 3.01 ± 0.94 cm³ on gross examinations in G1, 6.56 ± 2.47 cm³ and 5.22 ± 0.85 cm³ in G2, 10.93 ± 2.17 cm³ and 9.97 ± 2.91 cm³ in G3, and 13.81 ± 3.03 cm³ and 11.06 ± 3.27 cm³ in G4, respectively. The mean coagulation diameters on gross examination and mean coagulation volumes on CT and gross examination with G3 and G4 were significantly larger than those in G1 (p < 0.0001, p < 0.0001, p < 0.0001, respectively) or in G2 (p < 0.05, p < 0.005, p < 0.005, respectively). Pulmonary collapse occurred in one lung in G2 and pulmonary artery thrombus in two lungs of G3 and two lungs of G4. The coagulation size of RF ablation of the lung parenchyma is increased by ventilation restriction and particularly by pulmonary artery blood flow restriction. The value of these restrictions for potential clinical use needs to be explored in experimentally induced lung tumors.

  15. Evaluate error correction ability of magnetorheological finishing by smoothing spectral function

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin

    2014-08-01

    Power Spectral Density (PSD) is well established in optics design and manufacturing as a characterization of mid-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. SSF is employed here to study the ability of the MRF process to correct errors at different spatial frequencies. The surface figures and PSD curves of work-pieces machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for errors at each spatial frequency is expressed as a normalized numerical value.
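    One plausible reading of the SSF is a per-frequency ratio of residual to initial PSD; the sketch below assumes that definition (the paper's exact formulation may differ) and uses a synthetic 1-D surface profile:

```python
import numpy as np

def psd_1d(profile, dx):
    """One-dimensional power spectral density of a surface profile."""
    n = len(profile)
    spectrum = np.fft.rfft(profile - np.mean(profile))
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = (np.abs(spectrum) ** 2) * dx / n
    return freqs, psd

def smoothing_spectral_function(before, after, dx):
    """Hypothetical SSF: ratio of residual to initial PSD per frequency.
    Values near 0 mean strong correction; near 1 mean no correction."""
    f, psd_b = psd_1d(before, dx)
    _, psd_a = psd_1d(after, dx)
    return f, psd_a / np.maximum(psd_b, 1e-30)

# synthetic example: a low-frequency error strongly reduced by polishing,
# a high-frequency ripple left untouched
x = np.linspace(0.0, 1.0, 512, endpoint=False)
before = np.sin(2 * np.pi * 4 * x) + 0.1 * np.sin(2 * np.pi * 60 * x)
after = 0.1 * np.sin(2 * np.pi * 4 * x) + 0.1 * np.sin(2 * np.pi * 60 * x)
f, ssf = smoothing_spectral_function(before, after, x[1] - x[0])
# ssf ≈ 0.01 at 4 cycles/unit (corrected), ≈ 1.0 at 60 cycles/unit (not)
```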

  16. Multi-year objective analyses of warm season ground-level ozone and PM2.5 over North America using real-time observations and Canadian operational air quality models

    NASA Astrophysics Data System (ADS)

    Robichaud, A.; Ménard, R.

    2013-05-01

    We present multi-year objective analyses (OA) on a high spatio-temporal resolution (15 or 21 km, every hour) for the warm season period (1 May-31 October) for ground-level ozone (2002-2012) and for fine particulate matter (diameter less than 2.5 microns (PM2.5)) (2004-2012). The OA used here combines the Canadian Air Quality forecast suite with US and Canadian surface air quality monitoring sites. The analysis is based on an optimal interpolation with capabilities for adaptive error statistics for ozone and PM2.5 and an explicit bias correction scheme for the PM2.5 analyses. The error statistics have been estimated using a modified version of the Hollingsworth-Lönnberg (H-L) method. Various quality controls (gross error check, sudden jump test and background check) have been applied to the observations to remove outliers. An additional quality control is applied to check the consistency of the error statistics estimation model at each observing station and for each hour. The error statistics are further tuned "on the fly" using a χ2 (chi-square) diagnostic, a procedure which yields significantly better verification than no tuning. Successful cross-validation experiments were performed with an OA set-up using 90% of observations to build the objective analysis, with the remainder left out as an independent set of data for verification purposes. Furthermore, comparisons with other external sources of information (global models and PM2.5 satellite surface derived measurements) show reasonable agreement. The multi-year analyses obtained provide relatively high precision, with an absolute yearly averaged systematic error of less than 0.6 ppbv (parts per billion by volume) and 0.7 μg m-3 (micrograms per cubic meter) for ozone and PM2.5 respectively, and a random error generally less than 9 ppbv for ozone and under 12 μg m-3 for PM2.5.
    In this paper, we focus on two applications: (1) presenting long term averages of objective analysis and analysis increments as a form of summer climatology and (2) analyzing long term (decadal) trends and inter-annual fluctuations using OA outputs. Our results show that high percentiles of ozone and PM2.5 are both following a decreasing trend overall in North America, with the eastern part of the United States (US) presenting the strongest decrease, likely due to more effective pollution controls. Some locations, however, exhibited an increasing trend in mean ozone and PM2.5, such as the northwestern part of North America (northwest US and Alberta). The low percentiles are generally rising for ozone, which may be linked to increasing emissions from emerging countries and the resulting pollution brought by intercontinental transport. After removing the decadal trend, we demonstrate that the inter-annual fluctuations of the high percentiles are significantly correlated with temperature fluctuations for ozone and precipitation fluctuations for PM2.5. We also show a moderately significant correlation between the inter-annual fluctuations of the high percentiles of ozone and PM2.5 and economic indices such as the industrial Dow Jones and/or the US gross domestic product growth rate.
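    The χ2 tuning step can be illustrated with a generic innovation-based diagnostic (a textbook sketch of optimal-interpolation consistency checking, not the authors' exact scheme; all variances below are synthetic):

```python
import numpy as np

def chi2_diagnostic(innovations, S):
    """Chi-square diagnostic chi2 = d^T S^{-1} d for innovations
    d = y - H x_b with assumed innovation covariance S = H B H^T + R.
    If the error statistics are well specified, E[chi2] ~= len(d)."""
    d = np.asarray(innovations, dtype=float)
    return float(d @ np.linalg.solve(S, d))

def tuning_factor(innovations, S):
    """'On the fly' tuning sketch: the factor by which S should be
    scaled so that the chi-square diagnostic matches its expectation."""
    return chi2_diagnostic(innovations, S) / len(innovations)

rng = np.random.default_rng(1)
n = 500
d = rng.normal(0.0, 2.0, size=n)   # innovations with true variance 4
S_true = 4.0 * np.eye(n)           # well-specified covariance
S_guess = np.eye(n)                # misspecified: variance assumed 1
consistent = tuning_factor(d, S_true)   # ~= 1, statistics consistent
scale = tuning_factor(d, S_guess)       # ~= 4, inflate S_guess by ~4
```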

  17. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  18. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    PubMed

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT.
Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model derived predictions regarding the direction of change for error type proportions. The current findings argued for an alternative concept of the role of activation and decay in influencing types of serial-order sound errors. Rather than a slow activation decay rate (Dell, 1986), the results of the current study were more compatible with an alternative explanation of rapid activation decay or slow build-up of residual activation.

  19. Six1-Eya-Dach Network in Breast Cancer

    DTIC Science & Technology

    2009-05-01

    Ctrl scramble controls. Responsiveness was tested using luciferase activity of the 3TP reporter construct and normalized to renilla luciferase activity. Data points show the mean of two individual clones from two experiments and error bars represent

  20. Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George

    2012-01-01

    Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…

  1. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?

    PubMed Central

    Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina

    2015-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615

  2. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    PubMed

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  3. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
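    A rough sketch of the idea described above (hypothetical two-output balance, worst-case summation of the sensitivities, uniform 1.0 microV/V output variation; not the authors' exact formulation):

```python
import math

def drag_coeff_precision_bound(dAF_dOut, dNF_dOut, alpha_deg,
                               q_psf, s_ref_ft2, output_var_uVV=1.0):
    """Conservative upper bound on drag-coefficient precision error.
    dAF_dOut / dNF_dOut: partial derivatives of axial/normal force
    (lbs per microV/V) with respect to each balance output. Each output
    is assumed to vary by output_var_uVV; in the worst case the absolute
    force-error contributions add, and the axial/normal errors are
    rotated into the drag direction at the assumed angle of attack."""
    a = math.radians(alpha_deg)
    dAF = sum(abs(d) for d in dAF_dOut) * output_var_uVV
    dNF = sum(abs(d) for d in dNF_dOut) * output_var_uVV
    dDrag = abs(math.cos(a)) * dAF + abs(math.sin(a)) * dNF
    return dDrag / (q_psf * s_ref_ft2)  # divide by q * S_ref -> dCD

# hypothetical numbers: two-output balance at alpha = 0 deg
bound = drag_coeff_precision_bound([2.0e-4, 0.5e-4], [1.0e-3, 1.0e-4],
                                   alpha_deg=0.0, q_psf=100.0,
                                   s_ref_ft2=1.0)
```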

  4. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
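    The two proposed statistics are straightforward to compute from the ECDF of unsigned errors; the sketch below uses a hypothetical error sample (threshold and confidence level are placeholders):

```python
import numpy as np

def prob_below(errors, threshold):
    """C(threshold): empirical probability that a new calculation has
    an absolute error below the chosen threshold."""
    e = np.abs(np.asarray(errors, dtype=float))
    return float(np.mean(e < threshold))

def q_high(errors, confidence=0.95):
    """Q(confidence): maximal error amplitude to expect with the chosen
    confidence level (an empirical quantile of the unsigned-error ECDF)."""
    e = np.abs(np.asarray(errors, dtype=float))
    return float(np.quantile(e, confidence))

# hypothetical signed model errors for a benchmark set
errors = [-0.3, 0.1, 2.5, -1.2, 0.05, 0.8, -0.6, 0.4, 1.9, -0.2]
p = prob_below(errors, 1.0)  # fraction of errors with |e| < 1.0
q = q_high(errors, 0.95)     # 95th percentile of |e|
```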

  5. Applicability of Infrared Photorefraction for Measurement of Accommodation in Awake-Behaving Normal and Strabismic Monkeys

    PubMed Central

    Bossong, Heather; Swann, Michelle; Glasser, Adrian; Das, Vallabh E.

    2010-01-01

    Purpose This study was designed to use infrared photorefraction to measure accommodation in awake-behaving normal and strabismic monkeys and describe properties of photorefraction calibrations in these monkeys. Methods Ophthalmic trial lenses were used to calibrate the slope of pupil vertical pixel intensity profile measurements that were made with a custom-built infrared photorefractor. Day to day variability in photorefraction calibration curves, variability in calibration coefficients due to misalignment of the photorefractor Purkinje image and the center of the pupil, and variability in refractive error due to off-axis measurements were evaluated. Results The linear range of calibration of the photorefractor was found for ophthalmic lenses ranging from –1 D to +4 D. Calibration coefficients were different across monkeys tested (two strabismic, one normal) but were similar for each monkey over different experimental days. In both normal and strabismic monkeys, small misalignment of the photorefractor Purkinje image with the center of pupil resulted in only small changes in calibration coefficients, that were not statistically significant (P > 0.05). Off-axis measurement of refractive error was also small in the normal and strabismic monkeys (~1 D to 2 D) as long as the magnitude of misalignment was <10°. Conclusions Remote infrared photorefraction is suitable for measuring accommodation in awake, behaving normal, and strabismic monkeys. Specific challenges posed by the strabismic monkeys, such as possible misalignment of the photorefractor Purkinje image and the center of the pupil during either calibration or measurement of accommodation, that may arise due to unsteady fixation or small eye movements including nystagmus, results in small changes in measured refractive error. PMID:19029024

  6. A-Posteriori Error Estimates for Mixed Finite Element and Finite Volume Methods for Problems Coupled Through a Boundary with Non-Matching Grids

    DTIC Science & Technology

    2013-08-01

    both MFE and GFV, are often similar in size. As a gross measure of the effect of geometric projection and of the use of quadrature, we also report the...interest MFE ∑(e,ψ) or GFV ∑(e,ψ). Tables 1 and 2 show this using coarse and fine forward solutions. Table 1. The forward problem with solution (4.1) is run...adjoint data components ψu and ψp are constant everywhere and ψξ = 0.

  7. Automated estimation of abdominal effective diameter for body size normalization of CT dose.

    PubMed

    Cheng, Phillip M

    2013-06-01

    Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range 1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
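    The effective diameter of a cross-section is the diameter of the circle with the same area as the patient cross-section. A minimal sketch, assuming a simple HU-threshold segmentation (a hypothetical stand-in for the paper's heuristic) and a synthetic water-disk phantom:

```python
import numpy as np

def effective_diameter(ct_slice_hu, pixel_spacing_mm, threshold_hu=-300):
    """Effective diameter (cm) of the patient in one axial CT slice:
    the diameter of the circle whose area equals the segmented
    cross-sectional area. Pixels above threshold_hu are counted as
    patient -- a simple heuristic, not the paper's exact algorithm."""
    mask = ct_slice_hu > threshold_hu
    area_mm2 = mask.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return 2.0 * np.sqrt(area_mm2 / np.pi) / 10.0  # mm -> cm

# synthetic phantom: a 100-pixel-radius disk of water in air, 1 mm pixels
yy, xx = np.mgrid[-256:256, -256:256]
phantom = np.where(xx**2 + yy**2 <= 100**2, 0.0, -1000.0)  # HU values
d = effective_diameter(phantom, (1.0, 1.0))  # close to 20 cm
```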

  8. Metrological Software Test for Simulating the Method of Determining the Thermocouple Error in Situ During Operation

    NASA Astrophysics Data System (ADS)

    Chen, Jingliang; Su, Jun; Kochan, Orest; Levkiv, Mariana

    2018-04-01

    The simplified metrological software test (MST) for modeling the method of determining the thermocouple (TC) error in situ during operation is considered in the paper. The interaction between the proposed MST and a temperature measuring system is also described in order to study the error of determining the TC error in situ during operation. Modeling studies have been carried out on how the random error of the temperature measuring system and the interference magnitude (both common-mode and normal-mode noise) affect the error of determining the TC error in situ during operation using the proposed MST. Noise and interference on the order of 5-6 μV cause an error of about 0.2-0.3°C. It is shown that high noise immunity is essential for accurate temperature measurements using TCs.

  9. Outliers: A Potential Data Problem.

    ERIC Educational Resources Information Center

    Douzenis, Cordelia; Rakow, Ernest A.

    Outliers, extreme data values relative to others in a sample, may distort statistics that assume interval levels of measurement and normal distributions. The outlier may be a valid value or an error. Several procedures are available for identifying outliers, and each may be applied to errors of prediction from the regression lines for utility in a…

  10. Phonological Spelling and Reading Deficits in Children with Spelling Disabilities

    ERIC Educational Resources Information Center

    Friend, Angela; Olson, Richard K.

    2008-01-01

    Spelling errors in the Wide Range Achievement Test were analyzed for 77 pairs of children, each of which included one older child with spelling disability (SD) and one spelling-level-matched younger child with normal spelling ability from the Colorado Learning Disabilities Research Center database. Spelling error analysis consisted of a percent…

  11. Reading and Spelling Error Analysis of Native Arabic Dyslexic Readers

    ERIC Educational Resources Information Center

    Abu-rabia, Salim; Taha, Haitham

    2004-01-01

    This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated…

  12. Computer-socket manufacturing error: How much before it is clinically apparent?

    PubMed Central

    Sanders, Joan E.; Severance, Michael R.; Allyn, Kathryn J.

    2015-01-01

    The purpose of this research was to pursue quality standards for computer-manufacturing of prosthetic sockets for people with transtibial limb loss. Thirty-three duplicates of study participants’ normally used sockets were fabricated using central fabrication facilities. Socket-manufacturing errors were compared with clinical assessments of socket fit. Of the 33 sockets tested, 23 were deemed clinically to need modification. All 13 sockets with mean radial error (MRE) greater than 0.25 mm were clinically unacceptable, and 11 of those were deemed in need of sizing reduction. Of the remaining 20 sockets, 5 sockets with interquartile range (IQR) greater than 0.40 mm were deemed globally or regionally oversized and in need of modification. Of the remaining 15 sockets, 5 sockets with closed contours of elevated surface normal angle error (SNAE) were deemed clinically to need shape modification at those closed contour locations. The remaining 10 sockets were deemed clinically acceptable and not in need of modification. MRE, IQR, and SNAE may serve as effective metrics to characterize quality of computer-manufactured prosthetic sockets, helping facilitate the development of quality standards for the socket manufacturing industry. PMID:22773260
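    Two of the metrics above can be sketched directly from per-point radial differences between the manufactured and design surfaces (MRE taken here as a signed mean and the study's thresholds hard-coded as assumptions; the paper's exact definitions may differ):

```python
import numpy as np

def socket_error_metrics(radial_errors_mm):
    """Mean radial error (MRE) and interquartile range (IQR) of
    per-point radial differences (manufactured minus design, in mm)."""
    e = np.asarray(radial_errors_mm, dtype=float)
    mre = float(np.mean(e))
    q1, q3 = np.percentile(e, [25, 75])
    return mre, float(q3 - q1)

# hypothetical surface comparison: consistent slight undersizing
errors = np.array([-0.30, -0.28, -0.35, -0.25, -0.27, -0.33, -0.29, -0.31])
mre, iqr = socket_error_metrics(errors)
# thresholds reported in the study: |MRE| > 0.25 mm or IQR > 0.40 mm
flag_for_modification = abs(mre) > 0.25 or iqr > 0.40
```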

  13. The characteristics of patients with uncertain/mild cognitive impairment on the Alzheimer disease assessment scale-cognitive subscale.

    PubMed

    Pyo, Geunyeong; Elble, Rodger J; Ala, Thomas; Markwell, Stephen J

    2006-01-01

    The performances of the uncertain/mild cognitive impairment (MCI) patients on the Alzheimer Disease Assessment Scale-Cognitive (ADAS-Cog) subscale were compared with those of normal controls, Alzheimer disease patients with CDR 0.5, and Alzheimer disease patients with CDR 1.0. The Uncertain/MCI group was significantly different from normal controls and Alzheimer disease CDR 0.5 or 1.0 groups on the ADAS-Cog except on a few non-memory subtests. Age was significantly correlated with total error score in the normal group, but there was no significant correlation between age and ADAS-Cog scores in the patient groups. Education was not significantly correlated with the ADAS-Cog scores in any group. Regardless of age and educational level, there were clear differences between the normal group and the Uncertain/MCI group, especially on the total error scores. We found that the total error score of the ADAS-Cog was the most reliable variable in detecting patients with mild cognitive impairment. The present study demonstrated that the ADAS-Cog is a promising tool for detecting and studying patients with mild cognitive impairment. The results also indicated that demographic variables such as age and education do not play a significant role in the diagnosis of patients with mild cognitive impairment based on the ADAS-Cog scores.

  14. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student’s t-distribution*

    PubMed Central

    Leão, William L.; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GH-ST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210

  15. Modeling and simulation for fewer-axis grinding of complex surface

    NASA Astrophysics Data System (ADS)

    Li, Zhengjian; Peng, Xiaoqiang; Song, Ci

    2017-10-01

    As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vectors of the wheel profile could be calculated. Through normal vector matching at the cutter contact point and the coordinate system transformation, the grinding mathematical model was established to work out the coordinates of the cutter location point. Based on the model, interference analysis was simulated to find out the right position and posture of the workpiece for grinding. Then positioning errors of the workpiece, including the translation positioning error and the rotation positioning error, were analyzed, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.

  16. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student's t-distribution.

    PubMed

    Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GH-ST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.

  17. Motor and Cognitive Assessment of Infants and Young Boys with Duchenne Muscular Dystrophy; Results from the Muscular Dystrophy Association DMD Clinical Research Network

    PubMed Central

    Connolly, Anne M.; Florence, Julaine M.; Cradock, Mary M.; Malkus, Elizabeth C.; Schierbecker, Jeanine R.; Siener, Catherine A.; Wulf, Charlie O.; Anand, Pallavi; Golumbek, Paul T.; Zaidman, Craig M; Miller, J Philip; Lowes, Linda P; Alfano, Lindsay N.; Viollet-Callendret, Laurence; Flanigan, Kevin M.; Mendell, Jerry R.; McDonald, Craig M.; Goude, Erica; Johnson, Linda; Nicorici, Alina; Karachunski, Peter I.; Day, John W.; Dalton, Joline C.; Farber, Janey M.; Buser, Karen K.; Darras, Basil T.; Kang, Peter B.; Riley, Susan O.; Shriber, Elizabeth; Parad, Rebecca; Bushby, Kate; Eagle, Michelle

    2013-01-01

    Therapeutic trials in Duchenne muscular dystrophy (DMD) exclude young boys because traditional outcome measures rely on cooperation. The Bayley-III Scales of Infant and Toddler Development (Bayley-III) have been validated in developing children and those with developmental disorders but have not been studied in DMD. The Expanded Hammersmith Functional Motor Scale (HFMSE) and North Star Ambulatory Assessment (NSAA) may also be useful in this young DMD population. Clinical evaluators from the MDA-DMD Clinical Research Network were trained in these assessment tools. Infants and boys with DMD (n = 24; 1.9 ± 0.7 years) were assessed. The mean Bayley-III motor composite score was low (82.8 ± 8; p < .0001; normal = 100 ± 15). Mean gross motor and fine motor function scaled scores were low (both p < .0001). The mean cognitive composite (p = .0002), receptive language (p < .0001), and expressive language (p = .0001) scores were also low compared to normal children. Age was negatively associated with Bayley-III gross motor scores (r = −0.44, p = .02) but not with fine motor, cognitive, or language scores. HFMSE (n = 23) showed a mean score of 31 ± 13. NSAA (n = 18 boys; 2.2 ± 0.4 years) showed a mean score of 12 ± 5. Outcome assessments of young boys with DMD are feasible and in this multicenter study were best demonstrated using the Bayley-III. PMID:23726376

  18. Impact resistance of fiber composite blades used in aircraft turbine engines

    NASA Technical Reports Server (NTRS)

    Friedrich, L. A.; Preston, J. L., Jr.

    1973-01-01

    Resistance of advanced fiber reinforced epoxy matrix composite materials to ballistic impact was investigated as a function of impacting projectile characteristics and composite material properties. Ballistic impact damage due to normal impacts was classified as transverse (stress wave delamination and splitting), penetrative, or structural (gross failure). Steel projectiles were found to be more damaging than gelatin and ice projectiles in causing penetrative damage leading to reduced tensile strength. Gelatin and ice projectiles caused either transverse or structural damage, depending upon projectile mass and velocity. Improved composite transverse tensile strength, use of dispersed ply lay-ups, and inclusion of PRD-49-1 or S-glass fibers correlated with improved resistance of composite materials to transverse damage. In non-normal impacts against simulated blade shapes, the normal velocity component of the impact was used to correlate damage results with normal impact results. Stiffening the leading edge of simulated blade specimens led to reduced ballistic damage, while addition of a metallic leading edge provided nearly complete protection against 0.64 cm diameter steel, and 1.27 cm diameter ice and gelatin projectiles, and partial protection against 2.54 cm diameter projectiles of ice and gelatin.

  19. Positioning accuracy during VMAT of gynecologic malignancies and the resulting dosimetric impact by a 6-degree-of-freedom couch in combination with daily kilovoltage cone beam computed tomography.

    PubMed

    Yao, Lihong; Zhu, Lihong; Wang, Junjie; Liu, Lu; Zhou, Shun; Jiang, ShuKun; Cao, Qianqian; Qu, Ang; Tian, Suqing

    2015-04-26

    To improve the delivery of radiotherapy in gynecologic malignancies and to minimize the irradiation of unaffected tissues by using daily kilovoltage cone beam computed tomography (kV-CBCT) to reduce setup errors. Thirteen patients with gynecologic cancers were treated with postoperative volumetric-modulated arc therapy (VMAT). All patients had a planning CT scan and daily CBCT during treatment. Automatic bone anatomy matching was used to determine initial inter-fraction positioning error. Positional correction on a six-degrees-of-freedom (6DoF) couch was followed by a second scan to calculate the residual inter-fraction error, and a post-treatment scan assessed intra-fraction motion. The margins of the planning target volume (MPTV) were calculated from these setup variations and the effect of margin size on normal tissue sparing was evaluated. In total, 573 CBCT scans were acquired. Mean absolute pre-/post-correction errors were obtained in all six planes. With 6DoF couch correction, the MPTV accounting for intra-fraction errors was reduced by 3.8-5.6 mm. This permitted a reduction in the maximum dose to the small intestine, bladder and femoral head (P=0.001, 0.035 and 0.032, respectively), the average dose to the rectum, small intestine, bladder and pelvic marrow (P=0.003, 0.000, 0.001 and 0.000, respectively) and markedly reduced irradiated normal tissue volumes. A 6DoF couch in combination with daily kV-CBCT can considerably improve positioning accuracy during VMAT treatment in gynecologic malignancies, reducing the MPTV. The reduced margin size permits improved normal tissue sparing and a smaller total irradiated volume.

  20. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    PubMed

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample-size-adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample-size-adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
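The protection result lends itself to a quick Monte Carlo check. The sketch below is purely illustrative and not taken from the article: it assumes a one-sided, one-sample z-test with known unit variance, and the interim timing, raised sample size, and the 50% conditional-power ("promising") rule are hypothetical parameter choices. The sample size is raised only when the interim looks promising, yet the conventional final z-test is applied unchanged.

```python
import math
import random

Z_ALPHA = 1.959964  # Phi^{-1}(1 - 0.025): one-sided 2.5% critical value

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(s1, n1, n_total):
    """Conditional power of the conventional final z-test given the stage-1
    sum s1 of n1 observations (sigma = 1), assuming the observed trend
    theta_hat = s1/n1 continues over the remaining n_total - n1 observations."""
    theta_hat = s1 / n1
    m = n_total - n1
    # Reject iff S2 > Z_ALPHA*sqrt(n_total) - s1, where S2 ~ N(theta*m, m)
    crit = (Z_ALPHA * math.sqrt(n_total) - s1 - theta_hat * m) / math.sqrt(m)
    return 1.0 - phi(crit)

def simulate_type1(n1=50, n_planned=100, n_raised=200, reps=200_000, seed=1):
    """Empirical type I error of the naive final z-test when the sample size
    is raised only in the 'promising' region (conditional power >= 50%)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        # Under H0 (mu = 0) the stage sums are exactly normal, so we can
        # draw them directly: S1 ~ N(0, n1), S2 ~ N(0, n_final - n1).
        s1 = rng.gauss(0.0, math.sqrt(n1))
        promising = conditional_power(s1, n1, n_planned) >= 0.5
        n_final = n_raised if promising else n_planned
        s2 = rng.gauss(0.0, math.sqrt(n_final - n1))
        if (s1 + s2) / math.sqrt(n_final) > Z_ALPHA:
            rejections += 1
    return rejections / reps
```

With these settings the simulated rejection rate stays at or below the nominal 2.5%, in line with the claim that raising the sample size in the promising region does not inflate the type I error rate.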

  1. Computed tomographic and cross-sectional anatomy of the normal pacu (Colossoma macropomum).

    PubMed

    Carr, Alaina; Weber, E P Scott; Murphy, Chris J; Zwingenberger, Alison

    2014-03-01

    The purpose of this study was to compare and define the normal cross-sectional gross and computed tomographic (CT) anatomy for a species of bony fish to better gain insight into the use of advanced diagnostic imaging for future clinical cases. The pacu (Colossoma macropomum) was used because of its widespread presence in the aquarium trade, its relatively large body size, and its importance in the research and aquaculture settings. Transverse 0.6-mm CT images of three cadaver fish were obtained and compared to corresponding frozen cross sections of the fish. Relevant anatomic structures were identified and labeled at each level; the Hounsfield unit density of major organs was established. The images presented good anatomic detail and provide a reference for future research and clinical investigation.

  2. Two hermaphroditic alewives from Lake Michigan

    USGS Publications Warehouse

    Edsall, Thomas A.; Saxon, Margaret I.

    1968-01-01

    Hermaphroditism has been reported frequently among many of the Clupeidae, but only one account of hermaphroditism has been published for the alewife, Alosa pseudoharengus. Rothschild discovered four hermaphroditic alewives among 444 fish he examined from Cayuga Lake, New York. We recently collected two hermaphroditic alewives from Lake Michigan. Both fish were normal in external appearance but were easily identified as hermaphrodites by gross examination of their gonads. The first hermaphrodite (177 mm T.L.) was discovered among several hundred normal adult alewives captured in early July 1965 in the Kalamazoo River about one mile upstream from Lake Michigan. The second hermaphroditic alewife (152 mm T.L.) was obtained from a sample of 160 adult alewives captured in Lake Michigan near the mouth of the Kalamazoo River in mid-April 1966.

  3. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
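The construction described can be sketched in a few lines. This is a generic conditional-decomposition sketch (the function name and defaults are illustrative, and Python's `random.gauss` stands in for the report's uniform-based FORTRAN generator): the second coordinate mixes the first standard normal draw with an independent one so that the pair has exactly the requested correlation.

```python
import math
import random

def bvn_pair(mu_x, mu_y, sigma_x, sigma_y, rho, rng=random):
    """Draw one (x, y) pair from a bivariate normal distribution with the
    given means, standard deviations, and correlation coefficient rho."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    x = mu_x + sigma_x * z1
    # Conditional construction: y shares the z1 component with weight rho;
    # the sqrt(1 - rho^2) term keeps y's variance equal to sigma_y^2.
    y = mu_y + sigma_y * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y
```

The sample correlation of many generated pairs converges to the requested coefficient, which is essentially how the report's accuracy check was performed.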

  4. Sodium in weak G-band giants

    NASA Technical Reports Server (NTRS)

    Drake, Jeremy J.; Lambert, David L.

    1994-01-01

    Sodium abundances have been determined for eight weak G-band giants whose atmospheres are greatly enriched with products of the CN-cycling H-burning reactions. Systematic errors are minimized by comparing the weak G-band giants to a sample of similar but normal giants. If, further, Ca is selected as a reference element, model atmosphere-related errors should largely be removed. For the weak-G-band stars (Na/Ca) = 0.16 +/- 0.01, which is just possibly greater than the result (Na/Ca) = 0.10 +/- 0.03 from the normal giants. This result demonstrates that the atmospheres of the weak G-band giants are not seriously contaminated with products of ON cycling.

  5. Temperature corrections in routine spirometry.

    PubMed Central

    Cramer, D; Peacock, A; Denison, D

    1984-01-01

    Forced expiratory volume (FEV1) and forced vital capacity (FVC) were measured in nine normal subjects with three Vitalograph and three rolling seal spirometers at three different ambient temperatures (4 degrees C, 22 degrees C, 32 degrees C). When the results obtained with the rolling seal spirometer were converted to BTPS the agreement between measurements in the three environments improved, but when the Vitalograph measurements obtained in the hot and cold rooms were converted an error of up to 13% was introduced. The error was similar whether ambient or spirometer temperatures were used to make the conversion. In an attempt to explain the behaviour of the Vitalograph spirometers the compliance of their bellows was measured at the three temperatures. It was higher at the higher temperature (32 degrees C) and lower at the lower temperature (4 degrees C) than at the normal room temperature. These changes in instrument compliance could account for the differences in measured values between the two types of spirometer. It is concluded that the ATPS-BTPS conversion is valid and necessary for measurements made with rolling seal spirometers, but can cause substantial error if it is used for Vitalograph measurements made under conditions other than normal room temperature. PMID:6495245
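For reference, the ATPS-to-BTPS conversion that the study validates for rolling seal spirometers has a standard closed form. The sketch below is illustrative only, not code from the paper; the saturated water vapor pressures are standard physical-table values.

```python
# Saturated water vapor pressure (mmHg) at selected ambient temperatures (C).
# Standard physical-table figures, included here for illustration only.
PH2O_MMHG = {4: 6.1, 20: 17.5, 22: 19.8, 32: 35.7, 37: 47.0}

def btps_factor(temp_c, barometric_mmhg=760.0):
    """Factor converting a volume measured at ATPS (ambient temperature and
    pressure, saturated) to BTPS (body temperature 37 C, saturated)."""
    ph2o = PH2O_MMHG[round(temp_c)]
    temperature_term = (273.0 + 37.0) / (273.0 + temp_c)
    pressure_term = (barometric_mmhg - ph2o) / (barometric_mmhg - 47.0)
    return temperature_term * pressure_term
```

At a room temperature of 22 °C the factor is about 1.09, i.e., volumes recorded at ATPS are scaled up by roughly 9% to body conditions; in the cold room (4 °C) the correction is considerably larger, which is why misapplying it to the temperature-sensitive Vitalograph bellows produced errors of up to 13%.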

  6. Color Vision in Aniridia.

    PubMed

    Pedersen, Hilde R; Hagen, Lene A; Landsend, Erlend C S; Gilson, Stuart J; Utheim, Øygunn A; Utheim, Tor P; Neitz, Maureen; Baraas, Rigmor C

    2018-04-01

    To assess color vision and its association with retinal structure in persons with congenital aniridia. We included 36 persons with congenital aniridia (10-66 years), and 52 healthy, normal trichromatic controls (10-74 years) in the study. Color vision was assessed with Hardy-Rand-Rittler (HRR) pseudo-isochromatic plates (4th ed., 2002); Cambridge Color Test and a low-vision version of the Color Assessment and Diagnosis test (CAD-LV). Cone-opsin genes were analyzed to confirm normal versus congenital color vision deficiencies. Visual acuity and ocular media opacities were assessed. The central 30° of both eyes were imaged with the Heidelberg Spectralis OCT2 to grade the severity of foveal hypoplasia (FH, normal to complete: 0-4). Five participants with aniridia had cone opsin genes conferring deutan color vision deficiency and were excluded from further analysis. Of the 31 with aniridia and normal opsin genes, 11 made two or more red-green (RG) errors on HRR, four of whom also made yellow-blue (YB) errors; one made YB errors only. A total of 19 participants had higher CAD-LV RG thresholds, of which eight also had higher CAD-LV YB thresholds, than normal controls. In aniridia, the thresholds were higher along the RG than the YB axis, and those with a complete FH had significantly higher RG thresholds than those with mild FH (P = 0.038). Additional increase in YB threshold was associated with secondary ocular pathology. Arrested foveal formation and associated alterations in retinal processing are likely to be the primary reason for impaired red-green color vision in aniridia.

  7. Method for Expressing Clinical and Statistical Significance of Ocular and Corneal Wavefront Error Aberrations

    PubMed Central

    Smolek, Michael K.

    2011-01-01

    Purpose The significance of ocular or corneal aberrations may be subject to misinterpretation whenever eyes with different pupil sizes or the application of different Zernike expansion orders are compared. A method is shown that uses simple mathematical interpolation techniques based on normal data to rapidly determine the clinical significance of aberrations, without concern for pupil and expansion order. Methods Corneal topography (Tomey, Inc.; Nagoya, Japan) from 30 normal corneas was collected and the corneal wavefront error analyzed by Zernike polynomial decomposition into specific aberration types for pupil diameters of 3, 5, 7, and 10 mm and Zernike expansion orders of 6, 8, 10 and 12. Using this 4×4 matrix of pupil sizes and fitting orders, best-fitting 3-dimensional functions were determined for the mean and standard deviation of the RMS error for specific aberrations. The functions were encoded into software to determine the significance of data acquired from non-normal cases. Results The best-fitting functions for 6 types of aberrations were determined: defocus, astigmatism, prism, coma, spherical aberration, and all higher-order aberrations. A clinical screening method of color-coding the significance of aberrations in normal, postoperative LASIK, and keratoconus cases having different pupil sizes and different expansion orders is demonstrated. Conclusions A method to calibrate wavefront aberrometry devices by using a standard sample of normal cases was devised. This method could be potentially useful in clinical studies involving patients with uncontrolled pupil sizes or in studies that compare data from aberrometers that use different Zernike fitting-order algorithms. PMID:22157570

  8. A statistical model for analyzing the rotational error of single isocenter for multiple targets technique.

    PubMed

    Chang, Jenghwa

    2017-06-01

    To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single isocenter for multiple targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector for modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean and standard deviations σ_x, σ_y, σ_z. It was further assumed that the rotation of the clinical target volume (CTV) about the isocenter happens randomly and follows a 3D independent normal distribution with a zero mean and a uniform standard deviation σ_δ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation σ_R equal to the product of σ_δ·(π/180) and d_I⇔T, the distance between the isocenter and CTV. Both random vectors (S and R) were summed, normalized, and transformed to spherical coordinates to derive the Chi distribution with three degrees of freedom for the radial coordinate of S+R. The PTV margin was determined using the critical value of this distribution for a 0.05 significance level so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σ_R and d_I⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision like stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σ_x = σ_y = σ_z = 0.715 mm), a σ_R = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%, or an additional 0.2-mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σ_R > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σ_δ that can be ignored is 0.45° (or 0.0079 rad) for d_I⇔T = 50 mm or 0.23° (or 0.004 rad) for d_I⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the isocenter and target is large. © 2017 American Association of Physicists in Medicine.
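The margin calculation in this abstract can be reproduced with elementary functions. The sketch below is an illustrative reimplementation, not the author's code: it inverts the chi distribution with three degrees of freedom (the radial length of a 3D standard normal vector) for the combined setup-plus-rotation error.

```python
import math

def chi3_cdf(x):
    """CDF of the chi distribution with 3 degrees of freedom, i.e. the
    radial coordinate of a 3D vector of independent standard normals."""
    return math.erf(x / math.sqrt(2.0)) - math.sqrt(2.0 / math.pi) * x * math.exp(-x * x / 2.0)

def chi3_quantile(p, lo=0.0, hi=10.0, tol=1e-9):
    """Invert chi3_cdf by bisection (the CDF is monotone increasing)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if chi3_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def ptv_margin(sigma_setup, sigma_rot=0.0, coverage=0.95):
    """Margin (same units as the sigmas) such that the radial S+R error
    stays within it with probability `coverage`, assuming uniform,
    independent normal components for both setup and rotational errors."""
    sigma = math.sqrt(sigma_setup ** 2 + sigma_rot ** 2)
    return sigma * chi3_quantile(coverage)
```

With σ_x = σ_y = σ_z = 0.715 mm the 95% margin comes out at 2.0 mm; adding σ_R = 0.328 mm raises it to about 2.2 mm, reproducing the extra 0.2-mm margin quoted in the abstract.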

  9. 29 CFR 779.259 - What is included in annual gross volume.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... whole. The computation of the annual gross volume of sales or business of the enterprise is made... Coverage Annual Gross Volume of Sales Made Or Business Done § 779.259 What is included in annual gross volume. (a) The annual gross volume of sales made or business done of an enterprise consists of its gross...

  10. 29 CFR 779.259 - What is included in annual gross volume.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... whole. The computation of the annual gross volume of sales or business of the enterprise is made... Coverage Annual Gross Volume of Sales Made Or Business Done § 779.259 What is included in annual gross volume. (a) The annual gross volume of sales made or business done of an enterprise consists of its gross...

  11. 26 CFR 1.832-1 - Gross income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... deposit premiums received, but not assessments, shall be excluded from gross income. Gross income does not... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Gross income. 1.832-1 Section 1.832-1 Internal... TAXES Other Insurance Companies § 1.832-1 Gross income. (a) Gross income as defined in section 832(b)(1...

  12. Making and monitoring errors based on altered auditory feedback

    PubMed Central

    Pfordresher, Peter Q.; Beasley, Robertson T. E.

    2014-01-01

    Previous research has demonstrated that altered auditory feedback (AAF) disrupts music performance and causes disruptions in both action planning and the perception of feedback events. It has been proposed that this disruption occurs because of interference within a shared representation for perception and action (Pfordresher, 2006). Studies reported here address this claim from the standpoint of error monitoring. In Experiment 1 participants performed short melodies on a keyboard while hearing no auditory feedback, normal auditory feedback, or alterations to feedback pitch on some subset of events. Participants overestimated error frequency when AAF was present but not for normal feedback. Experiment 2 introduced a concurrent load task to determine whether error monitoring requires executive resources. Although the concurrent task enhanced the effect of AAF, it did not alter participants’ tendency to overestimate errors when AAF was present. A third correlational study addressed whether effects of AAF are reduced for a subset of the population who may lack the kind of perception/action associations that lead to AAF disruption: poor-pitch singers. Effects of manipulations similar to those presented in Experiments 1 and 2 were reduced for these individuals. We propose that these results are consistent with the notion that AAF interference is based on associations between perception and action within a forward internal model of auditory-motor relationships. PMID:25191294

  13. Chain pooling to minimize prediction error in subset regression. [Monte Carlo studies using population models

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1974-01-01

    Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudorandom, normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.

  14. Uncertainties in climate data sets

    NASA Technical Reports Server (NTRS)

    Mcguirk, James P.

    1992-01-01

    Climate diagnostics are constructed from either analyzed fields or from observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections, errors and uncertainties exist which are generally ignored due to the consistency of usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational data sets (OLR, TOVS, and SMMR). It is suggested that these errors can be large, systematic, and not negligible in climate analysis.

  15. Amplitude/frequency differences in a supine resting single-lead electrocardiogram of normal versus coronary heart diseased males.

    DOT National Transportation Integrated Search

    1974-05-01

    A resting 'normal' ECG can coexist with known angina pectoris, positive angiocardiography and previous myocardial infarction. In contemporary exercise ECG tests, a false positive/false negative total error of 10% is not unusual. Research aimed at imp...

  16. 1 λ × 1.44 Tb/s free-space IM-DD transmission employing OAM multiplexing and PDM.

    PubMed

    Zhu, Yixiao; Zou, Kaiheng; Zheng, Zhennan; Zhang, Fan

    2016-02-22

    We report the experimental demonstration of a single wavelength terabit free-space intensity modulation direct detection (IM-DD) system employing both orbital angular momentum (OAM) multiplexing and polarization division multiplexing (PDM). In our experiment, 12 OAM modes with two orthogonal polarization states are used to generate 24 channels for transmission. Each channel carries a 30 Gbaud Nyquist PAM-4 signal. Therefore an aggregate gross capacity record of 1.44 Tb/s (12 × 2 × 30 × 2 Gb/s) is achieved with a modulation efficiency of 48 bits/symbol. After 0.8 m free-space transmission, the bit error rates (BERs) of all the channels are below the 20% hard-decision forward error correction (HD-FEC) threshold of 1.5 × 10(-2). After applying the decision directed recursive least square (DD-RLS) based filter and post filter, the BERs of the two polarizations can be reduced from 5.3 × 10(-3) and 7.3 × 10(-3) to 2.2 × 10(-3) and 3.4 × 10(-3), respectively.

  17. HARMONY: a server for the assessment of protein structures

    PubMed Central

    Pugalenthi, G.; Shameer, K.; Srinivasan, N.; Sowdhamini, R.

    2006-01-01

    Protein structure validation is an important step in computational modeling and structure determination. Stereochemical assessment of protein structures examines internal parameters such as bond lengths and Ramachandran (φ,ψ) angles. Gross structure prediction methods such as the inverse folding procedure, and structure determination especially at low resolution, can sometimes give rise to models that are incorrect due to assignment of misfolds or mistracing of electron density maps. Such errors are not reflected as strain in internal parameters. HARMONY is a procedure that examines the compatibility between the sequence and the structure of a protein by assigning scores to individual residues and their amino acid exchange patterns after considering their local environments. Local environments are described by the backbone conformation, solvent accessibility and hydrogen bonding patterns. We are now providing HARMONY through a web server such that users can submit their protein structure files and, if required, the alignment of homologous sequences. Scores are mapped onto the structure for subsequent examination, which is also useful for recognizing regions of possible local error in protein structures. The HARMONY server is available online. PMID:16844999

  18. Consistent lattice Boltzmann methods for incompressible axisymmetric flows

    NASA Astrophysics Data System (ADS)

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Yin, Linmao; Zhao, Ya; Chew, Jia Wei

    2016-08-01

    In this work, consistent lattice Boltzmann (LB) methods for incompressible axisymmetric flows are developed based on two efficient axisymmetric LB models available in the literature. In accord with their respective original models, the proposed axisymmetric models evolve within the framework of the standard LB method and the source terms contain no gradient calculations. Moreover, the incompressibility conditions are realized with the Hermite expansion, thus the compressibility errors arising in the existing models are expected to be reduced by the proposed incompressible models. In addition, an extra relaxation parameter is added to the Bhatnagar-Gross-Krook collision operator to suppress the effect of the ghost variable and thus the numerical stability of the present models is significantly improved. Theoretical analyses, based on the Chapman-Enskog expansion and the equivalent moment system, are performed to derive the macroscopic equations from the LB models and the resulting truncation terms (i.e., the compressibility errors) are investigated. In addition, numerical validations are carried out based on four well-acknowledged benchmark tests and the accuracy and applicability of the proposed incompressible axisymmetric LB models are verified.

  19. Thermal dye double indicator dilution measurement of lung water in man: comparison with gravimetric measurements.

    PubMed Central

    Mihm, F G; Feeley, T W; Jamieson, S W

    1987-01-01

    The thermal dye double indicator dilution technique for estimating lung water was compared with gravimetric analyses in nine human subjects who were organ donors. As observed in animal studies, the thermal dye measurement of extravascular thermal volume (EVTV) consistently overestimated gravimetric extravascular lung water (EVLW), the mean (SEM) difference being 3.43 (0.59) ml/kg. In eight of the nine subjects, the EVTV minus 3.43 ml/kg would yield an estimate of EVLW ranging from 3.23 ml/kg under to 3.37 ml/kg over the actual EVLW at the 95% confidence limits. Reproducibility, assessed with the standard error of the mean percentage, suggested that a 15% change in EVTV can be reliably detected with repeated measurements. One subject was excluded from analysis because the EVTV measurement grossly underestimated the actual EVLW; this error was associated with regional injury observed on gross examination of the lung. Experimental and clinical evidence suggests that the thermal dye measurement provides a reliable estimate of lung water in diffuse pulmonary oedema states. PMID:3616974

  20. BIOCHEMISTRY OF NORMAL AND IRRADIATED STRAINS OF HYMENOLEPIS DIMINUTA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fairbairn, D.; Wertheim, G.; Harpur, R.P.

    1961-09-01

    An irradiated strain of H. diminuta was developed in which morphological anomalies persisted for at least 7 generations. This and the normal strain from which it was derived were compared for biochemical differences, which might lead to the discovery of a biochemical lesion. No significant differences were found in fresh weights between normal and irradiated strains of H. diminuta. Samples of H. diminuta were then prepared, and their composition determined. There was a notable loss of carbohydrates by tapeworms during 24 hr of in vivo fasting, amounting to 56% and 62% in the normal and irradiated strains, respectively. On the other hand, lipids increased by 10% and protein by 4% in both strains, which suggests that these substances were not concerned with energy metabolism during starvation. The glycogen of both normal and irradiated strains of H. diminuta obtained from fed or fasted rats, determined directly or after maintenance of the parasites in glucose-saline, accounted for 99% of the alkali-stable carbohydrates, which in turn comprised about 96% of the total carbohydrates. In general, no notable differences in the growth, chemical composition, or gross metabolism between normal and irradiated strains of H. diminuta were recognized. Thus, the morphological changes due to irradiation previously described are not reflected in gross biochemical differences. (H.H.D.)

  1. Automatic detection of patient identification and positioning errors in radiation therapy treatment using 3-dimensional setup images.

    PubMed

    Jani, Shyam S; Low, Daniel A; Lamb, James M

    2015-01-01

    To develop an automated system that detects patient identification and positioning errors between 3-dimensional setup images and kilovoltage computed tomography (CT) planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
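
    The classification workflow above (image-similarity features fed to linear discriminant analysis, scored by 10-fold cross-validation) can be sketched as follows; the three Gaussian features and the hand-rolled two-class Fisher LDA are illustrative stand-ins, not the paper's actual image metrics or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for image-similarity features: class 0 =
# correctly matched image pairs, class 1 = wrong-patient or
# misaligned pairs (synthetic, not the study's measurements).
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n, 3)),
               rng.normal(2.0, 1.0, (n, 3))])
y = np.repeat([0, 1], n)

def lda_fit(X, y):
    """Two-class linear discriminant analysis with pooled covariance."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)                 # discriminant axis
    thresh = w @ (m0 + m1) / 2                       # midpoint cut-off
    return w, thresh

def lda_predict(X, w, thresh):
    return (X @ w > thresh).astype(int)

# 10-fold cross-validation, as in the study design.
idx = rng.permutation(len(y))
folds = np.array_split(idx, 10)
pred = np.empty_like(y)
for fold in folds:
    train = np.setdiff1d(idx, fold)
    w, t = lda_fit(X[train], y[train])
    pred[fold] = lda_predict(X[fold], w, t)

mce = np.mean(pred != y)             # misclassification error
sens = np.mean(pred[y == 1] == 1)    # error pairs flagged as errors
spec = np.mean(pred[y == 0] == 0)    # matched pairs left alone
print(f"MCE={mce:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```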

  2. How Hinge Positioning in Cross-Country Ski Bindings Affect Exercise Efficiency, Cycle Characteristics and Muscle Coordination during Submaximal Roller Skiing

    PubMed Central

    Bolger, Conor M.; Sandbakk, Øyvind; Ettema, Gertjan; Federolf, Peter

    2016-01-01

    The purposes of the current study were to 1) test if the hinge position in the binding of skating skis has an effect on gross efficiency or cycle characteristics and 2) investigate whether hinge positioning affects synergistic components of the muscle activation in six lower leg muscles. Eleven male skiers performed three 4-min sessions at moderate intensity while cross-country ski-skating and using a klapskate binding. Three different positions were tested for the binding’s hinge, ranging from the front of the first distal phalanx to the metatarsophalangeal joint. Gross efficiency and cycle characteristics were determined, and the electromyographic (EMG) signals of six lower limb muscles were collected. EMG signals were wavelet transformed, normalized, joined into a multi-dimensional vector, and submitted to a principal component analysis (PCA). Our results did not reveal any changes to gross efficiency or cycle characteristics when altering the hinge position. However, our EMG analysis found small but significant effects of hinge positioning on muscle coordinative patterns (P < 0.05). The changed patterns in muscle activation are in alignment with previously described mechanisms that explain the effects of hinge positioning in speed-skating klapskates. Finally, the within-subject results of the EMG analysis suggested that in addition to the between-subject effects, further forms of muscle coordination patterns appear to be employed by some, but not all, participants. PMID:27203597
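
    The EMG processing chain (normalized activation vectors joined into a multi-dimensional vector and submitted to a PCA) can be sketched with synthetic data; the dimensions and the single shared "coordination pattern" below are illustrative, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the EMG data: 33 trials (11 skiers x 3 hinge
# positions), each a normalized activation vector of 6 muscles x 20
# wavelet bins.  A single coordination pattern whose weight varies
# across trials plays the role of the dominant synergy; sizes and
# structure are illustrative only.
trials, dims = 33, 6 * 20
pattern = rng.normal(0.0, 1.0, dims)
weight = np.linspace(-1.0, 1.0, trials)
data = np.outer(weight, pattern) + 0.3 * rng.normal(0.0, 1.0, (trials, dims))

# PCA via SVD of the mean-centred trials x features matrix.
Xc = data - data.mean(axis=0, keepdims=True)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance fraction per component
scores = Xc @ Vt.T                # per-trial loadings on each component

print("PC1 variance fraction:", round(float(explained[0]), 2))
```

    Shifts of the per-trial loadings (`scores`) across hinge positions are the kind of coordinative-pattern effect the study tested for.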

  3. [Attempt to reduce the formaldehyde concentration by blowing cooled fresh air down in to the breathing zone of medical students from an admission port on the ceiling during gross anatomy class].

    PubMed

    Takayanagi, Masaaki; Sakai, Makoto; Ishikawa, Youichi; Murakami, Kunio; Kimura, Akihiko; Kakuta, Sachiko; Sato, Fumi

    2008-09-01

    Cadavers in gross anatomy laboratories at most medical schools are conventionally embalmed in formaldehyde solution, which is carcinogenic to humans. Medical students and instructors are thus exposed to formaldehyde vapors emitted from cadavers during dissection. As a provisional measure to reduce the high formaldehyde concentrations in the breathing zone of medical students above cadavers, dissection beds were located under existing admission ports on the ceiling so that cooled fresh air was supplied from the port and blown downward onto the cadaver. In all cases, compared with the normal condition, the downward flow of cooled fresh air from an admission port reduced formaldehyde concentrations in the air above a cadaver in the breathing zone of students by 0.09-0.98 ppm, i.e., to 12.6-65.4% of the original levels. The formaldehyde concentrations above cadavers under admission ports were not higher than the concentrations between beds, which represent the indoor formaldehyde level. Although the existing admission ports on the ceiling used in this study did not remove formaldehyde, the downflow of cooled fresh air using this system reduced the formaldehyde concentration in the air above cadavers attended by anatomy students during dissections. These results suggest the need to reduce formaldehyde levels in gross anatomy laboratories using fundamental countermeasures in order to satisfy the guideline of 0.08 ppm established by the World Health Organization and the Japan Ministry of Health, Labor and Welfare.

  4. Radiographic, computed tomographic, gross pathological and histological findings with suspected apical infection in 32 equine maxillary cheek teeth (2012-2015).

    PubMed

    Liuti, T; Smith, S; Dixon, P M

    2018-01-01

    Equine maxillary cheek teeth apical infections are a significant disorder because of frequent spread of infection to the supporting bones. The accuracy of computed tomographic imaging (CT) of this disorder has not been fully assessed. To compare the radiographic and CT findings in horses diagnosed with maxillary cheek teeth apical infections with pathological findings in the extracted teeth to assess the accuracy of these imaging techniques. Observational clinical study. Thirty-two maxillary cheek teeth (in 29 horses) diagnosed with apical infections by clinical, radiographic and principally by CT examinations, were extracted orally. The extracted teeth were subjected to further CT, gross pathological and histological examinations. Four normal teeth extracted from a cadaver served as controls. Pulpar and apical changes highly indicative of maxillary cheek teeth apical infection were present in all 32 teeth on CT, but in just 17/32 teeth (53%) radiographically. Gross pulpar/apical abnormalities and histological pulpar/periapical changes were present in 31/32 (97%) extracted teeth. On CT, one tooth contained small gas pockets in the apical aspect of one pulp and adjacent periodontal space, however no pathological changes were found following its extraction. The study is descriptive and is confined to a small number of cases. This study showed a 97% agreement between CT diagnosis of maxillary cheek teeth apical infection and the presence of pathological changes in the extracted teeth, confirming the diagnostic accuracy of CT compared with radiography for this disorder. © 2017 EVJ Ltd.

  5. Prevention of congenital defects induced by prenatal alcohol exposure (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Sheehan, Megan M.; Karunamuni, Ganga; Pedersen, Cameron J.; Gu, Shi; Doughman, Yong Qiu; Jenkins, Michael W.; Watanabe, Michiko; Rollins, Andrew M.

    2017-02-01

    Nearly 2 million women in the United States alone are at risk for an alcohol-exposed pregnancy, including more than 600,000 who binge drink. Even low levels of prenatal alcohol exposure (PAE) can lead to a variety of birth defects, including craniofacial and neurodevelopmental defects, as well as increased risk of miscarriages and stillbirths. Studies have also shown an interaction between drinking while pregnant and an increase in congenital heart defects (CHD), including atrioventricular septal defects and other malformations. We have previously established a quail model of PAE, modeling a single binge drinking episode in the third week of a woman's pregnancy. Using optical coherence tomography (OCT), we quantified intraventricular septum thickness, great vessel diameters, and atrioventricular valve volumes. Early-stage ethanol-exposed embryos had smaller cardiac cushions (valve precursors) and increased retrograde flow, while late-stage embryos presented with gross head/body defects, and exhibited smaller atrio-ventricular (AV) valves, interventricular septum, and aortic vessels. We previously showed that supplementation with the methyl donor betaine reduced gross defects, improved survival rates, and prevented cardiac defects. Here we show that these preventative effects are also observed with folate (another methyl donor) supplementation. Folate also appears to normalize retrograde flow levels which are elevated by ethanol exposure. Finally, preliminary findings have shown that glutathione, a crucial antioxidant, is noticeably effective at improving survival rates and minimizing gross defects in ethanol-exposed embryos. Current investigations will examine the impact of glutathione supplementation on PAE-related CHDs.

  6. Linear constraint relations in biochemical reaction systems: I. Classification of the calculability and the balanceability of conversion rates.

    PubMed

    van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C

    1994-01-05

    Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations, such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others can or cannot be calculated from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant, and the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is completely based on matrix algebra (principally different from the graph-theoretical approach), and it is easily implemented in a computer program. 
(c) 1994 John Wiley & Sons, Inc.
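
    A toy instance of the balancing step described above, assuming a hypothetical elemental balance for complete glucose oxidation with all four conversion rates measured (so all of them are redundant and balanceable). The projection used is the standard least-squares data reconciliation, with a chi-square-type residual statistic of the kind a gross error diagnosis would threshold; it is a sketch of the general technique, not the paper's classification algorithm:

```python
import numpy as np

# Hypothetical elemental balances (rows: C, H, O) for the conversion
# glucose + 6 O2 -> 6 CO2 + 6 H2O; columns are net conversion rates of
# [glucose, O2, CO2, H2O].  Any true rate vector r satisfies E @ r = 0.
E = np.array([[6.0, 0.0, 1.0, 0.0],
              [12.0, 0.0, 0.0, 2.0],
              [6.0, 2.0, 2.0, 1.0]])

r_true = np.array([-1.0, -6.0, 6.0, 6.0])

# Noisy measurements of all four rates: with more measured rates than
# degrees of freedom, the balances are redundant.
rng = np.random.default_rng(0)
sigma = 0.05
r_meas = r_true + sigma * rng.standard_normal(4)

# Least-squares balancing: project the measurements onto the null
# space of E, i.e. r_bal = (I - E^T (E E^T)^-1 E) @ r_meas.
P = E.T @ np.linalg.solve(E @ E.T, E)
r_bal = r_meas - P @ r_meas

# Chi-square-type statistic on the balance residuals, the quantity a
# gross error test would threshold (equal variances assumed here).
eps = E @ r_meas
h = eps @ np.linalg.solve(E @ E.T, eps) / sigma**2

print("balanced rates:", np.round(r_bal, 3))
print("residuals after balancing:", np.round(E @ r_bal, 12))
```

    Because the projection only removes the component of the measurement noise that violates the balances, the balanced rates are never farther from the true rates than the raw measurements.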

  7. Increased Error-Related Negativity (ERN) in Childhood Anxiety Disorders: ERP and Source Localization

    ERIC Educational Resources Information Center

    Ladouceur, Cecile D.; Dahl, Ronald E.; Birmaher, Boris; Axelson, David A.; Ryan, Neal D.

    2006-01-01

    Background: In this study we used event-related potentials (ERPs) and source localization analyses to track the time course of neural activity underlying response monitoring in children diagnosed with an anxiety disorder compared to age-matched low-risk normal controls. Methods: High-density ERPs were examined following errors on a flanker task…

  8. A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Cheevatanarak, Suchittra

    Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…

  9. High Resolution Qualitative and Quantitative MR Evaluation of the Glenoid Labrum

    PubMed Central

    Iwasaki, Kenyu; Tafur, Monica; Chang, Eric Y.; Statum, Sheronda; Biswas, Reni; Tran, Betty; Bae, Won C.; Du, Jiang; Bydder, Graeme M.; Chung, Christine B.

    2015-01-01

    Objective To implement qualitative and quantitative MR sequences for the evaluation of labral pathology. Methods Six glenoid labra were dissected and the anterior and posterior portions were divided into normal, mildly degenerated, or severely degenerated groups using gross and MR findings. Qualitative evaluation was performed using T1-weighted, proton density-weighted (PD), spoiled gradient echo (SPGR) and ultra-short echo time (UTE) sequences. Quantitative evaluation included T2 and T1rho measurements as well as T1, T2*, and T1rho measurements acquired with UTE techniques. Results SPGR and UTE sequences best demonstrated labral fiber structure. Degenerated labra had a tendency towards decreased T1 values, increased T2/T2* values and increased T1 rho values. T2* values obtained with the UTE sequence allowed for delineation between normal, mildly degenerated and severely degenerated groups (p<0.001). Conclusion Quantitative T2* measurements acquired with the UTE technique are useful for distinguishing between normal, mildly degenerated and severely degenerated labra. PMID:26359581

  10. Registration of 2D to 3D joint images using phase-based mutual information

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two dimensional to three dimensional orthopaedic medical image data has important applications, particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process, making it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst, gradient-based MI performed slightly better, and phase-based MI results were the best, consistently producing the lowest errors.
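
    A minimal histogram-based mutual information measure shows the quantity a registration optimizer climbs; this is plain intensity-based MI on synthetic images (the paper's phase-based variant would feed complex-wavelet phase maps in instead of raw intensities):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two equally-sized images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy check: MI of a (smoothed) image with itself exceeds MI with a
# shifted copy -- the gradient a registration scheme follows.
rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64))
img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3  # smooth a bit
shifted = np.roll(img, 5, axis=0)
print("aligned MI higher:",
      mutual_information(img, img) > mutual_information(img, shifted))
```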

  11. Improving the Non-Hydrostatic Numerical Dust Model by Integrating Soil Moisture and Greenness Vegetation Fraction Data with Different Spatiotemporal Resolutions.

    PubMed

    Yu, Manzhu; Yang, Chaowei

    2016-01-01

    Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~ 3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using measurements of four evaluation metrics (mean bias error, root mean square error, correlation coefficient and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of reproduced dust storm. The adjustment of both parameters enables NMM-dust to capture the spatial distribution of dust storms, as well as reproducing more accurate dust concentration.
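
    The four evaluation metrics named above can be computed directly; the definitions below are the standard ones (with fractional gross error in its common 2|m-o|/(m+o) form, which assumes positive quantities such as AOD) applied to hypothetical values, not the study's data:

```python
import numpy as np

def evaluation_metrics(model, obs):
    """Mean bias error, root mean square error, correlation
    coefficient, and fractional gross error (standard definitions)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    mbe = np.mean(model - obs)                    # mean bias error
    rmse = np.sqrt(np.mean((model - obs) ** 2))   # root mean square err.
    r = np.corrcoef(model, obs)[0, 1]             # correlation coeff.
    fge = np.mean(2 * np.abs(model - obs) / (model + obs))
    return mbe, rmse, r, fge

# Hypothetical AOD values illustrating a model that overestimates.
obs = np.array([0.10, 0.25, 0.40, 0.30, 0.20])
model = np.array([0.18, 0.35, 0.55, 0.42, 0.31])
mbe, rmse, r, fge = evaluation_metrics(model, obs)
print(f"MBE={mbe:.3f} RMSE={rmse:.3f} r={r:.3f} FGE={fge:.3f}")
```

    A positive MBE together with a low FGE and high r is the signature reported for the default NMM-dust configuration: right temporal trend, biased magnitude.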

  12. Variations in clinical presentation and anatomical distribution of gross lesions of African swine fever in domestic pigs in the southern highlands of Tanzania: a field experience.

    PubMed

    Kipanyula, Maulilio John; Nong'ona, Solomon Wilson

    2017-02-01

    African swine fever is a contagious viral disease responsible for up to 100% mortality among domestic pigs. A longitudinal study was carried out to determine the clinical presentation and anatomical distribution of gross lesions in affected pigs in Mbeya region, Tanzania during the 2010 to 2014 outbreaks. Data were collected during clinical and postmortem examination by field veterinarians and using a structured questionnaire. A total of 118 respondents (100%) showed awareness of African swine fever. During previous outbreaks, the mortality rate was almost 100%, while in 2014 it was estimated to be less than 50%. The clinical picture of the 2010-2012 outbreaks was characterized by high fever, depression, inappetence, mucosal congestion, hemorrhages, erythematous lesions in different body parts, and abortion. Several internal organs including the kidneys, spleen, and liver were congested and edematous. During the 2014 outbreak, a number of pigs (49.7%) were asymptomatic when brought to slaughter slabs but were found to have African swine fever gross lesions at postmortem examination, as compared to 12.3% in 2010-2012. Bluish discoloration, which is normally distributed on the non-hairy parts of the body, was not apparent in some pigs except at postmortem examination. Some pigs (36.1%) presented with nasal and/or oral bloody discharges, which were uncommon (9.1%) during previous outbreaks. Moreover, other gross features included enlarged dark red renal lymph nodes and spleen. Clinical signs such as anorexia, diarrhea, and pyrexia were mainly observed when affected pigs reached the moribund stage. The majority of pregnant sows died without presenting abortions. In some litters, suckling piglets (3-6 weeks) survived the disease. These findings indicated that in 2014, the African swine fever outbreak in Mbeya region was characterized by a different clinical picture.

  13. Seeking realistic upper-bounds for internal reliability of systems with uncorrelated observations

    NASA Astrophysics Data System (ADS)

    Prószyński, Witold

    2014-06-01

    From the theory of reliability it follows that the greater the observational redundancy in a network, the higher is its level of internal reliability. However, taking into account physical nature of the measurement process one may notice that the planned additional observations may increase the number of potential gross errors in a network, not raising the internal reliability to the theoretically expected degree. Hence, it is necessary to set realistic limits for a sufficient number of observations in a network. An attempt to provide principles for finding such limits is undertaken in the present paper. An empirically obtained formula (Adamczewski 2003) called there the law of gross errors, determining the chances that a certain number of gross errors may occur in a network, was taken as a starting point in the analysis. With the aid of an auxiliary formula derived on the basis of the Gaussian law, the Adamczewski formula was modified to become an explicit function of the number of observations in a network. This made it possible to construct tools necessary for the analysis and finally, to formulate the guidelines for determining the upper-bounds for internal reliability indices. Since the Adamczewski formula was obtained for classical networks, the guidelines should be considered as an introductory proposal requiring verification with reference to modern measuring techniques.

  14. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error, and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode, the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
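
    The total-energy bookkeeping behind this design can be sketched as follows. The signal names and gains are purely illustrative (the parenthesized numbers in the abstract refer to the patent's figures, which are not reproduced here); the sketch only shows how flight-path-angle and acceleration errors combine into energy-rate and distribution errors:

```python
# Total-energy control sketch (illustrative names and gains only).
G_ACCEL = 9.81  # m/s^2

def energy_errors(gamma_cmd, gamma, vdot_cmd, vdot):
    """Specific energy-rate error and energy-rate distribution error,
    both expressed in flight-path-angle units via gamma + Vdot/g."""
    flight_path_err = gamma_cmd - gamma
    accel_err = (vdot_cmd - vdot) / G_ACCEL
    e_rate = flight_path_err + accel_err   # total energy rate error
    e_dist = flight_path_err - accel_err   # energy distribution error
    return e_rate, e_dist

# Thrust works on the total-energy-rate error; the elevator trades
# potential for kinetic energy through the distribution error.
e_rate, e_dist = energy_errors(gamma_cmd=0.05, gamma=0.02,
                               vdot_cmd=0.0, vdot=0.3)
thrust_cmd = 2.0 * e_rate       # illustrative proportional gains only
elevator_cmd = -1.5 * e_dist
print(round(e_rate, 4), round(e_dist, 4))
```

    A climb commanded while decelerating can thus leave the total energy rate (and hence thrust) nearly unchanged, with the elevator alone redistributing energy, which is the decoupling the abstract describes.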

  15. Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Munoz, Cesar A.

    2007-01-01

    This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.

  16. Analysis of iodinated contrast delivered during thermal ablation: is material trapped in the ablation zone?

    PubMed

    Wu, Po-Hung; Brace, Chris L

    2016-08-21

    Intra-procedural contrast-enhanced CT (CECT) has been proposed to evaluate treatment efficacy of thermal ablation. We hypothesized that contrast material delivered concurrently with thermal ablation may become trapped in the ablation zone, and set out to determine whether such an effect would impact ablation visualization. CECT images were acquired during microwave ablation in normal porcine liver with: (A) normal blood perfusion and no iodinated contrast, (B) normal perfusion and iodinated contrast infusion or (C) no blood perfusion and residual iodinated contrast. Changes in CT attenuation were analyzed from before, during and after ablation to evaluate whether contrast was trapped inside of the ablation zone. Visualization was compared between groups using post-ablation contrast-to-noise ratio (CNR). Attenuation gradients were calculated at the ablation boundary and background to quantitate ablation conspicuity. In Group A, attenuation decreased during ablation due to thermal expansion of tissue water and water vaporization. The ablation zone was difficult to visualize (CNR = 1.57 ± 0.73, boundary gradient = 0.7 ± 0.4 HU mm(-1)), leading to ablation diameter underestimation compared to gross pathology. Group B ablations saw attenuation increase, suggesting that iodine was trapped inside the ablation zone. However, because attenuation in the normally perfused liver increased even more, Group B ablations were more visible than Group A (CNR = 2.04 ± 0.84, boundary gradient = 6.3 ± 1.1 HU mm(-1)) and allowed accurate estimation of the ablation zone dimensions compared to gross pathology. Water vaporization led to substantial attenuation changes in Group C, though the ablation zone boundary was not highly visible (boundary gradient = 3.9 ± 1.1 HU mm(-1)). Our results demonstrate that despite iodinated contrast being trapped in the ablation zone, ablation visibility was highest when contrast was delivered intra-procedurally. Therefore, CECT may be feasible for real-time thermal ablation monitoring.
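
    The contrast-to-noise ratio used to compare the groups can be sketched as below, with hypothetical attenuation samples standing in for the measured HU values (the numbers are illustrative, not the study's data):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute mean attenuation difference
    between an ROI and its background, over the background noise."""
    return abs(float(np.mean(roi) - np.mean(background))) / float(np.std(background))

rng = np.random.default_rng(4)
# Hypothetical HU samples: contrast-enhanced liver background vs. an
# ablation zone with trapped iodine.
liver = rng.normal(120.0, 15.0, 500)
ablation = rng.normal(90.0, 15.0, 500)
print("CNR:", round(cnr(ablation, liver), 2))
```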

  17. Gender Differences in Capitate Kinematics are Eliminated After Accounting for Variation in Carpal Size

    PubMed Central

    Rainbow, Michael J.; Moore, Douglas C.; Wolfe, Scott W.

    2012-01-01

    Previous studies have found gender differences in carpal kinematics, and there are discrepancies in the literature on the location of the flexion/extension and radio-ulnar deviation rotation axes of the wrist. It has been postulated that these differences are due to carpal bone size differences rather than gender and that they may be resolved by normalizing the kinematics by carpal size. The purpose of this study was to determine if differences in radio-capitate kinematics are a function of size or gender. We also sought to determine if a best-fit pivot point (PvP) describes the radio-capitate joint as a ball-and-socket articulation. By using an in vivo markerless bone registration technique applied to computed tomography scans of 26 male and 28 female wrists, we applied scaling derived from capitate length to radio-capitate kinematics, characterized by a best-fit PvP. We determined if radio-capitate kinematics behave as a ball-and-socket articulation by examining the error in the best-fit PvP. Scaling PvP location completely removed gender differences (P = 0.3). This verifies that differences in radio-capitate kinematics are due to size and not gender. The radio-capitate joint did not behave as a perfect ball and socket, because helical axes representing anatomical motions (flexion-extension, radio-ulnar deviation, dart-thrower's and anti-dart-thrower's motion) were located at distances up to 4.5 mm from the PvP. Although the best-fit PvP did not yield a single center of rotation, it was still consistently found within the proximal pole of the capitate, and rms errors of the best-fit PvP calculation were on the order of 2 mm. Therefore, the ball-and-socket model of the wrist joint center using the best-fit PvP is appropriate when considering gross motion of the hand with respect to the forearm such as in optical motion capture models. 
However, the ball-and-socket model of the wrist is an insufficient description of the complex motion of the capitate with respect to the radius. These findings may aid in the design of wrist external fixation and orthotics. PMID:18601445

  18. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    PubMed

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements on the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system aims to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. The Regional Economic Significance of Small Pilgrimage Sites: Local Value Creation in Biberbach (Bavarian Swabia)

    NASA Astrophysics Data System (ADS)

    Hilpert, Markus; Mahne-Bieder, Johannes; Stifter, Vanessa

    2016-09-01

    Today pilgrimage is experiencing increasing interest, although the motives are changing notably. Sites along pilgrimage routes normally profit economically, but the regional economic effects of very small pilgrimage sites remain underexplored to date. The economic impulses of the pilgrimage site in Biberbach (Bavaria) were therefore identified using interviews and calculations. Accordingly, the roughly 10,000 visitors per year generate nearly €30,000 in gross turnover and 0.5 jobs, so even small pilgrimage sites can evidently produce a notable increase in local value added.

  20. Computed tomography of orbital tumors in the dog.

    PubMed

    LeCouteur, R A; Fike, J R; Scagliotti, R H; Cann, C E

    1982-04-15

    Computed tomography (CT) was used to investigate orbital tumors in 3 dogs. Tumors were clearly defined on transverse CT scans by their inherent density and gross distortion of normal orbital anatomy. Dorsal images synthesized from the original transverse scans were also used to visualize size and extent of tumors. Use of an iodinated contrast medium did not appear to improve localization of tumors in the orbit but was useful for identification of tumor extension into the calvaria. It was concluded that CT offered advantages over existing methods of radiographic diagnosis of orbital tumors and exophthalmos.

  1. Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013)

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2013-01-01

    Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed…

  2. Cognitive Vulnerability in Patients with Generalized Anxiety Disorder, Dysthymic Disorder and Normal Individuals.

    PubMed

    Al-Ghorabaie, Fateme Moin; Noferesti, Azam; Fadaee, Mahdi; Ganji, Nima

    2016-08-01

    The purpose of this study was to assess cognitive vulnerability and response style in clinical and normal individuals. A sample of 90 individuals was selected across three groups: generalized anxiety disorder (GAD), dysthymic disorder, and normal controls. Participants completed the MCQ and RSQ. Results analyzed by MANOVA and post hoc tests showed significant differences among the groups. The dysthymic and GAD groups reported higher scores on cognitive confidence compared to the normal group. Individuals with GAD showed markedly negative beliefs about the need to control thoughts compared with the other groups, but did not differ from the normal group in cognitive self-consciousness. With regard to uncontrollability, danger and positive beliefs, the GAD group had higher levels than the other groups. Although the normal and GAD groups did not show any significant differences in response style, the dysthymic group differed significantly from the other groups in all response styles. Beliefs and meta-cognitive strategies can thus distinguish clinical from non-clinical individuals. Also, the findings support the Self-Regulatory Executive Function model.

  3. Evaluation of gap filling skills and reading mistakes of cochlear implanted and normally hearing students.

    PubMed

    Çizmeci, Hülya; Çiprut, Ayça

    2018-06-01

    This study aimed to (1) evaluate the gap filling skills and reading mistakes of students with cochlear implants, and (2) compare their results with those of their normal-hearing peers. The effects of implantation age and total duration of cochlear implant use on the development of reading skills were also analyzed. The study included 19 students who underwent cochlear implantation and 20 students with normal hearing, enrolled in the 6th to 8th grades and aged between 12 and 14 years. Reading skills were evaluated using the Informal Reading Inventory. A significant difference was found between implanted and normal-hearing students in both the percentage of reading errors and the gap filling scores. The average rank of reading errors among students using cochlear implants was higher than that of normal-hearing students, and in gap filling the implanted students performed below their normal-hearing peers. No significant effect of implantation age or duration of implant use on the reading performance of implanted students was found. Even among early-implanted students, reading performance in the upper grades differed significantly from that of their normal-hearing peers. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. GC-Content Normalization for RNA-Seq Data

    PubMed Central

    2011-01-01

    Background Transcriptome sequencing (RNA-Seq) has become the assay of choice for high-throughput studies of gene expression. However, as is the case with microarrays, major technology-related artifacts and biases affect the resulting expression measures. Normalization is therefore essential to ensure accurate inference of expression levels and subsequent analyses thereof. Results We focus on biases related to GC-content and demonstrate the existence of strong sample-specific GC-content effects on RNA-Seq read counts, which can substantially bias differential expression analysis. We propose three simple within-lane gene-level GC-content normalization approaches and assess their performance on two different RNA-Seq datasets, involving different species and experimental designs. Our methods are compared to state-of-the-art normalization procedures in terms of bias and mean squared error for expression fold-change estimation and in terms of Type I error and p-value distributions for tests of differential expression. The exploratory data analysis and normalization methods proposed in this article are implemented in the open-source Bioconductor R package EDASeq. Conclusions Our within-lane normalization procedures, followed by between-lane normalization, reduce GC-content bias and lead to more accurate estimates of expression fold-changes and tests of differential expression. Such results are crucial for the biological interpretation of RNA-Seq experiments, where downstream analyses can be sensitive to the supplied lists of genes. PMID:22177264
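
    The within-lane idea can be sketched with a simple binned adjustment (a deliberately simplified illustration, not the EDASeq implementation): genes are grouped into GC-content bins, and each bin's log-counts are shifted to a common median so that expression no longer trends with GC fraction.

```python
import numpy as np

def gc_normalize(counts, gc, n_bins=10):
    """Toy within-lane GC-content normalization.
    Genes are binned by GC fraction (quantile bins); each bin's
    log-counts are shifted so every bin shares the overall median."""
    counts = np.asarray(counts, dtype=float)
    logc = np.log(counts + 1.0)
    edges = np.quantile(gc, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(gc, edges[1:-1]), 0, n_bins - 1)
    overall = np.median(logc)
    adjusted = logc.copy()
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            adjusted[mask] += overall - np.median(logc[mask])
    return np.exp(adjusted) - 1.0   # back to the count scale
```

    After the adjustment, the correlation between log-counts and GC content should drop substantially, which is the bias-reduction criterion the paper evaluates more rigorously.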

  5. Three-dimensional lattice Boltzmann model for compressible flows.

    PubMed

    Sun, Chenghai; Hsu, Andrew T

    2003-07-01

    A three-dimensional compressible lattice Boltzmann model is formulated on a cubic lattice. A very large particle-velocity set is incorporated in order to enable a greater variation in the mean velocity. Meanwhile, the support set of the equilibrium distribution has only six directions. Therefore, this model can efficiently handle flows over a wide range of Mach numbers and capture shock waves. Due to the simple form of the equilibrium distribution, the fourth-order velocity tensors are not involved in the formulation. Unlike the standard lattice Boltzmann model, no special treatment is required for the homogeneity of fourth-order velocity tensors on square lattices. The Navier-Stokes equations were recovered, using the Chapman-Enskog method from the Bhatnagar-Gross-Krook (BGK) lattice Boltzmann equation. The second-order discretization error of the fluctuation velocity in the macroscopic conservation equation was eliminated by means of a modified collision invariant. The model is suitable for both viscous and inviscid compressible flows with or without shocks. Since the present scheme deals only with the equilibrium distribution that depends only on fluid density, velocity, and internal energy, boundary conditions on curved wall are easily implemented by an extrapolation of macroscopic variables. To verify the scheme for inviscid flows, we have successfully simulated a three-dimensional shock-wave propagation in a box and a normal shock of Mach number 10 over a wedge. As an application to viscous flows, we have simulated a flat plate boundary layer flow, flow over a cylinder, and a transonic flow over a NACA0012 airfoil cascade.

  6. The effect of retinol on the hyperthermal response of normal tissue in vivo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, M.A.; Marigold, J.C.; Hume, S.P.

    The effect of prior administration of retinol, a membrane labilizer, on the in vivo hyperthermal response of lysosomes was investigated in the mouse spleen using a quantitative histochemical assay for the lysosomal enzyme acid phosphatase. A dose of retinol which had no effect when given alone enhanced the thermal response of the lysosome, causing an increase in lysosomal membrane permeability. In contrast, the same dose of retinol had no effect on the gross hyperthermal response of mouse intestine, a tissue which is relatively susceptible to hyperthermia. Thermal damage to intestine was assayed directly by crypt loss 1 day after treatment or assessed as thermal enhancement of X-ray damage by counting crypt microcolonies 4 days after a combined heat and X-ray treatment. Thus, although the hyperthermal response of the lysosome could be enhanced by the administration of retinol, thermal damage at a gross tissue level appeared to be unaffected, suggesting that lysosomal membrane injury is unlikely to be a primary event in hyperthermal cell killing.

  7. Whole Wiskott‑Aldrich syndrome protein gene deletion identified by high throughput sequencing.

    PubMed

    He, Xiangling; Zou, Runying; Zhang, Bing; You, Yalan; Yang, Yang; Tian, Xin

    2017-11-01

    Wiskott‑Aldrich syndrome (WAS) is a rare X‑linked recessive immunodeficiency disorder, characterized by thrombocytopenia, small platelets, eczema and recurrent infections associated with increased risk of autoimmunity and malignancy disorders. Mutations in the WAS protein (WASP) gene are responsible for WAS. To date, WASP mutations, including missense/nonsense, splicing, small deletions, small insertions, gross deletions, and gross insertions have been identified in patients with WAS. In addition, WASP‑interacting proteins are suspected in patients with clinical features of WAS, in whom the WASP gene sequence and mRNA levels are normal. The present study aimed to investigate the application of next generation sequencing in definitive diagnosis and clinical therapy for WAS. A 5 month‑old child with WAS who displayed symptoms of thrombocytopenia was examined. Whole exome sequence analysis of genomic DNA showed that the coverage and depth of WASP were extremely low. Quantitative polymerase chain reaction indicated total WASP gene deletion in the proband. In conclusion, high throughput sequencing is useful for the verification of WAS on the genetic profile, and has implications for family planning guidance and establishment of clinical programs.

  8. Comparison of 2 Orthotic Approaches in Children With Cerebral Palsy.

    PubMed

    Wren, Tishya A L; Dryden, James W; Mueske, Nicole M; Dennis, Sandra W; Healy, Bitte S; Rethlefsen, Susan A

    2015-01-01

    To compare dynamic ankle-foot orthoses (DAFOs) and adjustable dynamic response (ADR) ankle-foot orthoses (AFOs) in children with cerebral palsy. A total of 10 children with cerebral palsy (4-12 years; 6 at Gross Motor Function Classification System level I, 4 at Gross Motor Function Classification System level III) and crouch and/or equinus gait wore DAFOs and ADR-AFOs, each for 4 weeks, in randomized order. Laboratory-based gait analysis, walking activity monitor, and parent-reported questionnaire outcomes were compared among braces and barefoot conditions. Children demonstrated better stride length (11-12 cm), hip extension (2°-4°), and swing-phase dorsiflexion (9°-17°) in both braces versus barefoot. Push-off power (0.3 W/kg) and knee extension (5°) were better in ADR-AFOs than in DAFOs. Parent satisfaction and walking activity (742 steps per day, 43 minutes per day) were higher for DAFOs. ADR-AFOs produce better knee extension and push-off power; DAFOs produce more normal ankle motion, greater parent satisfaction, and walking activity. Both braces provide improvements over barefoot.

  9. 26 CFR 1.61-1 - Gross income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Gross income. 1.61-1 Section 1.61-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Definition of Gross Income, Adjusted Gross Income, and Taxable Income § 1.61-1 Gross...

  10. Effects of land use change on soil gross nitrogen transformation rates in subtropical acid soils of Southwest China.

    PubMed

    Xu, Yongbo; Xu, Zhihong

    2015-07-01

    Land use change affects soil gross nitrogen (N) transformations, but such information is particularly lacking under subtropical conditions. A study was carried out to investigate the potential gross N transformation rates in forest and agricultural (converted from the forest) soils in subtropical China. The simultaneously occurring gross N transformations in soil were quantified by a (15)N tracing study under aerobic conditions. The results showed that change of land use types substantially altered most gross N transformation rates. The gross ammonification and nitrification rates were significantly higher in the agricultural soils than in the forest soils, while the reverse was true for the gross N immobilization rates. The higher total carbon (C) concentrations and C / N ratio in the forest soils relative to the agricultural soils were related to the greater gross N immobilization rates in the forest soils. The lower gross ammonification combined with negligible gross nitrification rates, but much higher gross N immobilization rates in the forest soils than in the agricultural soils suggest that this may be a mechanism to effectively conserve available mineral N in the forest soils through increasing microbial biomass N, the relatively labile organic N. The greater gross nitrification rates and lower gross N immobilization rates in the agricultural soils suggest that conversion of forests to agricultural soils may exert more negative effects on the environment by N loss through NO3 (-) leaching or denitrification (when conditions for denitrification exist).

  11. Quaternion normalization in spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Deutschmann, J.; Markley, F. L.; Bar-Itzhack, Itzhack Y.

    1993-01-01

    Attitude determination of spacecraft usually utilizes vector measurements such as Sun, center of Earth, star, and magnetic field direction to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in the three dimensional space. These measurements are usually processed by an extended Kalman filter (EKF) which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation were presented in the literature; namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a-priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude which corresponds to the a-priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion and then the updated quaternion estimate is obtained by the product of the a-priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a-priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a-priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in a faster convergence; however, when the attitude changed considerably between measurements, without filter tuning or normalization, the quaternion estimate diverged. 
However, when the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium we presented three new AEKF normalization techniques and compared them with the brute-force method presented in the literature. The present paper addresses the issue of normalization in the MEKF and examines several MEKF normalization techniques.
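
    The "brute force" normalization discussed above amounts to rescaling the four-tuple estimate to unit norm after each measurement update. A minimal sketch (function names are hypothetical, not the flight software):

```python
import numpy as np

def normalize_quaternion(q):
    """Brute-force normalization: rescale the 4-tuple to unit norm."""
    n = np.linalg.norm(q)
    if n == 0.0:
        raise ValueError("cannot normalize a zero quaternion")
    return q / n

def aekf_measurement_update(q_prior, dq):
    """AEKF-style update followed by normalization: add the estimated
    four-tuple correction to the a-priori quaternion, then renormalize
    so the estimate stays on the unit sphere."""
    return normalize_quaternion(q_prior + dq)
```

    In the multiplicative formulation the correction would instead be a quaternion product rather than an addition, but the renormalization step plays the same role.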

  12. Recent developments in measurement and evaluation of FAC damage in power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garud, Y.S.; Besuner, P.; Cohn, M.J.

    1999-11-01

    This paper describes some recent developments in the measurement and evaluation of flow-accelerated corrosion (FAC) damage in power plants. The evaluation focuses on data checking and smoothing to account for gross errors, noise, and uncertainty in the wall thickness measurements from ultrasonic or pulsed eddy-current data. Also, the evaluation method utilizes advanced regression analysis for spatial and temporal evolution of the wall loss, providing statistically robust predictions of wear rates and associated uncertainty. Results of the application of these new tools are presented for several components in actual service. More importantly, the practical implications of using these advances are discussed in relation to the likely impact on the scope and effectiveness of FAC related inspection programs.

  13. Preschool Speech Error Patterns Predict Articulation and Phonological Awareness Outcomes in Children with Histories of Speech Sound Disorders

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2013-01-01

    Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…

  14. Fault-Tolerant Computing: An Overview

    DTIC Science & Technology

    1991-06-01

    Addison Wesley, Reading, MA, 1984. [8] J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications (Elsevier North Holland, Inc., New York)... applicable to bit-sliced organizations of hardware. In the first time step, the normal computation is performed on the operands and the results... for error detection and fault tolerance in parallel processor systems while performing specific computation-intensive applications [111]. Contrary to

  15. Automated Identification of Abnormal Adult EEGs

    PubMed Central

    López, S.; Suarez, G.; Jungreis, D.; Obeid, I.; Picone, J.

    2016-01-01

    The interpretation of electroencephalograms (EEGs) is a process that is still dependent on the subjective analysis of the examiners. Though interrater agreement on critical events such as seizures is high, it is much lower on subtler events (e.g., when there are benign variants). The process used by an expert to interpret an EEG is quite subjective and hard to replicate by machine. The performance of machine learning technology is far from human performance. We have been developing an interpretation system, AutoEEG, with a goal of exceeding human performance on this task. In this work, we are focusing on one of the early decisions made in this process – whether an EEG is normal or abnormal. We explore two baseline classification algorithms: k-Nearest Neighbor (kNN) and Random Forest Ensemble Learning (RF). A subset of the TUH EEG Corpus was used to evaluate performance. Principal Components Analysis (PCA) was used to reduce the dimensionality of the data. kNN achieved a 41.8% detection error rate while RF achieved an error rate of 31.7%. These error rates are significantly lower than those obtained by random guessing based on priors (49.5%). The majority of the errors were related to misclassification of normal EEGs. PMID:27195311
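
    As a sketch of the kNN baseline used above (a generic illustration, not the AutoEEG code; in the study the feature vectors would be PCA-reduced EEG features):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbor classifier: Euclidean distance,
    majority vote among the k closest training examples."""
    preds = []
    for x in np.atleast_2d(X_test):
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k nearest
        labels, votes = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(votes)])    # majority vote
    return np.array(preds)
```

    The detection error rate is then simply the fraction of test labels the vote gets wrong.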

  16. Growth and development of children with congenital heart disease.

    PubMed

    Chen, Chi-Wen; Li, Chung-Yi; Wang, Jou-Kou

    2004-08-01

    Children with congenital heart disease (CHD) commonly experience delayed growth. Because growth and development are closely related, both should be considered when a child's progress is examined. This paper reports a study to evaluate and compare the growth and development of preschool children with CHD with those of normal preschool children. The heights and weights of 42 preschool children with CHD and 116 normal preschool children were compared with standard growth curves. Differences in the development of personal and social skills, fine motor skills and adaptability, language, and gross motor skills were evaluated. Developmental skills were assessed using the Denver Developmental Screening Test II. A significant difference was found in both body height (P < 0.05) and weight (P < 0.05) between the two groups. More preschoolers with congenital heart disease were below the 50th percentile in height (P < 0.05) and weight (P < 0.001). Preschoolers with CHD had more suspect interpretations than non-CHD preschoolers, specifically in the language (P < 0.01) and gross motor sections (P < 0.001). Nevertheless, there were two items in the personal-social section and one in the language section that the children with heart disease passed at rates of 55.6-63.2%. Problems were encountered with the Denver II test because of differences in language, culture and childrearing methods between Taiwanese and Western societies. These cultural differences must be considered when the test is used to assess development. Learning about the growth and developmental differences between children with CHD and normal children may help parents of the former to detect problems associated with delayed growth and development earlier. These children and their families should have the opportunity to participate in a long-term, follow-up programme that provides information and encourages developmental progress. The results could serve as a reference for both clinical and community workers who provide nursing care to children with CHD.

  17. 36 CFR 13.1102 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... under 100 tons gross (U.S. System) or 2,000 tons gross (International Convention System) engaged in... less than 200 tons gross (U.S. Tonnage “Simplified Measurement System”) and not more than 24 meters (79... means any motor vessel of at least 100 tons gross (U.S. System) or 2,000 tons gross (International...

  18. 36 CFR 13.1102 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... under 100 tons gross (U.S. System) or 2,000 tons gross (International Convention System) engaged in... less than 200 tons gross (U.S. Tonnage “Simplified Measurement System”) and not more than 24 meters (79... means any motor vessel of at least 100 tons gross (U.S. System) or 2,000 tons gross (International...

  19. Wound repair in Pocillopora

    USGS Publications Warehouse

    Rodríguez-Villalobos, Jenny Carolina; Work, Thierry M.; Calderon-Aguilera, Luis Eduardo

    2016-01-01

    Corals routinely lose tissue due to causes ranging from predation to disease. Tissue healing and regeneration are fundamental to the normal functioning of corals, yet we know little about this process. We described the microscopic morphology of wound repair in Pocillopora damicornis. Tissue was removed by airbrushing fragments from three healthy colonies, and these were monitored daily at the gross and microscopic level for 40 days. Grossly, corals healed by Day 30, but repigmentation was not evident at the end of the study (40 d). On histology, from Day 8 onwards, tissues at the lesion site were microscopically indistinguishable from adjacent normal tissues with evidence of zooxanthellae in gastrodermis. Inflammation was not evident. P. damicornis manifested a unique mode of regeneration involving projections of cell-covered mesoglea from the surface body wall that anastomosed to form gastrovascular canals.

  20. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
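
    The coverage analysis advocated here can be mimicked with a small Monte-Carlo check (a generic sketch for a normal-mean confidence interval, not the ordinal-CFA study itself): simulate many samples, build an interval from each, and count how often the true parameter is captured.

```python
import numpy as np

def coverage(true_mean, n, reps, z=1.96, seed=0):
    """Monte-Carlo estimate of confidence-interval coverage.
    For each replication, draw a sample, form the z-interval for the
    mean, and record whether it contains the true mean; a well-behaved
    95% interval should cover close to 95% of the time."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(true_mean, 1.0, n)
        half = z * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= half
    return hits / reps
```

    Systematic undercoverage (a fraction well below the nominal level) is exactly the symptom the paper reports for the non-Bayesian interval estimates.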

  1. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.
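
The coverage property at issue here can be checked directly by simulation: generate many samples, form an interval estimate from each, and count how often the interval contains the true parameter. Below is a minimal, hypothetical sketch for a normal-theory interval on a mean; it is not the ordinal CFA setting of the study, just an illustration of the coverage computation itself.

```python
import numpy as np

def ci_coverage(n=20, true_mean=0.0, reps=2000, z=1.96, seed=0):
    """Fraction of normal-theory 95% CIs that contain the true mean."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(true_mean, 1.0, n)
        half = z * x.std(ddof=1) / np.sqrt(n)
        hits += (x.mean() - half) <= true_mean <= (x.mean() + half)
    return hits / reps

coverage = ci_coverage()
# With a z critical value at n = 20, coverage sits slightly below the
# nominal 0.95 -- exactly the kind of undercoverage the study quantifies.
```

Swapping in a different estimator or a skewed data-generating distribution is a one-line change, which is what makes this style of check useful for comparing interval methods.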

  2. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1988-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
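
The reward-based performability calculation described above can be sketched for a toy semi-Markov model: combine the embedded chain's stationary distribution with the mean holding times to get time-weighted state probabilities, then average the per-state reward rates. All states, probabilities, holding times, and rates below are hypothetical illustrations, not the paper's measured model.

```python
import numpy as np

# Hypothetical 3-state semi-Markov model: 0 = normal, 1 = degraded, 2 = error.
P = np.array([[0.0, 0.9, 0.1],    # embedded-chain transition probabilities
              [0.8, 0.0, 0.2],
              [1.0, 0.0, 0.0]])
hold = np.array([100.0, 20.0, 5.0])   # mean holding times (need not be exponential)
rate = np.array([1.0, 0.5, 0.0])      # reward rate per state (service capacity)

# Stationary distribution of the embedded chain: solve pi P = pi, sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi_embed = np.linalg.lstsq(A, b, rcond=None)[0]

# Time-weighted state probabilities for the semi-Markov process.
pi_time = pi_embed * hold
pi_time /= pi_time.sum()

expected_reward = float(pi_time @ rate)   # long-run reward rate
```

The key semi-Markov point mirrored here is that only the mean holding time per state enters the long-run reward, so non-exponential holding-time distributions are handled without change.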

  3. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1987-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  4. Linguistic pattern analysis of misspellings of typically developing writers in grades 1-9.

    PubMed

    Bahr, Ruth Huntley; Silliman, Elaine R; Berninger, Virginia W; Dow, Michael

    2012-12-01

    A mixed-methods approach, evaluating triple word-form theory, was used to describe linguistic patterns of misspellings. Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in Grades 1-9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade-level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between Grades 4 and 5. Similar error types were noted across age groups, but the nature of linguistic feature error changed with age. Triple word-form theory was supported. By Grade 1, orthographic errors predominated, and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects nonlinear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling.

  5. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.

  6. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
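
The Huber-type M-estimation mentioned above downweights observations with large standardized residuals instead of letting them dominate the fit. Here is a self-contained sketch using iteratively reweighted least squares; the tuning constant c = 1.345 and the MAD scale estimate are common defaults, not necessarily the article's exact choices.

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """Huber M-estimation of regression coefficients via IRLS.
    c = 1.345 gives ~95% efficiency when errors really are normal."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD scale
        u = np.abs(r) / max(scale, 1e-12)
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))         # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Slope is recovered despite one gross outlier (all data are synthetic):
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, 50)
y[0] += 50.0                      # inject a gross outlier
X = np.column_stack([np.ones_like(x), x])
beta = huber_irls(X, y)
```

Under normal, homoscedastic errors the weights converge to 1 and the fit coincides with ordinary least squares, which is why robust methods lose little efficiency when the classical assumptions do hold.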

  7. Erratum: ``The Luminosity Function of IRAS Point Source Catalog Redshift Survey Galaxies'' (ApJ, 587, L89 [2003])

    NASA Astrophysics Data System (ADS)

    Takeuchi, Tsutomu T.; Yoshikawa, Kohji; Ishii, Takako T.

    2004-05-01

    We have mentioned that we normalized the parameters for the luminosity function by the Hubble constant H0=100 km s^-1 Mpc^-1; however, for the characteristic luminosity L* we erroneously normalized by H0=70 km s^-1 Mpc^-1. As a result, we quoted wrong numerical values for L*. In addition, there is a typographic error in the exponent of equation (6) of the published manuscript. The correct values are as follows: L*=(4.34+/-0.86)×10^8 h^-2 [Lsolar] for equation (4), and L*=(2.50+/-0.44)×10^9 h^-2 [Lsolar] and L*=(9.55+/-0.20)×10^8 h^-2 [Lsolar] for equations (5) and (6), respectively. All the other parameters are correct. The errors occurred only in the final conversion, and they do not affect our discussions and conclusions at all. We thank P. Ranalli for pointing out the errors.
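
The correction follows from the standard h scaling: luminosities inferred from fluxes scale as the square of the luminosity distance, hence as H0^-2, so a value computed with H0 = 70 km s^-1 Mpc^-1 converts to h^-2 units (H0 = 100h km s^-1 Mpc^-1) by multiplying by (70/100)^2 = 0.49. A one-line check, where the input value is hypothetical, chosen only to be consistent with the corrected equation (4):

```python
# Luminosity computed assuming H0 = 70 km/s/Mpc (i.e. h = 0.7), converted
# into h^-2 units by the (70/100)^2 factor: L ∝ D_L^2 ∝ H0^-2.
L_at_H0_70 = 8.86e8                                 # hypothetical L* [Lsun] at H0 = 70
L_in_h2_units = L_at_H0_70 * (70.0 / 100.0) ** 2    # -> ~4.34e8 h^-2 Lsun
```

Quoting in h^-2 units is the convention precisely so that readers can rescale to any preferred Hubble constant with this one multiplication.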

  8. APOLLO clock performance and normal point corrections

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Murphy, T. W., Jr.; Colmenares, N. R.; Battat, J. B. R.

    2017-12-01

    The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has produced a large volume of high-quality lunar laser ranging (LLR) data since it began operating in 2006. For most of this period, APOLLO has relied on a GPS-disciplined, high-stability quartz oscillator as its frequency and time standard. The recent addition of a cesium clock as part of a timing calibration system initiated a comparison campaign between the two clocks. This has allowed correction of APOLLO range measurements, called normal points, during the overlap period, and has also revealed a mechanism to correct for systematic range offsets due to clock errors in historical APOLLO data. Drift of the GPS clock on ∼1000 s timescales contributed typically 2.5 mm of range error to APOLLO measurements, and we find that this may be reduced to ∼1.6 mm on average. We present here a characterization of APOLLO clock errors, the method by which we correct historical data, and the resulting statistics.

  9. Modeling and calculation of impact friction caused by corner contact in gear transmission

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang; Chen, Siyu

    2014-09-01

    Corner contact in a gear pair causes vibration and noise, which has attracted much attention. However, tooth errors and deformation make it difficult to determine the point at which corner contact occurs and to study the mechanism of tooth impact friction. Based on the mechanism of corner contact, the process is divided into two stages, impact and scratch, and a calculation model including the gear equivalent error and combined deformation is established along the line of action. According to the distributive law, the gear equivalent error is synthesized from the base pitch error, normal backlash, and tooth profile modification on the line of action. The combined tooth compliance of the first point lying in corner contact before the normal path is inverted along the line of action, on the basis of the theory of engagement and the curve of tooth synthetic compliance and load history. Combining the equivalent error with the combined deflection, a criterion for locating the point of corner contact is derived. The impact positions and forces, from the beginning to the end of corner contact before the normal path, are then calculated accurately. From these results, the lash model during corner contact is established, and the impact force and friction coefficient are quantified. A numerical example is performed and the averaged impact friction coefficient based on the presented calculation method is validated. These results provide a reference for understanding the complex mechanism of tooth impact friction, for quantitative calculation of the friction force and coefficient, and for exact gear design for tribology.

  10. Location tests for biomarker studies: a comparison using simulations for the two-sample case.

    PubMed

    Scheinhardt, M O; Ziegler, A

    2013-01-01

    Gene, protein, or metabolite expression levels are often non-normally distributed, heavy-tailed, and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in cases of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy-tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy-tailed distributions.
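
The kind of type I error study described here can be reproduced in miniature: draw both groups from the same skewed distribution, apply a location test, and record the rejection rate at alpha = 0.05. The sketch below uses a simple two-sample permutation test on the mean difference rather than the specific tests compared in the article; the sample sizes and replication counts are arbitrary.

```python
import numpy as np

def type1_error(draw, n=25, alpha=0.05, reps=400, perms=199, seed=0):
    """Empirical type I error of a two-sample permutation test on the mean
    difference when both groups come from the same distribution."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x, y = draw(rng, n), draw(rng, n)
        observed = abs(x.mean() - y.mean())
        pooled = np.concatenate([x, y])
        exceed = 0
        for _ in range(perms):
            rng.shuffle(pooled)
            exceed += abs(pooled[:n].mean() - pooled[n:].mean()) >= observed
        p_value = (exceed + 1) / (perms + 1)   # valid permutation p-value
        rejections += p_value <= alpha
    return rejections / reps

# Skewed null: both groups exponential, so rejections should stay near alpha.
rate = type1_error(lambda rng, n: rng.gamma(1.0, 1.0, n))
```

Breaking the exchangeability assumption, for example by giving the two groups different variances, is exactly the modification under which the article reports that several tests fail to hold their level.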

  11. Evaluation of an Automatic Registration-Based Algorithm for Direct Measurement of Volume Change in Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Saradwata; Johnson, Timothy D.; Ma, Bing

    2012-07-01

    Purpose: Assuming that early tumor volume change is a biomarker for response to therapy, accurate quantification of early volume changes could aid in adapting an individual patient's therapy and lead to shorter clinical trials. We investigated an image registration-based approach for tumor volume change quantification that may more reliably detect smaller changes occurring over shorter intervals than can be detected by existing algorithms. Methods and Materials: Variance and bias of the registration-based approach were evaluated using retrospective, in vivo, very-short-interval diffusion magnetic resonance imaging scans, where true zero tumor volume change is unequivocally known, and synthetic data, respectively. The interval scans were nonlinearly registered using two similarity measures: mutual information (MI) and normalized cross-correlation (NCC). Results: The 95% confidence interval of the percentage volume change error was (-8.93%, 10.49%) for MI-based and (-7.69%, 8.83%) for NCC-based registrations. Linear mixed-effects models demonstrated that error in measuring volume change increased with tumor volume and decreased with the tumor's normalized mutual information, even when NCC was the similarity measure being optimized during registration. The 95% confidence interval of the relative volume change error for the synthetic examinations with known changes over ±80% of reference tumor volume was (-3.02%, 3.86%). Statistically significant bias was not demonstrated. Conclusion: A low-noise, low-bias tumor volume change measurement algorithm using nonlinear registration is described. Errors in change measurement were a function of tumor volume and the normalized mutual information content of the tumor.

  12. 29 CFR 794.122 - Ascertainment of “annual” gross sales volume.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Ascertainment of “annual” gross sales volume. 794.122... Annual Gross Volume of Sales § 794.122 Ascertainment of “annual” gross sales volume. The annual gross volume of sales of an enterprise engaged in the wholesale or bulk distribution of petroleum products...

  13. Morphological characterization of the AlphaA- and AlphaB-crystallin double knockout mouse lens

    PubMed Central

    Boyle, Daniel L; Takemoto, Larry; Brady, James P; Wawrousek, Eric F

    2003-01-01

    Background One approach to resolving some of the in vivo functions of alpha-crystallin is to generate animal models where one or both of the alpha-crystallin gene products have been eliminated. In the single alpha-crystallin knockout mice, the remaining alpha-crystallin may fully or partially compensate for some of the functions of the missing protein, especially in the lens, where both alphaA and alphaB are normally expressed at high levels. The purpose of this study was to characterize gross lenticular morphology in normal mice and mice with the targeted disruption of alphaA- and alphaB-crystallin genes (alphaA/BKO). Methods Lenses from 129SvEvTac mice and alphaA/BKO mice were examined by standard scanning electron microscopy and confocal microscopy methodologies. Results Equatorial and axial (sagittal) dimensions of lenses for alphaA/BKO mice were significantly smaller than age-matched wild-type lenses. No posterior sutures or fiber cells extending to the posterior capsule of the lens were found in alphaA/BKO lenses. Ectopic nucleic acid staining was observed in the posterior subcapsular region of 5 wk and anterior subcapsular cortex of 54 wk alphaA/BKO lenses. Gross morphological differences were also observed in the equatorial/bow, posterior and anterior regions of lenses from alphaA/BKO mice as compared to wild-type mice. Conclusion These results indicated that both alphaA- and alphaB-crystallin are necessary for proper fiber cell formation, and that the absence of alpha-crystallin can lead to cataract formation. PMID:12546709

  14. Radiobiological and treatment planning study of a simultaneously integrated boost for canine nasal tumors using helical tomotherapy.

    PubMed

    Gutiérrez, Alonso N; Deveau, Michael; Forrest, Lisa J; Tomé, Wolfgang A; Mackie, Thomas R

    2007-01-01

    Feasibility of delivering a simultaneously integrated boost to canine nasal tumors using helical tomotherapy to improve tumor control probability (TCP) via an increase in total biological equivalent uniform dose (EUD) was evaluated. Eight dogs with varying size nasal tumors (5.8-110.9 cc) were replanned to 42 Gy to the nasal cavity and integrated dose boosts to gross disease of 45.2, 48.3, and 51.3 Gy in 10 fractions. EUD values were calculated for tumors and mean normalized total doses (NTD(mean)) for organs at risk (OAR). Normal Tissue Complication Probability (NTCP) values were obtained for OARs, and estimated TCP values were computed using a logistic dose-response model and based on deliverable EUD boost doses. Significant increases in estimated TCP to 54%, 74%, and 86% can be achieved with 10%, 23%, and 37% mean relative EUD boosts to the gross disease, respectively. NTCP values for blindness of either eye and for brain necrosis were < 0.01% for all boosts. Values for cataract development were 31%, 42%, and 46% for studied boost schemas, respectively. Average NTD(mean) to eyes and brain for mean EUD boosts were 10.2, 11.3, and 12.1 Gy3, and 7.5, 7.2, and 7.9 Gy2, respectively. Using helical tomotherapy, simultaneously integrated dose boosts can be delivered to increase the estimated TCP at 1-year without significantly increasing the NTD(mean) to eyes and brain. Delivery of these treatments in a prospective trial may allow quantification of a dose-response relationship in canine nasal tumors.
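
A logistic dose-response model of the kind referenced above maps EUD to TCP through two parameters: the dose D50 giving 50% control probability and the normalized slope gamma50 at that dose. The parameter values below are hypothetical illustrations, not the canine nasal tumor fits from the study.

```python
def tcp_logistic(eud, d50, gamma50):
    """Logit-form dose response: TCP = 1 / (1 + (D50 / EUD)^(4 * gamma50))."""
    return 1.0 / (1.0 + (d50 / eud) ** (4.0 * gamma50))

# Hypothetical parameters (d50 = 45 Gy, gamma50 = 1.5): TCP rises
# monotonically across the boost levels examined in the study.
tcps = [tcp_logistic(eud, d50=45.0, gamma50=1.5)
        for eud in (42.0, 45.2, 48.3, 51.3)]
```

At EUD = D50 the model returns exactly 0.5 by construction, and gamma50 controls how steeply a given EUD boost translates into a TCP gain, which is the trade-off the boost schemas exploit.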

  15. V. Multi-level analysis of cortical neuroanatomy in Williams syndrome.

    PubMed

    Galaburda, A M; Bellugi, U

    2000-01-01

    The purpose of a neuroanatomical analysis of Williams Syndrome (WMS) brains is to help bridge the knowledge of the genetics of this disorder with the knowledge on behavior. Here, we outline findings of cortical neuroanatomy at multiple levels. We describe the gross anatomy with respect to brain shape, cortical folding, and asymmetry. This, as with most neuroanatomical information available in the literature on anatomical-functional correlations, links up best to the behavioral profile. Then, we describe the cytoarchitectonic appearance of the cortex. Further, we report on some histometric results. Finally, we present findings of immunocytochemistry that attempt to link up to the genomic deletion. The gross anatomical findings consist mainly of a small brain that shows curtailment in the posterior-parietal and occipital regions. There is also subtle dysmorphism of cortical folding. A consistent finding is a short central sulcus that does not become opercularized in the interhemispheric fissure, bringing attention to a possible developmental anomaly affecting the dorsal half of the hemispheres. There is also lack of asymmetry in the planum temporale. The cortical cytoarchitecture is relatively normal, with all sampled areas showing features typical of the region from which they are taken. Measurements in area 17 show increased cell size and decreased cell-packing density, which address the issue of possible abnormal connectivity. Immunostaining shows absence of elastin but normal staining for Lim-1 kinase, both of which are products of genes that are part of the deletion. Finally, one serially sectioned brain shows a fair amount of acquired pathology of microvascular origin related most likely to underlying hypertension and heart disease.

  16. Using normalized difference vegetation index to estimate carbon fluxes from small rotationally grazed pastures

    USGS Publications Warehouse

    Skinner, R.H.; Wylie, B.K.; Gilmanov, T.G.

    2011-01-01

    Satellite-based normalized difference vegetation index (NDVI) data have been extensively used for estimating gross primary productivity (GPP) and yield of grazing lands throughout the world. However, the usefulness of satellite-based images for monitoring rotationally-grazed pastures in the northeastern United States might be limited because paddock size is often smaller than the resolution limits of the satellite image. This research compared NDVI data from satellites with data obtained using a ground-based system capable of fine-scale (submeter) NDVI measurements. Gross primary productivity was measured by eddy covariance on two pastures in central Pennsylvania from 2003 to 2008. Weekly 250-m resolution satellite NDVI estimates were also obtained for each pasture from the moderate resolution imaging spectroradiometer (MODIS) sensor. Ground-based NDVI data were periodically collected in 2006, 2007, and 2008 from one of the two pastures. Multiple-regression and regression-tree estimates of GPP, based primarily on MODIS 7-d NDVI and on-site measurements of photosynthetically active radiation (PAR), were generally able to predict growing-season GPP to within an average of 3% of measured values. The exception was drought years when estimated and measured GPP differed from each other by 11 to 13%. Ground-based measurements improved the ability of vegetation indices to capture short-term grazing management effects on GPP. However, the eMODIS product appeared to be adequate for regional GPP estimates where total growing-season GPP across a wide area would be of greater interest than short-term management-induced changes in GPP at individual sites.
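
NDVI itself is a simple band ratio: the normalized difference of near-infrared and red reflectance. A minimal per-pixel implementation (the small epsilon guard against division by zero is an implementation choice, not part of the index definition):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, computed per pixel/sample."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon avoids 0/0 pixels

# Healthy vegetation reflects strongly in NIR and absorbs red light,
# pushing NDVI toward 1; bare soil or water sits near or below 0.
value = ndvi(0.5, 0.1)
```

The same function applies whether the reflectances come from 250-m MODIS pixels or submeter ground-based sensors; only the spatial support of the inputs changes, which is the resolution issue the study examines.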

  17. Bandwagon effects and error bars in particle physics

    NASA Astrophysics Data System (ADS)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
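
The comparison described, deviations between measurements expressed in units of the reported error bars (often called "pulls"), can be computed directly; if the reported errors were honest and Gaussian, the pulls should look roughly standard normal rather than exponential-tailed. A small sketch with made-up numbers:

```python
import numpy as np

def pulls(values, errors, reference):
    """Deviations from a reference value in units of each reported error bar."""
    v = np.asarray(values, dtype=float)
    e = np.asarray(errors, dtype=float)
    return (v - reference) / e

# Hypothetical measurements of the same quantity with quoted 1-sigma errors:
p = pulls([10.2, 9.8, 10.05], [0.1, 0.2, 0.05], reference=10.0)
# For honest Gaussian errors, ~68% of |pulls| fall below 1 and
# 5-sigma values are essentially absent.
```

An excess of large pulls, or pulls that trend with publication year, is the signature of underestimated error bars or bandwagon clustering that the study looks for.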

  18. Counting-backward test for executive function in idiopathic normal pressure hydrocephalus.

    PubMed

    Kanno, S; Saito, M; Hayashi, A; Uchiyama, M; Hiraoka, K; Nishio, Y; Hisanaga, K; Mori, E

    2012-10-01

    The aim of this study was to develop and validate a bedside test for executive function in patients with idiopathic normal pressure hydrocephalus (INPH). Twenty consecutive patients with INPH and 20 patients with Alzheimer's disease (AD) were enrolled in this study. We developed the counting-backward test for evaluating executive function in patients with INPH. Two indices that are considered to be reflective of the attention deficits and response suppression underlying executive dysfunction in INPH were calculated: the first-error score and the reverse-effect index. Performance on both the counting-backward test and standard neuropsychological tests for executive function was assessed in INPH and AD patients. The first-error score, reverse-effect index and the scores from the standard neuropsychological tests for executive function were significantly lower for individuals in the INPH group than in the AD group. The two indices for the counting-backward test in the INPH group were strongly correlated with the total scores for Frontal Assessment Battery and Phonemic Verbal Fluency. The first-error score was also significantly correlated with the error rate of the Stroop colour-word test and the score of the go/no-go test. In addition, we found that the first-error score highly distinguished patients with INPH from those with AD using these tests. The counting-backward test is useful for evaluating executive dysfunction in INPH and for differentiating between INPH and AD patients. In particular, the first-error score may reflect deficits in the response suppression related to executive dysfunction in INPH. © 2012 John Wiley & Sons A/S.

  19. An Expert System for the Evaluation of Cost Models

    DTIC Science & Technology

    1990-09-01

    contrast to the condition of equal error variance, called homoscedasticity. (Reference: Applied Linear Regression Models by John Neter, page 423) ... normal. (Reference: Applied Linear Regression Models by John Neter, page 125) ... Error terms correlated over time are said to be autocorrelated or serially correlated. (Reference: Applied Linear Regression Models by John Neter)

  20. Answering Contextually Demanding Questions: Pragmatic Errors Produced by Children with Asperger Syndrome or High-Functioning Autism

    ERIC Educational Resources Information Center

    Loukusa, Soile; Leinonen, Eeva; Jussila, Katja; Mattila, Marja-Leena; Ryder, Nuala; Ebeling, Hanna; Moilanen, Irma

    2007-01-01

    This study examined irrelevant/incorrect answers produced by children with Asperger syndrome or high-functioning autism (7-9-year-olds and 10-12-year-olds) and normally developing children (7-9-year-olds). The errors produced were divided into three types: in Type 1, the child answered the original question incorrectly, in Type 2, the child gave a…
