Sample records for member length errors

  1. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    1989-01-01

    Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that root-mean-square (rms) surface error and/or rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector f is linearly proportional to the member length errors e(sub M) of dimension NMEMB (the number of members) and joint errors e(sub J) of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing d(sup 2)(sub rms) is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well-known NP-complete problem in the operations research literature. Hence minimizing d(sup 2)(sub rms) is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d(sup 2)(sub rms). The plausibility of this technique rests on its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and was faster than the two- and three-interchange heuristic.
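
    As a rough illustration of the approach described above, the sketch below applies simulated annealing with two-interchange moves to a toy version of the assignment problem: measured member length errors are assigned to truss positions so that the rms of the resulting surface displacements is reduced. The influence matrix and error vector are random stand-ins, not the authors' model or data.

    ```python
    # Minimal sketch of simulated annealing for assigning measured member length
    # errors to truss positions so that rms surface distortion is reduced.
    # The influence matrix A and error vector e are random stand-ins; the real
    # problem would use the linear error-influence model described in the abstract.
    import numpy as np

    rng = np.random.default_rng(0)
    n_members, n_nodes = 30, 12
    A = rng.normal(size=(n_nodes, n_members))   # surface-error influence coefficients (assumed)
    e = rng.normal(scale=0.1, size=n_members)   # measured member length errors (assumed)

    def rms_distortion(perm):
        """RMS of best-fit surface displacements for a given member-to-position assignment."""
        return np.sqrt(np.mean((A @ e[perm]) ** 2))

    perm = rng.permutation(n_members)
    best, best_cost = perm.copy(), rms_distortion(perm)
    T = 1.0
    for step in range(20000):
        i, j = rng.integers(n_members, size=2)
        trial = perm.copy()
        trial[i], trial[j] = trial[j], trial[i]          # two-interchange move
        d_cost = rms_distortion(trial) - rms_distortion(perm)
        if d_cost < 0 or rng.random() < np.exp(-d_cost / T):
            perm = trial                                  # accept downhill or occasional uphill move
            if rms_distortion(perm) < best_cost:
                best, best_cost = perm.copy(), rms_distortion(perm)
        T *= 0.9995                                       # geometric cooling schedule
    print(f"best rms distortion: {best_cost:.4f}")
    ```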

  2. Analytical and Photogrammetric Characterization of a Planar Tetrahedral Truss

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Adams, Richard R.; Rhodes, Marvin D.

    1990-01-01

    Future space science missions are likely to require near-optical quality reflectors which are supported by a stiff truss structure. This support truss should conform closely to its intended shape to minimize its contribution to the overall surface error of the reflector. The current investigation was conducted to evaluate the planar surface accuracy of a regular tetrahedral truss structure by comparing the results of predicted and measured node locations. The truss is a 2-ring hexagonal structure composed of 102 equal-length truss members. Each truss member is nominally 2 meters in length between node centers and is comprised of a graphite/epoxy tube with aluminum nodes and joints. The axial stiffness and the length variation of the truss components were determined experimentally and incorporated into a static finite element analysis of the truss. From this analysis, the root mean square (RMS) surface error of the truss was predicted to be 0.11 mm (0.004 in). Photogrammetry tests were performed on the assembled truss to measure the normal displacements of the upper surface nodes and to determine if the truss would maintain its intended shape when subjected to repeated assembly. Considering the variation in the truss component lengths, the measured RMS error of 0.14 mm (0.006 in) in the assembled truss is relatively small. The test results also indicate that a repeatable truss surface is achievable. Several potential sources of error were identified and discussed.
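
    For reference, the RMS surface error compared here is simply the root mean square of the node normal displacements; a minimal sketch with illustrative values (not the photogrammetry data):

    ```python
    # Sketch: RMS surface error from measured normal displacements of the
    # upper-surface nodes (values below are illustrative, not the test data).
    import numpy as np

    normal_displacements_mm = np.array([0.05, -0.12, 0.20, -0.08, 0.15, -0.18])
    rms_error_mm = np.sqrt(np.mean(normal_displacements_mm ** 2))
    print(f"RMS surface error: {rms_error_mm:.3f} mm")
    ```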

  3. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error into the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
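
    The quoted sensitivities can be checked in a couple of lines, using the standard AVM relation in which the computed velocity scales as 1/cos(path angle) and proportionally with path length. The 45-degree nominal path angle below is an assumption made for the one-degree example; it is not stated in the abstract.

    ```python
    # Sketch: proportional velocity bias from acoustic-path angle and length errors,
    # using the standard AVM relation V = L / (2 cos(theta)) * (1/t_ab - 1/t_ba).
    # A 45-degree nominal path angle is assumed for the 1-degree example in the text.
    import numpy as np

    theta = np.radians(45.0)                  # nominal path angle (assumed)
    d_theta = np.radians(1.0)                 # 1 degree angle error
    angle_bias = np.tan(theta) * d_theta      # relative error: dV/V = tan(theta) * d(theta)
    length_bias = 1.0 / 100.0                 # 1 m error in a 100 m path: dV/V = dL/L

    print(f"angle bias: {angle_bias:.1%}, path-length bias: {length_bias:.1%}")
    ```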

  4. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for statics loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.

  5. Error or "act of God"? A study of patients' and operating room team members' perceptions of error definition, reporting, and disclosure.

    PubMed

    Espin, Sherry; Levinson, Wendy; Regehr, Glenn; Baker, G Ross; Lingard, Lorelei

    2006-01-01

    Calls abound for a culture change in health care to improve patient safety. However, effective change cannot proceed without a clear understanding of perceptions and beliefs about error. In this study, we describe and compare operative team members' and patients' perceptions of error, reporting of error, and disclosure of error. Thirty-nine interviews of team members (9 surgeons, 9 nurses, 10 anesthesiologists) and patients (11) were conducted at 2 teaching hospitals using 4 scenarios as prompts. Transcribed responses to open questions were analyzed by 2 researchers for recurrent themes using the grounded-theory method. Yes/no answers were compared across groups using chi-square analyses. Team members and patients agreed on what constitutes an error. Deviation from standards and negative outcome were emphasized as definitive features. Patients and nurse professionals differed significantly in their perception of whether errors should be reported. Nurses were willing to report only events within their disciplinary scope of practice. Although most patients strongly advocated full disclosure of errors (what happened and how), team members preferred to disclose only what happened. When patients did support partial disclosure, their rationales differed from those of team members. Both operative teams and patients define error in terms of breaking the rules and the concept of "no harm, no foul." These concepts pose challenges for treating errors as system failures. A strong culture of individualism pervades nurses' perception of error reporting, suggesting that interventions are needed to foster collective responsibility and a constructive approach to error identification.

  6. Team safety and innovation by learning from errors in long-term care settings.

    PubMed

    Buljac-Samardžić, Martina; van Woerkom, Marianne; Paauwe, Jaap

    2012-01-01

    Team safety and team innovation are underexplored in the context of long-term care. Understanding the issues requires attention to how teams cope with error. Team managers could have an important role in developing a team's error orientation and managing team membership instabilities. The aim of this study was to examine the impact of team member stability, team coaching, and a team's error orientation on team safety and innovation. A cross-sectional survey method was employed within 2 long-term care organizations. Team members and team managers received a survey that measured safety and innovation. Team members assessed member stability, team coaching, and team error orientation (i.e., problem-solving and blaming approach). The final sample included 933 respondents from 152 teams. Stable teams and teams with managers who take on the role of coach are more likely to adopt a problem-solving approach and less likely to adopt a blaming approach toward errors. Both error orientations are related to team member ratings of safety and innovation, but only the blaming approach is (negatively) related to manager ratings of innovation. Differences between members' and managers' ratings of safety are greater in teams with relatively high scores for the blaming approach and relatively low scores for the problem-solving approach. Team coaching was found to be positively related to innovation, especially in unstable teams. Long-term care organizations that wish to enhance team safety and innovation should encourage a problem-solving approach and discourage a blaming approach. Team managers can play a crucial role in this by coaching team members to see errors as sources of learning and improvement and ensuring that individuals will not be blamed for errors.

  7. Curves showing column strength of steel and duralumin tubing

    NASA Technical Reports Server (NTRS)

    Ross, Orrin E

    1929-01-01

    Given here is a set of column strength curves that are intended to simplify the method of determining the size of struts in an airplane structure when the load in the member is known. The curves will also simplify the checking of the strength of a strut if the size and length are known. With these curves, no computations are necessary, as in the case of the old-fashioned method of strut design. The process is so simple that draftsmen or others who are not entirely familiar with mechanics can check the strength of a strut without much danger of error.

  8. Determining the optimal window length for pattern recognition-based myoelectric control: balancing the competing effects of classification error and controller delay.

    PubMed

    Smith, Lauren H; Hargrove, Levi J; Lock, Blair A; Kuiken, Todd A

    2011-04-01

    Pattern recognition-based control of myoelectric prostheses has shown great promise in research environments, but has not been optimized for use in a clinical setting. To explore the relationship between classification error, controller delay, and real-time controllability, 13 able-bodied subjects were trained to operate a virtual upper-limb prosthesis using pattern recognition of electromyogram (EMG) signals. Classification error and controller delay were varied by training different classifiers with a variety of analysis window lengths ranging from 50 to 550 ms and either two or four EMG input channels. Offline analysis showed that classification error decreased with longer window lengths (p < 0.01). Real-time controllability was evaluated with the target achievement control (TAC) test, which prompted users to maneuver the virtual prosthesis into various target postures. The results indicated that user performance improved with lower classification error (p < 0.01) and was reduced with longer controller delay (p < 0.01), as determined by the window length. Therefore, both of these effects should be considered when choosing a window length; it may be beneficial to increase the window length if this results in a reduced classification error, despite the corresponding increase in controller delay. For the system employed in this study, the optimal window length was found to be between 150 and 250 ms, which is within acceptable controller delays for conventional multistate amplitude controllers.
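
    A sketch of the trade-off being described: longer analysis windows give each classification decision more data but add delay. The mean-absolute-value feature and the delay model (half a window plus a fixed processing time) are illustrative assumptions, not the study's exact pipeline.

    ```python
    # Sketch: sliding-window EMG feature extraction, illustrating how the analysis
    # window length trades classification information against controller delay.
    # The delay model (half the window plus processing time) is an assumption for
    # illustration, not the paper's exact formulation.
    import numpy as np

    fs = 1000                                   # sampling rate, Hz (assumed)
    emg = np.random.default_rng(1).normal(size=(4, 5 * fs))  # 4 channels, 5 s of synthetic EMG

    def window_features(x, win_ms, step_ms=50):
        """Mean-absolute-value feature per channel for each analysis window."""
        win, step = int(win_ms * fs / 1000), int(step_ms * fs / 1000)
        starts = range(0, x.shape[1] - win + 1, step)
        return np.array([np.mean(np.abs(x[:, s:s + win]), axis=1) for s in starts])

    for win_ms in (50, 150, 250, 550):
        feats = window_features(emg, win_ms)
        delay_ms = win_ms / 2 + 15              # assumed: half-window lag plus ~15 ms processing
        print(f"window {win_ms:>3} ms -> {len(feats)} decisions, approx. delay {delay_ms:.0f} ms")
    ```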

  9. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    PubMed

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient-based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
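
    A minimal sketch of the CSLR idea as described: a logistic model scores each result's probability of being erroneous, and a one-sided CUSUM of those scores signals when suspicious results accumulate. The coefficients, drift, and threshold below are invented for illustration, not the published model.

    ```python
    # Sketch of the CUSUM-Logistic Regression idea: score each incoming result with
    # a logistic model of error probability, then accumulate a one-sided CUSUM of
    # the scores and flag when it exceeds a threshold. Coefficients, drift, and the
    # threshold below are illustrative assumptions, not the published model.
    import numpy as np

    rng = np.random.default_rng(2)

    def error_probability(measured, predicted, b0=-4.0, b1=2.5):
        """Logistic model: larger |measured - predicted| -> higher error probability."""
        z = b0 + b1 * abs(measured - predicted)
        return 1.0 / (1.0 + np.exp(-z))

    def cusum_flag(probs, drift=0.2, threshold=2.0):
        s = 0.0
        for day, p in enumerate(probs):
            s = max(0.0, s + p - drift)        # one-sided CUSUM of daily error scores
            if s > threshold:
                return day                     # first day the chart signals
        return None

    # In-control results track the regression prediction; a bias appears on day 30.
    predicted = np.full(60, 140.0)
    measured = predicted + rng.normal(0, 1.0, size=60)
    measured[30:] += 4.0                       # simulated analyzer shift
    probs = [error_probability(m, p) for m, p in zip(measured, predicted)]
    print("CUSUM signals on day:", cusum_flag(probs))
    ```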

  10. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
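
    The usual way such a correction enters a maximum likelihood calculation is at the tips of the tree, where the observed base is treated as the true base with probability 1 − ε and as any other base with probability ε/3. A small sketch under that assumption (not necessarily the exact formulation tested in the paper):

    ```python
    # Sketch: folding an assumed per-base sequencing error rate into the tip
    # (leaf) partial likelihoods of a phylogenetic likelihood calculation. With
    # error rate eps, an observed base is the true base with probability 1 - eps
    # and any of the other three bases with probability eps / 3.
    import numpy as np

    BASES = "ACGT"

    def tip_partials(observed_base, eps=0.01):
        """Partial likelihood vector over true bases for one observed base."""
        p = np.full(4, eps / 3.0)
        p[BASES.index(observed_base)] = 1.0 - eps
        return p

    print(tip_partials("A", eps=0.02))   # [0.98, 0.0067, 0.0067, 0.0067]
    ```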

  11. Topological analysis of polymeric melts: chain-length effects and fast-converging estimators for entanglement length.

    PubMed

    Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin

    2009-09-01

    Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead-spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_e which operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends on both the polymer model and the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_e from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.

  12. Geodesy by radio interferometry - Effects of atmospheric modeling errors on estimates of baseline length

    NASA Technical Reports Server (NTRS)

    Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.

    1985-01-01

    Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm, at all elevations down to 5 deg elevation, and introduces errors into the estimates of baseline length of less than about 1 cm, for the multistation intercontinental experiment analyzed here.

  13. Novel TMEM98 mutations in pedigrees with autosomal dominant nanophthalmos

    PubMed Central

    Khorram, David; Choi, Michael; Roos, Ben R.; Stone, Edwin M.; Kopel, Teresa; Allen, Richard; Alward, Wallace L.M.; Scheetz, Todd E.

    2015-01-01

    Purpose Autosomal dominant nanophthalmos is an inherited eye disorder characterized by a structurally normal but smaller eye. Patients with nanophthalmos have high hyperopia (far-sightedness), a greater incidence of angle-closure glaucoma, and increased risk of surgical complications. In this study, the clinical features and the genetic basis of nanophthalmos were investigated in two large autosomal dominant nanophthalmos pedigrees. Methods Fourteen members of a Caucasian pedigree from the United States and 15 members of a pedigree from the Mariana Islands enrolled in a genetic study of nanophthalmos and contributed DNA samples. Twenty of 29 family members underwent eye examinations that included measurement of axial eye length and/or refractive error. The genetic basis of nanophthalmos in the pedigrees was studied with linkage analysis, whole exome sequencing, and candidate gene (i.e., TMEM98) sequencing to identify the nanophthalmos-causing gene. Results Nine members of the pedigree from the United States and 11 members of the pedigree from the Mariana Islands were diagnosed with nanophthalmos that is transmitted as an autosomal dominant trait. The patients with nanophthalmos had abnormally short axial eye lengths, which ranged from 15.9 to 18.4 mm. Linkage analysis of the nanophthalmos pedigree from the United States identified nine large regions of the genome (greater than 10 Mbp) that were coinherited with disease in this family. Genes within these “linked regions” were examined for disease-causing mutations using exome sequencing, and a His196Pro mutation was detected in the TMEM98 gene, which was recently reported to be a nanophthalmos gene. Sanger sequencing subsequently showed that all other members of this pedigree with nanophthalmos also carry the His196Pro TMEM98 mutation. Testing the Mariana Islands pedigree for TMEM98 mutations identified a 34 bp heterozygous deletion that spans the 3′ end of exon 4 in all affected family members. Neither TMEM98 mutation was detected in public exome sequence databases. Conclusions A recent report identified a single TMEM98 missense mutation in a nanophthalmos pedigree. Our discovery of two additional TMEM98 mutations confirms the important role of the gene in the pathogenesis of autosomal dominant nanophthalmos. PMID:26392740

  14. Novel TMEM98 mutations in pedigrees with autosomal dominant nanophthalmos.

    PubMed

    Khorram, David; Choi, Michael; Roos, Ben R; Stone, Edwin M; Kopel, Teresa; Allen, Richard; Alward, Wallace L M; Scheetz, Todd E; Fingert, John H

    2015-01-01

    Autosomal dominant nanophthalmos is an inherited eye disorder characterized by a structurally normal but smaller eye. Patients with nanophthalmos have high hyperopia (far-sightedness), a greater incidence of angle-closure glaucoma, and increased risk of surgical complications. In this study, the clinical features and the genetic basis of nanophthalmos were investigated in two large autosomal dominant nanophthalmos pedigrees. Fourteen members of a Caucasian pedigree from the United States and 15 members of a pedigree from the Mariana Islands enrolled in a genetic study of nanophthalmos and contributed DNA samples. Twenty of 29 family members underwent eye examinations that included measurement of axial eye length and/or refractive error. The genetic basis of nanophthalmos in the pedigrees was studied with linkage analysis, whole exome sequencing, and candidate gene (i.e., TMEM98) sequencing to identify the nanophthalmos-causing gene. Nine members of the pedigree from the United States and 11 members of the pedigree from the Mariana Islands were diagnosed with nanophthalmos that is transmitted as an autosomal dominant trait. The patients with nanophthalmos had abnormally short axial eye lengths, which ranged from 15.9 to 18.4 mm. Linkage analysis of the nanophthalmos pedigree from the United States identified nine large regions of the genome (greater than 10 Mbp) that were coinherited with disease in this family. Genes within these "linked regions" were examined for disease-causing mutations using exome sequencing, and a His196Pro mutation was detected in the TMEM98 gene, which was recently reported to be a nanophthalmos gene. Sanger sequencing subsequently showed that all other members of this pedigree with nanophthalmos also carry the His196Pro TMEM98 mutation. Testing the Mariana Islands pedigree for TMEM98 mutations identified a 34 bp heterozygous deletion that spans the 3' end of exon 4 in all affected family members. Neither TMEM98 mutation was detected in public exome sequence databases. A recent report identified a single TMEM98 missense mutation in a nanophthalmos pedigree. Our discovery of two additional TMEM98 mutations confirms the important role of the gene in the pathogenesis of autosomal dominant nanophthalmos.

  15. Error analysis of mechanical system and wavelength calibration of monochromator

    NASA Astrophysics Data System (ADS)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
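
    A sketch of the sine-drive relation that underlies this kind of calibration, assuming a simple Littrow-type grating equation and sin(theta) = screw displacement / sine-bar length; it shows how an error in the sine-bar length maps into a wavelength error, but it is not the paper's full mechanical error model.

    ```python
    # Sketch of the sine-drive relation assumed here: the screw displacement x and
    # the sine-bar length L set the grating angle through sin(theta) = x / L, and a
    # Littrow-type grating equation m * lambda = 2 * d * sin(theta) then maps an
    # error in L into a proportional wavelength error. Illustrative numbers only.
    import numpy as np

    d = 1e6 / 1200.0        # groove spacing in nm for a 1200 lines/mm grating (assumed)
    m = 1                   # diffraction order
    L_nominal = 200.0       # sine-bar length in mm (assumed)
    L_actual = 200.2        # sine bar 0.2 mm too long

    def wavelength(x_mm, L_mm):
        return 2.0 * d * (x_mm / L_mm) / m      # nm

    x = 30.0                                     # screw position, mm
    lam_nominal = wavelength(x, L_nominal)
    lam_actual = wavelength(x, L_actual)
    print(f"nominal {lam_nominal:.2f} nm, actual {lam_actual:.2f} nm, "
          f"error {lam_actual - lam_nominal:+.2f} nm")
    ```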

  16. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  17. A comparison of correlation-length estimation methods for the objective analysis of surface pollutants at Environment and Climate Change Canada.

    PubMed

    Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas

    2016-09-01

    An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. To produce an analysis requires knowledge of the observation and model errors, as well as their spatial correlation. This paper is devoted to the development of methods for estimating these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact-support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood method (ML), and the χ² diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and perform an estimation of both error variances and correlation length where both are non-uniform. We show that a local version of the HL method can capture accurately the error variances and correlation length at each observation site, provided that spatial variability is not too strong. However, the operational objective analysis requires only a single and globally valid correlation length. We examine whether any statistic of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods such as the global HL, ML, or χ² should be used. We found, in both the 1D simulations and using real data, that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants that are obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of the air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component of the analysis scheme.
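
    A toy version of the local HL idea, for orientation: covariances of innovations binned by station separation are fitted with an isotropic correlation model, whose amplitude gives the background error variance and whose scale gives the correlation length. The exponential correlation shape and the synthetic data are assumptions for illustration, not the paper's operational setup.

    ```python
    # Sketch of the (local) Hollingsworth-Lönnberg idea: covariances of innovations
    # (observation minus background) binned by station separation are fitted with an
    # isotropic correlation model; the fitted amplitude estimates the background
    # error variance and the fitted scale estimates the correlation length.
    # Synthetic binned covariances stand in for real innovation statistics.
    import numpy as np

    rng = np.random.default_rng(3)
    true_var, true_length = 4.0, 150.0                    # km
    r = np.linspace(10.0, 600.0, 30)                      # bin centres (km), r > 0
    cov = true_var * np.exp(-r / true_length) + rng.normal(0, 0.05, r.size)

    def fit_hl(r, cov, lengths=np.arange(20.0, 501.0, 5.0)):
        """Grid search over correlation length; variance solved in closed form."""
        best = None
        for L in lengths:
            c = np.exp(-r / L)                            # exponential correlation shape (assumed)
            var = np.dot(cov, c) / np.dot(c, c)           # least-squares amplitude
            resid = np.sum((cov - var * c) ** 2)
            if best is None or resid < best[0]:
                best = (resid, var, L)
        return best[1], best[2]

    var_hat, length_hat = fit_hl(r, cov)
    print(f"background error variance ~ {var_hat:.2f}, correlation length ~ {length_hat:.0f} km")
    ```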

  18. The Influence of Item Calibration Error on Variable-Length Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2013-01-01

    Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be "tailored" to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends…

  19. Analysis of Relationships between the Level of Errors in Leg and Monofin Movement and Stroke Parameters in Monofin Swimming

    PubMed Central

    Rejman, Marek

    2013-01-01

    The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. The random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. A distribution of the individual stroke parameters in which stroke frequency is increased optimally to the highest level that still allows stroke length to stabilize leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement and the avoidance of errors arising at the distal part of the fin provide for the progression of swimming speed. The key elements for improving monofin swimming technique, based on the analysis of errors committed, were designated. PMID:24149742

  20. Rapid, Accurate, and Non-Invasive Measurement of Zebrafish Axial Length and Other Eye Dimensions Using SD-OCT Allows Longitudinal Analysis of Myopia and Emmetropization

    PubMed Central

    Collery, Ross F.; Veth, Kerry N.; Dubis, Adam M.; Carroll, Joseph; Link, Brian A.

    2014-01-01

    Refractive errors in vision can be caused by aberrant axial length of the eye, irregular corneal shape, or lens abnormalities. Causes of eye length overgrowth include multiple genetic loci and visual parameters. We evaluate zebrafish as a potential animal model for studies of the genetic, cellular, and signaling basis of emmetropization and myopia. Axial length and other eye dimensions of zebrafish were measured using spectral-domain optical coherence tomography (SD-OCT). We used ocular lens and body metrics to normalize and compare eye size and relative refractive error (difference between observed retinal radial length and controls) in wild-type and lrp2 zebrafish. Zebrafish were dark-reared to assess effects of visual deprivation on eye size. Two relative measurements, ocular axial length to body length and axial length to lens diameter, were found to accurately normalize comparisons of eye sizes between different-sized fish (R2 = 0.9548, R2 = 0.9921). Ray-traced focal lengths of wild-type zebrafish lenses were equal to their retinal radii, while lrp2 eyes had longer retinal radii than focal lengths. Both genetic mutation (lrp2) and environmental manipulation (dark-rearing) caused elongated eye axes. lrp2 mutants had relative refractive errors of −0.327 compared to wild-types, and dark-reared wild-type fish had relative refractive errors of −0.132 compared to light-reared siblings. Therefore, zebrafish eye anatomy (axial length, lens radius, retinal radius) can be rapidly and accurately measured by SD-OCT, facilitating longitudinal studies of regulated eye growth and emmetropization. Specifically, genes homologous to human myopia candidates may be modified, inactivated, or overexpressed in zebrafish, and myopia-sensitizing conditions used to probe gene-environment interactions. Our studies provide a foundation for such investigations into genetic contributions that control eye size and impact refractive errors. PMID:25334040

  1. Quantifying precision of in situ length and weight measurements of fish

    USGS Publications Warehouse

    Gutreuter, S.; Krzoska, D.J.

    1994-01-01

    We estimated and compared errors in field-made (in situ) measurements of lengths and weights of fish. We made three measurements of length and weight on each of 33 common carp Cyprinus carpio, and on each of a total of 34 bluegills Lepomis macrochirus and black crappies Pomoxis nigromaculatus. Maximum total lengths of all fish were measured to the nearest 1 mm on a conventional measuring board. The bluegills and black crappies (85–282 mm maximum total length) were weighed to the nearest 1 g on a 1,000-g spring-loaded scale. The common carp (415–600 mm maximum total length) were weighed to the nearest 0.05 kg on a 20-kg spring-loaded scale. We present a statistical model for comparison of coefficients of variation of length (Cl) and weight (Cw). Expected Cl was near zero and constant across mean length, indicating that length can be measured with good precision in the field. Expected Cw decreased with increasing mean length, and was larger than expected Cl by 5.8 to over 100 times for the bluegills and black crappies, and by 3 to over 20 times for the common carp. Unrecognized in situ weighing errors bias the apparent content of unique information in weight, which is the information not explained by either length or measurement error. We recommend procedures to circumvent effects of weighing errors, including elimination of unnecessary weighing from routine monitoring programs. In situ weighing must be conducted with greater care than is common if the content of unique and nontrivial information in weight is to be correctly identified.

  2. Study on the special vision sensor for detecting position error in robot precise TIG welding of some key part of rocket engine

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzeng; Chen, Nian; Wang, Bin; Cao, Yipeng

    2005-01-01

    The rocket engine is a core component of aerospace transportation and thrust systems, and its research and development are very important in national defense, aviation, and aerospace. A novel vision sensor is developed, which can be used for error detection in arc length control and seam tracking in precise pulse TIG welding of the extending part of the rocket engine jet tube. The vision sensor has many advantages, such as high-quality imaging, compactness, and multiple functions. The optical design, mechanical design, and circuit design of the vision sensor are described in detail. Utilizing the mirror image of the tungsten electrode in the weld pool, a novel method is proposed to detect, from a single weld image, the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam. A calculation model for the method is proposed according to the relation between the tungsten electrode, the weld pool, the mirror image of the tungsten electrode in the weld pool, and the joint seam. New methodologies are given to detect the arc length and seam-tracking error. Through analysis of the experimental results, a system error correction method based on a linear function is developed to improve the detection precision of the arc length and seam-tracking error. Experimental results show that the final precision of the system reaches 0.1 mm in detecting the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam.

  3. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.

    PubMed

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han

    2017-09-07

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.

  4. Optimizing X-ray mirror thermal performance using matched profile cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin; Cocco, Daniele; Kelez, Nicholas

    2015-08-07

    To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11 below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.

  5. Does the Length of Elbow Flexors and Visual Feedback Have Effect on Accuracy of Isometric Muscle Contraction in Men after Stroke?

    PubMed Central

    Juodzbaliene, Vilma; Darbutas, Tomas; Skurvydas, Albertas

    2016-01-01

    The aim of the study was to determine the effect of different muscle lengths and visual feedback information (VFI) on the accuracy of isometric contraction of the elbow flexors in men after an ischemic stroke (IS). Materials and Methods. Maximum voluntary muscle contraction force (MVMCF) and an accurate determinate muscle force (20% of MVMCF) developed during an isometric contraction of the elbow flexors at 90° and 60° of elbow flexion were measured by an isokinetic dynamometer in healthy subjects (MH, n = 20) and subjects after an IS during their postrehabilitation period (MS, n = 20). Results. In order to evaluate the accuracy of the isometric contraction of the elbow flexors, absolute errors were calculated. The absolute errors provided information about the difference between the determinate and achieved muscle force. Conclusions. Both MH and MS subjects tend to make greater absolute errors when generating the determinate force at the greater elbow flexor length, despite the presence of VFI. Absolute errors also increase in both groups at the greater elbow flexor length without VFI. At the shorter elbow flexor length, MS subjects make greater absolute errors than MH subjects when generating the determinate force without VFI. PMID:27042670

  6. Multiple symbol partially coherent detection of MPSK

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1992-01-01

    It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.

  7. Measuring Scale Errors in a Laser Tracker’s Horizontal Angle Encoder Through Simple Length Measurement and Two-Face System Tests

    PubMed Central

    Muralikrishnan, B.; Blackburn, C.; Sawyer, D.; Phillips, S.; Bridges, R.

    2010-01-01

    We describe a method to estimate the scale errors in the horizontal angle encoder of a laser tracker in this paper. The method does not require expensive instrumentation such as a rotary stage or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands that are at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low order harmonic scale errors can be estimated from this data and may then be used to correct the encoder’s error map to improve the tracker’s angle measurement accuracy. We have demonstrated this for the second order harmonic in this paper. It is important to compensate for even order harmonics as their influence cannot be removed by averaging front face and back face measurements whereas odd orders can be removed by averaging. We tested six trackers from three different manufacturers. Two of those trackers are newer models introduced at the time of writing of this paper. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m away from a tracker were of the order of ± 65 μm before correcting the error map. They reduced to less than ± 25 μm after correcting the error map for second order scale errors. Newer trackers from the same manufacturers did not show this error. An older tracker from a third manufacturer also did not show this error. PMID:27134789
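
    A sketch of the estimation step described above: length errors measured at a series of azimuths are fitted, by least squares, with a second-order harmonic whose coefficients could then be folded into the encoder error map. The synthetic errors and harmonic amplitudes are illustrative, not test data from the paper.

    ```python
    # Sketch: estimating a second-order harmonic scale error in the horizontal
    # angle encoder from length measurements taken at several azimuthal positions.
    # Synthetic length errors stand in for measured front-face/back-face averages.
    import numpy as np

    rng = np.random.default_rng(4)
    az = np.radians(np.arange(0, 360, 20))                        # tracker azimuths
    true_a2, true_b2 = 40e-3, -25e-3                              # mm, assumed harmonic amplitudes
    length_err = true_a2 * np.cos(2 * az) + true_b2 * np.sin(2 * az) + rng.normal(0, 5e-3, az.size)

    # Least-squares fit of the second-order harmonic coefficients.
    X = np.column_stack([np.cos(2 * az), np.sin(2 * az)])
    (a2, b2), *_ = np.linalg.lstsq(X, length_err, rcond=None)
    print(f"estimated harmonic: {a2 * 1e3:.1f} um cos(2*az) + {b2 * 1e3:.1f} um sin(2*az)")
    ```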

  8. Error, stress, and teamwork in medicine and aviation: cross sectional surveys

    NASA Technical Reports Server (NTRS)

    Sexton, J. B.; Thomas, E. J.; Helmreich, R. L.

    2000-01-01

    OBJECTIVES: To survey operating theatre and intensive care unit staff about attitudes concerning error, stress, and teamwork and to compare these attitudes with those of airline cockpit crew. DESIGN: Cross sectional surveys. SETTING: Urban teaching and non-teaching hospitals in the United States, Israel, Germany, Switzerland, and Italy. Major airlines around the world. PARTICIPANTS: 1033 doctors, nurses, fellows, and residents working in operating theatres and intensive care units and over 30 000 cockpit crew members (captains, first officers, and second officers). MAIN OUTCOME MEASURES: Perceptions of error, stress, and teamwork. RESULTS: Pilots were least likely to deny the effects of fatigue on performance (26% v 70% of consultant surgeons and 47% of consultant anaesthetists). Most pilots (97%) and intensive care staff (94%) rejected steep hierarchies (in which senior team members are not open to input from junior members), but only 55% of consultant surgeons rejected such hierarchies. High levels of teamwork with consultant surgeons were reported by 73% of surgical residents, 64% of consultant surgeons, 39% of anaesthesia consultants, 28% of surgical nurses, 25% of anaesthetic nurses, and 10% of anaesthetic residents. Only a third of staff reported that errors are handled appropriately at their hospital. A third of intensive care staff did not acknowledge that they make errors. Over half of intensive care staff reported that they find it difficult to discuss mistakes. CONCLUSIONS: Medical staff reported that error is important but difficult to discuss and not handled well in their hospital. Barriers to discussing error are more important since medical staff seem to deny the effect of stress and fatigue on performance. Further problems include differing perceptions of teamwork among team members and reluctance of senior theatre staff to accept input from junior members.

  9. Error, stress, and teamwork in medicine and aviation: cross sectional surveys

    PubMed Central

    Sexton, J Bryan; Thomas, Eric J; Helmreich, Robert L

    2000-01-01

    Objectives: To survey operating theatre and intensive care unit staff about attitudes concerning error, stress, and teamwork and to compare these attitudes with those of airline cockpit crew. Design: Cross sectional surveys. Setting: Urban teaching and non-teaching hospitals in the United States, Israel, Germany, Switzerland, and Italy. Major airlines around the world. Participants: 1033 doctors, nurses, fellows, and residents working in operating theatres and intensive care units and over 30 000 cockpit crew members (captains, first officers, and second officers). Main outcome measures: Perceptions of error, stress, and teamwork. Results: Pilots were least likely to deny the effects of fatigue on performance (26% v 70% of consultant surgeons and 47% of consultant anaesthetists). Most pilots (97%) and intensive care staff (94%) rejected steep hierarchies (in which senior team members are not open to input from junior members), but only 55% of consultant surgeons rejected such hierarchies. High levels of teamwork with consultant surgeons were reported by 73% of surgical residents, 64% of consultant surgeons, 39% of anaesthesia consultants, 28% of surgical nurses, 25% of anaesthetic nurses, and 10% of anaesthetic residents. Only a third of staff reported that errors are handled appropriately at their hospital. A third of intensive care staff did not acknowledge that they make errors. Over half of intensive care staff reported that they find it difficult to discuss mistakes. Conclusions: Medical staff reported that error is important but difficult to discuss and not handled well in their hospital. Barriers to discussing error are more important since medical staff seem to deny the effect of stress and fatigue on performance. Further problems include differing perceptions of teamwork among team members and reluctance of senior theatre staff to accept input from junior members. PMID:10720356

  10. Implementation of a flow-dependent background error correlation length scale formulation in the NEMOVAR OSTIA system

    NASA Astrophysics Data System (ADS)

    Fiedler, Emma; Mao, Chongyuan; Good, Simon; Waters, Jennifer; Martin, Matthew

    2017-04-01

    OSTIA is the Met Office's Operational Sea Surface Temperature (SST) and Ice Analysis system, which produces L4 (globally complete, gridded) analyses on a daily basis. Work is currently being undertaken to replace the original OI (Optimal Interpolation) data assimilation scheme with NEMOVAR, a 3D-Var data assimilation method developed for use with the NEMO ocean model. A dual background error correlation length scale formulation is used for SST in OSTIA, as implemented in NEMOVAR. Short and long length scales are combined according to the ratio of the decomposition of the background error variances into short and long spatial correlations. The pre-defined background error variances vary spatially and seasonally, but not on shorter time-scales. If the derived length scales applied to the daily analysis are too long, SST features may be smoothed out. Therefore a flow-dependent component to determining the effective length scale has also been developed. The total horizontal gradient of the background SST field is used to identify regions where the length scale should be shortened. These methods together have led to an improvement in the resolution of SST features compared to the previous OI analysis system, without the introduction of spurious noise. This presentation will show validation results for feature resolution in OSTIA using the OI scheme, the dual length scale NEMOVAR scheme, and the flow-dependent implementation.
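
    A schematic of the two ingredients described here, with invented functional forms and numbers: a background error correlation built as a variance-weighted blend of short and long length scales, and an effective length scale that shrinks where the background SST gradient is large.

    ```python
    # Sketch of the two ideas in the abstract: (1) blend short and long correlation
    # length scales according to the split of background error variance, and
    # (2) shorten the effective length scale where the horizontal SST gradient is
    # large. The functional forms and thresholds are illustrative assumptions.
    import numpy as np

    def blended_correlation(r_km, var_short, var_long, L_short=40.0, L_long=400.0):
        """Background error correlation as a variance-weighted sum of two Gaussians."""
        w_short = var_short / (var_short + var_long)
        w_long = 1.0 - w_short
        return (w_short * np.exp(-0.5 * (r_km / L_short) ** 2)
                + w_long * np.exp(-0.5 * (r_km / L_long) ** 2))

    def flow_dependent_length(L_km, grad_sst, grad_ref=0.05):
        """Shrink the length scale where |grad SST| (K/km) exceeds a reference value."""
        return L_km / (1.0 + grad_sst / grad_ref)

    print(blended_correlation(100.0, var_short=0.04, var_long=0.16))
    print(flow_dependent_length(400.0, grad_sst=0.2))   # strong front -> much shorter scale
    ```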

  11. ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers

    PubMed Central

    Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.

    2009-01-01

    Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211

  12. Skilled adult readers activate the meanings of high-frequency words using phonology: Evidence from eye tracking.

    PubMed

    Jared, Debra; O'Donnell, Katrina

    2017-02-01

    We examined whether highly skilled adult readers activate the meanings of high-frequency words using phonology when reading sentences for meaning. A homophone-error paradigm was used. Sentences were written to fit 1 member of a homophone pair, and then 2 other versions were created in which the homophone was replaced by its mate or a spelling-control word. The error words were all high-frequency words, and the correct homophones were either higher-frequency words or low-frequency words-that is, the homophone errors were either the subordinate or dominant member of the pair. Participants read sentences as their eye movements were tracked. When the high-frequency homophone error words were the subordinate member of the homophone pair, participants had shorter immediate eye-fixation latencies on these words than on matched spelling-control words. In contrast, when the high-frequency homophone error words were the dominant member of the homophone pair, a difference between these words and spelling controls was delayed. These findings provide clear evidence that the meanings of high-frequency words are activated by phonological representations when skilled readers read sentences for meaning. Explanations of the differing patterns of results depending on homophone dominance are discussed.

  13. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  14. Clinical Dental Faculty Members' Perceptions of Diagnostic Errors and How to Avoid Them.

    PubMed

    Nikdel, Cathy; Nikdel, Kian; Ibarra-Noriega, Ana; Kalenderian, Elsbeth; Walji, Muhammad F

    2018-04-01

    Diagnostic errors are increasingly recognized as a source of preventable harm in medicine, yet little is known about their occurrence in dentistry. The aim of this study was to gain a deeper understanding of clinical dental faculty members' perceptions of diagnostic errors, types of errors that may occur, and possible contributing factors. The authors conducted semi-structured interviews with ten domain experts at one U.S. dental school in May-August 2016 about their perceptions of diagnostic errors and their causes. The interviews were analyzed using an inductive process to identify themes and key findings. The results showed that the participants varied in their definitions of diagnostic errors. While all identified missed diagnosis and wrong diagnosis, only four participants perceived that a delay in diagnosis was a diagnostic error. Some participants perceived that an error occurs only when the choice of treatment leads to harm. Contributing factors associated with diagnostic errors included the knowledge and skills of the dentist, not taking adequate time, lack of communication among colleagues, and cognitive biases such as premature closure based on previous experience. Strategies suggested by the participants to prevent these errors were taking adequate time when investigating a case, forming study groups, increasing communication, and putting more emphasis on differential diagnosis. These interviews revealed differing perceptions of dental diagnostic errors among clinical dental faculty members. To address the variations, the authors recommend adopting shared language developed by the medical profession to increase understanding.

  15. Ensemble Data Mining Methods

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
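
    The committee argument can be made concrete with a little binomial arithmetic: if members err independently (the ideal complementary case) with probability below one half, the majority vote errs far less often. A small sketch, with an assumed individual error rate of 0.3:

    ```python
    # Sketch of the committee argument in the abstract: independent, complementary
    # members that each err with probability p < 0.5 give a majority vote that errs
    # far less often. Binomial arithmetic only; no learning library is assumed.
    from math import comb

    def majority_vote_error(p, n_members):
        """P(majority is wrong) for n independent members with individual error p."""
        k_needed = n_members // 2 + 1
        return sum(comb(n_members, k) * p**k * (1 - p)**(n_members - k)
                   for k in range(k_needed, n_members + 1))

    for n in (1, 5, 11, 21):
        print(n, round(majority_vote_error(0.3, n), 4))
    ```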

  16. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was then chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
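
    The composite idea can be sketched as follows; the linear, lead-based weighting used here is an assumption made only for illustration, since the abstract does not state the actual weighting scheme, and the forecast and backcast series are hypothetical.

      import numpy as np

      def composite_estimate(forecast, backcast):
          # Blend a lead-l forecast (filling the gap forward in time) with a
          # backcast (filling it in reverse). Linear lead-based weights are an
          # assumption; the study's actual weighting is not given in the abstract.
          L = len(forecast)
          lead = np.arange(1, L + 1)
          w_forecast = (L + 1 - lead) / (L + 1)   # more weight near the start of the gap
          w_backcast = 1.0 - w_forecast           # more weight near the end of the gap
          return w_forecast * forecast + w_backcast * backcast

      # Hypothetical 10-day gap in a log-transformed flow record
      forecast = np.linspace(2.0, 2.5, 10)   # TFN-style forward estimates
      backcast = np.linspace(2.1, 2.4, 10)   # ARIMA-style reverse estimates
      print(composite_estimate(forecast, backcast))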

  17. An evaluation of water vapor radiometer data for calibration of the wet path delay in very long baseline interferometry experiments

    NASA Technical Reports Server (NTRS)

    Kuehn, C. E.; Himwich, W. E.; Clark, T. A.; Ma, C.

    1991-01-01

    The internal consistency of the baseline-length measurements derived from analysis of several independent VLBI experiments is an estimate of the measurement precision. The paper investigates whether the inclusion of water vapor radiometer (WVR) data as an absolute calibration of the propagation delay due to water vapor improves the precision of VLBI baseline-length measurements. The paper analyzes 28 International Radio Interferometric Surveying runs between June 1988 and January 1989; WVR measurements were made during each session. The addition of WVR data decreased the scatter of the length measurements of the baselines by 5-10 percent. The observed reduction in the scatter of the baseline lengths is less than what is expected from the behavior of the formal errors, which suggests that the baseline-length measurement precision should improve 10-20 percent if WVR data are included in the analysis. The discrepancy between the formal errors and the baseline-length results can be explained as the consequence of systematic errors in the dry-mapping function parameters, instrumental biases in the WVR and the barometer, or both.

  18. Connector adapter

    NASA Technical Reports Server (NTRS)

    Dean, Richard J. (Inventor); Hacker, Scott C. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)

    2007-01-01

    An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.

  19. Telomere length analysis in Down syndrome birth.

    PubMed

    Bhaumik, Pranami; Bhattacharya, Mandar; Ghosh, Priyanka; Ghosh, Sujay; Kumar Dey, Subrata

    2017-06-01

    Human reproductive fitness depends upon telomere chemistry. Maternal age, meiotic nondisjunction error, and the telomere length of the mother of a trisomic child are associated in some way. Reports on maternal inheritance of telomere length in children with Down syndrome are very scanty. To investigate this, we collected peripheral blood from 170 mothers of children with Down syndrome and 186 age-matched mothers of euploid children, together with their newborn babies. Telomere length was measured by a restriction digestion-Southern blotting technique. Meiotic nondisjunction error was detected by STR genotyping. Subjects were classified by age (old >35 years and young <35 years) and by meiotic error (MI and MII). Linear regression was run to explore the relationship between age and telomere length in each maternal group. The study reveals that telomere length erodes with age. Old MII mothers carry the shortest telomeres (p<0.001), control mothers have the longest, and MI mothers lie in between. Babies from older mothers have longer telomeres (p<0.001); moreover, telomeres are longer in Down syndrome babies than in control babies (p<0.001). To conclude, this study describes not only the relation between maternal aging and telomere length but also explores the maternal heritability of telomere length in families with a Down syndrome child. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing

    PubMed Central

    Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu

    2017-01-01

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%. PMID:28880254
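
    A rough sketch of the image-processing half of such a pipeline is given below, using plain Otsu thresholding as a stand-in for the paper's hybrid binarization; the file name, calibration factor, and width estimate are all hypothetical and only illustrate how a working distance from the ultrasonic sensor converts pixel counts into millimetres.

      import cv2
      import numpy as np

      # Hypothetical inputs: a grayscale crack patch and the ultrasonic working distance.
      img = cv2.imread("crack_patch.png", cv2.IMREAD_GRAYSCALE)
      working_distance_m = 2.0                     # from the ultrasonic displacement sensor
      mm_per_pixel = 0.05 * working_distance_m     # hypothetical camera calibration factor

      # Otsu thresholding as a simple stand-in for the hybrid binarization step
      _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

      # Crude width estimate: crack area divided by the number of columns the crack spans
      crack_pixels = int(np.count_nonzero(binary))
      cols_with_crack = int(np.count_nonzero(binary.any(axis=0)))
      mean_width_mm = crack_pixels / max(cols_with_crack, 1) * mm_per_pixel
      print(f"mean crack width ~ {mean_width_mm:.2f} mm")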

  1. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors, such as the second and third order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to millimeter-level accuracy using the proposed correction formulas.
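
    As context, the sketch below shows the standard first-order ionospheric group delay and the usual dual-frequency ionosphere-free combination; the higher-order terms and bending effects treated in the paper are exactly what this textbook combination leaves uncorrected. The TEC value is a hypothetical daytime figure.

      def first_order_iono_delay(tec, f):
          # First-order ionospheric group delay in metres,
          # with tec in electrons/m^2 and f in Hz (40.3 is the standard SI constant).
          return 40.3 * tec / f**2

      def iono_free_range(p1, p2, f1, f2):
          # Dual-frequency ionosphere-free combination of two pseudoranges.
          # It removes only the first-order term; the second/third-order terms,
          # ray bending, and the frequency dependence of TEC remain as residuals.
          g = (f1**2) / (f1**2 - f2**2)
          return g * p1 - (g - 1.0) * p2

      f1, f2 = 1575.42e6, 1227.60e6            # GPS L1 and L2 carrier frequencies
      tec = 50e16                              # 50 TEC units, a typical daytime value
      print(first_order_iono_delay(tec, f1))   # roughly 8 m of delay at L1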

  2. The use of source memory to identify one's own episodic confusion errors.

    PubMed

    Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R

    2001-03-01

    In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.

  3. Optimization of multimagnetometer systems on a spacecraft

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.

    1975-01-01

    The problem of optimizing the position of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic for spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution by the magnetometer inaccuracy is increased as the number of magnetometers is increased, whereas the spacecraft field uncertainty is diminished by an appreciably larger amount.
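
    The classical two-magnetometer (gradiometer) idea behind such optimizations can be sketched as follows, assuming a single field component and a purely dipolar (1/r^3) spacecraft field; the paper's analysis treats the full vector problem, instrument errors, and up to four sensors.

      def ambient_field_dual_magnetometer(b1, b2, r1, r2):
          # Separate a spacecraft dipole field from the ambient field using two
          # magnetometers on a boom (single component, pure 1/r^3 falloff assumed).
          k1, k2 = 1.0 / r1**3, 1.0 / r2**3
          m = (b1 - b2) / (k1 - k2)        # effective spacecraft dipole strength
          return b1 - m * k1               # ambient field estimate

      # Hypothetical readings (nT) at 4 m and 6 m along the boom
      print(ambient_field_dual_magnetometer(b1=52.0, b2=50.6, r1=4.0, r2=6.0))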

  4. A Study on Sixth Grade Students' Misconceptions and Errors in Spatial Measurement: Length, Area, and Volume

    ERIC Educational Resources Information Center

    Tan Sisman, Gulcin; Aksu, Meral

    2016-01-01

    The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…

  5. Energy-Absorbing Beam Member

    NASA Technical Reports Server (NTRS)

    Littell, Justin D. (Inventor)

    2017-01-01

    An energy-absorbing (EA) beam member having a cell core structure is positioned in an aircraft fuselage proximate to the floor of the aircraft. The cell core structure has a length oriented along a width of the fuselage, a width oriented along a length of the fuselage, and a depth extending away from the floor. The cell core structure also includes cell walls that collectively define a repeating conusoidal pattern of alternating respective larger and smaller first and second radii along the length of the cell core structure. The cell walls slope away from a direction of flight of the aircraft at a calibrated lean angle. An EA beam member may include the cell core structure and first and second plates along the length of the cell core structure on opposite edges of the cell material.

  6. [Fire behavior of Mongolian oak leaves fuel bed under no-wind and zero-slope conditions. II. Analysis of the factors affecting flame length and residence time and related prediction models].

    PubMed

    Zhang, Ji-Li; Liu, Bo-Fei; Di, Xue-Ying; Chu, Teng-Fei; Jin, Sen

    2012-11-01

    Taking fuel moisture content, fuel loading, and fuel bed depth as controlling factors, field fuel beds of Mongolian oak leaves in the Maoershan region of Northeast China were simulated, and a total of one hundred experimental burnings under no-wind and zero-slope conditions were conducted in the laboratory. The effects of fuel moisture content, fuel loading, and fuel bed depth on flame length and its residence time were analyzed, and multivariate linear prediction models were constructed. The results indicated that fuel moisture content had a significant negative linear correlation with flame length, but little correlation with flame residence time. Both fuel loading and fuel bed depth were significantly positively correlated with flame length and its residence time. The interactions of fuel bed depth with fuel moisture content and fuel loading had significant effects on flame length, while the interactions of fuel moisture content with fuel loading and fuel bed depth significantly affected flame residence time. The flame length model predicted well, explaining 83.3% of the variance with a mean absolute error of 7.8 cm and a mean relative error of 16.2%, whereas the flame residence time model was less satisfactory, explaining only 54% of the variance with a mean absolute error of 9.2 s and a mean relative error of 18.6%.
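
    The structure of such a model can be sketched as an ordinary least-squares fit of the main effects plus the interaction terms named in the abstract; the burn data below are hypothetical stand-ins for the hundred laboratory burnings, and the fitted coefficients have no physical meaning.

      import numpy as np

      # Hypothetical burns: moisture (%), loading (kg/m2), depth (cm), flame length (cm)
      moisture = np.array([8, 12, 16, 20, 8, 12, 16, 20], dtype=float)
      loading  = np.array([0.4, 0.4, 0.8, 0.8, 1.2, 1.2, 1.6, 1.6])
      depth    = np.array([3, 5, 3, 5, 7, 3, 7, 5], dtype=float)
      flame    = np.array([55, 48, 62, 58, 80, 45, 85, 60], dtype=float)

      # Design matrix: intercept, main effects, and the depth interactions from the abstract
      X = np.column_stack([
          np.ones_like(moisture),
          moisture, loading, depth,
          depth * moisture, depth * loading,
      ])
      coef, *_ = np.linalg.lstsq(X, flame, rcond=None)
      pred = X @ coef
      print(coef, np.mean(np.abs(pred - flame)))   # coefficients and mean absolute error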

  7. Effect of Transducer Orientation on Errors in Ultrasound Image-Based Measurements of Human Medial Gastrocnemius Muscle Fascicle Length and Pennation

    PubMed Central

    Gandevia, Simon C.; Herbert, Robert D.

    2016-01-01

    Ultrasound imaging is often used to measure muscle fascicle lengths and pennation angles in human muscles in vivo. Theoretically the most accurate measurements are made when the transducer is oriented so that the image plane aligns with muscle fascicles and, for measurements of pennation, when the image plane also intersects the aponeuroses perpendicularly. However this orientation is difficult to achieve and usually there is some degree of misalignment. Here, we used simulated ultrasound images based on three-dimensional models of the human medial gastrocnemius, derived from magnetic resonance and diffusion tensor images, to describe the relationship between transducer orientation and measurement errors. With the transducer oriented perpendicular to the surface of the leg, the error in measurement of fascicle lengths was about 0.4 mm per degree of misalignment of the ultrasound image with the muscle fascicles. If the transducer is then tipped by 20°, the error increases to 1.1 mm per degree of misalignment. For a given degree of misalignment of muscle fascicles with the image plane, the smallest absolute error in fascicle length measurements occurs when the transducer is held perpendicular to the surface of the leg. Misalignment of the transducer with the fascicles may cause fascicle length measurements to be underestimated or overestimated. Contrary to widely held beliefs, it is shown that pennation angles are always overestimated if the image is not perpendicular to the aponeurosis, even when the image is perfectly aligned with the fascicles. An analytical explanation is provided for this finding. PMID:27294280

  8. Effect of Transducer Orientation on Errors in Ultrasound Image-Based Measurements of Human Medial Gastrocnemius Muscle Fascicle Length and Pennation.

    PubMed

    Bolsterlee, Bart; Gandevia, Simon C; Herbert, Robert D

    2016-01-01

    Ultrasound imaging is often used to measure muscle fascicle lengths and pennation angles in human muscles in vivo. Theoretically the most accurate measurements are made when the transducer is oriented so that the image plane aligns with muscle fascicles and, for measurements of pennation, when the image plane also intersects the aponeuroses perpendicularly. However this orientation is difficult to achieve and usually there is some degree of misalignment. Here, we used simulated ultrasound images based on three-dimensional models of the human medial gastrocnemius, derived from magnetic resonance and diffusion tensor images, to describe the relationship between transducer orientation and measurement errors. With the transducer oriented perpendicular to the surface of the leg, the error in measurement of fascicle lengths was about 0.4 mm per degree of misalignment of the ultrasound image with the muscle fascicles. If the transducer is then tipped by 20°, the error increases to 1.1 mm per degree of misalignment. For a given degree of misalignment of muscle fascicles with the image plane, the smallest absolute error in fascicle length measurements occurs when the transducer is held perpendicular to the surface of the leg. Misalignment of the transducer with the fascicles may cause fascicle length measurements to be underestimated or overestimated. Contrary to widely held beliefs, it is shown that pennation angles are always overestimated if the image is not perpendicular to the aponeurosis, even when the image is perfectly aligned with the fascicles. An analytical explanation is provided for this finding.

  9. Comparative test on several forms of background error covariance in 3DVar

    NASA Astrophysics Data System (ADS)

    Shao, Aimei

    2013-04-01

    The background error covariance matrix (hereinafter referred to as the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate the B matrix (e.g., the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as the EnKF). Prior to further development and application of these methods, the behavior in 3DVar of the B matrices they produce is worth studying and evaluating. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, so the forecast error is known. Data from 2006 to 2007 are used as the samples to estimate the B matrix, and data from 2008 are used to verify the assimilation effects. The 48-h and 24-h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). In numerous 3DVar systems, a Gaussian filter function is used as an approximation to represent the variation of the correlation coefficients with distance. On the basis of this assumption, the following forms of the B matrix are designed and tested with VAF in comparative experiments: (1) the error variance and the characteristic lengths are fixed and set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50 percent (height) and 60 percent (temperature) of the original values; (3) as in (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variance and the characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly from the historical data; (6) as in (5), but a localization step is performed; (7) the B matrix is estimated by the NMC method, but the error variance is reduced by a factor of 1.7 so that its value is close to that calculated from the true forecast error samples; (8) as in (7), but with the localization of (6). Experimental results with the different B matrices show that, for the Gaussian-type B matrix, the characteristic lengths calculated from the true error samples do not yield good analysis results, whereas reduced characteristic lengths (about half of the original) lead to a good analysis. If the B matrix estimated directly from the historical data is used in 3DVar, the assimilation does not reach its best performance; better results are obtained when the reduced characteristic length and localization are applied. Even so, this has no obvious advantage over a Gaussian-type B matrix with the optimal characteristic length. This implies that the Gaussian-type B matrix, widely used in operational 3DVar systems, can produce a good analysis with appropriate characteristic lengths; the crucial problem is how to determine those lengths. (This work is supported by the National Natural Science Foundation of China (41275102, 40875063) and the Fundamental Research Funds for the Central Universities (lzujbky-2010-9).)
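
    A minimal sketch of the Gaussian-type B matrix referred to above, built on a one-dimensional grid with either a domain-mean or a space-dependent error variance; the grid spacing, variance, and characteristic lengths are hypothetical.

      import numpy as np

      def gaussian_b_matrix(grid_km, sigma, length_scale_km):
          # Gaussian-type background error covariance: B = D C D,
          # with C_ij = exp(-d_ij^2 / (2 L^2)). sigma may be a scalar
          # (domain-mean variance, experiments 1-2) or an array
          # (space-dependent variance, experiments 3-4).
          x = np.asarray(grid_km, dtype=float)
          d = x[:, None] - x[None, :]
          C = np.exp(-d**2 / (2.0 * length_scale_km**2))
          stds = np.full(x.size, sigma) if np.isscalar(sigma) else np.asarray(sigma)
          D = np.diag(stds)
          return D @ C @ D

      grid = np.arange(0, 1000, 50)                                        # hypothetical 50-km grid
      B_full = gaussian_b_matrix(grid, sigma=1.5, length_scale_km=300.0)   # original length scale
      B_half = gaussian_b_matrix(grid, sigma=1.5, length_scale_km=150.0)   # reduced (about half)
      print(B_full[0, :4], B_half[0, :4])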

  10. Found Poems, Member Checking and Crises of Representation

    ERIC Educational Resources Information Center

    Reilly, Rosemary C.

    2013-01-01

    In order to establish veracity, qualitative researchers frequently rely on member checks to insure credibility by giving participants opportunities to correct errors, challenge interpretations and assess results; however, member checks are not without drawbacks. This paper describes an innovative approach to conducting member checks. Six members…

  11. Effects of energy chirp on bunch length measurement in linear accelerator beams

    NASA Astrophysics Data System (ADS)

    Sabato, L.; Arpaia, P.; Giribono, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Vaccarezza, C.; Variola, A.

    2017-08-01

    The effects of assumptions about bunch properties on the accuracy of the measurement method of the bunch length based on radio frequency deflectors (RFDs) in electron linear accelerators (LINACs) are investigated. In particular, when the electron bunch at the RFD has a non-negligible energy chirp (i.e. a correlation between the longitudinal positions and energies of the particles), the measurement is affected by a deterministic intrinsic error, which is directly related to the RFD phase offset. A case study on this effect in the electron LINAC of a gamma beam source at the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) is reported. The relative error is estimated by using an electron generation and tracking (ELEGANT) code to define the reference measurements of the bunch length. The relative error is shown to increase linearly with the RFD phase offset. In particular, for an offset of 7°, corresponding to a vertical centroid offset at a screen of about 1 mm, the relative error is 4.5%.

  12. Variable-Length Computerized Adaptive Testing: Adaptation of the A-Stratified Strategy in Item Selection with Content Balancing

    ERIC Educational Resources Information Center

    Huo, Yan

    2009-01-01

    Variable-length computerized adaptive testing (CAT) can provide examinees with tailored test lengths. With the fixed standard error of measurement ("SEM") termination rule, variable-length CAT can achieve predetermined measurement precision by using relatively shorter tests compared to fixed-length CAT. To explore the application of…

  13. Effects of minute misregistrations of prefabricated markers for image-guided dental implant surgery: an analytical evaluation.

    PubMed

    Rußig, Lorenz L; Schulze, Ralf K W

    2013-12-01

    The goal of the present study was to develop a theoretical analysis of errors in implant position, which can occur owing to minute registration errors of a reference marker in a cone beam computed tomography volume when inserting an implant with a surgical stent. A virtual dental-arch model was created using anatomic data derived from the literature. Basic trigonometry was used to compute the effects of defined minute registration errors of only one voxel in size. The errors occurring at the implant's neck and apex, in both the horizontal and the vertical direction, were computed for the mean ±95% confidence intervals of jaw width and length and for typical implant lengths (8, 10 and 12 mm). The largest errors occur in the vertical direction, for larger voxel sizes and for greater arch dimensions. For a 10 mm implant in the frontal region, these can amount to a mean of 0.716 mm (range: 0.201-1.533 mm). Horizontal errors at the neck are negligible, with a mean overall deviation of 0.009 mm (range: 0.001-0.034 mm). Errors increase with distance from the registration marker and with voxel size, and are affected by implant length. Our study shows that minute and realistic errors occurring in the automated registration of a reference object have an impact on the implant's position and angulation. These errors occur in the fundamental initial step in the long planning chain; thus, they are critical and should be made known to users of these systems. © 2012 John Wiley & Sons A/S.
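
    A small-angle sketch of the underlying geometry is given below; it is a simplification, not the paper's full trigonometric model, and every number in it (voxel size, marker extent, distances) is hypothetical.

      import math

      def displacement_from_registration_error(voxel_mm, marker_extent_mm,
                                               distance_mm, implant_length_mm):
          # A one-voxel misregistration across a marker of given extent tilts the
          # virtual plan by roughly voxel/extent radians; a point at the given
          # distance from the marker is displaced by ~distance * angle, and the
          # implant apex adds ~implant_length * angle on top of the neck error.
          angle = voxel_mm / marker_extent_mm
          neck = distance_mm * angle
          apex = (distance_mm + implant_length_mm) * angle
          return math.degrees(angle), neck, apex

      # Hypothetical numbers: 0.3 mm voxel, 20 mm marker, implant neck 40 mm away, 10 mm implant
      print(displacement_from_registration_error(0.3, 20.0, 40.0, 10.0))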

  14. Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.

    PubMed

    Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth

    2016-06-01

    Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
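
    Point (i) can be illustrated with a toy insertion-deletion channel: the chance that a single pass reproduces the input exactly falls off exponentially with length, which is why the replicated extrusion of point (ii) is needed. The error rates below are hypothetical, and the consensus (replica reconstruction) step is not shown.

      import random

      def indel_channel(seq, p_del, p_ins, alphabet="ACGT"):
          # Pass a sequence through a toy insertion/deletion channel.
          out = []
          for base in seq:
              if random.random() < p_ins:
                  out.append(random.choice(alphabet))   # spurious insertion
              if random.random() >= p_del:
                  out.append(base)                       # base survives (not deleted)
          return "".join(out)

      def exact_read_fraction(n, p_del, p_ins, trials=2000):
          # Fraction of single reads that come out identical to the input,
          # illustrating the exponential decay with sequence length.
          ok = 0
          for _ in range(trials):
              seq = "".join(random.choice("ACGT") for _ in range(n))
              ok += indel_channel(seq, p_del, p_ins) == seq
          return ok / trials

      for n in (25, 50, 100, 200):
          print(n, exact_read_fraction(n, p_del=0.01, p_ins=0.01))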

  15. Restrictions on surgical resident shift length does not impact type of medical errors.

    PubMed

    Anderson, Jamie E; Goodman, Laura F; Jensen, Guy W; Salcedo, Edgardo S; Galante, Joseph M

    2017-05-15

    In 2011, resident duty hours were restricted in an attempt to improve patient safety and resident education. Although shorter shifts are intended to reduce fatigue, they lead to more patient handoffs, raising concerns about adverse effects on patient safety. This study seeks to determine whether differences in duty-hour restrictions influence the types of errors made by residents. This is a nested retrospective cohort study at a surgery department in an academic medical center. During 2013-14, standard 2011 duty hours were in place for residents. In 2014-15, duty-hour restrictions at the study site were relaxed ("flexible") with no restrictions on shift length. We reviewed all morbidity and mortality submissions from July 1, 2013-June 30, 2015 and compared differences in types of errors between these periods. A total of 383 patients experienced adverse events, including 59 deaths (15.4%). Comparing standard versus flexible periods, there was no difference in mortality (15.7% versus 12.6%, P = 0.479) or complication rates (2.6% versus 2.5%, P = 0.696). There was no difference in types of errors between periods (P = 0.050-0.808). The largest number of errors was due to cognitive failures (229, 59.6%), whereas the smallest number was due to team failures (127, 33.2%). Within subsets, technical errors accounted for the highest number of errors (169, 44.1%). There were no differences in the types of errors for cases that were nonelective, performed at night, or involved residents. Among adverse events reported in this departmental surgical morbidity and mortality, there were no differences in types of errors when resident duty hours were less restrictive. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Optical fiber cable chemical stripping fixture

    NASA Technical Reports Server (NTRS)

    Kolasinski, John R. (Inventor); Coleman, Alexander M. (Inventor)

    1995-01-01

    An elongated fixture handle member is connected to a fixture body member with both members having interconnecting longitudinal central axial bores for the passage of an optical cable therethrough. The axial bore of the fixture body member, however, terminates in a shoulder stop for the outer end of a jacket of the optical cable covering both an optical fiber and a coating therefor, with an axial bore of reduced diameter continuing from the shoulder stop forward for a predetermined desired length to the outer end of the fixture body member. A subsequent insertion of the fixture body member including the above optical fiber elements into a chemical stripping solution results in a softening of the exposed external coating thereat which permits easy removal thereof from the optical fiber while leaving a desired length coated fiber intact within the fixture body member.

  17. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    PubMed

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) was computed at every examination. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later. © 2017 by the American Institute of Ultrasound in Medicine.

  18. Novel measuring strategies in neutron interferometry

    NASA Astrophysics Data System (ADS)

    Bonse, Ulrich; Wroblewski, Thomas

    1985-04-01

    Angular misalignment of a sample in a single crystal neutron interferometer leads to systematic errors of the effective sample thickness and in this way to errors in the determination of the coherent scattering length. The misalignment can be determined and the errors can be corrected by a second measurement at a different angular sample position. Furthermore, a method has been developed which allows supervision of the wavelength during the measurements. These two techniques were tested by determining the scattering length of copper. A value of bc = 7.66(4) fm was obtained which is in excellent agreement with previous measurements.

  19. Childhood exposure to constricted living space: a possible environmental threat for myopia development.

    PubMed

    Choi, Kai Yip; Yu, Wing Yan; Lam, Christie Hang I; Li, Zhe Chuang; Chin, Man Pan; Lakshmanan, Yamunadevi; Wong, Francisca Siu Yin; Do, Chi Wai; Lee, Paul Hong; Chan, Henry Ho Lung

    2017-09-01

    People in Hong Kong generally live in a densely populated area and their homes are smaller compared with most other cities worldwide. Interestingly, East Asian cities with high population densities seem to have higher myopia prevalence, but the association between them has not been established. This study investigated whether the crowded habitat in Hong Kong is associated with refractive error among children. In total, 1075 subjects [Mean age (S.D.): 9.95 years (0.97), 586 boys] were recruited. Information such as demographics, living environment, parental education and ocular status were collected using parental questionnaires. The ocular axial length and refractive status of all subjects were measured by qualified personnel. Ocular axial length was found to be significantly longer among those living in districts with a higher population density (F 2,1072  = 6.15, p = 0.002) and those living in a smaller home (F 2,1072  = 3.16, p = 0.04). Axial lengths were the same among different types of housing (F 3,1071  = 1.24, p = 0.29). Non-cycloplegic autorefraction suggested a more negative refractive error in those living in districts with a higher population density (F 2,1072  = 7.88, p < 0.001) and those living in a smaller home (F 2,1072  = 4.25, p = 0.02). After adjustment for other confounding covariates, the population density and home size also significantly predicted axial length and non-cycloplegic refractive error in the multiple linear regression model, while axial length and refractive error had no relationship with types of housing. Axial length in children and childhood refractive error were associated with high population density and small home size. A constricted living space may be an environmental threat for myopia development in children. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.

  20. Axial length variation impacts on retinal vessel density and foveal avascular zone area measurement using optical coherence tomography angiography

    NASA Astrophysics Data System (ADS)

    Sampson, Danuta M.; Gong, Peijun; An, Di; Menghini, Moreno; Hansen, Alex; Mackey, David A.; Sampson, David D.; Chen, Fred K.

    2017-04-01

    We examined the impact of axial length on superficial retinal vessel density (SRVD) and foveal avascular zone area (FAZA) measurement using optical coherence tomography angiography. The SRVD and FAZA were quantified before and after correction for magnification error associated with axial length variation. Although SRVD did not differ before and after correction for magnification error in the parafoveal region, change in foveal SRVD and FAZA were significant. This has implications for clinical trials outcome in diseased eyes where significant capillary dropout may occur in the parafovea.

  1. Effects of correlations between particle longitudinal positions and transverse plane on bunch length measurement: a case study on GBS electron LINAC at ELI-NP

    NASA Astrophysics Data System (ADS)

    Sabato, L.; Arpaia, P.; Cianchi, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Variola, A.

    2018-02-01

    In high-brightness LINear ACcelerators (LINACs), electron bunch length can be measured indirectly by a radio frequency deflector (RFD). In this paper, the accuracy loss arising from non-negligible correlations between particle longitudinal positions and the transverse plane (in particular the vertical one) at the RFD entrance is analytically assessed. Theoretical predictions are compared with simulation results, obtained by means of the ELEctron Generation ANd Tracking (ELEGANT) code, in the case study of the gamma beam system (GBS) at the Extreme Light Infrastructure-Nuclear Physics (ELI-NP). In particular, the relative error affecting the bunch length measurement, for bunches characterized by both energy chirp and fixed correlation coefficients between longitudinal particle positions and the vertical plane, is reported. Moreover, the relative error versus the correlation coefficients is shown for fixed RFD phases of 0 rad and π rad. The relationship between the relative error and the correlation coefficients can inform the decision to use the bunch length measurement technique with one or two vertical spot size measurements in order to cancel the correlation contribution. In the case of the GBS electron LINAC, the misalignment of one of the quadrupoles before the RFD between -2 mm and 2 mm leads to a relative error of less than 5%. The misalignment of the first C-band accelerating section between -2 mm and 2 mm could lead to a relative error of up to 10%.

  2. Ciliary Body Thickness and Refractive Error in Children

    PubMed Central

    Bailey, Melissa D.; Sinnott, Loraine T.; Mutti, Donald O.

    2010-01-01

    Purpose To determine whether ciliary body thickness (CBT) is related to refractive error in school-age children. Methods Fifty-three children, 8 to 15 years of age, were recruited. CBT was measured from anterior segment OCT images (Visante; Carl Zeiss Meditec, Inc., Dublin, CA) at 1 (CBT1), 2 (CBT2) and 3 (CBT3) mm posterior to the scleral spur. Cycloplegic refractive error was measured with an autorefractor, and axial length was measured with an optical biometer. Multilevel regression models determined the relationship between CBT measurements and refractive error or axial length. A Bland-Altman analysis was used to assess the between-visit repeatability of the ciliary body measurements. Results The between-visits coefficients of repeatability for CBT1, -2, and -3 were 148.04, 165.68, and 110.90, respectively. Thicker measurements at CBT2 (r = −0.29, P = 0.03) and CBT3 (r = −0.38, P = 0.005) were associated with increasingly myopic refractive errors (multilevel model: P < 0.001). Thicker measurements at CBT2 (r = 0.40, P = 0.003) and CBT3 (r = 0.51, P < 0.001) were associated with longer axial lengths (multilevel model: P < 0.001). Conclusions Thicker ciliary body measurements were associated with myopia and a longer axial length. Future studies should determine whether this relationship is also present in animal models of myopia and determine the temporal relationship between thickening of the ciliary muscle and the onset of myopia. PMID:18566470

  3. Relations between age, weight, refractive error and eye shape by computerized tomography in children.

    PubMed

    Song, Ha Tae; Kim, Young Jun; Lee, Soo Jung; Moon, Yeon Sung

    2007-09-01

    To investigate relationships between age, weight, refractive error, and morphologic changes in children's eyes by computerized tomography (CT). Of the 772 eyes of 386 patients under the age of 20 years who visited our Department of Ophthalmology between January 2005 and August 2006 and underwent CT of the orbit, 406 eyes of 354 patients with clear CT images and normal eyeball contour were enrolled in the present retrospective study. The axial lengths, widths, horizontal and vertical lengths, and refractive errors of the eyes, together with body weight, were measured, and the relationships between these parameters were investigated. Axial length was found to correlate significantly with eye width (r=0.914), and in emmetropic and myopic eyes, axial lengths and widths increased as age and body weight increased. Axial lengths increased rapidly until age 10 and then increased slowly. In emmetropic eyes, the width/axial length ratio increased with age, but in myopic eyes it decreased as age or severity of myopia increased. Moreover, as age increased, the myopic population and severity also increased. The axial length was longer in myopic than in emmetropic eyes in all age groups, and there was almost no difference between myopia and emmetropia in the rate of increase of axial length with age. However, the width was greater in myopic than in emmetropic eyes in all age groups, and the rate of increase of width with age was smaller in myopia than in emmetropia. In myopia the width/axial length ratio decreased with age, from 1.004 at 5 years to 0.971 at 20 years, whereas in emmetropia it increased with age, from 0.990 at 5 years to 1.006 at 20 years.

  4. Relations between Age, Weight, Refractive Error and Eye Shape by Computerized Tomography in Children

    PubMed Central

    Song, Ha Tae; Kim, Young Jun; Lee, Soo Jung

    2007-01-01

    Purpose To investigate relationships between age, weight, refractive error, and morphologic changes in children's eyes by computerized tomography (CT). Methods Of the 772 eyes of 386 patients under the age of 20 years, who visited our Department of Ophthalmology between January 2005 to August 2006 and underwent CT of the orbit, 406 eyes of 354 patients with clear CT images and normal eyeball contour were enrolled in the present retrospective study. The axial lengths, widths, horizontal and vertical lengths, refractive errors, and body weight of eyes were measured, and relationship between these parameters were investigated. Results Axial length was found to correlate significantly with eye width (r=0.914), and in emmetropic eyes and myopic eyes, axial lengths and widths were found to increase as age and body weight increased. Axial lengths increased rapidly until age 10, and then increased slowly. In emmetropic eyes, widths / axial lengths increased with age, but in myopic eyes these decreased as age or severity of myopia increased. Moreover, as age increased, the myopic population and severity also increased. Conclusions The axial length was longer in case of myopia compared to emmetropia in all age groups and there was almost no difference in the increase rate of axial length by the age of myopia and emmetropia. However, the width was wider in case of myopia compared to emmetropia in all age groups and the increase rate of width in myopia by age was smaller than that of emmetropia. Myopia showed decreasing rate of width/axial length with increase of age, from 1.004 in 5 years to 0.971 in 20 years. However, emmetropia showed increasing rate of width/axial length with increase of age, from 0.990 in 5 years to 1.006 in 20 years. PMID:17804923

  5. Responses to Error: Sentence-Level Error and the Teacher of Basic Writing

    ERIC Educational Resources Information Center

    Foltz-Gray, Dan

    2012-01-01

    In this article, the author talks about sentence-level error, error in grammar, mechanics, punctuation, usage, and the teacher of basic writing. He states that communities are crawling with teachers and administrators and parents and state legislators and school board members who are engaged in sometimes rancorous debate over what to do about…

  6. A Theoretical Foundation for the Study of Inferential Error in Decision-Making Groups.

    ERIC Educational Resources Information Center

    Gouran, Dennis S.

    To provide a theoretical base for investigating the influence of inferential error on group decision making, current literature on both inferential error and decision making is reviewed and applied to the Watergate incident. Although groups tend to make fewer inferential errors because members' inferences are generally not biased in the same…

  7. Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?

    PubMed

    Kiernan, D; Hosking, J; O'Brien, T

    2016-03-01

    Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Augmented burst-error correction for UNICON laser memory. [digital memory

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1974-01-01

    A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.

  9. VizieR Online Data Catalog: 5 Galactic GC proper motions from Gaia DR1 (Watkins+, 2017)

    NASA Astrophysics Data System (ADS)

    Watkins, L. L.; van der Marel, R. P.

    2017-11-01

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories. (4 data files).

  10. Tycho- Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu

    2017-04-20

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.
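
    The weighted average described in both records can be sketched as an inverse-covariance mean over member stars; the per-star proper motions and covariance matrices below are hypothetical illustrations, not TGAS values.

      import numpy as np

      def weighted_mean_pm(pm, cov):
          # Inverse-covariance weighted mean of per-star proper motions.
          # pm:  (n_stars, 2) array of (pmRA*, pmDec) in mas/yr
          # cov: (n_stars, 2, 2) per-star covariance matrices, so the
          #      RA/Dec error correlation is carried through the average.
          Wsum = np.zeros((2, 2))
          vsum = np.zeros(2)
          for mu, C in zip(pm, cov):
              W = np.linalg.inv(C)
              Wsum += W
              vsum += W @ mu
          cov_mean = np.linalg.inv(Wsum)
          return cov_mean @ vsum, cov_mean

      # Hypothetical measurements for three cluster members
      pm = np.array([[5.1, -2.4], [5.4, -2.6], [4.9, -2.2]])
      cov = np.array([[[0.09, 0.02], [0.02, 0.16]],
                      [[0.25, -0.05], [-0.05, 0.20]],
                      [[0.12, 0.01], [0.01, 0.10]]])
      mean_pm, mean_cov = weighted_mean_pm(pm, cov)
      print(mean_pm, np.sqrt(np.diag(mean_cov)))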

  11. Disclosure of Medical Errors in Oman

    PubMed Central

    Norrish, Mark I. K.

    2015-01-01

    Objectives: This study aimed to provide insight into the preferences for and perceptions of medical error disclosure (MED) by members of the public in Oman. Methods: Between January and June 2012, an online survey was used to collect responses from 205 members of the public across five governorates of Oman. Results: A disclosure gap was revealed between the respondents’ preferences for MED and perceived current MED practices in Oman. This disclosure gap extended to both the type of error and the person most likely to disclose the error. Errors resulting in patient harm were found to have a strong influence on individuals’ perceived quality of care. In addition, full disclosure was found to be highly valued by respondents and able to mitigate for a perceived lack of care in cases where medical errors led to damages. Conclusion: The perceived disclosure gap between respondents’ MED preferences and perceptions of current MED practices in Oman needs to be addressed in order to increase public confidence in the national health care system. PMID:26052463

  12. Crystalline lens power and refractive error.

    PubMed

    Iribarren, Rafael; Morgan, Ian G; Nangia, Vinay; Jonas, Jost B

    2012-02-01

    To study the relationships between the refractive power of the crystalline lens, overall refractive error of the eye, and degree of nuclear cataract. All phakic participants of the population-based Central India Eye and Medical Study with an age of 50+ years were included. Calculation of the refractive lens power was based on distance noncycloplegic refractive error, corneal refractive power, anterior chamber depth, lens thickness, and axial length according to Bennett's formula. The study included 1885 subjects. Mean refractive lens power was 25.5 ± 3.0 D (range, 13.9-36.6). After adjustment for age and sex, the standardized correlation coefficients (β) of the association with the ocular refractive error were highest for crystalline lens power (β = -0.41; P < 0.001) and nuclear lens opacity grade (β = -0.42; P < 0.001), followed by axial length (β = -0.35; P < 0.001). They were lowest for corneal refractive power (β = -0.08; P = 0.001) and anterior chamber depth (β = -0.05; P = 0.04). In multivariate analysis, refractive error was significantly (P < 0.001) associated with shorter axial length (β = -1.26), lower refractive lens power (β = -0.95), lower corneal refractive power (β = -0.76), higher lens thickness (β = 0.30), deeper anterior chamber (β = 0.28), and less marked nuclear lens opacity (β = -0.05). Lens thickness was significantly lower in eyes with greater nuclear opacity. Variations in refractive error in adults aged 50+ years were mostly influenced by variations in axial length and in crystalline lens refractive power, followed by variations in corneal refractive power, and, to a minor degree, by variations in lens thickness and anterior chamber depth.

  13. Compact and high resolution virtual mouse using lens array and light sensor

    NASA Astrophysics Data System (ADS)

    Qin, Zong; Chang, Yu-Cheng; Su, Yu-Jie; Huang, Yi-Pai; Shieh, Han-Ping David

    2016-06-01

    Virtual mouse based on IR source, lens array and light sensor was designed and implemented. Optical architecture including lens amount, lens pitch, baseline length, sensor length, lens-sensor gap, focal length etc. was carefully designed to achieve low detective error, high resolution, and simultaneously, compact system volume. System volume is 3.1mm (thickness) × 4.5mm (length) × 2, which is much smaller than that of camera-based device. Relative detective error of 0.41mm and minimum resolution of 26ppi were verified in experiments, so that it can replace conventional touchpad/touchscreen. If system thickness is eased to 20mm, resolution higher than 200ppi can be achieved to replace real mouse.

  14. Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty was introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The 'model error' can especially in upstream catchments where forecast uncertainty is strongly dependent on the current predictability of the atrmosphere be superimposed on the spread of a hydrometeorological ensemble forecast. In Bavaria, two meteorological ensemble prediction systems are currently tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria, MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty is dependent on the catchment characteristics: 1. Upstream catchment with high influence of weather forecast a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. b) Corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely. c) The 'model error' distribution, with parameters dependent on hydrological case and lead time, is added to each forecast timestep of each ensemble member d) For each forecast timestep, the overall (i.e. over all 'model error' distribution of each ensemble member) error distribution is calculated e) From this distribution, the uncertainty range on a desired level (here: the 10% and 90% percentile) is extracted and drawn as forecast envelope. f) As the mean or median of an ensemble forecast does not necessarily exhibit meteorologically sound temporal evolution, a single hydrological forecast termed 'lead forecast' is chosen and shown in addition to the uncertainty bounds. This can be either an intermediate forecast between the extremes of the ensemble spread or a manually selected forecast based on a meteorologists advice. 2. Downstream catchments with low influence of weather forecast In downstream catchments with strong human impact on discharge (e.g. by reservoir operation) and large influence of upstream gauge observation quality on forecast quality, the 'overall error' may in most cases be larger than the combination of the 'model error' and an ensemble spread. Therefore, the overall forecast uncertainty bounds are calculated differently: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. Here, additionally the corresponding inflow hydrograph from all upstream catchments must be used. b) As for an upstream catchment, the uncertainty range is determined by combination of 'model error' and the ensemble member forecasts c) In addition, the 'overall error' is superimposed on the 'lead forecast'. For reasons of consistency, the lead forecast must be based on the same meteorological forecast in the downstream and all upstream catchments. d) From the resulting two uncertainty ranges (one from the ensemble forecast and 'model error', one from the 'lead forecast' and 'overall error'), the envelope is taken as the most prudent uncertainty range. In sum, the uncertainty associated with each forecast run is calculated and communicated to the public in the form of 10% and 90% percentiles. 
As in part I of this study, the methodology, as well as the usefulness (or lack thereof) of the resulting uncertainty ranges, will be presented and discussed using typical examples.
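
    To make the procedure above concrete, the following minimal sketch illustrates steps c) to e): 'model error' samples are superimposed on every ensemble member at every forecast timestep, the results are pooled, and the 10% and 90% percentiles are extracted as the forecast envelope. The `model_error_sampler` is a hypothetical stand-in for the empirical, lead-time-dependent error distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def uncertainty_envelope(ensemble, model_error_sampler, n_samples=200):
    """Pool 'model error' samples over all ensemble members per timestep and
    return the 10% / 90% percentile envelope (illustrative sketch only)."""
    n_members, n_steps = ensemble.shape
    lower, upper = np.empty(n_steps), np.empty(n_steps)
    for t in range(n_steps):
        pooled = np.concatenate([ensemble[m, t] + model_error_sampler(t, n_samples)
                                 for m in range(n_members)])
        lower[t], upper[t] = np.percentile(pooled, [10, 90])
    return lower, upper

# hypothetical 16-member forecast and an error model whose spread grows with lead time
ens = rng.normal(100.0, 5.0, size=(16, 48))
lo, hi = uncertainty_envelope(ens, lambda t, size: rng.normal(0.0, 2.0 + 0.1 * t, size))
```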

  15. Usefulness of intraoperative radiographs in reducing errors of cup placement and leg length during total hip arthroplasty.

    PubMed

    Wind, Michael A; Morrison, J Craig; Christie, Michael J

    2013-11-01

    Traditional methods of component placement during total hip arthroplasty (THA) can lead to errors in cup abduction angle and leg length. Intraoperative radiographs were used to assess and correct errors during surgery in a consecutive series of 278 THAs performed by a single surgeon. After exclusions, 262 cases were available for cup abduction angle assessment and 224 for leg length discrepancy (LLD) assessment. Components were initially placed in a position determined as appropriate by the surgeon. Intraoperative radiographs were taken and appropriate corrections made. Postoperative radiographs were assessed at 6 weeks. Mean abduction angle on intraoperative radiographs was 39.6°±5.9° versus 38.6°±4.1° on postoperative radiographs. Thirty-eight cups were outside the target abduction range on intraoperative radiographs versus 4 on postoperative radiographs. Mean LLD was 3.7 mm ± 3.6 mm on intraoperative radiographs and 2.5 mm ± 2.7 mm on postoperative radiographs. Use of intraoperative radiographs is a valid, useful technique for minimizing errors in THA.

  16. Custom map projections for regional groundwater models

    USGS Publications Warehouse

    Kuniansky, Eve L.

    2017-01-01

    For regional groundwater flow models (areas greater than 100,000 km2), improper choice of map projection parameters can result in model error for boundary conditions dependent on area (recharge or evapotranspiration simulated by application of a rate using cell area from model discretization) and length (rivers simulated with head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone), without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections have been developed for different purposes because all four properties cannot be preserved simultaneously. Preservation of area and length is most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels (selected by dividing the north-south extent of the model area by six and placing the standard parallels one-sixth of that length north of the southern extent and south of the northern extent) preserves both area and length for continental areas in mid latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. One must also use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km2) is used to provide quantitative examples of the effect of map projections on length and area with different projections and parameter choices. Use of an improper map projection is one model construction problem that is easily avoided.
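
    As a minimal sketch of the "divide by six" rule described above, the snippet below computes custom standard parallels from a model area's north-south extent and assembles a PROJ-style definition string; the latitude, longitude, and datum values are hypothetical placeholders.

```python
def albers_standard_parallels(lat_south, lat_north):
    """Standard parallels placed one-sixth of the north-south extent inside
    the southern and northern limits of the model area."""
    sixth = (lat_north - lat_south) / 6.0
    return lat_south + sixth, lat_north - sixth

lat_south, lat_north, lon_center = 24.0, 37.0, -84.0   # hypothetical model extent
lat_1, lat_2 = albers_standard_parallels(lat_south, lat_north)
proj_string = (f"+proj=aea +lat_1={lat_1:.4f} +lat_2={lat_2:.4f} "
               f"+lat_0={lat_south} +lon_0={lon_center} +datum=NAD83 +units=m")
print(proj_string)   # can be handed to pyproj or other GIS software to build the custom CRS
```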

  17. Underestimation of length by subjects in motion.

    PubMed

    Harte, D B

    1975-10-01

    To check a prior observation, in the present experiment, subjects made estimates of the lengths of both the guidelines and the spaces between guidelines on automotive highways so the magnitude of the illusion could be more accurately determined. Ten males and ten females were individually tested at 0 and 60 mph. At 60 mph, spaces were estimated with an error of 85%; lines were estimated with an error of 72%. Combining data for both stimuli, an error of 78% results, which corresponds to underestimation by a factor of 4.67. This illusory effect is considerably greater than that of the moon illusion, considered by many the most powerful of the classical illusions.

  18. Measurement of the length of pedestrian crossings and detection of traffic lights from image data

    NASA Astrophysics Data System (ADS)

    Shioyama, Tadayoshi; Wu, Haiyuan; Nakamura, Naoki; Kitawaki, Suguru

    2002-09-01

    This paper proposes a method for measurement of the length of a pedestrian crossing and for the detection of traffic lights from image data observed with a single camera. The length of a crossing is measured from image data of white lines painted on the road at a crossing by using projective geometry. Furthermore, the state of the traffic lights, green (go signal) or red (stop signal), is detected by extracting candidates for the traffic light region with colour similarity and selecting a true traffic light from them using affine moment invariants. From the experimental results, the length of a crossing is measured with an accuracy such that the maximum relative error of measured length is less than 5% and the rms error is 0.38 m. A traffic light is efficiently detected by selecting a true traffic light region with an affine moment invariant.

  19. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.

    PubMed

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation), and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these, 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was the right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should take greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors, and special care should be taken when working on molars.

  20. Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals.

    PubMed

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris; Bjørn, Brian; Lilja, Beth; Mogensen, Torben

    2011-03-01

    Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. The objective of this study is to review RCA reports (RCAR) for characteristics of verbal communication errors between hospital staff in an organisational perspective. Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions and characteristics of verbal communication errors such as handover errors and errors during teamwork. Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between units and consults from other specialties, were particularly vulnerable processes. With the risk of bias in mind, it is concluded that more than half of the RCARs described erroneous verbal communication between staff members as root causes of, or contributing factors to, severe patient safety incidents. The RCARs' rich descriptions of the incidents revealed the organisational factors and needs related to these errors.

  1. High pressure capillary connector

    DOEpatents

    Renzi, Ronald F.

    2005-08-09

    A high pressure connector capable of operating at pressures of 40,000 psi or higher is provided. This connector can be employed to position a first fluid-bearing conduit that has a proximal end and a distal end to a second fluid-bearing conduit thereby providing fluid communication between the first and second fluid-bearing conduits. The connector includes (a) an internal fitting assembly having a body cavity with (i) a lower segment that defines a lower segment aperture and (ii) an interiorly threaded upper segment, (b) a first member having a first member aperture that traverses its length wherein the first member aperture is configured to accommodate the first fluid-bearing conduit and wherein the first member is positioned in the lower segment of the internal fitting assembly, and (c) a second member having a second member aperture that traverses its length wherein the second member is positioned in the upper segment of the fitting assembly and wherein a lower surface of the second member is in contact with an upper surface of the first member to assert a compressive force onto the first member and wherein the first member aperture and the second member aperture are coaxial.

  2. Ghost hunting—an assessment of ghost particle detection and removal methods for tomographic-PIV

    NASA Astrophysics Data System (ADS)

    Elsinga, G. E.; Tokgoz, S.

    2014-08-01

    This paper discusses and compares several methods, which aim to remove spurious peaks, i.e. ghost particles, from the volume intensity reconstruction in tomographic-PIV. The assessment is based on numerical simulations of time-resolved tomographic-PIV experiments in linear shear flows. Within the reconstructed volumes, intensity peaks are detected and tracked over time. These peaks are associated with particles (either ghosts or actual particles) and are characterized by their peak intensity, size and track length. Peak intensity and track length are found to be effective in discriminating between most ghosts and the actual particles, although not all ghosts can be detected using only a single threshold. The size of the reconstructed particles does not reveal an important difference between ghosts and actual particles. The joint distribution of peak intensity and track length however does, under certain conditions, allow a complete separation of ghosts and actual particles. The ghosts can have either a high intensity or a long track length, but not both combined, whereas all the actual particles do. Removing the detected ghosts from the reconstructed volume and performing additional MART iterations can decrease the particle position error at low to moderate seeding densities, but increases the position error, velocity error and tracking errors at higher densities. The observed trends in the joint distribution of peak intensity and track length are confirmed by results from a real experiment in laminar Taylor-Couette flow. This diagnostic plot allows an estimate of the number of ghosts that are indistinguishable from the actual particles.
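
    A minimal sketch of the selection rule implied by the joint distribution described above: peaks are kept as actual particles only when they combine high peak intensity and a long track, since ghosts may show one property or the other but rarely both. The threshold values are assumptions, not values from the paper.

```python
import numpy as np

def keep_actual_particles(peak_intensity, track_length, intensity_thr, length_thr):
    """Boolean mask of peaks classified as actual particles (illustrative only)."""
    peak_intensity = np.asarray(peak_intensity, dtype=float)
    track_length = np.asarray(track_length, dtype=float)
    return (peak_intensity >= intensity_thr) & (track_length >= length_thr)

# hypothetical reconstructed peaks: intensity (arbitrary units) and track length (snapshots)
mask = keep_actual_particles([0.9, 0.2, 0.8], [12, 15, 2], intensity_thr=0.5, length_thr=8)
print(mask)   # [ True False False ]
```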

  3. Refractive states of eyes and associations between ametropia and age, breed, and axial globe length in domestic cats.

    PubMed

    Konrade, Kricket A; Hoffman, Allison R; Ramey, Kelli L; Goldenberg, Ruby B; Lehenbauer, Terry W

    2012-02-01

    To determine the refractive states of eyes in domestic cats and to evaluate correlations between refractive error and age, breed, and axial globe measurements. 98 healthy ophthalmologically normal domestic cats. The refractive state of 196 eyes (2 eyes/cat) was determined by use of streak retinoscopy. Cats were considered ametropic when the mean refractive state was ≥ ± 0.5 diopter (D). Amplitude-mode ultrasonography was used to determine axial globe length, anterior chamber length, and vitreous chamber depth. Mean ± SD refractive state of all eyes was -0.78 ± 1.37 D. Mean refractive error of cats changed significantly as a function of age. Mean refractive state of kittens (≤ 4 months old) was -2.45 ± 1.57 D, and mean refractive state of adult cats (> 1 year old) was -0.39 ± 0.85 D. Mean axial globe length, anterior chamber length, and vitreous chamber depth were 19.75 ± 1.59 mm, 4.66 ± 0.86 mm, and 7.92 ± 0.86 mm, respectively. Correlations were detected between age and breed and between age and refractive states of feline eyes. Mean refractive error changed significantly as a function of age, and kittens had greater negative refractive error than did adult cats. Domestic shorthair cats were significantly more likely to be myopic than were domestic mediumhair or domestic longhair cats. Domestic cats should be included in the animals in which myopia can be detected at a young age, with a likelihood of progression to emmetropia as cats mature.

  4. A lognormal distribution of the lengths of terminal twigs on self-similar branches of elm trees.

    PubMed

    Koyama, Kohei; Yamamoto, Ken; Ushio, Masayuki

    2017-01-11

    Lognormal distributions and self-similarity are characteristics associated with a wide range of biological systems. The sequential breakage model has established a link between lognormal distributions and self-similarity and has been used to explain species abundance distributions. To date, however, there has been no similar evidence in studies of multicellular organismal forms. We tested the hypothesis that the distribution of the lengths of terminal stems of Japanese elm trees (Ulmus davidiana), the end products of a self-similar branching process, approaches a lognormal distribution. We measured the length of the stem segments of three elm branches and obtained the following results: (i) each occurrence of branching caused variations or errors in the lengths of the child stems relative to their parent stems; (ii) the branches showed statistical self-similarity; the observed error distributions were similar at all scales within each branch; and (iii) the multiplicative effect of these errors generated variations of the lengths of terminal twigs that were well approximated by a lognormal distribution, although some statistically significant deviations from strict lognormality were observed for one branch. Our results provide the first empirical evidence that statistical self-similarity of an organismal form generates a lognormal distribution of organ sizes. © 2017 The Author(s).
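
    The mechanism described above (multiplicative errors accumulating over a self-similar branching process) can be illustrated with a short simulation; the branching depth, error magnitude, and two-children-per-stem assumption below are hypothetical choices, not measurements from the elm branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def terminal_lengths(root_length=100.0, depth=8, sigma=0.3):
    """Each child stem is its parent's length times a random multiplicative factor."""
    lengths = np.array([root_length])
    for _ in range(depth):
        factors = rng.lognormal(mean=-0.7, sigma=sigma, size=(len(lengths), 2))
        lengths = (lengths[:, None] * factors).ravel()
    return lengths

tips = terminal_lengths()
# Multiplying many independent factors makes log(length) approximately normal,
# i.e. the terminal lengths approach a lognormal distribution.
print(np.mean(np.log(tips)), np.std(np.log(tips)))
```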

  5. Reading difficulties in Albanian.

    PubMed

    Avdyli, Rrezarta; Cuetos, Fernando

    2012-10-01

    Albanian is an Indo-European language with a shallow orthography, in which there is an absolute correspondence between graphemes and phonemes. We aimed to identify the reading strategies used by Albanian children with reading disabilities during word and pseudoword reading. A pool of 114 Kosovar children with reading disabilities, matched with 150 normal readers aged 6 to 11 years old, was tested. They had to read 120 stimuli that varied in lexicality, frequency, and length. The results for reading accuracy as well as reading times show that both groups were affected by lexicality and length effects. In both groups, the length and lexicality effects were significantly modulated by school year: the length effect was greater in early grades and diminished later, whereas the lexicality effect showed the opposite pattern. However, the group with reading difficulties was less accurate and slower than the control group across all school grades. Analyses of the error patterns showed that phonological errors, in which a letter replacement produces a new nonword, are the most common error type in both groups, although as grade rises, visual errors and lexicalizations increased more in the control group than in the reading difficulties group. These findings suggest that Albanian children with typical reading use both routes (lexical and sublexical) from the beginning of reading despite the complete regularity of Albanian, while children with reading difficulties start with sublexical reading and take longer to acquire lexical reading, although both routes are eventually functional.

  6. Avoiding Substantive Errors in Individualized Education Program Development

    ERIC Educational Resources Information Center

    Yell, Mitchell L.; Katsiyannis, Antonis; Ennis, Robin Parks; Losinski, Mickey; Christle, Christine A.

    2016-01-01

    The purpose of this article is to discuss major substantive errors that school personnel may make when developing students' Individualized Education Programs (IEPs). School IEP team members need to understand the importance of the procedural and substantive requirements of the IEP, have an awareness of the five serious substantive errors that IEP…

  7. Frame synchronization performance and analysis

    NASA Technical Reports Server (NTRS)

    Aguilera, C. S. R.; Swanson, L.; Pitt, G. H., III

    1988-01-01

    The analysis used to generate the theoretical models showing the performance of the frame synchronizer is described for various frame lengths and marker lengths at various signal to noise ratios and bit error tolerances.
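
    The abstract does not state the underlying model, but a common textbook treatment assumes independent bit errors, in which case the probability that a sync marker is accepted is a binomial tail; the sketch below illustrates that assumed model only, not the paper's analysis.

```python
from math import comb

def marker_accept_probability(marker_len, bit_error_prob, tolerance):
    """Probability that a marker of marker_len bits is accepted when up to
    `tolerance` bit errors are allowed (independent bit errors assumed)."""
    return sum(comb(marker_len, k)
               * bit_error_prob ** k * (1 - bit_error_prob) ** (marker_len - k)
               for k in range(tolerance + 1))

# probability of missing the true marker is the complement (hypothetical parameters)
print(1.0 - marker_accept_probability(32, 0.05, 2))
```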

  8. Reliability of length measurements collected by community nurses and health volunteers in rural growth monitoring and promotion services.

    PubMed

    Laar, Matilda E; Marquis, Grace S; Lartey, Anna; Gray-Donald, Katherine

    2018-02-17

    Length measurements are important in growth monitoring and promotion (GMP) for the surveillance of a child's weight-for-length and length-for-age. These two indices provide an indication of a child's risk of becoming wasted or stunted, and are more informative about a child's growth than the widely used weight-for-age index (underweight). Although the introduction of length measurements in GMP is recommended by the World Health Organization, concerns about the reliability of length measurements collected in rural outreach settings have been expressed by stakeholders. Our aim was to describe the reliability and challenges associated with community health personnel measuring length for rural outreach GMP activities. Two reliability studies (A and B), each using 10 children less than 24 months old, were conducted in the GMP services of a rural district in Ghana. Fifteen nurses and 15 health volunteers (HV) with no prior experience in length measurements were trained. Intra- and inter-observer technical error of measurement (TEM), average bias from an expert anthropometrist, and coefficient of reliability (R) of length measurements were assessed and compared across sessions. Observations and interviews were used to understand the ability and experiences of health personnel with measuring length at outreach GMP. Inter-observer TEM was larger than intra-observer TEM for both nurses and HV at both sessions and was unacceptably high (compared with error standards) in both groups at both time points. Average biases from the expert's measurements were within acceptable limits; however, both groups tended to underestimate length measurements. At session B, the R for lengths collected by nurses (92.3%) was higher than that of the HV (87.5%). Length measurements taken by nurses and HV, and those taken by an experienced anthropometrist at GMP sessions, were of moderate agreement (kappa = 0.53, p < 0.0001). The reliability of length measurements improved after two refresher trainings for nurses but not for HV. In addition, length measurements taken during GMP sessions may be susceptible to errors due to overburdened health personnel and crowded GMP clinics. There is a need for both pre- and in-service training of nurses and HV on length measurement procedures to improve the reliability of length measurements.
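
    For reference, the quantities reported above are commonly computed from the standard anthropometric definitions sketched below (intra-observer technical error of measurement from paired repeat measurements, and the coefficient of reliability R); the study's exact computation is not reproduced here, and the example readings are hypothetical.

```python
import numpy as np

def tem_intra(first_reading, second_reading):
    """Intra-observer technical error of measurement: sqrt(sum(d^2) / 2N)."""
    d = np.asarray(first_reading, float) - np.asarray(second_reading, float)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

def coefficient_of_reliability(tem, measurements):
    """R = 1 - TEM^2 / SD^2, with SD the between-subject standard deviation."""
    sd = np.std(measurements, ddof=1)
    return 1.0 - (tem ** 2) / (sd ** 2)

# hypothetical paired length readings (cm) for ten infants
a = [61.2, 65.0, 70.3, 58.9, 72.1, 66.4, 63.0, 69.8, 75.2, 60.5]
b = [61.6, 64.5, 70.9, 59.4, 71.5, 66.0, 63.6, 69.2, 75.8, 60.1]
tem = tem_intra(a, b)
print(tem, coefficient_of_reliability(tem, a))
```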

  9. Safe and effective error rate monitors for SS7 signaling links

    NASA Astrophysics Data System (ADS)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. A SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIM's are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIM's can be differentiated from SUERM by the fact that EIM's monitor errors over an interval while SUERM's count errored messages. EIM's offer several advantages over SUERM's, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
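
    As a hedged illustration of the recursive-digital-filter idea described above, an error interval monitor can be sketched as a first-order leaky integrator over per-interval error counts, with a changeover initiated when the estimated transient exceeds a threshold T. The coefficients and threshold below are placeholders, not the engineered values from the paper.

```python
def error_interval_monitor(error_counts, leak=0.9, gain=1.0, threshold=50.0):
    """Return the interval index at which a changeover would be initiated,
    or None if the estimated transient never exceeds the threshold.
    `leak`, `gain`, and `threshold` are hypothetical values."""
    transient = 0.0                       # estimated change in transmit queue length
    for i, errors in enumerate(error_counts):
        transient = leak * transient + gain * errors   # recursive digital filter
        if transient > threshold:
            return i                      # remove the link from service
    return None

print(error_interval_monitor([0, 2, 40, 45, 3, 0]))   # -> 3 with these sample counts
```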

  10. Correspondence between AXAF TMA X-ray performance and models based upon mechanical and visible light measurements

    NASA Technical Reports Server (NTRS)

    Van Speybroeck, L.; Mckinnon, P. J.; Murray, S. S.; Primini, F. A.; Schwartz, D. A.; Zombeck, M. V.; Dailey, C. C.; Reily, J. C.; Weisskopf, M. C.; Wyman, C. L.

    1986-01-01

    The AXAF Technology Mirror Assembly (TMA) was characterized prior to X-ray testing by properties measured mechanically or with visible light; these include alignment offsets, roundness and global-axial-slope errors, axial-figure errors with characteristic lengths greater than about five mm, and surface roughness with scale lengths between about 0.005 and 0.5 mm. The X-ray data of Schwartz et al. (1985) are compared with predictions based upon the mechanical and visible light measurements.

  11. Checking-up of optical graduated rules by laser interferometry

    NASA Astrophysics Data System (ADS)

    Miron, Nicolae P.; Sporea, Dan G.

    1996-05-01

    The main aspects related to the operating principle, design, and implementation of high-productivity equipment for checking-up the graduation accuracy of optical graduated rules used as a length reference in optical measuring instruments for precision machine tools are presented. The graduation error checking-up is done with a Michelson interferometer as a length transducer. The instrument operation is managed by a computer, which controls the equipment, data acquisition, and processing. The evaluation is performed for rule lengths from 100 to 3000 mm, with a checking-up error less than 2 micrometers/m. The checking-up time is about 15 min for a 1000-mm rule, with averaging over four measurements.
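
    A minimal sketch of the interferometric length transduction used above: with a Michelson interferometer, each counted fringe corresponds to half a wavelength of carriage travel, so the graduation error is the interferometric displacement minus the nominal graduation position. The HeNe wavelength and fringe count below are assumptions for illustration; the record does not specify the equipment's actual source.

```python
HENE_WAVELENGTH_M = 632.8e-9   # assumed HeNe laser wavelength, metres

def displacement_from_fringes(fringe_count, wavelength=HENE_WAVELENGTH_M):
    """Michelson interferometer: one fringe per half-wavelength of travel."""
    return fringe_count * wavelength / 2.0

def graduation_error(fringe_count, nominal_position_m):
    """Error of a graduation mark relative to the interferometric reference."""
    return displacement_from_fringes(fringe_count) - nominal_position_m

# a 1.000000 m graduation read as 3,160,556 fringes (hypothetical numbers)
print(graduation_error(3_160_556, 1.0))   # about -8e-8 m, i.e. well under 2 micrometres/m
```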

  12. Relative dosimetrical verification in high dose rate brachytherapy using two-dimensional detector array IMatriXX

    PubMed Central

    Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.

    2011-01-01

    For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of the HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions, which were verified for positional accuracy gave a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and maximum error of 1.8 mm. Using a step size of 5 mm, reference isodose length (the length of 100% isodose line) was verified for single and multiple catheters of same and different source loadings. An error ≤1 mm was measured in 57% of tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed and 70% of the step size errors were below 1 mm, with maximum of 1.2 mm. The step size ≤1 cm could not be verified by the IMatriXX as it could not resolve the peaks in dose profile. PMID:21897562

  13. Quality of Impressions and Work Authorizations Submitted by Dental Students Supervised by Prosthodontists and General Dentists.

    PubMed

    Imbery, Terence A; Diaz, Nicholas; Greenfield, Kristy; Janus, Charles; Best, Al M

    2016-10-01

    Preclinical fixed prosthodontics is taught by Department of Prosthodontics faculty members at Virginia Commonwealth University School of Dentistry; however, 86% of all clinical cases in academic year 2012 were staffed by faculty members from the Department of General Practice. The aims of this retrospective study were to quantify the quality of impressions, accuracy of laboratory work authorizations, and most common errors and to determine if there were differences between the rate of errors in cases supervised by the prosthodontists and the general dentists. A total of 346 Fixed Prosthodontic Laboratory Tracking Sheets for the 2012 academic year were reviewed. The results showed that, overall, 73% of submitted impressions were acceptable at initial evaluation, 16% had to be poured first and re-evaluated for quality prior to pindexing, 7% had multiple impressions submitted for transfer dies, and 4% were rejected for poor quality. There were higher acceptance rates for impressions and work authorizations for cases staffed by prosthodontists than by general dentists, but the differences were not statistically significant (p=0.0584 and p=0.0666, respectively). Regarding the work authorizations, 43% overall did not provide sufficient information or had technical errors that delayed prosthesis fabrication. The most common errors were incorrect mountings, absence of solid casts, inadequate description of margins for porcelain fused to metal crowns, inaccurate die trimming, and margin marking. The percentages of errors in cases supervised by general dentists and prosthodontists were similar for 17 of the 18 types of errors identified; only for margin description was the percentage of errors statistically significantly higher for general dentist-supervised than prosthodontist-supervised cases. These results highlighted the ongoing need for faculty development and calibration to ensure students receive the highest quality education from all faculty members teaching fixed prosthodontics.

  14. Category-length and category-strength effects using images of scenes.

    PubMed

    Baumann, Oliver; Vromen, Joyce M G; Boddy, Adam C; Crawshaw, Eloise; Humphreys, Michael S

    2018-06-21

    Global matching models have provided an important theoretical framework for recognition memory. Key predictions of this class of models are that (1) increasing the number of occurrences in a study list of some items affects the performance on other items (list-strength effect) and that (2) adding new items results in a deterioration of performance on the other items (list-length effect). Experimental confirmation of these predictions has been difficult, and the results have been inconsistent. A review of the existing literature, however, suggests that robust length and strength effects do occur when sufficiently similar hard-to-label items are used. In an effort to investigate this further, we had participants study lists containing one or more members of visual scene categories (bathrooms, beaches, etc.). Experiments 1 and 2 replicated and extended previous findings showing that the study of additional category members decreased accuracy, providing confirmation of the category-length effect. Experiment 3 showed that repeating some category members decreased the accuracy of nonrepeated members, providing evidence for a category-strength effect. Experiment 4 eliminated a potential challenge to these results. Taken together, these findings provide robust support for global matching models of recognition memory. The overall list lengths, the category sizes, and the number of repetitions used demonstrated that scene categories are well-suited to testing the fundamental assumptions of global matching models. These include (A) interference from memories for similar items and contexts, (B) nondestructive interference, and (C) that conjunctive information is made available through a matching operation.

  15. Description, new reconstruction, comparative anatomy, and classification of the Sterkfontein Stw 53 cranium, with discussions about the taxonomy of other southern African early Homo remains.

    PubMed

    Curnoe, Darren; Tobias, Phillip V

    2006-01-01

    Specimen Stw 53 was recovered in 1976 from Member 5 of the Sterkfontein Formation. Since its incomplete initial description and comparison, the partial cranium has figured prominently in discussions about the systematics of early Homo. Despite publication of a preliminary reconstruction in 1985, Stw 53 has yet to be compared comprehensively to other Plio-Pleistocene fossils or assessed systematically. In this paper, we report on a new reconstruction of this specimen and provide a detailed description and comparison of its morphology. Our reconstruction differs in important respects from the earlier one, especially in terms of neurocranial length, breadth, and height. However, given that Stw 53 exhibits extensive damage, these dimensions are most likely prone to much error in reconstruction. In areas of well-preserved bone, Stw 53 shares many cranial features with Homo habilis, and we propose retaining it within this species. We also consider the affinities of dental remains from Sterkfontein Member 5, along with those from Swartkrans and Drimolen previously assigned to Homo. We find evidence for sympatry of H. habilis and Australopithecus robustus and possibly Plio-Pleistocene Homo sapiens sensu lato in Sterkfontein Member 5. At Swartkrans and Drimolen, we find evidence of H. habilis. We also compare the morphologies of Stw 53 and SK 847 and find compelling evidence to assign the latter specimen to H. habilis, as has been proposed.

  16. 5 CFR 1604.6 - Error correction.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... a service member requesting that a TSP contribution be deducted from bonus pay. Within 30 days of... times the number of months it would take for the service member to earn basic pay equal to the dollar... less than twice the number of months it would take for the service member to earn basic pay equal to...

  17. Peripheral Quantitative Computed Tomography: Measurement Sensitivity in Persons With and Without Spinal Cord Injury

    PubMed Central

    Shields, Richard K.; Dudley-Javoroski, Shauna; Boaldin, Kathryn M.; Corey, Trent A.; Fog, Daniel B.; Ruen, Jacquelyn M.

    2012-01-01

    Objectives To determine (1) the error attributable to external tibia-length measurements by using peripheral quantitative computed tomography (pQCT) and (2) the effect these errors have on scan location and tibia trabecular bone mineral density (BMD) after spinal cord injury (SCI). Design Blinded comparison and criterion standard in matched cohorts. Setting Primary care university hospital. Participants Eight able-bodied subjects underwent tibia length measurement. A separate cohort of 7 men with SCI and 7 able-bodied age-matched male controls underwent pQCT analysis. Interventions Not applicable. Main Outcome Measures The projected worst-case tibia-length measurement error translated into a pQCT slice placement error of ±3 mm. We collected pQCT slices at the distal 4% tibia site, 3 mm proximal and 3 mm distal to that site, and then quantified BMD error attributable to slice placement. Results Absolute BMD error was greater for able-bodied than for SCI subjects (5.87 mg/cm3 vs 4.5 mg/cm3). However, the percentage error in BMD was larger for SCI than able-bodied subjects (4.56% vs 2.23%). Conclusions During cross-sectional studies of various populations, BMD differences up to 5% may be attributable to variation in limb-length measurement error. PMID:17023249

  18. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
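
    For reference, the bound discussed above is conventionally written in Gallager's notation as below; this is the standard textbook form for a discrete memoryless channel with input distribution Q, given here as background rather than quoted from the paper.

```latex
\bar{P}_e \le \exp\{-N\,E_r(R)\}, \qquad
E_r(R) = \max_{0 \le \rho \le 1}\bigl[E_0(\rho, Q) - \rho R\bigr], \qquad
E_0(\rho, Q) = -\ln \sum_{j}\Bigl[\sum_{k} Q(k)\,P(j \mid k)^{1/(1+\rho)}\Bigr]^{1+\rho}.
```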

  19. Statistical relations among earthquake magnitude, surface rupture length, and surface fault displacement

    USGS Publications Warehouse

    Bonilla, Manuel G.; Mark, Robert K.; Lienkaemper, James J.

    1984-01-01

    In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation in which the variance results primarily from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
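
    A minimal sketch of the kind of regression referred to above, fitting surface-wave magnitude on the logarithm of rupture length by ordinary least squares; the data points and resulting coefficients below are hypothetical and do not reproduce the study's values.

```python
import numpy as np

# hypothetical rupture lengths (km) and surface-wave magnitudes
L_km = np.array([10.0, 25.0, 40.0, 80.0, 150.0, 300.0])
Ms = np.array([6.0, 6.4, 6.7, 7.0, 7.4, 7.8])

slope, intercept = np.polyfit(np.log10(L_km), Ms, 1)    # Ms = a + b*log10(L)
residuals = Ms - (intercept + slope * np.log10(L_km))
s = np.sqrt(np.sum(residuals ** 2) / (len(L_km) - 2))   # standard deviation about the fit
print(f"Ms = {intercept:.2f} + {slope:.2f} log10(L),  s = {s:.2f}")
```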

  20. Statistical relations among earthquake magnitude, surface rupture length, and surface fault displacement

    USGS Publications Warehouse

    Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.

    1984-01-01

    In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.

  1. Reduced vision in highly myopic eyes without ocular pathology: the ZOC-BHVI high myopia study.

    PubMed

    Jong, Monica; Sankaridurg, Padmaja; Li, Wayne; Resnikoff, Serge; Naidoo, Kovin; He, Mingguang

    2018-01-01

    The aim was to investigate the relationship of the magnitude of myopia with visual acuity in highly myopic eyes without ocular pathology. Twelve hundred and ninety-two highly myopic eyes (up to -6.00 DS both eyes, no astigmatic cut-off) with no ocular pathology from the ZOC-BHVI high myopia study in China, had cycloplegic refraction, followed by subjective refraction and visual acuities and axial length measurement. Two logistic regression models were undertaken to test the association of age, gender, refractive error, axial length and parental myopia with reduced vision. Mean group age was 19.0 ± 8.6 years; subjective spherical equivalent refractive error was -9.03 ± 2.73 D; objective spherical equivalent refractive error was -8.90 ± 2.60 D and axial length was 27.0 ± 1.3 mm. Using visual acuity, 82.4 per cent had normal vision, 16.0 per cent had mildly reduced vision, 1.2 per cent had moderately reduced vision, 0.3 per cent had severely reduced vision and no subjects were blind. The percentage with reduced vision increased with spherical equivalent to 74.5 per cent from -15.00 to -39.99 D, axial length to 67.7 per cent of eyes from 30.01 to 32.00 mm and age to 22.9 per cent of those 41 years and over. Spherical equivalent and axial length were significantly associated with reduced vision (p < 0.0001). Age and parental myopia were not significantly associated with reduced vision. Gender was significant for one model (p = 0.04). Mildly reduced vision is common in high myopia without ocular pathology and is strongly correlated with greater magnitudes of refractive error and axial length. Better understanding is required to minimise reduced vision in high myopes. © 2017 Optometry Australia.

  2. Forensic dental age estimation by measuring root dentin translucency area using a new digital technique.

    PubMed

    Acharya, Ashith B

    2014-05-01

    Dentin translucency measurement is an easy yet relatively accurate approach to postmortem age estimation. Translucency area represents a two-dimensional change and may reflect age variations better than length. Manually measuring area is challenging and this paper proposes a new digital method using commercially available computer hardware and software. Area and length were measured on 100 tooth sections (age range, 19-82 years) of 250 μm thickness. Regression analysis revealed lower standard error of estimate and higher correlation with age for length than for area (R = 0.62 vs. 0.60). However, test of regression formulae on a control sample (n = 33, 21-85 years) showed smaller mean absolute difference (8.3 vs. 8.8 years) and greater frequency of smaller errors (73% vs. 67% age estimates ≤ ± 10 years) for area than for length. These suggest that digital area measurements of root translucency may be used as an alternative to length in forensic age estimation. © 2014 American Academy of Forensic Sciences.
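
    A minimal sketch, under assumed data, of the validation step described above: fit age on translucency area in the study sample, then report the mean absolute difference and the share of estimates within ±10 years on a control sample. The measurement values below are hypothetical, not the study's data.

```python
import numpy as np

def fit_age_on_area(area, age):
    """Simple linear regression of age on translucency area."""
    slope, intercept = np.polyfit(area, age, 1)
    return lambda x: intercept + slope * np.asarray(x, dtype=float)

def validate(predict, area_control, age_control):
    """Mean absolute difference and fraction of estimates within +/-10 years."""
    abs_diff = np.abs(predict(area_control) - np.asarray(age_control, dtype=float))
    return abs_diff.mean(), np.mean(abs_diff <= 10.0)

# hypothetical translucency areas (mm^2) and known ages (years)
predict = fit_age_on_area([2.1, 4.0, 5.5, 7.2, 9.0], [25, 38, 47, 61, 74])
print(validate(predict, [3.0, 6.5, 8.1], [31, 55, 68]))
```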

  3. Effects of cane length and diameter and judgment type on the constant error ratio for estimated height in blindfolded, visually impaired, and sighted participants.

    PubMed

    Huang, Kuo-Chen; Leung, Cherng-Yee; Wang, Hsiu-Feng

    2010-04-01

    The purpose of this study was to assess the ability of blindfolded, visually impaired, and sighted individuals to estimate object height as a function of cane length, cane diameter, and judgment type. 48 undergraduate students (ages 20 to 23 years) were recruited to participate in the study. Participants were divided into low-vision, severely myopic, and normal-vision groups. Five stimulus heights were explored with three cane lengths, varying cane diameters, and judgment types. The participants were asked to estimate the stimulus height with or without reference to a standard block. Results showed that the constant error ratio for estimated height improved with decreasing cane length and comparative judgment. The findings were unclear regarding the effect of cane length on haptic perception of height. Implications were discussed for designing environments, such as stair heights, chairs, the magnitude of apertures, etc., for visually impaired individuals.

  4. How Preservice Teachers Interpret and Respond to Student Errors: Ratio and Proportion in Similar Rectangles

    ERIC Educational Resources Information Center

    Son, Ji-Won

    2013-01-01

    Interpreting and responding to student thinking are central tasks of reform-minded mathematics teaching. This study examined preservice teachers' (PSTs) interpretations of and responses to a student's error(s) involving finding a missing length in similar rectangles through a teaching scenario task. Fifty-seven PSTs' responses were…

  5. Automated River Reach Definition Strategies: Applications for the Surface Water and Ocean Topography Mission

    NASA Astrophysics Data System (ADS)

    Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André

    2017-10-01

    The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (˜5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of compatible sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
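
    As a hedged illustration of how height, width, and slope errors propagate into discharge, the sketch below assumes a Manning-type, wide-channel relation Q = k W H^(5/3) S^(1/2) and combines independent relative errors to first order; this functional form is an assumption for illustration, not the SWOT discharge algorithm itself.

```python
import numpy as np

def relative_discharge_error(rel_width, rel_depth, rel_slope):
    """First-order propagation of independent relative errors through
    Q = k * W * H**(5/3) * S**(1/2) (assumed functional form)."""
    return np.sqrt(rel_width ** 2
                   + (5.0 / 3.0 * rel_depth) ** 2
                   + (0.5 * rel_slope) ** 2)

# hypothetical reach-averaged relative errors in width, depth, and slope
print(relative_discharge_error(0.05, 0.08, 0.10))   # about 0.15, i.e. ~15%
```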

  6. Measuring a Fiber-Optic Delay Line Using a Mode-Locked Laser

    NASA Technical Reports Server (NTRS)

    Tu, Meirong; McKee, Michael R.; Pak, Kyung S.; Yu, Nan

    2010-01-01

    The figure schematically depicts a laboratory setup for determining the optical length of a fiber-optic delay line at a precision greater than that obtainable by use of optical time-domain reflectometry or of mechanical measurement of length during the delay-line-winding process. In this setup, the delay line becomes part of the resonant optical cavity that governs the frequency of oscillation of a mode-locked laser. The length can then be determined from frequency-domain measurements, as described below.
    The laboratory setup is basically an all-fiber ring laser in which the delay line constitutes part of the ring. Another part of the ring, the laser gain medium, is an erbium-doped fiber amplifier pumped by a diode laser at a wavelength of 980 nm. The loop also includes an optical isolator, two polarization controllers, and a polarizing beam splitter. The optical isolator enforces unidirectional lasing. The polarizing beam splitter allows light in only one polarization mode to pass through the ring; light in the orthogonal polarization mode is rejected from the ring and utilized as a diagnostic output, which is fed to an optical spectrum analyzer and a photodetector. The photodetector output is fed to a radio-frequency spectrum analyzer and an oscilloscope.
    The fiber ring laser can generate continuous-wave radiation in non-mode-locked operation or ultrashort optical pulses in mode-locked operation. The mode-locked operation exhibited by this ring is said to be passive in the sense that no electro-optical modulator or other active optical component is used to achieve it. Passive mode locking is achieved by exploiting the optical nonlinearity of passive components in such a manner as to obtain ultrashort optical pulses. In this setup, the particular nonlinear optical property exploited to achieve passive mode locking is nonlinear polarization rotation.
    This or any ring laser can support oscillation in multiple modes as long as sufficient gain is present to overcome losses in the ring. When mode locking is achieved, oscillation occurs in all the modes having the same phase and the same polarization. The frequency interval between modes, often denoted the free spectral range (FSR), is given by c/nL, where c is the speed of light in vacuum, n is the effective index of refraction of the fiber, and L is the total length of the optical path around the ring. Therefore, the length of the fiber-optic delay line, as part of the length around the ring, can be calculated from the FSRs measured with and without the delay line incorporated into the ring. For this purpose, the FSR measurements are made by use of the optical and radio-frequency spectrum analyzers.
    In experimentation on a 10-km-long fiber-optic delay line, it was found that this setup made it possible to measure the length to within a fractional error of about 3 × 10^-6, corresponding to a length error of 3 cm. In contrast, measurements by optical time-domain reflectometry and mechanical measurement were found to be much less precise: for optical time-domain reflectometry, the fractional error was found to be no less than 10^-4 (corresponding to a length error of 1 m), and for mechanical measurement, the fractional error was found to be about 10^-2 (corresponding to a length error of 100 m).
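
    A minimal sketch of the length determination described above: since the free spectral range obeys FSR = c / (n L), the delay-line length follows from the ring lengths computed with and without the delay line in the cavity. The effective index and FSR values below are assumptions for illustration, not measured values from the article.

```python
C_M_PER_S = 299_792_458.0          # speed of light in vacuum

def ring_length(fsr_hz, n_eff=1.468):
    """Total optical-path length of the ring from its free spectral range."""
    return C_M_PER_S / (n_eff * fsr_hz)

fsr_without_delay_line = 2.0e6     # hypothetical FSR of the short ring, Hz
fsr_with_delay_line = 20.4e3       # hypothetical FSR with the ~10 km delay line, Hz
delay_line_length = ring_length(fsr_with_delay_line) - ring_length(fsr_without_delay_line)
print(f"delay-line length ~ {delay_line_length:.1f} m")
```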

  7. Expected Position Error for an Onboard Satellite GPS Receiver

    DTIC Science & Technology

    2015-03-01

    Committee Membership: Dr. Alan Jennings, PhD (Chairman); Dr. Eric D. Swenson, PhD (Member); Dr. Marshall E. Haker, PhD (Member). AFIT-ENY-MS-15-M-029.

  8. Hard turning micro-machine tool

    DOEpatents

    DeVor, Richard E; Adair, Kurt; Kapoor, Shiv G

    2013-10-22

    A micro-scale apparatus for supporting a tool for hard turning comprises a base, a pivot coupled to the base, an actuator coupled to the base, and at least one member coupled to the actuator at one end and rotatably coupled to the pivot at another end. A tool mount is disposed on the at least one member. The at least one member defines a first lever arm between the pivot and the tool mount, and a second lever arm between the pivot and the actuator. The first lever arm has a length that is less than a length of the second lever arm. The actuator moves the tool mount along an arc.

  9. The Birch Street Irregulars: mysteries found and resolved in the AAVSO data archives

    NASA Astrophysics Data System (ADS)

    Beck, Sara J.; Saladyga, Michael; Mattei, Janet A.

    As they evaluate AAVSO data, AAVSO technical staff members run across several kinds of errors. This paper takes a humorous and Sherlock Holmes-style look at some of the most common kinds of errors detected, from observers recording the wrong Julian Date, misidentifying stars, transposing entries on the observer form, to garden-variety data entry errors.

  10. [Remote system of natural gas leakage based on multi-wavelength characteristics spectrum analysis].

    PubMed

    Li, Jing; Lu, Xu-Tao; Yang, Ze-Hui

    2014-05-01

    In order to monitor natural gas pipeline leakage quickly and over a wide area, a remote detection system for methane gas concentration was designed based on a static Fourier transform interferometer. The system uses infrared light, with its center wavelength calibrated to an absorption peak of the methane molecule, to irradiate the tested area, and obtains interference fringes through a converging collimation system and an interference module. The system then calculates the concentration-path-length product in the tested area with a multi-wavelength characteristic spectrum analysis algorithm and inverts the corresponding methane concentration. Based on the HITRAN spectral database, the 1.65 μm line was selected as the main characteristic absorption peak, and a 1.65 μm DFB laser was therefore used as the light source. To improve detection accuracy and stability without increasing the hardware configuration of the system, the absorbance ratio is solved using an auxiliary wavelength, and the concentration-path-length product of the measured gas is then obtained by the multi-wavelength characteristic ratio method. Measurement error caused by external disturbance is suppressed by this approach, which resembles a differential measurement: errors cancel in the process of solving the multi-wavelength characteristic ratio, which improves the accuracy and stability of the system. Because the infrared absorption spectrum of methane is constant, the ratio of the absorbances at any two wavelengths is also constant; the error coefficients produced by the system are the same when it receives the same external interference, so the measurement noise of the system can be effectively reduced by the ratio method. Experiments tested a standard methane gas tank with a constant leak rate. Data from a PN1000-type portable methane detector were used as the reference and were compared with the data measured by the system at distances of 100, 200 and 500 m. The experimental results show that the detected methane concentration was stable after a certain leakage time, and the concentration-path-length product value of the system was stable. For a detection distance of 100 m, the detection error of the concentration-path-length product was less than 1.0%. The detection error increased correspondingly with increasing distance from the tested area; at 500 m, the detection error was less than 4.5%. In short, the detection error of the system is less than 5.0% after the gas leakage stabilizes, meeting the requirements of remote sensing of natural gas leakage in the field.
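
    A minimal sketch of the two-wavelength ratio idea described above (not the paper's exact algorithm): under Beer-Lambert absorption, the ratio of the absorbances at two wavelengths equals the ratio of the absorption coefficients and is insensitive to disturbances common to both channels, while either absorbance alone yields the concentration-path-length product. The absorption coefficients below are hypothetical placeholders, not HITRAN values.

```python
import numpy as np

ALPHA_MAIN, ALPHA_AUX = 0.85, 0.30     # absorption coefficients, 1/(ppm*m), hypothetical

def concentration_path_length(i0_main, i_main, i0_aux, i_aux):
    """Return (c*L in ppm*m, absorbance ratio) from transmitted/reference intensities."""
    absorbance_main = -np.log(i_main / i0_main)
    absorbance_aux = -np.log(i_aux / i0_aux)
    ratio = absorbance_main / absorbance_aux   # should stay near ALPHA_MAIN / ALPHA_AUX
    return absorbance_main / ALPHA_MAIN, ratio

print(concentration_path_length(1.00, 0.918, 1.00, 0.970))
```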

  11. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method where the length between any pair of targets from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607

  12. Usability of a CKD educational website targeted to patients and their family members.

    PubMed

    Diamantidis, Clarissa J; Zuckerman, Marni; Fink, Wanda; Hu, Peter; Yang, Shiming; Fink, Jeffrey C

    2012-10-01

    Web-based technology is critical to the future of healthcare. As part of the Safe Kidney Care cohort study evaluating patient safety in CKD, this study determined how effectively a representative sample of patients with CKD or family members could interpret and use the Safe Kidney Care website (www.safekidneycare.org), an informational website on safety in CKD. Between November of 2011 and January of 2012, persons with CKD or their family members underwent formal usability testing administered by a single interviewer with a second recording observer. Each participant was independently provided a list of 21 tasks to complete, with each task rated as either easily completed/noncritical error or critical error (user cannot complete the task without significant interviewer intervention). Twelve participants completed formal usability testing. Median completion time for all tasks was 17.5 minutes (range=10-44 minutes). In total, 10 participants made at least one critical error. There were 55 critical errors in 252 tasks (22%), with the highest proportion of critical errors occurring when participants were asked to find information on treatments that may damage kidneys, find the website on the internet, increase font size, and scroll to the bottom of the webpage. Participants were generally satisfied with the content and usability of the website. Web-based educational materials for patients with CKD should target a wide range of computer literacy levels and anticipate variability in competency in use of the computer and internet.

  13. Modeling the probability distribution of positional errors incurred by residential address geocoding.

    PubMed

    Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard

    2007-01-10

    The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
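
    A minimal sketch of the error computation described above: each positional error is the vector difference between a geocoded point and its true location, and accuracy is summarised by the median error length. The data, the heavy-tailed offsets, and the projected (metric) coordinate system are assumptions for illustration only.

```python
# Compute positional geocoding errors as vector differences and summarise them.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
true_xy = rng.uniform(0, 50_000, size=(n, 2))                     # true parcel locations [m]
geocoded_xy = true_xy + rng.standard_t(df=3, size=(n, 2)) * 120   # heavy-tailed synthetic offsets

errors = geocoded_xy - true_xy                       # positional error vectors
lengths = np.hypot(errors[:, 0], errors[:, 1])       # error length per address

print(f"median error length: {np.median(lengths):.0f} m")
print(f"share of errors beyond 1 km: {np.mean(lengths > 1000):.1%}")
```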

  14. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and two slope-frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.

  15. Relationship between lenticular power and refractive error in children with hyperopia.

    PubMed

    Tomomatsu, Takeshi; Kono, Shinjiro; Arimura, Shogo; Tomomatsu, Yoko; Matsumura, Takehiro; Takihara, Yuji; Inatani, Masaru; Takamura, Yoshihiro

    2013-01-01

    To evaluate the contribution of axial length, and lenticular and corneal power to the spherical equivalent refractive error in children with hyperopia between 3 and 13 years of age, using noncontact optical biometry. There were 62 children between 3 and 13 years of age with hyperopia (+2 diopters [D] or more) who underwent automated refraction measurement with cycloplegia, to measure spherical equivalent refractive error and corneal power. Axial length was measured using an optic biometer that does not require contact with the cornea. The refractive power of the lens was calculated using the Sanders-Retzlaff-Kraff formula. Single regression analysis was used to evaluate the correlation among the optical parameters. There was a significant positive correlation between age and axial length (P = 0.0014); however, the degree of hyperopia did not decrease with aging (P = 0.59). There was a significant negative correlation between age and the refractive power of the lens (P = 0.0001) but not that of the cornea (P = 0.43). A significant negative correlation was observed between the degree of hyperopia and lenticular power (P < 0.0001). Although this study is small scale and cross sectional, the analysis, using noncontact biometry, showed that lenticular power was negatively correlated with refractive error and age, indicating that lower lens power may contribute to the degree of hyperopia.

  16. Medication Administration Practices of School Nurses.

    ERIC Educational Resources Information Center

    McCarthy, Ann Marie; Kelly, Michael W.; Reed, David

    2000-01-01

    Assessed medication administration practices among school nurses, surveying members of the National Association of School Nurses. Respondents were extremely concerned about medication administration. Errors in administering medications were reported by 48.5 percent of respondents, with missed doses the most common error. Most nurses followed…

  17. Filipino, Indonesian and Thai Listening Test Errors

    ERIC Educational Resources Information Center

    Castro, C. S.; And Others

    1975-01-01

    This article reports on a study to identify listening and aural comprehension difficulties experienced by students of English, specifically RELC (Regional English Language Centre in Singapore) course members. The most critical errors are discussed and conclusions about foreign language learning are drawn. (CLK)

  18. 5 CFR 890.1107 - Length of temporary continuation of coverage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the requirements for being considered a child who is a covered family member, unless it is terminated... the day before ceasing to meet the requirements for being considered children who are covered family members, were covered family members of a former employee receiving continued coverage under this subpart...

  19. Interocular symmetry in macular choroidal thickness in children.

    PubMed

    Al-Haddad, Christiane; El Chaar, Lama; Antonios, Rafic; El-Dairi, Mays; Noureddin, Baha'

    2014-01-01

    Objective. To report interocular differences in choroidal thickness in children using spectral domain optical coherence tomography (SD-OCT) and correlate findings with biometric data. Methods. This observational cross-sectional study included 91 (182 eyes) healthy children aged 6 to 17 years with no ocular abnormality except refractive error. After a comprehensive eye exam and axial length measurement, high definition macular scans were performed using SD-OCT. Two observers manually measured the choroidal thickness at the foveal center and at 1500 µm nasally, temporally, inferiorly, and superiorly. Interocular differences were computed; correlations with age, gender, refractive error, and axial length were performed. Results. Mean age was 10.40 ± 3.17 years; mean axial length and refractive error values were similar between fellow eyes. There was excellent correlation between the two observers' measurements. No significant interocular differences were observed at any location. There was only a trend for right eyes to have higher values in all thicknesses, except the superior thickness. Most of the choroidal thickness measurements correlated positively with spherical equivalent but not with axial length, age, or gender. Conclusion. Choroidal thickness measurements in children as performed using SD-OCT revealed a high level of interobserver agreement and consistent interocular symmetry. Values correlated positively with spherical equivalent refraction.

  20. A survey of community members' perceptions of medical errors in Oman

    PubMed Central

    Al-Mandhari, Ahmed S; Al-Shafaee, Mohammed A; Al-Azri, Mohammed H; Al-Zakwani, Ibrahim S; Khan, Mushtaq; Al-Waily, Ahmed M; Rizvi, Syed

    2008-01-01

    Background Errors have been the concern of providers and consumers of health care services. However, consumers' perception of medical errors in developing countries is rarely explored. The aim of this study is to assess community members' perceptions about medical errors and to analyse the factors affecting this perception in one Middle East country, Oman. Methods Face to face interviews were conducted with heads of 212 households in two villages in North Al-Batinah region of Oman selected because of close proximity to the Sultan Qaboos University (SQU), Muscat, Oman. Participants' perceived knowledge about medical errors was assessed. Responses were coded and categorised. Analyses were performed using Pearson's χ2, Fisher's exact tests, and multivariate logistic regression model wherever appropriate. Results Seventy-eight percent (n = 165) of participants believed they knew what was meant by medical errors. Of these, 34% and 26.5% related medical errors to wrong medications or diagnoses, respectively. Understanding of medical errors was correlated inversely with age and positively with family income. Multivariate logistic regression revealed that a one-year increase in age was associated with a 4% reduction in perceived knowledge of medical errors (CI: 1% to 7%; p = 0.045). The study found that 49% of those who believed they knew the meaning of medical errors had experienced such errors. The most common consequence of the errors was severe pain (45%). Of the 165 informed participants, 49% felt that an uncaring health care professional was the main cause of medical errors. Younger participants were able to list more possible causes of medical errors than were older subjects (Incident Rate Ratio of 0.98; p < 0.001). Conclusion The majority of participants believed they knew the meaning of medical errors. Younger participants were more likely to be aware of such errors and could list one or more causes. PMID:18664245

  1. The interaction of the flux errors and transport errors in modeled atmospheric carbon dioxide concentrations

    NASA Astrophysics Data System (ADS)

    Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.

    2017-12-01

    Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. Overall, the flux ensemble members overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows a strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. We hypothesize that the former is caused by the 3-hourly downscaling algorithm, in which nighttime respiration dominates the daytime modeled CO2 signals, and that the latter is mainly caused by the large-scale transport associated with the jet stream that carries the negative biogenic CO2 signals to the northeastern coast. We apply comprehensive statistics to eliminate outliers. We generate a set of flux perturbations based on pre-calibrated flux ensemble members and apply them to the simulations.

  2. Long distance quantum communication using quantum error correction

    NASA Technical Reports Server (NTRS)

    Gingrich, R. M.; Lee, H.; Dowling, J. P.

    2004-01-01

    We describe a quantum error correction scheme that can increase the effective absorption length of the communication channel. This device can play the role of a quantum transponder when placed in series, or a cyclic quantum memory when inserted in an optical loop.

  3. Are gestational age, birth weight, and birth length indicators of favorable fetal growth conditions? A structural equation analysis of Filipino infants.

    PubMed

    Bollen, Kenneth A; Noble, Mark D; Adair, Linda S

    2013-07-30

    The fetal origins hypothesis emphasizes the life-long health impacts of prenatal conditions. Birth weight, birth length, and gestational age are indicators of the fetal environment. However, these variables often have missing data and are subject to random and systematic errors caused by delays in measurement, differences in measurement instruments, and human error. With data from the Cebu (Philippines) Longitudinal Health and Nutrition Survey, we use structural equation models to explore random and systematic errors in these birth outcome measures, to analyze how maternal characteristics relate to birth outcomes, and to take account of missing data. We assess whether birth weight, birth length, and gestational age are influenced by a single latent variable that we call favorable fetal growth conditions (FFGC) and, if so, which variable is most closely related to FFGC. We find that a model with FFGC as a latent variable fits as well as a less parsimonious model that has birth weight, birth length, and gestational age as distinct individual variables. We also demonstrate that birth weight is more reliably measured than is gestational age. FFGCs were significantly influenced by taller maternal stature, better nutritional stores indexed by maternal arm fat and muscle area during pregnancy, higher birth order, avoidance of smoking, and maternal age 20-35 years. Effects of maternal characteristics on newborn weight, length, and gestational age were largely indirect, operating through FFGC. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Monte Carlo simulation of edge placement error

    NASA Astrophysics Data System (ADS)

    Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Estrella, Joel; Enomoto, Masashi

    2018-03-01

    In the discussion of edge placement error (EPE), we proposed interactive pattern fidelity error (IPFE) as an indicator for judging pass/fail of integrated patterns. IPFE combines the lower- and upper-layer EPEs (CD and center of gravity, COG) with overlay, and is determined from the combination of their maximum variations. We obtained the IPFE density function by Monte Carlo simulation. The results also show that the standard deviation (σ) of each indicator should be controlled to within 4.0σ at the semiconductor scale, i.e., on the order of 100 billion patterns per die. Moreover, CD, COG, and overlay were analyzed by analysis of variance (ANOVA), so that variations from wafer to wafer (WTW), pattern to pattern (PTP), line width roughness (LWR), and stochastic pattern noise (SPN) can be discussed on an equal footing. From the analysis results, we can determine to which process and tools these variations belong. Furthermore, the measurement length for LWR is also examined within the ANOVA. We propose that the measurement length for IPFE analysis should not be fixed at the micrometer scale (e.g., >2 μm), but chosen according to the device actually being targeted.
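
    The sketch below is a hedged stand-in for the Monte Carlo approach: it combines overlay, COG, and CD variations into an edge-placement-style error (a simplified decomposition, not the paper's exact IPFE definition) and checks how often the combined error exceeds a 4σ limit over many sampled patterns. All sigmas are invented for illustration.

```python
# Monte Carlo sketch of an edge-placement-style error budget (simplified stand-in
# for IPFE): per pattern pair, the edge placement error is taken as the sum of
# overlay, the centre-of-gravity (COG) shifts of both layers, and half of each
# CD deviation. All component sigmas are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 2_000_000                        # sampled pattern pairs
sigma = dict(overlay=1.5, cog_lower=0.8, cog_upper=0.8, cd_lower=1.0, cd_upper=1.0)  # nm

epe = (rng.normal(0, sigma["overlay"], n)
       + rng.normal(0, sigma["cog_lower"], n)
       + rng.normal(0, sigma["cog_upper"], n)
       + 0.5 * rng.normal(0, sigma["cd_lower"], n)
       + 0.5 * rng.normal(0, sigma["cd_upper"], n))

spec = 4.0 * epe.std()               # a 4-sigma limit on the combined error
fail_rate = np.mean(np.abs(epe) > spec)
print(f"combined sigma: {epe.std():.2f} nm, fraction beyond 4 sigma: {fail_rate:.2e}")
print(f"expected exceedances per 1e11-pattern die: {fail_rate * 1e11:.0f}")
```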

  5. Effect of twist on single-mode fiber-optic 3 × 3 couplers

    NASA Astrophysics Data System (ADS)

    Chen, Dandan; Ji, Minning; Peng, Lei

    2018-01-01

    In the fabrication of a 3 × 3 fused tapered coupler, the three fibers are usually twisted so that they remain in close contact. The effect of twist on 3 × 3 fused tapered couplers is investigated in this paper. It is found that although a linear 3 × 3 coupler can, in theory, achieve an equal power-splitting ratio by twisting through a specific angle, it is difficult to fabricate in practice because the twist angle and the coupler length must be determined in advance. An equilateral 3 × 3 coupler, by contrast, not only achieves an approximately equal power-splitting ratio in theory but can also be fabricated simply by controlling the elongation length. For the equilateral 3 × 3 coupler, the effect of twist appears in the relationship between the equal-ratio error and the twist angle: the larger the twist angle, the larger the equal-ratio error. The twist angle should usually be no larger than 90° over one coupling period length in order to keep the equal-ratio error small enough. The simulation results agree well with the experimental data.

  6. Effect of various digital processing algorithms on the measurement accuracy of endodontic file length.

    PubMed

    Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan

    2007-02-01

    The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.

  7. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    In reversible watermarking, the information embedded in an audio signal can be extracted while the original audio data are recovered losslessly. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: a relatively low signal-to-noise ratio (SNR) of the embedded audio, a large amount of auxiliary embedded location information, and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to produce the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold corresponding to a given embedding capacity can be computed by the proposed scheme. Experiments show that the algorithm improves the SNR of the embedded audio and the embedding capacity, drastically reduces the location map length, and enhances capacity control.
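
    Prediction error expansion itself can be shown in a few lines. The toy sketch below embeds and losslessly extracts a payload using a simple previous-sample predictor; it omits the differential-evolution predictor optimisation, histogram shifting, and overflow handling described above, so it is only a minimal illustration of the underlying mechanism.

```python
# Minimal sketch of prediction-error expansion (PEE) on an integer audio-like
# signal. Bits are embedded only in odd-indexed samples, each predicted from its
# (unmodified) left neighbour; real schemes add histogram shifting and a
# location map to handle overflow, which this sketch omits.
import numpy as np

def embed(signal, bits):
    y = signal.astype(np.int64)
    for k, b in enumerate(bits):
        i = 2 * k + 1                      # odd positions carry payload
        pred = y[i - 1]                    # even neighbours are never modified
        e = int(signal[i]) - int(pred)     # prediction error
        y[i] = pred + 2 * e + int(b)       # expanded error carries one bit
    return y

def extract(y, n_bits):
    y = y.astype(np.int64)
    x = y.copy()
    bits = []
    for k in range(n_bits):
        i = 2 * k + 1
        pred = y[i - 1]
        e_exp = int(y[i]) - int(pred)
        bits.append(e_exp % 2)             # hidden bit
        x[i] = pred + (e_exp // 2)         # lossless recovery of the original sample
    return np.array(bits), x

rng = np.random.default_rng(3)
audio = rng.integers(-2000, 2000, size=64)
payload = rng.integers(0, 2, size=30)

marked = embed(audio, payload)
recovered_bits, restored = extract(marked, len(payload))
assert np.array_equal(recovered_bits, payload) and np.array_equal(restored, audio)
print("payload and original signal recovered exactly")
```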

  8. A cognitive model for multidigit number reading: Inferences from individuals with selective impairments.

    PubMed

    Dotan, Dror; Friedmann, Naama

    2018-04-01

    We propose a detailed cognitive model of multi-digit number reading. The model postulates separate processes for visual analysis of the digit string and for oral production of the verbal number. Within visual analysis, separate sub-processes encode the digit identities and the digit order, and additional sub-processes encode the number's decimal structure: its length, the positions of 0, and the way it is parsed into triplets (e.g., 314987 → 314,987). Verbal production consists of a process that generates the verbal structure of the number, and another process that retrieves the phonological forms of each number word. The verbal number structure is first encoded in a tree-like structure, similarly to syntactic trees of sentences, and then linearized to a sequence of number-word specifiers. This model is based on an investigation of the number processing abilities of seven individuals with different selective deficits in number reading. We report participants with impairment in specific sub-processes of the visual analysis of digit strings - in encoding the digit order, in encoding the number length, or in parsing the digit string to triplets. Other participants were impaired in verbal production, making errors in the number structure (shifts of digits to another decimal position, e.g., 3,040 → 30,004). Their selective deficits yielded several dissociations: first, we found a double dissociation between visual analysis deficits and verbal production deficits. Second, several dissociations were found within visual analysis: a double dissociation between errors in digit order and errors in the number length; a dissociation between order/length errors and errors in parsing the digit string into triplets; and a dissociation between the processing of different digits - impaired order encoding of the digits 2-9, without errors in the 0 position. Third, within verbal production, a dissociation was found between digit shifts and substitutions of number words. A selective deficit in any of the processes described by the model would cause difficulties in number reading, which we propose to term "dysnumeria". Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A comprehensive allometric analysis of 2nd digit length to 4th digit length in humans.

    PubMed

    Lolli, Lorenzo; Batterham, Alan M; Kratochvíl, Lukáš; Flegr, Jaroslav; Weston, Kathryn L; Atkinson, Greg

    2017-06-28

    It has been widely reported that men have a lower ratio of the 2nd and 4th human finger lengths (2D : 4D). Size-scaling ratios, however, have the seldom-appreciated potential for providing biased estimates. Using an information-theoretic approach, we compared 12 candidate models, with different assumptions and error structures, for scaling untransformed 2D to 4D lengths from 154 men and 262 women. In each hand, the two-parameter power function and the straight line with intercept models, both with normal, homoscedastic error, were superior to the other models and essentially equivalent to each other for normalizing 2D to 4D lengths. The conventional 2D : 4D ratio biased relative 2D length low for the generally bigger hands of men, and vice versa for women, thereby leading to an artefactual indication that mean relative 2D length is lower in men than women. Conversely, use of the more appropriate allometric or linear regression models revealed that mean relative 2D length was, in fact, greater in men than women. We conclude that 2D does not vary in direct proportion to 4D for both men and women, rendering the use of the simple 2D : 4D ratio inappropriate for size-scaling purposes and intergroup comparisons. © 2017 The Author(s).
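
    The bias argument can be illustrated numerically: if 2D follows a shared power-law (allometric) relation to 4D with an exponent below 1, the raw 2D:4D ratio drifts lower for bigger hands even though relative 2D length is identical, while a log-log fit recovers the shared exponent. All numbers in the sketch are invented for illustration.

```python
# Why a simple 2D:4D ratio can bias group comparisons under allometric scaling.
# Synthetic data follow one shared relation 2D = a * 4D^b with b < 1, so relative
# 2D length is identical for everyone, yet the raw ratio drifts downward for
# bigger hands. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
a, b = 2.2, 0.85                                 # shared allometric relation (assumed)
d4_small = rng.normal(65, 3, 300)                # smaller hands, 4D length [mm]
d4_large = rng.normal(72, 3, 300)                # larger hands, 4D length [mm]

def d2(d4):
    return a * d4**b * np.exp(rng.normal(0, 0.01, d4.shape))  # multiplicative noise

ratio_small = d2(d4_small) / d4_small
ratio_large = d2(d4_large) / d4_large
print(f"mean 2D:4D, smaller hands: {ratio_small.mean():.3f}")
print(f"mean 2D:4D, larger hands:  {ratio_large.mean():.3f}   (ratio biased low)")

# A power-function (log-log) fit on pooled data recovers the shared exponent,
# so no spurious group difference is implied.
d4_all = np.concatenate([d4_small, d4_large])
d2_all = np.concatenate([d2(d4_small), d2(d4_large)])
slope, intercept = np.polyfit(np.log(d4_all), np.log(d2_all), 1)
print(f"fitted exponent b: {slope:.3f} (true {b})")
```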

  10. Feedback stabilization system for pulsed single longitudinal mode tunable lasers

    DOEpatents

    Esherick, Peter; Raymond, Thomas D.

    1991-10-01

    A feedback stabilization system for pulsed single longitudinal mode tunable lasers having an excited laser medium contained within an adjustable-length cavity and producing a laser beam through the use of an internal dispersive element, including detection of angular deviation in the output laser beam resulting from detuning between the cavity mode frequency and the passband of the internal dispersive element, and generating an error signal based thereon. The error signal can be integrated and amplified and then applied as a correcting signal to a piezoelectric transducer mounted on a mirror of the laser cavity for controlling the cavity length.
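
    A minimal sketch of the control idea, with invented gains, drift, and noise: the error signal (here taken as proportional to the cavity detuning) is integrated each pulse and fed back to a piezo-driven cavity-length correction, keeping the residual detuning small.

```python
# Discrete-time integral feedback sketch for cavity-length stabilisation.
# Error signal ~ angular deviation ~ detuning; integrated correction drives the PZT.
import numpy as np

rng = np.random.default_rng(5)
n_pulses = 500
detuning = 0.0          # cavity-mode offset from the dispersive-element passband (a.u.)
piezo = 0.0             # accumulated piezo correction (a.u.)
k_error = 1.0           # error-signal sensitivity (assumed)
k_int = 0.2             # integrator gain applied per pulse (assumed)

history = []
for _ in range(n_pulses):
    detuning += rng.normal(0, 0.01) + 0.002      # slow thermal drift plus jitter
    error_signal = k_error * (detuning - piezo)  # measured angular deviation
    piezo += k_int * error_signal                # integrate and feed back to the PZT
    history.append(detuning - piezo)             # residual detuning after correction

print(f"rms residual detuning with feedback: {np.std(history):.4f} (a.u.)")
```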

  11. Evaluation of a Short-Form of the Berg Card Sorting Test

    PubMed Central

    Fox, Christopher J.; Mueller, Shane T.; Gray, Hilary M.; Raber, Jacob; Piper, Brian J.

    2013-01-01

    The Psychology Experimental Building Language (http://pebl.sourceforge.net/) Berg Card Sorting Test is an open-source neurobehavioral test. Participants (N = 207, ages 6 to 74) completed the Berg Card Sorting Test. Performance on the first 64 trials was isolated and compared with that on the full-length (128-trial) test. Strong correlations between the short and long forms (total errors: r = .87, perseverative responses: r = .83, perseverative errors: r = .77, categories completed: r = .86) support the Berg Card Sorting Test-64 as an abbreviated alternative for the full-length executive function test. PMID:23691107

  12. Error and bias in size estimates of whale sharks: implications for understanding demography.

    PubMed

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G

    2016-03-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of larger individuals that were possibly mature (or close to maturity) that were detected increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species.

  13. Stunting, adiposity, and the individual-level "dual burden" among urban lowland and rural highland Peruvian children.

    PubMed

    Pomeroy, Emma; Stock, Jay T; Stanojevic, Sanja; Miranda, J Jaime; Cole, Tim J; Wells, Jonathan C K

    2014-01-01

    The causes of the "dual burden" of stunting and obesity remain unclear, and its existence at the individual level varies between populations. We investigate whether the individual dual burden differentially affects low socioeconomic status Peruvian children from contrasting environments (urban lowlands and rural highlands), and whether tibia length can discount the possible autocorrelation between adiposity proxies and height due to height measurement error. Stature, tibia length, weight, and waist circumference were measured in children aged 3-8.5 years (n = 201). Height and body mass index (BMI) z scores were calculated using international reference data. Age-sex-specific centile curves were also calculated for height, BMI, and tibia length. Adiposity proxies (BMI z score, waist circumference-height ratio (WCHtR)) were regressed on height and also on tibia length z scores. Regression model interaction terms between site (highland vs. lowland) and height indicate that relationships between adiposity and linear growth measures differed significantly between samples (P < 0.001). Height was positively associated with BMI among urban lowland children, and more weakly with WCHtR. Among rural highland children, height was negatively associated with WCHtR but unrelated to BMI. Similar results using tibia length rather than stature indicate that stature measurement error was not a major concern. Lowland and rural highland children differ in their patterns of stunting, BMI, and WCHtR. These contrasts likely reflect environmental differences and overall environmental stress exposure. Tibia length or knee height can be used to assess the influence of measurement error in height on the relationship between stature and BMI or WCHtR. Copyright © 2014 Wiley Periodicals, Inc.

  14. Bond Strength of Composite CFRP Reinforcing Bars in Timber

    PubMed Central

    Corradi, Marco; Righetti, Luca; Borri, Antonio

    2015-01-01

    The use of near-surface mounted (NSM) fibre-reinforced polymer (FRP) bars is an interesting method for increasing the shear and flexural strength of existing timber members. This article examines the behaviour of carbon FRP (CFRP) bars in timber under direct pull-out conditions. The objective of this experimental program is to investigate the bond strength between composite bars and timber: bars were epoxied into small notches made into chestnut and fir wood members using a commercially-available epoxy system. Bonded lengths varied from 150 to 300 mm. Failure modes, stress and strain distributions and the bond strength of CFRP bars have been evaluated and discussed. The pull-out capacity in NSM CFRP bars at the onset of debonding increased with bonded length up to a length of 250 mm. While CFRP bar’s pull-out was achieved only for specimens with bonded lengths of 150 and 200 mm, bar tensile failure was mainly recorded for bonded lengths of 250 and 300 mm. PMID:28793423

  15. Usability of a CKD Educational Website Targeted to Patients and Their Family Members

    PubMed Central

    Zuckerman, Marni; Fink, Wanda; Hu, Peter; Yang, Shiming; Fink, Jeffrey C.

    2012-01-01

    Summary Background and objectives Web-based technology is critical to the future of healthcare. As part of the Safe Kidney Care cohort study evaluating patient safety in CKD, this study determined how effectively a representative sample of patients with CKD or family members could interpret and use the Safe Kidney Care website (www.safekidneycare.org), an informational website on safety in CKD. Design, setting, participants, & measurements Between November of 2011 and January of 2012, persons with CKD or their family members underwent formal usability testing administered by a single interviewer with a second recording observer. Each participant was independently provided a list of 21 tasks to complete, with each task rated as either easily completed/noncritical error or critical error (user cannot complete the task without significant interviewer intervention). Results Twelve participants completed formal usability testing. Median completion time for all tasks was 17.5 minutes (range=10–44 minutes). In total, 10 participants had greater than or equal to one critical error. There were 55 critical errors in 252 tasks (22%), with the highest proportion of critical errors occurring when participants were asked to find information on treatments that may damage kidneys, find the website on the internet, increase font size, and scroll to the bottom of the webpage. Participants were generally satisfied with the content and usability of the website. Conclusions Web-based educational materials for patients with CKD should target a wide range of computer literacy levels and anticipate variability in competency in use of the computer and internet. PMID:22798537

  16. Structure of the highly repeated, long interspersed DNA family (LINE or L1Rn) of the rat.

    PubMed Central

    D'Ambrosio, E; Waitzkin, S D; Witney, F R; Salemme, A; Furano, A V

    1986-01-01

    We present the DNA sequence of a 6.7-kilobase member of the rat long interspersed repeated DNA family (LINE or L1Rn). This member (LINE 3) is flanked by a perfect 14-base-pair (bp) direct repeat and is a full-length, or close-to-full-length, member of this family. LINE 3 contains an approximately 100-bp A-rich right end, a number of long (greater than 400-bp) open reading frames, and a ca. 200-bp G + C-rich (ca. 60%) cluster near each terminus. Comparison of the LINE 3 sequence with the sequence of about one-half of another member, which we also present, as well as restriction enzyme analysis of the genomic copies of this family, indicates that in length and overall structure LINE 3 is quite typical of the 40,000 or so other genomic members of this family which would account for as much as 10% of the rat genome. Therefore, the rat LINE family is relatively homogeneous, which contrasts with the heterogeneous LINE families in primates and mice. Transcripts corresponding to the entire LINE sequence are abundant in the nuclear RNA of rat liver. The characteristics of the rat LINE family are discussed with respect to the possible function and evolution of this family of DNA sequences. Images PMID:3023845

  17. Comparing K-mer based methods for improved classification of 16S sequences.

    PubMed

    Vinje, Hilde; Liland, Kristian Hovde; Almøy, Trygve; Snipen, Lars

    2015-07-01

    The need for precise and stable taxonomic classification is highly relevant in modern microbiology. Parallel to the explosion in the amount of sequence data accessible, there has also been a shift in focus for classification methods. Previously, alignment-based methods were the most applicable tools. Now, methods based on counting K-mers by sliding windows are the most interesting classification approach with respect to both speed and accuracy. Here, we present a systematic comparison of five different K-mer based classification methods for the 16S rRNA gene. The methods differ from each other both in data usage and modelling strategies. We have based our study on the commonly known and well-used naïve Bayes classifier from the RDP project, and four other methods were implemented and tested on two different data sets, on full-length sequences as well as fragments of typical read-length. The differences in classification error among the methods were small, but they were stable across both data sets tested. The Preprocessed nearest-neighbour (PLSNN) method performed best for full-length 16S rRNA sequences, significantly better than the naïve Bayes RDP method. On fragmented sequences the naïve Bayes Multinomial method performed best, significantly better than all other methods. For both data sets explored, and on both full-length and fragmented sequences, all five methods reached an error plateau. We conclude that no K-mer based method is universally best for classifying both full-length sequences and fragments (reads). All methods approach an error plateau, indicating that improved training data are needed to improve classification from here. Classification errors occur most frequently for genera with few sequences present. For improving the taxonomy and testing new classification methods, a better, more universal, and more robust training data set is crucial.
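
    The sliding-window K-mer idea maps naturally onto character n-gram counting plus a multinomial naive Bayes classifier; the toy sketch below uses synthetic sequences (not real 16S data) and an assumed K of 4, so it only illustrates the general approach rather than any of the five methods compared above.

```python
# Toy sketch of K-mer based classification: sequences are turned into K-mer count
# vectors by a sliding window and fed to a multinomial naive Bayes classifier.
# Sequences and labels are synthetic stand-ins, not real 16S data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(6)
def make_seq(bias, length=300):
    # in this toy example, "genera" differ only in nucleotide composition
    return "".join(rng.choice(list("ACGT"), p=bias, size=length))

train_seqs = [make_seq([0.4, 0.1, 0.1, 0.4]) for _ in range(20)] + \
             [make_seq([0.1, 0.4, 0.4, 0.1]) for _ in range(20)]
train_labels = ["GenusA"] * 20 + ["GenusB"] * 20

k = 4                                                   # assumed K-mer size
vectorizer = CountVectorizer(analyzer="char", ngram_range=(k, k), lowercase=False)
X_train = vectorizer.fit_transform(train_seqs)
clf = MultinomialNB().fit(X_train, train_labels)

# classify short fragments of typical read length
test_seqs = [make_seq([0.4, 0.1, 0.1, 0.4], length=150),
             make_seq([0.1, 0.4, 0.4, 0.1], length=150)]
print(clf.predict(vectorizer.transform(test_seqs)))
```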

  18. Dependence of the bit error rate on the signal power and length of a single-channel coherent single-span communication line (100 Gbit s⁻¹) with polarisation division multiplexing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurkin, N V; Konyshev, V A; Novikov, A G

    2015-01-31

    We have studied experimentally and using numerical simulations and a phenomenological analytical model the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s⁻¹ DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input and the dependence of the admissible input signal power range for implementation of communication lines with lengths from 30 – 50 km up to a maximum length of 250 km. (optical transmission of information)

  19. Comparison of Errors Using Two Length-Based Tape Systems for Prehospital Care in Children.

    PubMed

    Rappaport, Lara D; Brou, Lina; Givens, Tim; Mandt, Maria; Balakas, Ashley; Roswell, Kelley; Kotas, Jason; Adelgais, Kathleen M

    2016-01-01

    The use of a length/weight-based tape (LBT) for equipment size and drug dosing for pediatric patients is recommended in a joint statement by multiple national organizations. A new system, known as Handtevy™, allows for rapid determination of critical drug doses without performing calculations. To compare two LBT systems for dosing errors and time to medication administration in simulated prehospital scenarios. This was a prospective randomized trial comparing the Broselow Pediatric Emergency Tape™ (Broselow) and Handtevy LBT™ (Handtevy). Paramedics performed 2 pediatric simulations: cardiac arrest with epinephrine administration and hypoglycemia mandating dextrose. Each scenario was repeated utilizing both systems with a 1-year-old and 5-year-old size manikin. Facilitators recorded identified errors and time points of critical actions including time to medication. We enrolled 80 paramedics, performing 320 simulations. For Dextrose, there were significantly more errors with Broselow (63.8%) compared to Handtevy (13.8%) and time to administration was longer with the Broselow system (220 seconds vs. 173 seconds). For epinephrine, the LBTs were similar in overall error rate (Broselow 21.3% vs. Handtevy 16.3%) and time to administration (89 vs. 91 seconds). Cognitive errors were more frequent when using the Broselow compared to Handtevy, particularly with dextrose administration. The frequency of procedural errors was similar between the two LBT systems. In simulated prehospital scenarios, use of the Handtevy LBT system resulted in fewer errors for dextrose administration compared to the Broselow LBT, with similar time to administration and accuracy of epinephrine administration.

  20. Geometric calibration of a coordinate measuring machine using a laser tracking system

    NASA Astrophysics Data System (ADS)

    Umetsu, Kenta; Furutnani, Ryosyu; Osawa, Sonko; Takatsuji, Toshiyuki; Kurosawa, Tomizo

    2005-12-01

    This paper proposes a calibration method for a coordinate measuring machine (CMM) using a laser tracking system. The laser tracking system can measure three-dimensional coordinates based on the principle of trilateration with high accuracy and is easy to set up. The accuracy of length measurement of a single laser tracking interferometer (laser tracker) is about 0.3 µm over a length of 600 mm. In this study, we first measured 3D coordinates using the laser tracking system. Secondly, 21 geometric errors, namely, parametric errors of the CMM, were estimated by the comparison of the coordinates obtained by the laser tracking system and those obtained by the CMM. As a result, the estimated parametric errors agreed with those estimated by a ball plate measurement, which demonstrates the validity of the proposed calibration system.
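
    The trilateration principle can be sketched as a small nonlinear least-squares problem: recover a point's coordinates from distance measurements to known tracker stations. The station layout and noise level below are illustrative; estimating the 21 parametric errors of a CMM would then compare many such points against the CMM's own readings.

```python
# Minimal trilateration sketch: a point's 3D coordinates are recovered from
# interferometric distance measurements to several known tracker stations.
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0, 0.0],
                     [1.2, 0.0, 0.0],
                     [0.0, 1.0, 0.1],
                     [0.6, 0.8, 0.9]])          # assumed tracker positions [m]
point_true = np.array([0.45, 0.32, 0.27])

rng = np.random.default_rng(7)
ranges = np.linalg.norm(stations - point_true, axis=1) + rng.normal(0, 0.3e-6, 4)

def residuals(p):
    # difference between modelled and measured distances to each station
    return np.linalg.norm(stations - p, axis=1) - ranges

sol = least_squares(residuals, x0=np.array([0.5, 0.5, 0.5]))
print("estimated point [m]:", np.round(sol.x, 7))
print("residual norm [m]:", np.linalg.norm(residuals(sol.x)))
```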

  1. Social contact patterns can buffer costs of forgetting in the evolution of cooperation.

    PubMed

    Stevens, Jeffrey R; Woike, Jan K; Schooler, Lael J; Lindner, Stefan; Pachur, Thorsten

    2018-06-13

    Analyses of the evolution of cooperation often rely on two simplifying assumptions: (i) individuals interact equally frequently with all social network members and (ii) they accurately remember each partner's past cooperation or defection. Here, we examine how more realistic, skewed patterns of contact, in which individuals interact primarily with only a subset of their network's members, influence cooperation. In addition, we test whether skewed contact patterns can counteract the decrease in cooperation caused by memory errors (i.e. forgetting). Finally, we compare two types of memory error that vary in whether forgotten interactions are replaced with random actions or with actions from previous encounters. We use evolutionary simulations of repeated prisoner's dilemma games that vary agents' contact patterns, forgetting rates and types of memory error. We find that highly skewed contact patterns foster cooperation and also buffer the detrimental effects of forgetting. The type of memory error used also influences cooperation rates. Our findings reveal previously neglected but important roles of contact pattern, type of memory error and the interaction of contact pattern and memory on cooperation. Although cognitive limitations may constrain the evolution of cooperation, social contact patterns can counteract some of these constraints. © 2018 The Author(s).

  2. Relationship between lenticular power and refractive error in children with hyperopia

    PubMed Central

    Tomomatsu, Takeshi; Kono, Shinjiro; Arimura, Shogo; Tomomatsu, Yoko; Matsumura, Takehiro; Takihara, Yuji; Inatani, Masaru; Takamura, Yoshihiro

    2013-01-01

    Objectives To evaluate the contribution of axial length, and lenticular and corneal power to the spherical equivalent refractive error in children with hyperopia between 3 and 13 years of age, using noncontact optical biometry. Methods There were 62 children between 3 and 13 years of age with hyperopia (+2 diopters [D] or more) who underwent automated refraction measurement with cycloplegia, to measure spherical equivalent refractive error and corneal power. Axial length was measured using an optic biometer that does not require contact with the cornea. The refractive power of the lens was calculated using the Sanders-Retzlaff-Kraff formula. Single regression analysis was used to evaluate the correlation among the optical parameters. Results There was a significant positive correlation between age and axial length (P = 0.0014); however, the degree of hyperopia did not decrease with aging (P = 0.59). There was a significant negative correlation between age and the refractive power of the lens (P = 0.0001) but not that of the cornea (P = 0.43). A significant negative correlation was observed between the degree of hyperopia and lenticular power (P < 0.0001). Conclusion Although this study is small scale and cross sectional, the analysis, using noncontact biometry, showed that lenticular power was negatively correlated with refractive error and age, indicating that lower lens power may contribute to the degree of hyperopia. PMID:23576859

  3. The statistical properties and possible causes of polar motion prediction errors

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria

    2015-08-01

    The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by looking at the time series of differences between them and the future IERS pole coordinate data. The mean absolute errors, standard deviations, skewness, and kurtosis of these differences were computed, together with their error bars, as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences follow a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Nonzero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase, computed by combining a Fourier transform band-pass filter with the Hilbert transform, from pole coordinate data and from pole coordinate model data obtained from fluid excitations are in good agreement.
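
    A sketch of the summary statistics used above, computed per prediction length on a synthetic prediction-error series; the growth of spread and the change in tail weight with horizon are assumptions for illustration, not the EOPCPPP results.

```python
# Per-horizon summary of prediction-minus-observation differences:
# mean absolute error, standard deviation, skewness, and (excess) kurtosis.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(8)
horizons = [1, 10, 30, 90]                 # prediction length in days
n_predictions = 500

for h in horizons:
    # assumed behaviour: spread grows with horizon, tails become lighter
    diffs = rng.standard_t(df=3 + h // 10, size=n_predictions) * (0.1 * np.sqrt(h))
    print(f"{h:3d} d horizon | MAE {np.mean(np.abs(diffs)):.3f} mas | "
          f"std {np.std(diffs):.3f} | skew {skew(diffs):+.2f} | "
          f"excess kurtosis {kurtosis(diffs):+.2f}")
```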

  4. Dye filled security seal

    DOEpatents

    Wilson, Dennis C. W.

    1982-04-27

    A security seal for providing an indication of unauthorized access to a sealed object includes an elongate member to be entwined in the object such that access is denied unless the member is removed. The elongate member has a hollow, pressurizable chamber extending throughout its length that is filled with a permanent dye under greater than atmospheric pressure. Attempts to cut the member and weld it together are revealed when dye flows through a rupture in the chamber wall and stains the outside surface of the member.

  5. Spatial range of illusory effects in Müller-Lyer figures.

    PubMed

    Predebon, J

    2001-11-01

    The spatial range of the illusory effects in Müller-Lyer (M-L) figures was examined in three experiments. Experiments 1 and 2 assessed the pattern of bisection errors along the shaft of the standard or double-angle (experiment 1) and the single-angle (experiment 2) M-L figures: Subjects bisected the shaft and the resulting two half-segments of the shaft to produce apparently equal quarters, and then each of the quarters to produce eight equal-appearing segments. The bisection judgments of each segment were referenced to the segment's physical midpoints. The expansion or wings-out and the contraction or wings-in figures yielded similar patterns of bisection errors. For the standard M-L figures, there were significant errors in bisecting each half, and each end-quarter, but not the two central quarters of the shaft. For the single-angle M-L figures, there were significant errors in bisecting the length of the shaft, the half-segment, and the quarter, of the shaft adjacent to the vertex but not the second quarter from the vertex nor in dividing the half of the shaft at the open end of the figure into four equal intervals. Experiment 3 assessed the apparent length of the half-segment of the shaft at the open end of the single-angle figures. Length judgments were unaffected by the vertex at the opposite end of the shaft. Taken together, the results indicate that the length distortions in both the standard and single-angle M-L figures are not uniformly distributed along the shaft but rather are confined mainly to the quarters adjacent to the vertices. The present findings imply that theories of the M-L illusion which assume uniform expansion or contraction of the shafts are incomplete.

  6. Statistical modelling of thermal annealing of fission tracks in apatite

    NASA Astrophysics Data System (ADS)

    Laslett, G. M.; Galbraith, R. F.

    1996-12-01

    We develop an improved methodology for modelling the relationship between mean track length, temperature, and time in fission track annealing experiments. We consider "fanning Arrhenius" models, in which contours of constant mean length on an Arrhenius plot are straight lines meeting at a common point. Features of our approach are explicit use of subject matter knowledge, treating mean length as the response variable, modelling of the mean-variance relationship with two components of variance, improved modelling of the control sample, and using information from experiments in which no tracks are seen. This approach overcomes several weaknesses in previous models and provides a robust six parameter model that is widely applicable. Estimation is via direct maximum likelihood which can be implemented using a standard numerical optimisation package. Because the model is highly nonlinear, some reparameterisations are needed to achieve stable estimation and calculation of precisions. Experience suggests that precisions are more convincingly estimated from profile log-likelihood functions than from the information matrix. We apply our method to the B-5 and Sr fluorapatite data of Crowley et al. (1991) and obtain well-fitting models in both cases. For the B-5 fluorapatite, our model exhibits less fanning than that of Crowley et al. (1991), although fitted mean values above 12 μm are fairly similar. However, predictions can be different, particularly for heavy annealing at geological time scales, where our model is less retentive. In addition, the refined error structure of our model results in tighter prediction errors, and has components of error that are easier to verify or modify. For the Sr fluorapatite, our fitted model for mean lengths does not differ greatly from that of Crowley et al. (1991), but our error structure is quite different.

  7. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    PubMed

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

    Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length, they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between all durations of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors. © 2013.
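
    The Bayesian central-tendency account can be sketched directly: the reproduced interval is modelled as a precision-weighted mix of the sensed interval and the mean of the current interval distribution, so lower sensory precision produces stronger regression to the mean. The precisions and the interval range below are illustrative, not the study's values.

```python
# Precision-weighted central-tendency model of interval reproduction.
import numpy as np

rng = np.random.default_rng(9)
intervals = rng.uniform(500, 850, size=2000)             # current stimulus distribution [ms]
prior_mean, prior_var = intervals.mean(), intervals.var()

def reproduce(true_ms, sensory_sd):
    sensed = true_ms + rng.normal(0, sensory_sd, true_ms.shape)
    w = (1 / sensory_sd**2) / (1 / sensory_sd**2 + 1 / prior_var)   # weight on the data
    return w * sensed + (1 - w) * prior_mean                        # regression to the mean

# a slope below 1 of reproduced vs true interval indicates regression to the mean
for label, sd in [("precise observer (musician-like)", 30),
                  ("noisy observer (non-musician-like)", 90)]:
    slope = np.polyfit(intervals, reproduce(intervals, sd), 1)[0]
    print(f"{label:36s} slope: {slope:.2f}")
```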

  8. Psychosocial work environment and prediction of quality of care indicators in one Canadian health center.

    PubMed

    Paquet, Maxime; Courcy, François; Lavoie-Tremblay, Mélanie; Gagnon, Serge; Maillet, Stéphanie

    2013-05-01

    Few studies link organizational variables and outcomes to quality indicators. This approach would expose operant mechanisms by which work environment characteristics and organizational outcomes affect clinical effectiveness, safety, and quality indicators. What are the predominant psychosocial variables in the explanation of organizational outcomes and quality indicators (in this case, medication errors and length of stay)? The primary objective of this study was to link the fields of evidence-based practice to the field of decision making, by providing an effective model of intervention to improve safety and quality. The study involved healthcare workers (n = 243) from 13 different care units of a university affiliated health center in Canada. Data regarding the psychosocial work environment (10 work climate scales, effort/reward imbalance, and social support) was linked to organizational outcomes (absenteeism, turnover, overtime), to the nurse/patient ratio and quality indicators (medication errors and length of stay) using path analyses. The models produced in this study revealed a contribution of some psychosocial factors to quality indicators, through an indirect effect of personnel- or human resources-related variables, more precisely: turnover, absenteeism, overtime, and nurse/patient ratio. Four perceptions of work environment appear to play an important part in the indirect effect on both medication errors and length of stay: apparent social support from supervisors, appreciation of the workload demands, pride in being part of one's work team, and effort/reward balance. This study reveals the importance of employee perceptions of the work environment as an indirect predictor of quality of care. Working to improve these perceptions is a good investment for loyalty and attendance. In general, better personnel conditions lead to fewer medication errors and shorter length of stay. © Sigma Theta Tau International.

  9. CCSDT calculations of molecular equilibrium geometries

    NASA Astrophysics Data System (ADS)

    Halkier, Asger; Jørgensen, Poul; Gauss, Jürgen; Helgaker, Trygve

    1997-08-01

    CCSDT equilibrium geometries of CO, CH2, F2, HF, H2O and N2 have been calculated using the correlation-consistent cc-pVXZ basis sets. Similar calculations have been performed for SCF, CCSD and CCSD(T). In general, bond lengths decrease when improving the basis set and increase when improving the N-electron treatment. CCSD(T) provides an excellent approximation to CCSDT for bond lengths as the largest difference between CCSDT and CCSD(T) is 0.06 pm. At the CCSDT/cc-pVQZ level, basis set deficiencies, neglect of higher-order excitations, and incomplete treatment of core-correlation all give rise to errors of a few tenths of a pm, but to a large extent, these errors cancel. The CCSDT/cc-pVQZ bond lengths deviate on average only by 0.11 pm from experiment.

  10. Most suitable mother wavelet for the analysis of fractal properties of stride interval time series via the average wavelet coefficient

    PubMed Central

    Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan

    2016-01-01

    Human gait is a complex interaction of many nonlinear systems and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of relative diseases. A previous study showed that the average wavelet method provides the most accurate estimates of this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that indicate a range of physiologically conceivable fractal signals. Five candidate wavelets were then chosen due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet regardless of signal length. PMID:27960102
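
    A sketch of the average-wavelet-coefficient estimate using the symlet 2 wavelet recommended above; it relies on PyWavelets and on the standard scaling of detail-coefficient magnitudes for fractional Brownian motion (slope of roughly H + 1/2 in log2-log2 space), checked here on ordinary Brownian motion rather than real stride intervals.

```python
# Average-wavelet-coefficient sketch: decompose the signal, average the absolute
# detail coefficients at each level, and fit a line in log2-log2 space.
import numpy as np
import pywt

rng = np.random.default_rng(10)
signal = np.cumsum(rng.normal(size=4096))        # Brownian motion, H = 0.5

coeffs = pywt.wavedec(signal, "sym2", level=8)   # [approx, detail_8, ..., detail_1]
details = coeffs[1:][::-1]                       # reorder as levels 1 (finest) .. 8
levels = np.arange(1, len(details) + 1)
avg_abs = np.array([np.mean(np.abs(d)) for d in details])

# for fBm the mean |detail coefficient| grows roughly as 2**(level*(H + 0.5))
slope, _ = np.polyfit(levels, np.log2(avg_abs), 1)
print(f"log2 slope: {slope:.2f}  ->  estimated H ~ {slope - 0.5:.2f} (true 0.5)")
```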

  11. INCREASING THE ACCURACY OF MAYFIELD ESTIMATES USING KNOWLEDGE OF NEST AGE

    EPA Science Inventory

    This presentation will focus on the error introduced in nest-survival modeling when nest-cycles are assumed to be of constant length. I will present the types of error that may occur, including biases resulting from incorrect estimates of expected values, as well as biases that o...

  12. The Outcome of ATC Message Length and Complexity on En Route Pilot Readback Performance

    DTIC Science & Technology

    2009-01-01

    ...ngs as ordinal data produced α = .945, indicating high inter-coder agreement. Sector Descriptions: Chicago ARTCC. The transcriptions are of pilot/con... 12 were categorized according to three types of errors: Errors of omission only (67.4%), Readback errors only (0.9

  13. 7 CFR 917.68 - Liability of committee members.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Liability of committee members. 917.68 Section 917.68 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... others, in any way whatsoever, to any shipper or any other person for errors in judgment, mistakes, or...

  14. Entanglement renormalization, quantum error correction, and bulk causality

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.; Kastoryano, Michael J.

    2017-04-01

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  15. How learning shapes the empathic brain.

    PubMed

    Hein, Grit; Engelmann, Jan B; Vollberg, Marius C; Tobler, Philippe N

    2016-01-05

    Deficits in empathy enhance conflicts and human suffering. Thus, it is crucial to understand how empathy can be learned and how learning experiences shape empathy-related processes in the human brain. As a model of empathy deficits, we used the well-established suppression of empathy-related brain responses for the suffering of out-groups and tested whether and how out-group empathy is boosted by a learning intervention. During this intervention, participants received costly help equally often from an out-group member (experimental group) or an in-group member (control group). We show that receiving help from an out-group member elicits a classical learning signal (prediction error) in the anterior insular cortex. This signal in turn predicts a subsequent increase of empathy for a different out-group member (generalization). The enhancement of empathy-related insula responses by the neural prediction error signal was mediated by an establishment of positive emotions toward the out-group member. Finally, we show that surprisingly few positive learning experiences are sufficient to increase empathy. Our results specify the neural and psychological mechanisms through which learning interacts with empathy, and thus provide a neurobiological account for the plasticity of empathic reactions.

  16. How learning shapes the empathic brain

    PubMed Central

    Hein, Grit; Vollberg, Marius C.; Tobler, Philippe N.

    2016-01-01

    Deficits in empathy enhance conflicts and human suffering. Thus, it is crucial to understand how empathy can be learned and how learning experiences shape empathy-related processes in the human brain. As a model of empathy deficits, we used the well-established suppression of empathy-related brain responses for the suffering of out-groups and tested whether and how out-group empathy is boosted by a learning intervention. During this intervention, participants received costly help equally often from an out-group member (experimental group) or an in-group member (control group). We show that receiving help from an out-group member elicits a classical learning signal (prediction error) in the anterior insular cortex. This signal in turn predicts a subsequent increase of empathy for a different out-group member (generalization). The enhancement of empathy-related insula responses by the neural prediction error signal was mediated by an establishment of positive emotions toward the out-group member. Finally, we show that surprisingly few positive learning experiences are sufficient to increase empathy. Our results specify the neural and psychological mechanisms through which learning interacts with empathy, and thus provide a neurobiological account for the plasticity of empathic reactions. PMID:26699464

  17. Limitations of the paraxial Debye approximation.

    PubMed

    Sheppard, Colin J R

    2013-04-01

    In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors in dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable, and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.

  18. Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors

    PubMed Central

    Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas

    2017-01-01

    Methods for estimating a car's length are presented in this paper, together with the results achieved using a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold-based method, a threshold method based on moving average and standard deviation, a two-extreme-peak detection method and a method based on amplitude and time normalization using linear extrapolation (or interpolation). The results were obtained by analyzing changes in both the magnitude and the absolute z-component of the magnetic field. The tests, which were performed in four different Earth directions, show differences in the values of the estimated lengths. The magnitude-based results for cars driving from south to north were up to 1.2 m higher than the other results achieved using the threshold methods. Smaller differences in lengths were observed when the distances were measured between two extreme peaks in the car magnetic signatures. The results are summarized in tables together with the errors of the estimated lengths. The maximal errors, relative to real lengths, were up to 22%. PMID:28771171
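
    To illustrate the simplest of the four approaches (a fixed threshold on the magnetic signature), the sketch below estimates speed from the cross-correlation lag between two sensors a known distance apart, and length as speed times the threshold-crossing (occupancy) time. The sensor spacing, sampling rate, and signal shapes are assumptions for the toy example, not values from the study.

    ```python
    import numpy as np

    def estimate_speed_and_length(sig_a, sig_b, fs, sensor_spacing, threshold):
        """Threshold-based estimate from two AMR magnitude signals.
        sig_a, sig_b   : field magnitude from the upstream/downstream sensors
        fs             : sampling rate in Hz
        sensor_spacing : distance between the two sensors in metres
        threshold      : detection threshold above the background level"""
        # Time lag between the two sensors via cross-correlation of the detrended signals.
        a = sig_a - np.mean(sig_a)
        b = sig_b - np.mean(sig_b)
        lag_samples = np.argmax(np.correlate(b, a, mode="full")) - (len(a) - 1)
        speed = sensor_spacing * fs / max(lag_samples, 1)            # m/s
        # Occupancy time: how long the upstream signal stays above the threshold.
        above = np.flatnonzero(sig_a > threshold)
        occupancy = (above[-1] - above[0] + 1) / fs if above.size else 0.0
        return speed, speed * occupancy                              # (m/s, metres)

    # Synthetic example (hypothetical numbers): a 4.2 m long vehicle at 10 m/s.
    fs, spacing, v, length = 1000, 0.25, 10.0, 4.2
    t = np.arange(0, 1.0, 1 / fs)
    box = lambda t0: ((t >= t0) & (t < t0 + length / v)).astype(float)  # simple rectangular disturbance
    sig_a, sig_b = 40 + 60 * box(0.40), 40 + 60 * box(0.40 + spacing / v)
    print(estimate_speed_and_length(sig_a, sig_b, fs, spacing, threshold=55.0))
    ```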

  19. Flame thermometry

    NASA Astrophysics Data System (ADS)

    Strojnik, Marija; Páez, Gonzalo; Granados, Juan C.

    2006-08-01

    We determined the temperature distribution within the flame as a function of position, as well as the flame length, by dual-wavelength thermometry at 470 nm and 515 nm. The errors in the temperature and flame-length measurements are 1.9% compared with the predicted thermodynamic results.
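
    For context, dual-wavelength (two-color) thermometry commonly relies on the ratio of intensities at the two wavelengths under Wien's approximation and a gray-body assumption. The sketch below shows that generic relation evaluated at the 470 nm and 515 nm bands quoted in the abstract; it is a textbook illustration, not the authors' calibration procedure.

    ```python
    import numpy as np

    C2 = 1.4388e-2  # second radiation constant, m*K

    def ratio_pyrometry_temperature(i1, i2, lam1=470e-9, lam2=515e-9):
        """Temperature from the ratio of spectral intensities at two wavelengths,
        assuming Wien's approximation and equal emissivity (gray body):
            I(lam, T) ~ eps * lam**-5 * exp(-C2 / (lam * T))
            => T = C2 * (1/lam1 - 1/lam2) / (5*ln(lam2/lam1) - ln(i1/i2))"""
        return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(i1 / i2))

    # Round-trip check with a synthetic 1800 K source (hypothetical flame temperature).
    T_true = 1800.0
    wien = lambda lam: lam ** -5 * np.exp(-C2 / (lam * T_true))
    print(ratio_pyrometry_temperature(wien(470e-9), wien(515e-9)))  # recovers ~1800 K
    ```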

  20. Quantum communication beyond the localization length in disordered spin chains.

    PubMed

    Allcock, Jonathan; Linden, Noah

    2009-03-20

    We study the effects of localization on quantum state transfer in spin chains. We show how to use quantum error correction and multiple parallel spin chains to send a qubit with high fidelity over arbitrary distances, in particular, distances much greater than the localization length of the chain.

  1. A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors.

    PubMed

    Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li

    2009-09-28

    A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula which describes the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived based on the calculated atmospheric turbulence wavefronts using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and it is an exponential function of the atmosphere coherence length. These results are useful for people using DLCWFCs in atmospheric turbulence correction for large-aperture telescopes.

  2. Homing by path integration when a locomotion trajectory crosses itself.

    PubMed

    Yamamoto, Naohide; Meléndez, Jayleen A; Menzies, Derek T

    2014-01-01

    Path integration is a process with which navigators derive their current position and orientation by integrating self-motion signals along a locomotion trajectory. It has been suggested that path integration becomes disproportionately erroneous when the trajectory crosses itself. However, there is a possibility that this previous finding was confounded by effects of the length of a traveled path and the amount of turns experienced along the path, two factors that are known to affect path integration performance. The present study was designed to investigate whether the crossover of a locomotion trajectory truly increases errors of path integration. In an experiment, blindfolded human navigators were guided along four paths that varied in their lengths and turns, and attempted to walk directly back to the beginning of the paths. Only one of the four paths contained a crossover. Results showed that errors yielded from the path containing the crossover were not always larger than those observed in other paths, and the errors were attributed solely to the effects of longer path lengths or greater degrees of turns. These results demonstrated that path crossover does not always cause significant disruption in path integration processes. Implications of the present findings for models of path integration are discussed.

  3. Antecedents of willingness to report medical treatment errors in health care organizations: a multilevel theoretical framework.

    PubMed

    Naveh, Eitan; Katz-Navon, Tal

    2014-01-01

    To avoid errors and improve patient safety and quality of care, health care organizations need to identify the sources of failures and facilitate implementation of corrective actions. Hence, health care organizations try to collect reports and data about errors by investing enormous resources in reporting systems. However, despite health care organizations' declared goal of increasing the voluntary reporting of errors and although the Patient Safety and Quality Improvement Act of 2005 (S.544, Public Law 109-41) legalizes efforts to secure reporters from specific liabilities, the problem of underreporting of adverse events by staff members remains. The purpose of the paper is to develop a theory-based model and a set of propositions to understand the antecedents of staff members' willingness to report errors based on a literature synthesis. The model aims to explore a complex system of considerations employees use when deciding whether to report their errors or be silent about them. The model integrates the influences of three types of organizational climates (psychological safety, psychological contracts, and safety climate) and individual perceptions of the applicability of the organization's procedures and proposes their mutual influence on willingness to report errors and, as a consequence, patient safety. The model suggests that managers should try to control and influence both the way employees perceive procedure applicability and the organizational context (i.e., psychological safety, no-blame contracts, and safety climate) to increase reporting and improve patient safety.

  4. System for Controlling the Stirring Pin of a Friction Stir Welding Apparatus

    NASA Technical Reports Server (NTRS)

    Ding, R. Jeffrey (Inventor); Romine, Peter L. (Inventor); Oelgoetz, Peter A. (Inventor)

    2002-01-01

    A control system is provided for a friction stir welding apparatus comprising a pin tool which includes a shoulder and a rotating pin extending outwardly from the shoulder of the pin tool and which, in use, is plunged into a workpiece formed by contacting workpiece members so as to friction stir weld the members together. The control system controls the penetration of the pin tool into the workpiece members, which are mounted on a support anvil. The control system includes a pin length controller for controlling pin length relative to the shoulder and for producing a corresponding pin length signal. A pin force sensor senses the force being exerted on the pin during welding and produces a corresponding actual pin force signal. A probe controller controls a probe extending outwardly from the pin, senses a parameter related to the distance between the probe and the supporting anvil, and produces a corresponding probe signal. A workpiece standoff sensor senses the standoff distance between the workpiece and the standoff sensor and produces a corresponding standoff signal. A control unit receives the various signals, together with a weld schedule, and, based on these signals and the weld schedule, controls the pin length controller so as to control pin penetration into the workpiece.

  5. The genomic structure: proof of the role of non-coding DNA.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2006-01-01

    We prove that introns play the role of a decoy in absorbing mutations, in the same way hollow uninhabited structures are used by the military to protect important installations. Our approach is based on a probability of error analysis, where errors are mutations which occur in the exon sequences. We derive the optimal exon length distribution, which minimizes the probability of error in the genome. Furthermore, to understand how Nature can generate the optimal distribution, we propose a diffusive random walk model for exon generation throughout evolution. This model results in an alpha-stable exon length distribution, which is asymptotically equivalent to the optimal distribution. Experimental results show that both distributions accurately fit the real data. Given that introns also drive biological evolution by increasing the rate of unequal crossover between genes, we conclude that the role of introns is to maintain an ingenious balance between stability and adaptability in eukaryotic genomes.

  6. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in the practical usage for the imaging algorithms or the inverse problems.
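
    The abstract does not spell out the arbitrary-order scheme, but the baseline it generalizes is the standard second-order staggered (leapfrog) integration of the first-order acoustic system. The 1D sketch below, with assumed grid, medium, and source parameters, illustrates only that baseline and the CFL-limited choice of time step length.

    ```python
    import numpy as np

    # 1D linear acoustics, pressure p and particle velocity u on staggered grids:
    #   dp/dt = -rho*c^2 * du/dx,   du/dt = -(1/rho) * dp/dx
    nx, dx, c, rho = 400, 5.0, 1500.0, 1000.0   # assumed grid spacing and medium
    dt = 0.5 * dx / c                           # time step length limited by the CFL condition
    nt = 600

    p = np.zeros(nx)        # pressure at integer time levels and grid points
    u = np.zeros(nx - 1)    # velocity at half grid points and half time levels

    for it in range(nt):
        p[nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)   # injected source wavelet (assumed)
        u -= dt / (rho * dx) * np.diff(p)                       # advance u to t + dt/2
        p[1:-1] -= dt * rho * c * c / dx * np.diff(u)           # advance p to t + dt

    print("max |p| after", nt, "steps:", float(np.abs(p).max()))
    ```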

  7. Injection Molding Parameters Calculations by Using Visual Basic (VB) Programming

    NASA Astrophysics Data System (ADS)

    Tony, B. Jain A. R.; Karthikeyen, S.; Alex, B. Jeslin A. R.; Hasan, Z. Jahid Ali

    2018-03-01

    Nowadays the manufacturing industry plays a vital role in production sectors. To fabricate a component, a lot of design calculations have to be done, and there is a chance of human error during these calculations. The aim of this project is to create a special module using Visual Basic (VB) programming to calculate injection molding parameters and thereby avoid human errors. To create an injection mold for a spur gear component, the following parameters have to be calculated: cooling capacity, cooling channel diameter, cooling channel length, runner length and runner diameter, and gate diameter and gate pressure. To calculate these injection molding parameters, a separate module has been created using Visual Basic (VB) programming to reduce human errors. The output of the module is the dimensions of the injection molding components, such as the mold cavity and core design and the ejector plate design.

  8. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
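
    As a rough illustration of fitting a GAM to development data of this kind, the sketch below uses the pygam package with a smooth term for larval length and a factor term for developmental stage on synthetic data; the response, predictors, and interval method are stand-ins, not the authors' eighteen models.

    ```python
    import numpy as np
    from pygam import LinearGAM, s, f   # pip install pygam

    # Synthetic stand-in for the rearing data: predict development percent
    # from larval length (mm) and a coded developmental stage (0-3).
    rng = np.random.default_rng(1)
    n = 500
    length_mm = rng.uniform(2, 18, n)
    stage = rng.integers(0, 4, n)
    dev_pct = 5 * stage + 4 * np.sqrt(length_mm) + rng.normal(0, 2, n)

    X = np.column_stack([length_mm, stage])
    gam = LinearGAM(s(0) + f(1)).fit(X, dev_pct)   # smooth in length, factor for stage

    # Point prediction with an approximate 95% interval for a new specimen.
    x_new = np.array([[12.0, 2]])
    print(gam.predict(x_new), gam.prediction_intervals(x_new, width=0.95))
    ```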

  9. Compression member response of double steel angles on truss structure with member length variation

    NASA Astrophysics Data System (ADS)

    Hasibuan, Purwandy; Panjaitan, Arief; Haiqal, Muhammad

    2018-05-01

    One type of structure that implements steel angles as its members is the truss system of a telecommunication tower. For this structure, reinforcement of the tower legs is needed when the number of antennas and microwave installations placed at the top of the tower increases. One commonly used reinforcement method is to increase the section area capacity, where a tower leg consisting of a single angle section is reinforced to double angle sections. Regarding this case, this research discussed the behavior of two double-angle steel sections (2L 30.30.3) that were designed with identical section areas but different lengths: 103 cm and 83 cm. In the first step, a compression member together with a tension member was assembled into a truss system, where the compression and tension members met at a joint plate. Loading was implemented by applying tension to the joint plate, and this loading was terminated when each specimen reached failure. Research findings showed that using the shorter (83 cm) double-angle sections increased the compression strength of the steel angle section by up to 13%. Significant deformation occurring only in the flange for both specimens indicated that using double angles is effective in preventing lateral-torsional buckling.
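
    A back-of-the-envelope comparison of the two member lengths can be made with the elastic (Euler) critical load, which scales as 1/L^2 for a given section; the modulus, second moment of area, and end conditions below are assumed values, not the specimens' properties. The purely elastic ratio (about 1.5) is larger than the 13% gain measured, as expected when inelastic buckling and joint behavior govern the tested specimens.

    ```python
    import math

    # Elastic (Euler) critical load P_cr = pi^2 * E * I / (K * L)^2 for a pinned member.
    E = 200e9            # Pa, typical structural steel (assumed)
    I = 2 * 0.6e-8       # m^4, assumed minor-axis second moment for the 2L 30.30.3 pair
    K = 1.0              # pinned-pinned effective length factor (assumed)

    def euler_pcr(L):
        return math.pi ** 2 * E * I / (K * L) ** 2

    p_long, p_short = euler_pcr(1.03), euler_pcr(0.83)
    print(f"P_cr 103 cm: {p_long / 1e3:.1f} kN, P_cr 83 cm: {p_short / 1e3:.1f} kN")
    print(f"elastic ratio: {p_short / p_long:.2f}")   # (103/83)^2 ~ 1.54
    ```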

  10. Factors Influencing the Production of MFSV Full-Length Clone: Maize Fine Streak Virus Proteins in Drosophila S2 Cells

    USDA-ARS?s Scientific Manuscript database

    Maize fine streak virus (MFSV) is a negative-sense RNA virus and a member of the genus Nucleorhabdovirus. Our goal is to determine whether Drosophila S2 cells can support the production of a full-length clone of MFSV. We have previously demonstrated that the full-length MFSV nucleoprotein (N) and phosphopro...

  11. Influence of Lexical Factors on Word-Finding Accuracy, Error Patterns, and Substitution Types

    ERIC Educational Resources Information Center

    Newman, Rochelle S.; German, Diane J.; Jagielko, Jennifer R.

    2018-01-01

    This retrospective, exploratory investigation examined the types of target words that 66 children with/without word-finding difficulties (WFD) had difficulty naming, and the types of errors they made. Words were studied with reference to lexical factors (LFs) that might influence naming performance: word frequency, familiarity, length, phonotactic…

  12. The statistical fluctuation study of quantum key distribution in means of uncertainty principle

    NASA Astrophysics Data System (ADS)

    Liu, Dunwei; An, Huiyao; Zhang, Xiaoyu; Shi, Xuemei

    2018-03-01

    Defects of lasers in emitting single photons, photon signal attenuation, and error propagation have long caused serious difficulties in practical long-distance quantum key distribution (QKD) experiments. In this paper, we study the uncertainty principle in metrology and use this tool to analyze the statistical fluctuation of the number of received single photons, the yield of single photons and the quantum bit error rate (QBER). We then calculate the error between the measured value and the real value of every parameter, and consider the propagation of error among all the measured values. We restate the Gottesman-Lo-Lutkenhaus-Preskill (GLLP) formula in terms of those parameters and generate the QKD simulation results. In this study, as the coding photon length increases, the safe distribution distance becomes longer. When the coding photon length is N = 10^{11}, the safe distribution distance can reach almost 118 km, a lower bound on the safe transmission distance compared with the 127 km obtained without the uncertainty principle. Our study is thus in line with established theory, while making it more realistic.
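
    For reference, one common form of the GLLP-style key-rate bound that such analyses start from is R >= q(-Q f H2(E) + Q1(1 - H2(e1))). The sketch below implements that expression with the binary entropy function; the numerical inputs are placeholders, and the statistical-fluctuation corrections discussed in the abstract are not included.

    ```python
    import numpy as np

    def h2(x):
        """Binary Shannon entropy."""
        x = np.clip(x, 1e-12, 1 - 1e-12)
        return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

    def gllp_rate(Q, E, Q1, e1, q=0.5, f_ec=1.16):
        """One common form of the GLLP secure key rate per signal:
           R >= q * ( -Q * f_ec * H2(E) + Q1 * (1 - H2(e1)) )
        Q, E   : overall gain and QBER of the signal states
        Q1, e1 : gain and error rate contributed by single-photon pulses
        q      : basis sifting factor, f_ec : error-correction inefficiency"""
        return q * (-Q * f_ec * h2(E) + Q1 * (1 - h2(e1)))

    # Placeholder numbers for illustration only.
    print(gllp_rate(Q=8e-4, E=0.02, Q1=5e-4, e1=0.03))
    ```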

  13. SBAR improves communication and safety climate and decreases incident reports due to communication errors in an anaesthetic clinic: a prospective intervention study.

    PubMed

    Randmaa, Maria; Mårtensson, Gunilla; Leo Swenne, Christine; Engström, Maria

    2014-01-21

    We aimed to examine staff members' perceptions of communication within and between different professions, safety attitudes and psychological empowerment, prior to and after implementation of the communication tool Situation-Background-Assessment-Recommendation (SBAR) at an anaesthetic clinic. The aim was also to study whether there was any change in the proportion of incident reports caused by communication errors. A prospective intervention study with comparison group using preassessments and postassessments. Questionnaire data were collected from staff in an intervention (n=100) and a comparison group (n=69) at the anaesthetic clinic in two hospitals prior to (2011) and after (2012) implementation of SBAR. The proportion of incident reports due to communication errors was calculated during a 1-year period prior to and after implementation. Anaesthetic clinics at two hospitals in Sweden. All licensed practical nurses, registered nurses and physicians working in the operating theatres, intensive care units and postanaesthesia care units at anaesthetic clinics in two hospitals were invited to participate. Implementation of SBAR in an anaesthetic clinic. The primary outcomes were staff members' perception of communication within and between different professions, as well as their perceptions of safety attitudes. Secondary outcomes were psychological empowerment and incident reports due to error of communication. In the intervention group, there were statistically significant improvements in the factors 'Between-group communication accuracy' (p=0.039) and 'Safety climate' (p=0.011). The proportion of incident reports due to communication errors decreased significantly (p<0.0001) in the intervention group, from 31% to 11%. Implementing the communication tool SBAR in anaesthetic clinics was associated with improvement in staff members' perception of communication between professionals and their perception of the safety climate as well as with a decreased proportion of incident reports related to communication errors. ISRCTN37251313.

  14. Quetzal: a transposon of the Tc1 family in the mosquito Anopheles albimanus.

    PubMed

    Ke, Z; Grossman, G L; Cornel, A J; Collins, F H

    1996-10-01

    A member of the Tc1 family of transposable elements has been identified in the Central and South American mosquito Anopheles albimanus. The full-length Quetzal element is 1680 base pairs (bp) in length, possesses 236 bp inverted terminal repeats (ITRs), and has a single open reading frame (ORF) with the potential of encoding a 341-amino-acid (aa) protein that is similar to the transposases of other members of the Tc1 family, particularly elements described from three different Drosophila species. The approximately 10-12 copies per genome of Quetzal are found in the euchromatin of all three chromosomes of A. albimanus. One full-length clone, Que27, appears capable of encoding a complete transposase and may represent a functional copy of this element.

  15. Interferometer for Measuring Displacement to Within 20 pm

    NASA Technical Reports Server (NTRS)

    Zhao, Feng

    2003-01-01

    An optical heterodyne interferometer that can be used to measure linear displacements with an error <=20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces (relative to amplitude splits used in other interferometers) self-interference and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.

  16. Correction to: Incidence of severe sepsis and septic shock in German intensive care units: the prospective, multicentre INSEP study.

    PubMed

    Marx, Gernot

    2018-01-01

    The members of the SepNet Critical Care Trials Group were provided in such a way that they could not be indexed as collaborators on PubMed. The publisher apologizes for this error and is pleased to list the members of the group here.

  17. Sharps container

    NASA Technical Reports Server (NTRS)

    Lee, Angelene M. (Inventor)

    1992-01-01

    This invention relates to a system for use in disposing of potentially hazardous items and more particularly a Sharps receptacle for used hypodermic needles and the like. A Sharps container is constructed from lightweight alodined nonmagnetic metal material with a cup member having an elongated tapered shape and length greater than its transverse dimensions. A magnet in the cup member provides for metal retention in the container. A nonmagnetic lid member has an opening and spring biased closure flap member. The flap member is constructed from stainless steel. A Velcro patch on the container permits selective attachment at desired locations.

  18. Errors and uncertainties in regional climate simulations of rainfall variability over Tunisia: a multi-model and multi-member approach

    NASA Astrophysics Data System (ADS)

    Fathalli, Bilel; Pohl, Benjamin; Castel, Thierry; Safi, Mohamed Jomâa

    2018-02-01

    Temporal and spatial variability of rainfall over Tunisia (at 12 km spatial resolution) is analyzed in a multi-year (1992-2011) ten-member ensemble simulation performed using the WRF model, and a sample of regional climate hindcast simulations from Euro-CORDEX. RCM errors and skills are evaluated against a dense network of local rain gauges. Uncertainties arising, on the one hand, from the different model configurations and, on the other hand, from internal variability are furthermore quantified and ranked at different timescales using simple spread metrics. Overall, the WRF simulation shows good skill for simulating spatial patterns of rainfall amounts over Tunisia, marked by strong altitudinal and latitudinal gradients, as well as the rainfall interannual variability, in spite of systematic errors. Mean rainfall biases are wet in both DJF and JJA seasons for the WRF ensemble, while they are dry in winter and wet in summer for most of the used Euro-CORDEX models. The sign of mean annual rainfall biases over Tunisia can also change from one member of the WRF ensemble to another. Skills in regionalizing precipitation over Tunisia are season dependent, with better correlations and weaker biases in winter. Larger inter-member spreads are observed in summer, likely because of (1) an attenuated large-scale control on Mediterranean and Tunisian climate, and (2) a larger contribution of local convective rainfall to the seasonal amounts. Inter-model uncertainties are globally stronger than those attributed to model's internal variability. However, inter-member spreads can be of the same magnitude in summer, emphasizing the important stochastic nature of the summertime rainfall variability over Tunisia.

  19. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
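
    The formal propagation step mentioned above can be sketched generically: a covariance of unadjusted parameters maps into topography space through a sensitivity (Jacobian) matrix and adds to the formal covariance of the adjusted coefficients. The matrices below are random placeholders, shown only to illustrate the linear-propagation bookkeeping, not the study's actual error budget.

    ```python
    import numpy as np

    # Schematic linear error propagation: if the topography estimate depends linearly on
    # unadjusted parameters (orbit, geoid, tides, range corrections) through a Jacobian J,
    # their covariance S_unadj maps into topography space as J @ S_unadj @ J.T and adds to
    # the formal covariance S_adj of the adjusted spherical-harmonic coefficients.
    rng = np.random.default_rng(2)
    n_topo, n_unadj = 6, 4                      # toy dimensions

    J = rng.normal(size=(n_topo, n_unadj))      # placeholder sensitivity (Jacobian) matrix
    A = rng.normal(size=(n_unadj, n_unadj))
    S_unadj = A @ A.T                           # placeholder covariance of unadjusted parameters
    B = rng.normal(size=(n_topo, n_topo))
    S_adj = 0.01 * B @ B.T                      # placeholder formal covariance of the adjustment

    S_total = S_adj + J @ S_unadj @ J.T
    rms_total = np.sqrt(np.trace(S_total) / n_topo)
    print("toy total RMS error:", round(float(rms_total), 3))
    ```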

  20. Bond, transfer length, and development length of prestressing strand in self-consolidating concrete.

    DOT National Transportation Integrated Search

    2014-07-01

    Due to its economic advantages, the use of self-consolidating concrete (SCC) has increased rapidly in recent years. However, because : SCC mixes typically have decreased amounts of coarse aggregate and high amounts of admixtures, industry members hav...

  1. Eye size and shape in newborn children and their relation to axial length and refraction at 3 years.

    PubMed

    Lim, Laurence Shen; Chua, Sharon; Tan, Pei Ting; Cai, Shirong; Chong, Yap-Seng; Kwek, Kenneth; Gluckman, Peter D; Fortier, Marielle V; Ngo, Cheryl; Qiu, Anqi; Saw, Seang-Mei

    2015-07-01

    To determine if eye size and shape at birth are associated with eye size and refractive error 3 years later. A subset of 173 full-term newborn infants from the Growing Up in Singapore Towards healthy Outcomes (GUSTO) birth cohort underwent magnetic resonance imaging (MRI) to measure the dimensions of the internal eye. Eye shape was assessed by an oblateness index, calculated as 1 - (axial length/width) or 1 - (axial length/height). Cycloplegic autorefraction (Canon Autorefractor RK-F1) and optical biometry (IOLMaster) were performed 3 years later. Both eyes of 173 children were analysed. Eyes with longer axial length at birth had smaller increases in axial length at 3 years (p < 0.001). Eyes with larger baseline volumes and surface areas had smaller increases in axial length at 3 years (p < 0.001 for both). Eyes which were more oblate at birth had greater increases in axial length at 3 years (p < 0.001). Using width to calculate oblateness, prolate eyes had smaller increases in axial length at 3 years compared to oblate eyes (p < 0.001), and, using height, prolate and spherical eyes had smaller increases in axial length at 3 years compared to oblate eyes (p < 0.001 for both). There were no associations between eye size and shape at birth and refraction, corneal curvature or myopia at 3 years. Eyes that are larger and have prolate or spherical shapes at birth exhibit smaller increases in axial length over the first 3 years of life. Eye size and shape at birth influence subsequent eye growth but not refractive error development. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  2. Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun

    2016-07-01

    The swing arm profilometer (SAP) has been playing a very important role in testing large aspheric optics. As one of the most significant error sources affecting test accuracy, misalignment leads to low-order errors such as aspherical aberrations and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and test results of axisymmetric optics is presented. Analytical solutions of SAP system errors arising from tested-mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve the SAP test for low-order errors, except power, with an accuracy of 0.1 μm root mean square.

  3. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
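
    The inverse-filter construction described above (inverse DFT of the reciprocal of the response transform) can be sketched as follows; the small spectral floor is an added safeguard against division by near-zero values, and the Gaussian response and smooth two-peak input are synthetic stand-ins for the study's data sets.

    ```python
    import numpy as np

    def inverse_filter(response, eps=1e-3):
        """Inverse filter = inverse DFT of 1/DFT(response), with a small floor eps
        keeping near-zero spectral values from blowing up (an added safeguard)."""
        H = np.fft.fft(response)
        H = np.where(np.abs(H) < eps, eps, H)
        return np.real(np.fft.ifft(1.0 / H))

    # Synthetic example: a narrow Gaussian response blurring a smooth two-peak input.
    n = 256
    x = np.arange(n)
    response = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
    response /= response.sum()
    truth = np.exp(-0.5 * ((x - 90) / 6.0) ** 2) + 0.6 * np.exp(-0.5 * ((x - 160) / 6.0) ** 2)
    data = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(response)))  # circular convolution

    g = inverse_filter(response)
    recovered = np.real(np.fft.ifft(np.fft.fft(data) * np.fft.fft(g)))     # deconvolution
    print("max abs recovery error (noise-free):", float(np.max(np.abs(recovered - truth))))
    ```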

  4. Comparison of community and hospital pharmacists' attitudes and behaviors on medication error disclosure to the patient: A pilot study.

    PubMed

    Kim, ChungYun; Mazan, Jennifer L; Quiñones-Boex, Ana C

    To determine pharmacists' attitudes and behaviors on medication errors and their disclosure and to compare community and hospital pharmacists on such views. An online questionnaire was developed from previous studies on physicians' disclosure of errors. Questionnaire items included demographics, environment, personal experiences, and attitudes on medication errors and the disclosure process. An invitation to participate along with the link to the questionnaire was electronically distributed to members of two Illinois pharmacy associations. A follow-up reminder was sent 4 weeks after the original message. Data were collected for 3 months, and statistical analyses were performed with the use of IBM SPSS version 22.0. The overall response rate was 23.3% (n = 422). The average employed respondent was a 51-year-old white woman with a BS Pharmacy degree working in a hospital pharmacy as a clinical staff member. Regardless of practice settings, pharmacist respondents agreed that medication errors were inevitable and that a disclosure process is necessary. Respondents from community and hospital settings were further analyzed to assess any differences. Community pharmacist respondents were more likely to agree that medication errors were inevitable and that pharmacists should address the patient's emotions when disclosing an error. Community pharmacist respondents were also more likely to agree that the health care professional most closely involved with the error should disclose the error to the patient and thought that it was the pharmacists' responsibility to disclose the error. Hospital pharmacist respondents were more likely to agree that it was important to include all details in a disclosure process and more likely to disagree on putting a "positive spin" on the event. Regardless of practice setting, responding pharmacists generally agreed that errors should be disclosed to patients. There were, however, significant differences in their attitudes and behaviors depending on their particular practice setting. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  5. Stochastic Surface Mesh Reconstruction

    NASA Astrophysics Data System (ADS)

    Ozendi, M.; Akca, D.; Topan, H.

    2018-05-01

    A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step deals with developing an anisotropic point error model, which is capable of computing the theoretical precisions of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.
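
    One way to obtain the semi-axis lengths used in such a point-wise quality measure is an eigen-decomposition of each point's 3x3 covariance matrix; the covariance values and the 95% chi-square scale factor below are assumptions for illustration, not the paper's derivation.

    ```python
    import numpy as np

    def error_ellipsoid_axes(cov, k=2.796):
        """Semi-axis lengths and directions of the error ellipsoid for a 3x3 covariance.
        k is a scale factor (e.g. sqrt of the chi-square 95% quantile for 3 dof ~ 2.796)."""
        vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues, orthonormal axes
        return k * np.sqrt(vals), vecs

    # Toy per-point covariance (assumed, in mm^2): range direction noisier than the others.
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.25]])
    axes, directions = error_ellipsoid_axes(cov)
    quality = axes.max()                           # crude point-quality measure: longest semi-axis
    print("semi-axes (mm):", np.round(axes, 3), "quality:", round(float(quality), 3))
    ```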

  6. A multiphysical ensemble system of numerical snow modelling

    NASA Astrophysics Data System (ADS)

    Lafaysse, Matthieu; Cluzet, Bertrand; Dumont, Marie; Lejeune, Yves; Vionnet, Vincent; Morin, Samuel

    2017-05-01

    Physically based multilayer snowpack models suffer from various modelling errors. To represent these errors, we built the new multiphysical ensemble system ESCROC (Ensemble System Crocus) by implementing new representations of different physical processes in the deterministic coupled multilayer ground/snowpack model SURFEX/ISBA/Crocus. This ensemble was driven and evaluated at Col de Porte (1325 m a.s.l., French alps) over 18 years with a high-quality meteorological and snow data set. A total number of 7776 simulations were evaluated separately, accounting for the uncertainties of evaluation data. The ability of the ensemble to capture the uncertainty associated to modelling errors is assessed for snow depth, snow water equivalent, bulk density, albedo and surface temperature. Different sub-ensembles of the ESCROC system were studied with probabilistic tools to compare their performance. Results show that optimal members of the ESCROC system are able to explain more than half of the total simulation errors. Integrating members with biases exceeding the range corresponding to observational uncertainty is necessary to obtain an optimal dispersion, but this issue can also be a consequence of the fact that meteorological forcing uncertainties were not accounted for. The ESCROC system promises the integration of numerical snow-modelling errors in ensemble forecasting and ensemble assimilation systems in support of avalanche hazard forecasting and other snowpack-modelling applications.

  7. Language Sample Analysis and Elicitation Technique Effects in Bilingual Children with and without Language Impairment

    ERIC Educational Resources Information Center

    Kapantzoglou, Maria; Fergadiotis, Gerasimos; Restrepo, M. Adelaida

    2017-01-01

    Purpose: This study examined whether the language sample elicitation technique (i.e., storytelling and story-retelling tasks with pictorial support) affects lexical diversity (D), grammaticality (grammatical errors per communication unit [GE/CU]), sentence length (mean length of utterance in words [MLUw]), and sentence complexity (subordination…

  8. The Length of a Pestle: A Class Exercise in Measurement and Statistical Analysis.

    ERIC Educational Resources Information Center

    O'Reilly, James E.

    1986-01-01

    Outlines the simple exercise of measuring the length of an object as a concrete paradigm of the entire process of making chemical measurements and treating the resulting data. Discusses the procedure, significant figures, measurement error, spurious data, rejection of results, precision and accuracy, and student responses. (TW)

  9. Curricular Treatments of Length Measurement in the United States: Do They Address Known Learning Challenges?

    ERIC Educational Resources Information Center

    Smith, John P., III; Males, Lorraine M.; Dietiker, Leslie C.; Lee, KoSze; Mosier, Aaron

    2013-01-01

    Extensive research has shown that elementary students struggle to learn the basic principles of length measurement. However, where patterns of errors have been documented, the origins of students' difficulties have not been identified. This study investigated the hypothesis that written elementary mathematics curricula contribute to the…

  10. Shape adjustment optimization and experiment of cable-membrane reflectors

    NASA Astrophysics Data System (ADS)

    Du, Jingli; Gu, Yongzhen; Bao, Hong; Wang, Congsi; Chen, Xiaofeng

    2018-05-01

    Cable-membrane structures are widely employed for large space reflectors due to their light weight, compactness and ease of packaging. In these structures, membranes are attached to a cable net, serving as reflectors themselves or as supporting structures for other reflective surfaces. The cable lengths and membrane shape have to be carefully designed and fabricated to guarantee the desired reflector surface shape. However, because of inevitable errors in cable length and membrane shape during the manufacture and assembly of cable-membrane reflectors, some cables have to be designed to be capable of length adjustment. By carefully adjusting the length of these cables, the degradation in reflector shape precision due to these inevitable errors can be effectively reduced. In this paper a shape adjustment algorithm for cable-membrane reflectors is proposed. Meanwhile, model updating is employed during shape adjustment to decrease the discrepancy of the numerical model with respect to the actual reflector. This discrepancy has to be considered because, when membranes are attached to the cable net, the accuracy of the membrane shape is hard to guarantee. Numerical examples and experimental results demonstrate the proposed method.
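
    The abstract does not give the adjustment algorithm itself, so the sketch below only illustrates the generic underlying idea: with an influence (sensitivity) matrix relating small cable-length changes to node displacements, solve a least-squares problem for the length adjustments that best cancel the measured surface error. The matrix and the measured error are random placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_nodes, n_cables = 30, 8                            # toy problem size

    J = rng.normal(size=(n_nodes, n_cables))             # placeholder influence matrix d(node)/d(length)
    surface_error = rng.normal(scale=0.5, size=n_nodes)  # measured normal deviations (mm, placeholder)

    # Least-squares cable adjustments that best cancel the measured error: J @ dl ~ -error.
    dl, *_ = np.linalg.lstsq(J, -surface_error, rcond=None)
    residual = surface_error + J @ dl

    rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
    print(f"surface RMS before: {rms(surface_error):.3f} mm, after: {rms(residual):.3f} mm")
    ```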

  11. Association between Refractive Errors and Ocular Biometry in Iranian Adults

    PubMed Central

    Hashemi, Hassan; Khabazkhoob, Mehdi; Emamian, Mohammad Hassan; Shariati, Mohammad; Miraftab, Mohammad; Yekta, Abbasali; Ostadimoghaddam, Hadi; Fotouhi, Akbar

    2015-01-01

    Purpose: To investigate the association between ocular biometrics such as axial length (AL), anterior chamber depth (ACD), lens thickness (LT), vitreous chamber depth (VCD) and corneal power (CP) with different refractive errors. Methods: In a cross-sectional study on the 40 to 64-year-old population of Shahroud, random cluster sampling was performed. Ocular biometrics were measured using the Allegro Biograph (WaveLight AG, Erlangen, Germany) for all participants. Refractive errors were determined using cycloplegic refraction. Results: In the first model, the strongest correlations were found between spherical equivalent with axial length and corneal power. Spherical equivalent was strongly correlated with axial length in high myopic and high hyperopic cases, and with corneal power in high hyperopic cases; 69.5% of variability in spherical equivalent was attributed to changes in these variables. In the second model, the correlations between vitreous chamber depth and corneal power with spherical equivalent were stronger in myopes than hyperopes, while the correlations between lens thickness and anterior chamber depth with spherical equivalent were stronger in hyperopic cases than myopic ones. In the third model, anterior chamber depth + lens thickness correlated with spherical equivalent only in moderate and severe cases of hyperopia, and this index was not correlated with spherical equivalent in moderate to severe myopia. Conclusion: In individuals aged 40-64 years, corneal power and axial length make the greatest contribution to spherical equivalent in high hyperopia and high myopia. Anterior segment biometric components have a more important role in hyperopia than myopia. PMID:26730304

  12. [Can the scattering of differences from the target refraction be avoided?].

    PubMed

    Janknecht, P

    2008-10-01

    We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye length measurement with ultrasound and the IOL Master. Both lens formulae were partially derived and Gauss error analysis was used for examination of the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, S.D.: 0.59 D; 92% within +/-1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, S.D.: 0.52 D; 97% within +/-1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, S.D.: 0.48 D; 95% within +/-1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated as ΔP = sqrt[(ΔL × (-4.206))^2 + (ΔVK × 0.9496)^2 + (ΔDC × (-1.4950))^2], where ΔL is the error in measuring axial length, ΔVK the error in measuring anterior chamber depth, and ΔDC the error in measuring corneal power; the propagated error of the SRK-II formula is ΔP = sqrt[(ΔL × (-2.5))^2 + (ΔDC × (-0.9))^2]. The propagated error of the Haigis formula is always larger than the propagated error of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing complicated formulae like the Haigis formula. However, increasing the number of parameters which need to be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
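
    The two propagated-error expressions quoted above translate directly into code; the measurement uncertainties used in the example call are illustrative assumptions only.

    ```python
    import math

    def delta_p_haigis(dL, dVK, dDC):
        """Propagated IOL-power error of the Haigis formula as quoted in the abstract.
        dL: axial-length error (mm), dVK: anterior-chamber-depth error (mm), dDC: corneal-power error (D)."""
        return math.sqrt((dL * -4.206) ** 2 + (dVK * 0.9496) ** 2 + (dDC * -1.4950) ** 2)

    def delta_p_srk2(dL, dDC):
        """Propagated IOL-power error of the SRK-II formula as quoted in the abstract."""
        return math.sqrt((dL * -2.5) ** 2 + (dDC * -0.9) ** 2)

    # Illustrative measurement uncertainties (assumed): 0.1 mm axial length,
    # 0.1 mm anterior chamber depth, 0.25 D corneal power.
    print("Haigis:", round(delta_p_haigis(0.1, 0.1, 0.25), 3), "D")
    print("SRK-II:", round(delta_p_srk2(0.1, 0.25), 3), "D")
    ```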

  13. [Fire behavior of ground surface fuels in Pinus koraiensis and Quercus mongolica mixed forest under no wind and zero slope condition: a prediction with extended Rothermel model].

    PubMed

    Zhang, Ji-Li; Liu, Bo-Fei; Chu, Teng-Fei; Di, Xue-Ying; Jin, Sen

    2012-06-01

    A laboratory burning experiment was conducted to measure the fire spread speed, residual time, reaction intensity, fireline intensity, and flame length of the ground surface fuels collected from a Korean pine (Pinus koraiensis) and Mongolian oak (Quercus mongolica) mixed stand in Maoer Mountains of Northeast China under the conditions of no wind, zero slope, and different moisture content, load, and mixture ratio of the fuels. The results measured were compared with those predicted by the extended Rothermel model to test the performance of the model, especially for the effects of two different weighting methods on the fire behavior modeling of the mixed fuels. With the prediction of the model, the mean absolute errors of the fire spread speed and reaction intensity of the fuels were 0.04 m·min(-1) and 77 kW·m(-2), their mean relative errors were 16% and 22%, while the mean absolute errors of residual time, fireline intensity and flame length were 15.5 s, 17.3 kW·m(-1), and 9.7 cm, and their mean relative errors were 55.5%, 48.7%, and 24%, respectively, indicating that the predicted values of residual time, fireline intensity, and flame length were lower than the observed ones. These errors could be regarded as the lower limits for the application of the extended Rothermel model in predicting the fire behavior of similar fuel types, and provide valuable information for using the model to predict the fire behavior under the similar field conditions. As a whole, the two different weighting methods did not show significant difference in predicting the fire behavior of the mixed fuels by extended Rothermel model. When the proportion of Korean pine fuels was lower, the predicted values of spread speed and reaction intensity obtained by surface area weighting method and those of fireline intensity and flame length obtained by load weighting method were higher; when the proportion of Korean pine needles was higher, the contrary results were obtained.

  14. Afocal optical flow sensor for reducing vertical height sensitivity in indoor robot localization and navigation.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2015-05-13

    This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in computer optical mice has been adopted for mobile robots because it is not affected by wheel slippage. Vertical height variance is thought to be a dominant factor in systematic error when estimating moving distances in mobile robots driving on uneven surfaces. We propose an approach to mitigate this error by using an afocal (infinite effective focal length) system. We conducted experiments in a linear guide on carpet and three other materials with varying sensor heights from 30 to 50 mm and a moving distance of 80 cm. The same experiments were repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on a carpet for distances of 1 m. The average distance estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.

  15. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    NASA Astrophysics Data System (ADS)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-sectional averaging and the use of shorter reach lengths) and higher water-surface slopes (reducing the proportional impact of slope errors on discharge calculation).
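
    Turning SWOT-like height and slope measurements into discharge, given known bathymetry and friction, is typically done with a Manning-type relation. The sketch below evaluates Manning's equation for a rectangular channel with placeholder reach-averaged values; it is a generic illustration, not the study's discharge algorithm.

    ```python
    import numpy as np

    def manning_discharge(width, depth, slope, n=0.03):
        """Manning's equation for a rectangular channel:
           Q = (1/n) * A * R^(2/3) * S^(1/2)
        width, depth in metres, slope dimensionless, n is Manning's roughness."""
        area = width * depth
        hydraulic_radius = area / (width + 2 * depth)
        return area * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope) / n

    # Placeholder reach-averaged values loosely representative of a large lowland river.
    width, depth = 2000.0, 12.0          # m (assumed)
    slope = 2e-5                         # water-surface slope (assumed)
    print(f"Q ~ {manning_discharge(width, depth, slope):.0f} m^3/s")
    ```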

  16. Error quantification of osteometric data in forensic anthropology.

    PubMed

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-06-01

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEM<0.5), and (2) maximum and minimum diameters at midshaft are more reliable than their positionally-dependent counterparts (i.e. sagittal, vertical, transverse, dorso-volar). Therefore, maxima and minima are specified for all midshaft measurements in DCP 2.0. Twenty-two measurements were flagged for excessive variability (either interobserver, intraobserver, or both); 15 of these measurements were part of the standard set of measurements in Data Collection Procedures for Forensic Skeletal Material, 3rd edition. Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
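
    For reference, the relative technical error of measurement quoted above is commonly computed from paired repeated measurements; the sketch below uses the standard two-trial formula TEM = sqrt(sum(d^2)/(2N)) with hypothetical femur lengths, and may differ in detail from the exact procedure used for DCP 2.0.

      import numpy as np

      def relative_tem(trial1, trial2):
          # Two-trial technical error of measurement, expressed as a percentage
          # of the grand mean (standard anthropometric formula; an assumption
          # relative to the study's exact computation).
          d = np.asarray(trial1, float) - np.asarray(trial2, float)
          tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
          grand_mean = np.mean(np.concatenate([trial1, trial2]))
          return 100.0 * tem / grand_mean

      # Hypothetical repeated maximum femur lengths (mm) from one observer
      t1 = np.array([452.0, 430.5, 447.0, 461.5, 439.0])
      t2 = np.array([452.5, 430.0, 446.5, 462.0, 439.5])
      print(f"relative TEM = {relative_tem(t1, t2):.2f} %")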

  17. Percolation galaxy groups and clusters in the sdss redshift survey: identification, catalogs, and the multiplicity function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlind, Andreas A.; Frieman, Joshua A.; Weinberg, David H.

    2006-01-01

    We identify galaxy groups and clusters in volume-limited samples of the SDSS redshift survey, using a redshift-space friends-of-friends algorithm. We optimize the friends-of-friends linking lengths to recover galaxy systems that occupy the same dark matter halos, using a set of mock catalogs created by populating halos of N-body simulations with galaxies. Extensive tests with these mock catalogs show that no combination of perpendicular and line-of-sight linking lengths is able to yield groups and clusters that simultaneously recover the true halo multiplicity function, projected size distribution, and velocity dispersion. We adopt a linking length combination that yields, for galaxy groups with ten or more members: a group multiplicity function that is unbiased with respect to the true halo multiplicity function; an unbiased median relation between the multiplicities of groups and their associated halos; a spurious group fraction of less than ~1%; a halo completeness of more than ~97%; the correct projected size distribution as a function of multiplicity; and a velocity dispersion distribution that is ~20% too low at all multiplicities. These results hold over a range of mock catalogs that use different input recipes of populating halos with galaxies. We apply our group-finding algorithm to the SDSS data and obtain three group and cluster catalogs for three volume-limited samples that cover 3495.1 square degrees on the sky. We correct for incompleteness caused by fiber collisions and survey edges, and obtain measurements of the group multiplicity function, with errors calculated from realistic mock catalogs. These multiplicity function measurements provide a key constraint on the relation between galaxy populations and dark matter halos.
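
    The redshift-space friends-of-friends idea used above can be sketched in a few lines: link any two galaxies whose transverse and line-of-sight separations both fall below their respective linking lengths, and take the connected components as groups. The code below is a plane-parallel toy version with made-up coordinates and linking lengths; the paper's finder handles the survey geometry, sample limits and mock calibration, none of which is reproduced here.

      import numpy as np

      def fof_groups(perp_xy, los_z, b_perp, b_los):
          # Toy redshift-space friends-of-friends: union-find over all pairs
          # whose perpendicular AND line-of-sight separations are below the
          # linking lengths (O(N^2), for clarity rather than speed).
          n = len(los_z)
          parent = list(range(n))

          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]
                  i = parent[i]
              return i

          for i in range(n):
              for j in range(i + 1, n):
                  dperp = np.hypot(*(perp_xy[i] - perp_xy[j]))
                  dlos = abs(los_z[i] - los_z[j])
                  if dperp < b_perp and dlos < b_los:
                      parent[find(i)] = find(j)

          groups = {}
          for i in range(n):
              groups.setdefault(find(i), []).append(i)
          return list(groups.values())

      rng = np.random.default_rng(1)
      xy = rng.uniform(0, 50, size=(200, 2))   # hypothetical transverse coords (Mpc/h)
      z = rng.uniform(0, 50, size=200)         # hypothetical line-of-sight coords (Mpc/h)
      groups = fof_groups(xy, z, b_perp=0.8, b_los=4.0)
      print(sorted(len(g) for g in groups))    # group multiplicities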

  18. Parametric Studies Of Lightweight Reflectors Supported On Linear Actuator Arrays

    NASA Astrophysics Data System (ADS)

    Seibert, George E.

    1987-10-01

    This paper presents the results of numerous design studies carried out at Perkin-Elmer in support of the design of large diameter controllable mirrors for use in laser beam control, surveillance, and astronomy programs. The results include relationships between actuator location and spacing and the associated degree of correctability attainable for a variety of faceplate configurations subjected to typical disturbance environments. Normalizations and design curves obtained from closed-form equations based on thin shallow shell theory and computer-based finite-element analyses are presented for use in preliminary design estimates of actuator count, faceplate structural properties, system performance prediction and weight assessments. The results of the analyses were obtained from a very wide range of mirror configurations, including both continuous and segmented mirror geometries. Typically, the designs consisted of a thin facesheet controlled by point force actuators which in turn were mounted on a structurally efficient base panel, or "reaction structure". The faceplate materials considered were fused silica, ULE fused silica, Zerodur, aluminum and beryllium. Thin solid faceplates as well as rib-reinforced cross-sections were treated, with a wide variation in thickness and/or rib patterns. The magnitude and spatial frequency distribution of the residual or uncorrected errors were related to the input error functions for mirrors of many different diameters and focal ratios. The error functions include simple sphere-to-sphere corrections, "parabolization" of spheres, and higher spatial frequency input error maps ranging from 0.5 to 7.5 cycles per diameter. The parameter which dominates all of the results obtained to date is a structural descriptor of thin shell behavior called the characteristic length. This parameter is a function of the shell's radius of curvature, thickness, and Poisson's ratio of the material used. The value of this constant, in itself, describes the extent to which the deflection under a point force is localized by the shell's curvature. The deflection shape is typically a near-gaussian "bump" with a zero-crossing at a local radius of approximately 3.5 characteristic lengths. The amplitude is a function of the shell's elastic modulus, radius, and thickness, and is linearly proportional to the applied force. This basic shell behavior is well-treated in an excellent set of papers by Eric Reissner entitled "Stresses and Small Displacements of Shallow Spherical Shells" [1,2]. Building on the insight offered by these papers, we developed our design tools around two derived parameters, the ratio of the mirror's diameter to its characteristic length (D/l), and the ratio of the actuator spacing to the characteristic length (b/l). The D/l ratio determines the "finiteness" of the shell, or its dependence on edge boundary conditions. For D/l values greater than 10, the influence of edges is almost totally absent on interior behavior. The b/l ratio, the basis of all our normalizations, is the most universal term in the description of correctability or the ratio of residual/input errors. The data presented in the paper show that the rms residual error divided by the peak amplitude of the input error function is related to the actuator-spacing-to-characteristic-length ratio by the following expression: RMS residual error / initial error amplitude = k (b/l)^3.5 (1). The value of k ranges from approximately 0.001 for low spatial frequency initial errors up to 0.05 for higher error frequencies (e.g. 5 cycles/diameter). The studies also yielded insight into the forces required to produce typical corrections at both the center and edges of the mirror panels. Additionally, the data lend themselves to rapid evaluation of the effects of trading faceplate weight for increased actuator count,
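
    Reading expression (1) numerically, and taking the quoted range of k at face value, gives a feel for how quickly correctability degrades with actuator spacing. The sketch below simply evaluates the reconstructed relation; the exponent and form follow the reconstruction above and should be checked against the original paper.

      # Residual-to-input error ratio from expression (1): k * (b/l)**3.5,
      # with k ~ 0.001 for low spatial frequencies and ~0.05 near 5 cycles/diameter.
      for b_over_l in (1.0, 2.0, 3.0):
          for k in (0.001, 0.05):
              print(f"b/l = {b_over_l:.1f}, k = {k:.3f}: "
                    f"residual/input = {k * b_over_l ** 3.5:.4f}")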

  19. Observations on Polar Coding with CRC-Aided List Decoding

    DTIC Science & Technology

    2016-09-01

    INTRODUCTION Polar codes are a new type of forward error correction (FEC) codes, introduced by Arikan in [1], in which he...error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are...good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector
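
    The snippet mentions that the polar encoder applies a length-N row vector u; a minimal version of Arikan's transform is sketched below. It uses the n-fold Kronecker power of the kernel F = [[1, 0], [1, 1]] over GF(2) and omits frozen-bit selection, bit-reversal ordering and the CRC-aided list decoding studied in the report.

      import numpy as np

      def polar_encode(u):
          # x = u * F^{(kron) n} over GF(2); u must have length N = 2**n.
          F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
          n = int(np.log2(len(u)))
          G = np.array([[1]], dtype=np.uint8)
          for _ in range(n):
              G = np.kron(G, F)
          return (np.asarray(u, dtype=np.uint8) @ G) % 2

      u = np.array([0, 1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)   # hypothetical input vector
      print(polar_encode(u))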

  20. Star tracker error analysis: Roll-to-pitch nonorthogonality

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1979-01-01

    An error analysis is described for an anomaly isolated in the star tracker software line-of-sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases, which implied that one or both of the star-tracker-measured end-point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of the error.

  1. Reading sentences of uniform word length - II: Very rapid adaptation of the preferred saccade length.

    PubMed

    Cutter, Michael G; Drieghe, Denis; Liversedge, Simon P

    2018-04-25

    In the current study we investigated whether readers adjust their preferred saccade length (PSL) during reading on a trial-by-trial basis. The PSL refers to the distance between a saccade launch site and saccade target (i.e., the word center during reading) when participants neither undershoot nor overshoot this target (McConkie, Kerr, Reddix, & Zola in Vision Research, 28, 1107-1118, 1988). The tendency for saccades longer or shorter than the PSL to under or overshoot their target is referred to as the range error. Recent research by Cutter, Drieghe, and Liversedge (Journal of Experimental Psychology: Human Perception and Performance, 2017) has shown that the PSL changes to be shorter when readers are presented with 30 consecutive sentences exclusively made of three-letter words, and longer when presented with 30 consecutive sentences exclusively made of five-letter words. We replicated and extended this work by this time presenting participants with these uniform sentences in an unblocked design. We found that adaptation still occurred across different sentence types despite participants only having one trial to adapt. Our analyses suggested that this effect was driven by the length of the words readers were making saccades away from, rather than the length of the words in the rest of the sentence. We propose an account of the range error in which readers use parafoveal word length information to estimate the length of a saccade between the center of two parafoveal words (termed the Centre-Based Saccade Length) prior to landing on the first of these words.

  2. The Iatroref study: medical errors are associated with symptoms of depression in ICU staff but not burnout or safety culture.

    PubMed

    Garrouste-Orgeas, Maité; Perrin, Marion; Soufir, Lilia; Vesin, Aurélien; Blot, François; Maxime, Virginie; Beuret, Pascal; Troché, Gilles; Klouche, Kada; Argaud, Laurent; Azoulay, Elie; Timsit, Jean-François

    2015-02-01

    Staff behaviours to optimise patient safety may be influenced by burnout, depression and strength of the safety culture. We evaluated whether burnout, symptoms of depression and safety culture affected the frequency of medical errors and adverse events (selected using Delphi techniques) in ICUs. Prospective, observational, multicentre (31 ICUs) study from August 2009 to December 2011. Burnout, depression symptoms and safety culture were evaluated using the Maslach Burnout Inventory (MBI), CES-Depression scale and Safety Attitudes Questionnaire, respectively. Of 1,988 staff members, 1,534 (77.2 %) participated. Frequencies of medical errors and adverse events were 804.5/1,000 and 167.4/1,000 patient-days, respectively. Burnout prevalence was 3 or 40 % depending on the definition (severe emotional exhaustion, depersonalisation and low personal accomplishment; or MBI score greater than -9). Depression symptoms were identified in 62/330 (18.8 %) physicians and 188/1,204 (15.6 %) nurses/nursing assistants. Median safety culture score was 60.7/100 [56.8-64.7] in physicians and 57.5/100 [52.4-61.9] in nurses/nursing assistants. Depression symptoms were an independent risk factor for medical errors. Burnout was not associated with medical errors. The safety culture score had a limited influence on medical errors. Other independent risk factors for medical errors or adverse events were related to ICU organisation (40 % of ICU staff off work on the previous day), staff (specific safety training) and patients (workload). One-on-one training of junior physicians during duties and existence of a hospital risk-management unit were associated with lower risks. The frequency of selected medical errors in ICUs was high and was increased when staff members had symptoms of depression.

  3. Assessment of a model for achieving competency in administration and scoring of the WAIS-IV in post-graduate psychology students.

    PubMed

    Roberts, Rachel M; Davis, Melissa C

    2015-01-01

    There is a need for an evidence-based approach to training professional psychologists in the administration and scoring of standardized tests such as the Wechsler Adult Intelligence Scale (WAIS) due to substantial evidence that these tasks are associated with numerous errors that have the potential to significantly impact clients' lives. Twenty three post-graduate psychology students underwent training in using the WAIS-IV according to a best-practice teaching model that involved didactic teaching, independent study of the test manual, and in-class practice with teacher supervision and feedback. Video recordings and test protocols from a role-played test administration were analyzed for errors according to a comprehensive checklist with self, peer, and faculty member reviews. 91.3% of students were rated as having demonstrated competency in administration and scoring. All students were found to make errors, with substantially more errors being detected by the faculty member than by self or peers. Across all subtests, the most frequent errors related to failure to deliver standardized instructions verbatim from the manual. The failure of peer and self-reviews to detect the majority of the errors suggests that novice feedback (self or peers) may be ineffective to eliminate errors and the use of more senior peers may be preferable. It is suggested that involving senior trainees, recent graduates and/or experienced practitioners in the training of post-graduate students may have benefits for both parties, promoting a peer-learning and continuous professional development approach to the development and maintenance of skills in psychological assessment.

  4. Investigation of the nonlinear seismic behavior of knee braced frames using the incremental dynamic analysis method

    NASA Astrophysics Data System (ADS)

    Sheidaii, Mohammad Reza; TahamouliRoudsari, Mehrzad; Gordini, Mehrdad

    2016-06-01

    In knee braced frames, the braces are attached to a knee element rather than to the intersection of beams and columns. This bracing system is widely used and preferred over other commonly used systems because it provides lateral stiffness with adequate ductility, concentrates damage in the secondary (knee) elements, and allows these elements to be conveniently repaired or replaced after an earthquake. The lateral stiffness of the system is supplied by the bracing member, and the ductility of the frame, which depends on the knee length, is supplied through the bending or shear yielding of the knee member. In this paper, the nonlinear seismic behavior of knee braced frame systems has been investigated using incremental dynamic analysis (IDA), and the effects of the number of stories in a building and of the length and moment of inertia of the knee member on the seismic behavior, elastic stiffness, ductility and probability of failure of these systems have been determined. In the incremental dynamic analysis, after plotting the IDA diagrams for the accelerograms, the collapse diagrams in the limit states are determined. These diagrams show that, for a constant knee length, reducing the moment of inertia increases the probability of collapse in the limit states, and that, for a constant knee moment of inertia, increasing the length also increases the probability of collapse.

  5. The prevalence of medical error related to end-of-life communication in Canadian hospitals: results of a multicentre observational study.

    PubMed

    Heyland, Daren K; Ilan, Roy; Jiang, Xuran; You, John J; Dodek, Peter

    2016-09-01

    In the hospital setting, inadequate engagement between healthcare professionals and seriously ill patients and their families regarding end-of-life decisions is common. This problem may lead to medical orders for life-sustaining treatments that are inconsistent with patient preferences. The prevalence of this patient safety problem has not been previously described. Using data from a multi-institutional audit, we quantified the mismatch between patients' and family members' expressed preferences for care and orders for life-sustaining treatments. We recruited seriously ill, elderly medical patients and/or their family members to participate in this audit. We considered it a medical error if a patient preferred not to be resuscitated and there were orders to undergo resuscitation (overtreatment), or if a patient preferred resuscitation (cardiopulmonary resuscitation, CPR) and there were orders not to be resuscitated (undertreatment). From 16 hospitals in Canada, 808 patients and 631 family members were included in this study. When comparing expressed preferences and documented orders for use of CPR, 37% of patients experienced a medical error. Very few patients (8, 2%) expressed a preference for CPR and had CPR withheld in their documented medical orders (Undertreatment). Of patients who preferred not to have CPR, 174 (35%) had orders to receive it (Overtreatment). There was considerable variability in overtreatment rates across sites (range: 14-82%). Patients who were frail were less likely to be overtreated; patients who did not have a participating family member were more likely to be overtreated. Medical errors related to the use of life-sustaining treatments are very common in internal medicine wards. Many patients are at risk of receiving inappropriate end-of-life care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. Bilateral step length estimation using a single inertial measurement unit attached to the pelvis

    PubMed Central

    2012-01-01

    Background The estimation of the spatio-temporal gait parameters is of primary importance in both physical activity monitoring and clinical contexts. A method for estimating step length bilaterally, during level walking, using a single inertial measurement unit (IMU) attached to the pelvis is proposed. In contrast to previous studies, based either on a simplified representation of the human gait mechanics or on a general linear regressive model, the proposed method estimates the step length directly from the integration of the acceleration along the direction of progression. Methods The IMU was placed at pelvis level fixed to the subject's belt on the right side. The method was validated using measurements from a stereo-photogrammetric system as a gold standard on nine subjects walking ten laps along a closed loop track of about 25 m, varying their speed. For each loop, only the IMU data recorded in a 4 m long portion of the track included in the calibrated volume of the SP system, were used for the analysis. The method takes advantage of the cyclic nature of gait and it requires an accurate determination of the foot contact instances. A combination of a Kalman filter and of an optimally filtered direct and reverse integration applied to the IMU signals formed a single novel method (Kalman and Optimally filtered Step length Estimation - KOSE method). A correction of the IMU displacement due to the pelvic rotation occurring in gait was implemented to estimate the step length and the traversed distance. Results The step length was estimated for all subjects with less than 3% error. Traversed distance was assessed with less than 2% error. Conclusions The proposed method provided estimates of step length and traversed distance more accurate than any other method applied to measurements obtained from a single IMU that can be found in the literature. In healthy subjects, it is reasonable to expect that, errors in traversed distance estimation during daily monitoring activity would be of the same order of magnitude of those presented. PMID:22316235
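
    The core idea of integrating the progression-axis acceleration between foot contacts can be sketched as follows. This is not the KOSE method itself: the Kalman filtering, the optimally filtered direct and reverse integration and the pelvic-rotation correction are all omitted, and velocity drift is handled with a simple per-step linear correction under the assumption of near-zero net drift between contacts.

      import numpy as np
      from scipy.integrate import cumulative_trapezoid

      def step_lengths(acc_fwd, contacts, fs):
          # acc_fwd:  acceleration along the direction of progression (m/s^2)
          # contacts: sample indices of successive foot-contact instants
          # fs:       sampling frequency (Hz)
          lengths = []
          for i0, i1 in zip(contacts[:-1], contacts[1:]):
              a = acc_fwd[i0:i1]
              t = np.arange(a.size) / fs
              v = cumulative_trapezoid(a, t, initial=0.0)
              v -= np.linspace(0.0, v[-1], v.size)   # crude per-step drift removal (assumption)
              lengths.append(np.trapz(v, t))         # integrate velocity to get step length
          return np.array(lengths)

    In practice the foot-contact indices would come from the gait-event detection step that the abstract notes must be accurate.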

  7. Designing robust watermark barcodes for multiplex long-read sequencing.

    PubMed

    Ezpeleta, Joaquín; Krsticevic, Flavia J; Bulacio, Pilar; Tapia, Elizabeth

    2017-03-15

    To attain acceptable sample misassignment rates, current approaches to multiplex single-molecule real-time sequencing require upstream quality improvement, which is obtained from multiple passes over the sequenced insert and significantly reduces the effective read length. In order to fully exploit the raw read length on multiplex applications, robust barcodes capable of dealing with the full single-pass error rates are needed. We present a method for designing sequencing barcodes that can withstand a large number of insertion, deletion and substitution errors and are suitable for use in multiplex single-molecule real-time sequencing. The manuscript focuses on the design of barcodes for full-length single-pass reads, impaired by challenging error rates in the order of 11%. The proposed barcodes can multiplex hundreds or thousands of samples while achieving sample misassignment probabilities as low as 10-7 under the above conditions, and are designed to be compatible with chemical constraints imposed by the sequencing process. Software tools for constructing watermark barcode sets and demultiplexing barcoded reads, together with example sets of barcodes and synthetic barcoded reads, are freely available at www.cifasis-conicet.gov.ar/ezpeleta/NS-watermark . ezpeleta@cifasis-conicet.gov.ar. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
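
    The demultiplexing task described above amounts to assigning each read's barcode region to the closest codeword under insertions, deletions and substitutions. The sketch below is only a nearest-neighbour baseline using plain Levenshtein distance and made-up barcodes; the published watermark codes use a dedicated construction and decoder to reach the quoted misassignment probabilities.

      def edit_distance(a, b):
          # Levenshtein distance: insertions, deletions and substitutions.
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1,                 # deletion
                                 cur[j - 1] + 1,              # insertion
                                 prev[j - 1] + (ca != cb)))   # substitution
              prev = cur
          return prev[-1]

      def demultiplex(read_prefix, barcodes):
          # Assign the read to the barcode with the smallest edit distance.
          return min(barcodes, key=lambda bc: edit_distance(read_prefix, bc))

      barcodes = ["ACGTACGTACGT", "TTGACCATGGCA", "GGATCCTTAAGC"]   # hypothetical set
      print(demultiplex("ACGTACCGTACT", barcodes))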

  8. Modeling work zone crash frequency by quantifying measurement errors in work zone length.

    PubMed

    Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet

    2013-06-01

    Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety, while implementing necessary changes on roadways, is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed improved approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover it is shown that the use of the traditional NB approach in this context can lead to the overestimation of the effect of work zone length on the crash occurrence. Copyright © 2013 Elsevier Ltd. All rights reserved.
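
    As a point of reference for the model structure, the sketch below fits the traditional negative binomial crash-frequency model on synthetic work-zone data with illustrative covariates; the measurement-error (ME) component for work-zone length that the paper adds on top of this baseline is not reproduced.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Synthetic work-zone data; variable names and coefficients are illustrative only.
      rng = np.random.default_rng(2)
      df = pd.DataFrame({
          "length_km": rng.uniform(0.5, 10.0, 60),
          "aadt_1000": rng.uniform(10, 120, 60),
          "duration_days": rng.integers(30, 400, 60),
      })
      mu = np.exp(-1.0 + 0.15 * df["length_km"] + 0.01 * df["aadt_1000"]
                  + 0.002 * df["duration_days"])
      df["crashes"] = rng.negative_binomial(n=2.0, p=np.asarray(2.0 / (2.0 + mu)))

      # Baseline NB crash-frequency model (no measurement-error component).
      X = sm.add_constant(df[["length_km", "aadt_1000", "duration_days"]])
      nb = sm.GLM(df["crashes"], X, family=sm.families.NegativeBinomial()).fit()
      print(nb.summary())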

  9. Family matters: dyadic agreement in end-of-life medical decision making.

    PubMed

    Schmid, Bettina; Allen, Rebecca S; Haley, Philip P; Decoster, Jamie

    2010-04-01

    We examined race/ethnicity and cultural context within hypothetical end-of-life medical decision scenarios and its influence on patient-proxy agreement. Family dyads consisting of an older adult and 1 family member, typically an adult child, responded to questions regarding the older adult's preferences for cardiopulmonary resuscitation, artificial feeding and fluids, and palliative care in hypothetical illness scenarios. The responses of 34 Caucasian dyads and 30 African American dyads were compared to determine the extent to which family members could accurately predict the treatment preferences of their older relative. We found higher treatment preference agreement among African American dyads compared with Caucasian dyads when considering overall raw difference scores (i.e., overtreatment errors can compensate for undertreatment errors). Prior advance care planning moderated the effect such that lower levels of advance care planning predicted undertreatment errors among African American proxies and overtreatment errors among Caucasian proxies. In contrast, no racial/ethnic differences in treatment preference agreement were found within absolute difference scores (i.e., total error, regardless of the direction of error). This project is one of the first to examine the mediators and moderators of dyadic racial/cultural differences in treatment preference agreement for end-of-life care in hypothetical illness scenarios. Future studies should use mixed method approaches to explore underlying factors for racial differences in patient-proxy agreement as a basis for developing culturally sensitive interventions to reduce racial disparities in end-of-life care options.

  10. [Improvement of team competence in the operating room : Training programs from aviation].

    PubMed

    Schmidt, C E; Hardt, F; Möller, J; Malchow, B; Schmidt, K; Bauer, M

    2010-08-01

    Growing attention has been drawn to patient safety during recent months due to media reports of clinical errors. To date only clinical incident reporting systems have been implemented in acute care hospitals as instruments of risk management. However, these systems only have a limited impact on human factors which account for the majority of all errors in medicine. Crew resource management (CRM) starts here. For the commissioning of a new hospital in Minden, training programs were installed in order to maintain patient safety in a new complex environment. The training was planned in three parts: All relevant processes were defined as standard operating procedures (SOP), visualized and then simulated in the new building. In addition, staff members (trainers) in leading positions were trained in CRM in order to train the complete staff. The training programs were analyzed by questionnaires. Selection of topics, relevance for practice and mode of presentation were rated as very good by 73% of the participants. The staff members ranked the topics communication in crisis situations, individual errors and compensating measures as most important followed by case studies and teamwork. Employees improved in compliance to the SOP, team competence and communication. In high technology environments with escalating workloads and interdisciplinary organization, staff members are confronted with increasing demands in knowledge and skills. To reduce errors under such working conditions relevant processes should be standardized and trained for the emergency situation. Human performance can be supported by well-trained interpersonal skills which are evolved in CRM training. In combination these training programs make a significant contribution to maintaining patient safety.

  11. Using Contemporary Leadership Skills in Medication Safety Programs.

    PubMed

    Hertig, John B; Hultgren, Kyle E; Weber, Robert J

    2016-04-01

    The discipline of studying medication errors and implementing medication safety programs in hospitals dates to the 1970s. These initial programs to prevent errors focused only on pharmacy operation changes - and not the broad medication use system. In the late 1990s, research showed that faulty systems, and not faulty people, are responsible for errors and require a multidisciplinary approach. The 2013 ASHP Statement on the Role of the Medication Safety Leader recommended that medication safety leaders be integrated team members rather than a single point of contact. Successful medication safety programs must employ a new approach - one that embraces the skills of all health care team members and positions many leaders to improve safety. This approach requires a new set of leadership skills based on contemporary management principles, including followership, team-building, tracking and assessing progress, storytelling and communication, and cultivating innovation, all of which promote transformational change. The application of these skills in developing or changing a medication safety program is reviewed in this article.

  12. Using Contemporary Leadership Skills in Medication Safety Programs

    PubMed Central

    Hertig, John B.; Hultgren, Kyle E.; Weber, Robert J.

    2016-01-01

    The discipline of studying medication errors and implementing medication safety programs in hospitals dates to the 1970s. These initial programs to prevent errors focused only on pharmacy operation changes – and not the broad medication use system. In the late 1990s, research showed that faulty systems, and not faulty people, are responsible for errors and require a multidisciplinary approach. The 2013 ASHP Statement on the Role of the Medication Safety Leader recommended that medication safety leaders be integrated team members rather than a single point of contact. Successful medication safety programs must employ a new approach – one that embraces the skills of all health care team members and positions many leaders to improve safety. This approach requires a new set of leadership skills based on contemporary management principles, including followership, team-building, tracking and assessing progress, storytelling and communication, and cultivating innovation, all of which promote transformational change. The application of these skills in developing or changing a medication safety program is reviewed in this article. PMID:27303083

  13. 5 CFR 890.1107 - Length of temporary continuation of coverage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... dependent children, were covered family members of a former employee receiving continued coverage under this... after the former spouse ceased meeting the requirements for coverage as a family member, unless it is...) Whose marriage to the former employee terminates after the former employee's separation but before the...

  14. 5 CFR 890.1107 - Length of temporary continuation of coverage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... dependent children, were covered family members of a former employee receiving continued coverage under this... after the former spouse ceased meeting the requirements for coverage as a family member, unless it is...) Whose marriage to the former employee terminates after the former employee's separation but before the...

  15. 5 CFR 890.1107 - Length of temporary continuation of coverage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... dependent children, were covered family members of a former employee receiving continued coverage under this... after the former spouse ceased meeting the requirements for coverage as a family member, unless it is...) Whose marriage to the former employee terminates after the former employee's separation but before the...

  16. 5 CFR 890.1107 - Length of temporary continuation of coverage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... dependent children, were covered family members of a former employee receiving continued coverage under this... after the former spouse ceased meeting the requirements for coverage as a family member, unless it is...) Whose marriage to the former employee terminates after the former employee's separation but before the...

  17. FFA Leadership Handbook.

    ERIC Educational Resources Information Center

    Moody, Sidney B.; Miller, L. E.

    The handbook is designed to assist youth leaders in the Future Farmers of America (FFA). It is organized into nine sections of varying length which consider the following facets of FFA (with sample sub-topics in parentheses): FFA members (things to know to become an effective member, membership policy); FFA officers (duties and qualifications of…

  18. Report B : self-consolidating concrete (SCC) for infrastructure elements - bond, transfer length, and development length of prestressing strand in SCC.

    DOT National Transportation Integrated Search

    2012-08-01

    Due to its economic advantages, the use of self-consolidating concrete (SCC) has : increased rapidly in recent years. However, because SCC mixes typically have decreased : amounts of coarse aggregate and high amounts of admixtures, industry members h...

  19. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  20. A Comparison of the Forecast Skills among Three Numerical Models

    NASA Astrophysics Data System (ADS)

    Lu, D.; Reddy, S. R.; White, L. J.

    2003-12-01

    Three numerical weather forecast models, MM5, COAMPS and WRF, operated as a joint effort of NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, have been chosen to study their forecast skills against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length and spatial resolution. The AVN global dataset has been ingested as initial conditions. A grid resolution of 27 km is chosen to represent the current mesoscale model. Forecasts with a length of 36 h are performed, with output at 12 h intervals. The key parameters used to evaluate the forecast skill include 12 h accumulated precipitation, sea level pressure, wind, surface temperature and dew point. Precipitation is evaluated statistically using conventional skill scores, the Threat Score (TS) and Bias Score (BS), for different threshold values based on 12 h rainfall observations, whereas other statistical measures such as the Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
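
    The skill scores named above have standard contingency-table definitions; a minimal sketch is given below with hypothetical 12 h rainfall values. The study's exact verification setup (station matching, thresholds, aggregation) may differ.

      import numpy as np

      def threat_and_bias(forecast, observed, threshold):
          # 2x2 contingency table for exceeding a rainfall threshold.
          f = np.asarray(forecast) >= threshold
          o = np.asarray(observed) >= threshold
          hits = np.sum(f & o)
          false_alarms = np.sum(f & ~o)
          misses = np.sum(~f & o)
          ts = hits / (hits + misses + false_alarms)    # Threat Score
          bs = (hits + false_alarms) / (hits + misses)  # Bias Score
          return ts, bs

      def rmse(forecast, observed):
          return float(np.sqrt(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2)))

      fcst = np.array([0.0, 3.2, 12.5, 1.0, 25.0, 0.4])   # hypothetical 12 h rainfall (mm)
      obs = np.array([0.0, 5.0, 8.0, 0.0, 30.0, 2.0])
      print(threat_and_bias(fcst, obs, threshold=2.5), rmse(fcst, obs))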

  1. Estimating a child's age from an image using whole body proportions.

    PubMed

    Lucas, Teghan; Henneberg, Maciej

    2017-09-01

    The use and distribution of child pornography is an increasing problem. Forensic anthropologists are often asked to estimate a child's age from a photograph. Previous studies have attempted to estimate the age of children from photographs using ratios of the face. Here, we propose to include body measurement ratios into age estimates. A total of 1603 boys and 1833 girls aged 5-16 years were measured over a 10-year period. They are 'Cape Coloured' children from South Africa. Their age was regressed on ratios derived from anthropometric measurements of the head as well as the body. Multiple regression equations including four ratios for each sex (head height to shoulder and hip width, knee width, leg length and trunk length) have a standard error of 1.6-1.7 years. The error is of the same order as variation of differences between biological and chronological ages of the children. Thus, the error cannot be minimised any further as it is a direct reflection of a naturally occurring phenomenon.

  2. Development and characterisation of FPGA modems using forward error correction for FSOC

    NASA Astrophysics Data System (ADS)

    Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried

    2016-05-01

    In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in FPGA, with data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, a 7/8 rate low density parity check (LDPC) forward error correction is implemented along with custom bit and frame synchronisation and a variable length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acoustic-optic modulator. The scintillation index, transmitted optical power and the scintillation bandwidth can all be independently varied allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5km.
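
    The log-normal fading imposed in the test-bed can be mimicked in a few lines: choose the log-intensity variance from the target scintillation index via sigma^2 = ln(1 + SI) and low-pass the log-amplitude to set an approximate scintillation bandwidth. The shaping filter below is a crude moving average and the parameters are assumptions, not the LabVIEW implementation used in the paper.

      import numpy as np

      def lognormal_fading(n_samples, scint_index, fs, scint_bw, seed=0):
          # Unit-mean log-normal intensity series with a target scintillation index.
          rng = np.random.default_rng(seed)
          sigma2 = np.log(1.0 + scint_index)            # log-intensity variance
          chi = rng.normal(0.0, np.sqrt(sigma2), n_samples)
          win = max(1, int(fs / (2.0 * scint_bw)))      # crude bandwidth control (assumption)
          chi = np.convolve(chi, np.ones(win) / win, mode="same")
          chi *= np.sqrt(sigma2) / chi.std()            # restore the target variance
          return np.exp(chi - sigma2 / 2.0)             # unit mean

      I = lognormal_fading(100_000, scint_index=0.3, fs=10_000, scint_bw=200)
      print("realized scintillation index:", I.var() / I.mean() ** 2)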

  3. Modified linear predictive coding approach for moving target tracking by Doppler radar

    NASA Astrophysics Data System (ADS)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on a time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which can help improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments is conducted to illustrate the validity and performance of the proposed techniques.
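
    The data-extension step can be sketched with an ordinary least-squares linear predictor: fit the AR coefficients on the available record and extrapolate forward. The adaptive choice of extension length through the error array, and the final error correction, are the paper's contributions and are not reproduced here; the order and signal below are arbitrary.

      import numpy as np

      def lpc_extend(x, order, n_extend):
          # Fit x[t] ~ a1*x[t-1] + ... + ap*x[t-p] by least squares, then extrapolate.
          x = list(np.asarray(x, float))
          A = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
          b = np.array(x[order:])
          coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
          for _ in range(n_extend):
              x.append(float(np.dot(coeffs, x[-1:-order - 1:-1])))
          return np.array(x)

      t = np.arange(200)
      sig = np.sin(2 * np.pi * 0.02 * t)           # hypothetical narrow-band echo component
      ext = lpc_extend(sig, order=8, n_extend=50)  # extend the record by 50 samples
      print(ext.shape)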

  4. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.

  5. Complete chloroplast genome sequence of MD-2 pineapple and its comparative analysis among nine other plants from the subclass Commelinidae.

    PubMed

    Redwan, R M; Saidin, A; Kumar, S V

    2015-08-12

    Pineapple (Ananas comosus var. comosus) is known as the king of fruits for its crown and is the third most important tropical fruit after banana and citrus. The plant, which is indigenous to South America, is the most important species in the Bromeliaceae family and is largely traded for fresh fruit consumption. Here, we report the complete chloroplast sequence of the MD-2 pineapple that was sequenced using the PacBio sequencing technology. In this study, the high error rate of PacBio long sequence reads of A. comosus's total genomic DNA was mitigated by leveraging the high-accuracy but short Illumina reads for error correction via the latest error correction module from Novocraft. Error-corrected long PacBio reads were assembled by using a single tool to produce a contig representing the pineapple chloroplast genome. The genome, 159,636 bp in length, features the conserved quadripartite structure of the chloroplast, containing a large single copy region (LSC) with a size of 87,482 bp, a small single copy region (SSC) with a size of 18,622 bp and two inverted repeat regions (IRA and IRB) each with a size of 26,766 bp. Overall, the genome contained 117 unique coding regions, 30 of which were repeated in the IR regions, with its gene content, structure and arrangement similar to those of its sister taxon, Typha latifolia. A total of 35 repeat structures were detected in both the coding and non-coding regions, with a majority being tandem repeats. In addition, 205 SSRs were detected in the genome, with six protein-coding genes containing more than two SSRs. Comparison of chloroplast genomes from the subclass Commelinidae revealed conserved protein-coding genes, albeit located in highly divergent regions. Analysis of selection pressure on protein-coding genes using the Ka/Ks ratio showed significant positive selection exerted on the rps7 gene of the pineapple chloroplast with P less than 0.05. Phylogenetic analysis confirmed the recent taxonomic relations among the members of the commelinids, supporting the monophyletic relationship between Arecales and Dasypogonaceae and between Zingiberales and the Poales, which includes A. comosus. The complete sequence of the chloroplast of pineapple provides insights into the divergence of genic chloroplast sequences from the members of the subclass Commelinidae. The complete pineapple chloroplast will serve as a reference for in-depth taxonomic studies in the Bromeliaceae family when more species in the family are sequenced in the future. The genetic sequence information will also make feasible other molecular applications of the pineapple chloroplast for plant genetic improvement.

  6. Energy conversion device with support member having pore channels

    DOEpatents

    Routkevitch, Dmitri [Longmont, CO; Wind, Rikard A [Johnstown, CO

    2014-01-07

    Energy devices such as energy conversion devices and energy storage devices and methods for the manufacture of such devices. The devices include a support member having an array of pore channels having a small average pore channel diameter and having a pore channel length. Material layers that may include energy conversion materials and conductive materials are coaxially disposed within the pore channels to form material rods having a relatively small cross-section and a relatively long length. By varying the structure of the materials in the pore channels, various energy devices can be fabricated, such as photovoltaic (PV) devices, radiation detectors, capacitors, batteries and the like.

  7. 77 FR 15171 - Self-Regulatory Organizations; The National Securities Clearing Corporation; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    ... automates the transmission of data with respect to in force policy transactions among participating I&RS... completeness or accuracy of any data transmitted between I&RS Members through NSCC's I&RS, nor for any errors... of such data between I&RS Members. The changes to Rule 57 being proposed hereby are subject to the...

  8. Design and analysis of a sub-aperture scanning machine for the transmittance measurements of large-aperture optical system

    NASA Astrophysics Data System (ADS)

    He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo

    2010-11-01

    For measuring the transmittance of large-aperture optical systems, a novel sub-aperture scanning machine with double-rotating arms (SSMDA) was designed to obtain a sub-aperture beam spot. Full-aperture transmittance measurements of an optical system can be achieved by applying sub-aperture beam spot scanning technology. A mathematical model of the SSMDA based on a homogeneous coordinate transformation matrix is established to develop a detailed methodology for analyzing the beam spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) the length systematic errors and (2) the rotational systematic errors. With the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 and 0.028 mm when the scanning radius is not larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
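
    The homogeneous-transform error analysis can be illustrated with a planar two-arm toy model: compose one transform per arm, perturb the arm lengths and rotation angles by small systematic errors, and look at the displacement of the beam-spot position. The geometry and error magnitudes below are assumptions, not the actual SSMDA parameters.

      import numpy as np

      def spot_position(theta1, theta2, l1, l2):
          # Planar beam-spot position of a double-rotating-arm mechanism,
          # built from 3x3 homogeneous transforms (illustrative model only).
          def rot_trans(theta, l):
              c, s = np.cos(theta), np.sin(theta)
              return np.array([[c, -s, l * c],
                               [s,  c, l * s],
                               [0.0, 0.0, 1.0]])
          T = rot_trans(theta1, l1) @ rot_trans(theta2, l2)
          return T[:2, 2]

      nominal = spot_position(np.deg2rad(30), np.deg2rad(45), 200.0, 200.0)
      perturbed = spot_position(np.deg2rad(30) + 5e-5,          # rotational errors (rad)
                                np.deg2rad(45) - 5e-5,
                                200.0 + 0.005, 200.0 - 0.005)   # length errors (mm)
      print("beam-spot scanning error (mm):", perturbed - nominal)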

  9. Prediction of boiling points of organic compounds by QSPR tools.

    PubMed

    Dai, Yi-min; Zhu, Zhi-ping; Cao, Zhong; Zhang, Yue-fei; Zeng, Ju-lan; Li, Xun

    2013-07-01

    Novel electronegativity topological descriptors, YC and WC, were derived from molecular structure using the equilibrium electronegativity of atoms and the relative bond lengths of the molecule. Quantitative structure-property relationships (QSPR) between the descriptors YC and WC, together with the path-number parameter P3, and the normal boiling points of 80 alkanes, 65 unsaturated hydrocarbons and 70 alcohols were obtained separately. The high quality of the prediction models was evidenced by the coefficient of determination (R²), the standard error (S), average absolute errors (AAE) and the predictive parameters (Q²ext, R²CV, R²m). According to the regression equations, the influences of the length of the carbon backbone, the size and degree of branching of a molecule and the role of functional groups on the normal boiling point were analyzed. Comparison with reference models demonstrated that the novel topological descriptors based on the equilibrium electronegativity of atoms and the relative bond length are useful molecular descriptors for predicting the normal boiling points of organic compounds. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions or familiarity with the behavior of the Tethered Satellite System.

  11. Extensibility in local sensor based planning for hyper-redundant manipulators (robot snakes)

    NASA Technical Reports Server (NTRS)

    Choset, Howie; Burdick, Joel

    1994-01-01

    Partial Shape Modification (PSM) is a local sensor feedback method used for hyper-redundant robot manipulators, in which the redundancy is very large or infinite, as in a robot snake. This aspect of redundancy enables local obstacle avoidance and end-effector placement in real time. Due to the large number of joints or actuators in a hyper-redundant manipulator, small displacement errors in those joints easily accumulate into large errors in the position of the tip relative to the base. The accuracy can be improved by a local sensor based planning method in which sensors are distributed along the length of the hyper-redundant robot. This paper extends the local sensor based planning strategy beyond the limitation of the fixed length of such a manipulator when its joint limits are met. This is achieved with an algorithm in which the length of the deforming part of the robot is variable. Thus, the robot's local avoidance of obstacles is improved through the enhancement of its extensibility.

  12. Measuring a diffusion coefficient by single-particle tracking: statistical analysis of experimental mean squared displacement curves.

    PubMed

    Ernst, Dominique; Köhler, Jürgen

    2013-01-21

    We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10⁵ data points and decomposed these long trajectories into shorter segments providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet, the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy in the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
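
    The MSD analysis itself is compact: compute the time-averaged MSD of a trajectory and fit D from the first few lag points (MSD = 4Dt in two dimensions). The number of fitted points and the simulation parameters below are assumptions; the paper's result is precisely that an optimum number of fitted points exists and that the achievable relative error depends on the segment length.

      import numpy as np

      def msd(traj, max_lag):
          # Time-averaged mean squared displacement for lags 1..max_lag.
          return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                           for lag in range(1, max_lag + 1)])

      # Simulated 2-D Brownian motion (hypothetical D and frame time).
      rng = np.random.default_rng(4)
      D_true, dt, n = 0.5, 0.01, 150_000
      steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n, 2))
      traj = np.cumsum(steps, axis=0)

      lags_s = np.arange(1, 5) * dt                        # fit the first 4 lag points
      D_est = np.polyfit(lags_s, msd(traj, 4), 1)[0] / 4   # slope / 4 in two dimensions
      print(f"D_true = {D_true}, D_est = {D_est:.3f}")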

  13. Pound-Drever-Hall error signals for the length control of three-port grating coupled cavities

    NASA Astrophysics Data System (ADS)

    Britzger, Michael; Friedrich, Daniel; Kroker, Stefanie; Brückner, Frank; Burmeister, Oliver; Kley, Ernst-Bernhard; Tünnermann, Andreas; Danzmann, Karsten; Schnabel, Roman

    2011-08-01

    Gratings enable light coupling into an optical cavity without transmission through any substrate. This concept reduces light absorption and substrate heating and was suggested for light coupling into the arm cavities of future gravitational wave detectors. One particularly interesting approach is based on all-reflective gratings with low diffraction efficiencies and three diffraction orders (three ports). However, it was discovered that, generally, three-port grating coupled cavities show an asymmetric resonance profile that results in asymmetric and low-quality Pound-Drever-Hall error signals for cavity length control. We experimentally demonstrate that this problem is solved by the detection of light at both reflection ports of the cavity and the postprocessing of the two demodulated electronic signals.

  14. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a multiple-hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  15. An improved methodology for heliostat testing and evaluation at the Plataforma Solar de Almería

    NASA Astrophysics Data System (ADS)

    Monterreal, Rafael; Enrique, Raúl; Fernández-Reche, Jesús

    2017-06-01

    The optical quality of a heliostat basically quantifies the difference between the scattering of the actual solar radiation reflected by its optical surface and the so-called canonical dispersion, that is, the radiation reflected by an optical surface free of constructional errors (the paradigm). However, apart from the uncertainties of the measuring process itself, the value of the optical quality must be independent of the measuring instrument; so any new measuring technique that provides additional information about the error sources on the heliostat reflecting surface would be welcome. Those error sources are responsible for the final optical quality value, with different degrees of influence. For the constructor of heliostats it is extremely useful to know the magnitude of the classical sources of error and their weight in the overall optical quality of a heliostat, such as facet geometry or focal length, as well as the characteristics of the heliostat as a whole, i.e., its geometry, focal length and facet misalignment, and also the possible dependence of these effects on mechanical and/or meteorological factors. It is the goal of the present paper to unfold these optical quality error sources by directly exploring the reflecting surface of the heliostat with the help of a laser-scanner device and linking the result with the traditional methods of heliostat evaluation at the Plataforma Solar de Almería.

  16. Forward scattering in two-beam laser interferometry

    NASA Astrophysics Data System (ADS)

    Mana, G.; Massa, E.; Sasso, C. P.

    2018-04-01

    A fractional error as large as 25 pm mm⁻¹ at the zero optical-path difference has been observed in an optical interferometer measuring the displacement of an x-ray interferometer used to determine the lattice parameter of silicon. Detailed investigations have brought to light that the error was caused by light forward-scattered from the beam feeding the interferometer. This paper reports on the impact of forward-scattered light on the accuracy of two-beam optical interferometry applied to length metrology, and supplies a model capable of explaining the observed error.

  17. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  18. Multiple-bolted joints in wood members : a literature review

    Treesearch

    Peter James Moss

    1997-01-01

    This study reviewed the literature on experimental and analytical research for the connection of wood members using multiple laterally loaded bolts. From this, the influence of geometric factors was ascertained, such as staggered and aligned fasteners, optimum fastener configurations, row factors and length-to-diameter bolt ratios, spacing, end and edge distances, and...

  19. Inflatable Column Structure

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1985-01-01

    Lightweight structural member is easy to store. Billowing between circumferential loops of fiber, the inflated column becomes a series of cells. Each fiber is subjected to the same tension along its entire length (though tension differs from fiber to fiber). Member is called an "isotensoid" column. Serves as a jack for automobiles or structures during repairs. Also used as support for temporary bleachers or swimming pools.

  20. 5 CFR 353.203 - Length of service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... DUTY FROM UNIFORMED SERVICE OR COMPENSABLE INJURY Uniformed Service § 353.203 Length of service. (a... member of a uniformed service who is: (i) Ordered to or retained on active duty under sections 12301(a..., 360, 367, or 712; (ii) Ordered to or retained on active duty (other than for training) under any...

  1. Evaluation of very long baseline interferometry atmospheric modeling improvements

    NASA Technical Reports Server (NTRS)

    Macmillan, D. S.; Ma, C.

    1994-01-01

    We determine the improvement in baseline length precision and accuracy using new atmospheric delay mapping functions and MTT by analyzing the NASA Crustal Dynamics Project research and development (R&D) experiments and the International Radio Interferometric Surveying (IRIS) A experiments. These mapping functions reduce baseline length scatter by about 20% below that using the CfA2.2 dry and Chao wet mapping functions. With the newer mapping functions, average station vertical scatter inferred from observed length precision (given by length repeatabilities) is 11.4 mm for the 1987-1990 monthly R&D series of experiments and 5.6 mm for the 3-week-long extended research and development experiment (ERDE) series. The inferred monthly R&D station vertical scatter is reduced by 2 mm, or by 7 mm in a root-sum-square (rss) sense. Length repeatabilities are optimum when observations below a 7-8 deg elevation cutoff are removed from the geodetic solution. Analyses of IRIS-A data from 1984 through 1991 and the monthly R&D experiments both yielded a nonatmospheric unmodeled station vertical error of about 8 mm. In addition, analysis of the IRIS-A experiments revealed systematic effects in the evolution of some baseline length measurements. The length rate of change has an apparent acceleration, and the length evolution has a quasi-annual signature. We show that the origin of these effects is unlikely to be related to atmospheric modeling errors. Rates of change of the transatlantic Westford-Wettzell and Richmond-Wettzell baseline lengths calculated from 1988 through 1991 agree with the NUVEL-1 plate motion model (Argus and Gordon, 1991) to within 1 mm/yr. Short-term (less than 90 days) variations of IRIS-A baseline length measurements contribute more than 90% of the observed scatter about a best fit line, and this short-term scatter has large variations on an annual time scale.

  2. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
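
    A minimal sketch of the kind of step-length computation described above, under our own assumptions (WGS-84 fixes, a haversine great-circle distance, and a tape-measured ground-truth track; function names are ours, not the authors'):

      import math

      def haversine_m(lat1, lon1, lat2, lon2):
          """Great-circle distance in metres between two latitude/longitude fixes."""
          R = 6371000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dphi = math.radians(lat2 - lat1)
          dlmb = math.radians(lon2 - lon1)
          a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
          return 2 * R * math.asin(math.sqrt(a))

      def step_lengths(fixes):
          """fixes: list of (lat, lon) tuples in order of observation."""
          return [haversine_m(*a, *b) for a, b in zip(fixes, fixes[1:])]

      def median_abs_error(measured, truth):
          """Median absolute difference between measured and ground-truth step lengths."""
          errs = sorted(abs(m - t) for m, t in zip(measured, truth))
          n = len(errs)
          return errs[n // 2] if n % 2 else 0.5 * (errs[n // 2 - 1] + errs[n // 2])

    With a 70 cm ground-truth step and a 20-30 cm median error, the signal-to-noise ratio works out to roughly 3:1, matching the figure quoted in the abstract.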

  3. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.

  4. Effects of Test Level Discrimination and Difficulty on Answer-Copying Indices

    ERIC Educational Resources Information Center

    Sunbul, Onder; Yormaz, Seha

    2018-01-01

    In this study Type I error and the power rates of omega (ω) and GBT (generalized binomial test) indices were investigated for several nominal alpha levels and for 40- and 80-item test lengths with a 10,000-examinee sample size under several test level restrictions. As a result, Type I error rates of both indices were found to be below the acceptable…

  5. JAK and STAT members of yellow catfish Pelteobagrus fulvidraco and their roles in leptin affecting lipid metabolism.

    PubMed

    Wu, Kun; Tan, Xiao-Ying; Xu, Yi-Huan; Chen, Qi-Liang; Pan, Ya-Xiong

    2016-01-15

    The present study clones and characterizes the full-length cDNA sequences of members of the JAK-STAT pathway, and explores their mRNA tissue expression and their biological role in leptin's influence on lipid metabolism in yellow catfish Pelteobagrus fulvidraco. Full-length cDNA sequences of five JAK and seven STAT members, including some splicing variants, were obtained from yellow catfish. Compared to mammals, more members of the JAK and STAT families were found in yellow catfish, which provides evidence that the JAK and STAT family members arose by whole-genome duplications during vertebrate evolution. All of these members were widely expressed across the eleven tissues examined (liver, white muscle, spleen, brain, gill, mesenteric fat, anterior intestine, heart, mid-kidney, testis and ovary), but at variable levels. Intraperitoneal injection in vivo and incubation in vitro of recombinant human leptin changed triglyceride content and the mRNA expression of several JAK and STAT members, as well as of genes involved in lipid metabolism. AG490, a specific inhibitor of the JAK2-STAT pathway, partially reversed leptin-induced effects, indicating that the JAK2a/b-STAT3 pathway exerts the main regulatory actions of leptin on lipid metabolism at the transcriptional level. Meanwhile, the different splicing variants were differentially regulated by leptin incubation. Thus, our data suggest that leptin activates the JAK/STAT pathway and increases the expression of target genes, which partially accounts for the leptin-induced changes in lipid metabolism in yellow catfish. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the disturbances on the interface (high amplitude, short wavelength), the smaller the distance from the interface at which the measurements can be performed.

  7. Evaluation of NMME temperature and precipitation bias and forecast skill for South Asia

    NASA Astrophysics Data System (ADS)

    Cash, Benjamin A.; Manganello, Julia V.; Kinter, James L.

    2017-08-01

    Systematic error and forecast skill for temperature and precipitation in two regions of Southern Asia are investigated using hindcasts initialized May 1 from the North American Multi-Model Ensemble. We focus on two contiguous but geographically and dynamically diverse regions: the Extended Indian Monsoon Rainfall (70-100E, 10-30 N) and the nearby mountainous area of Pakistan and Afghanistan (60-75E, 23-39 N). Forecast skill is assessed using the Sign test framework, a rigorous statistical method that can be applied to non-Gaussian variables such as precipitation and to different ensemble sizes without introducing bias. We find that models show significant systematic error in both precipitation and temperature for both regions. The multi-model ensemble mean (MMEM) consistently yields the lowest systematic error and the highest forecast skill for both regions and variables. However, we also find that the MMEM consistently provides a statistically significant increase in skill over climatology only in the first month of the forecast. While the MMEM tends to provide higher overall skill than climatology later in the forecast, the differences are not significant at the 95% level. We also find that MMEMs constructed with a relatively small number of ensemble members per model can equal or outperform MMEMs constructed with more members in skill. This suggests some ensemble members either provide no contribution to overall skill or even detract from it.

  8. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    NASA Astrophysics Data System (ADS)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while maintaining constant the word length when the scale increases.
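
    The following toy calculation illustrates the word-length argument sketched above; it is not the paper's derivation, and the operation count and gain bound are placeholder parameters of our own. If every fixed-point rounding contributes at most 2**-(f+1) of error and the reconstruction accumulates at most n_ops such roundings amplified by a gain bound, lossless recovery needs the accumulated worst case to stay below half an integer LSB.

      def min_fractional_bits(n_ops, gain_bound):
          """Smallest number of fractional bits f such that the worst-case
          accumulated rounding error n_ops * gain_bound * 2**-(f+1) stays below 0.5,
          so rounding the reconstruction back to integers is exact."""
          f = 0
          while n_ops * gain_bound * 2.0 ** -(f + 1) >= 0.5:
              f += 1
          return f

      # e.g. min_fractional_bits(64, 4.0) -> 9 fractional bits under this toy bound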

  9. The use of compressive sensing and peak detection in the reconstruction of microtubules length time series in the process of dynamic instability.

    PubMed

    Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A

    2015-10-01

    Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to sampled data of the microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal with relatively low sampling rates, we have applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with the case without the wavelet transform, especially at low and medium sampling rates. For sampling rates between 0.2 and 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of 3, and between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage of MTs for computing the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that using compressive sensing along with the peak detection technique and the wavelet transform reduces the recovery errors for these parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Construct validity and expert benchmarking of the haptic virtual reality dental simulator.

    PubMed

    Suebnukarn, Siriwan; Chaisombat, Monthalee; Kongpunwijit, Thanapohn; Rhienmora, Phattanapon

    2014-10-01

    The aim of this study was to demonstrate construct validation of the haptic virtual reality (VR) dental simulator and to define expert benchmarking criteria for skills assessment. Thirty-four self-selected participants (fourteen novices, fourteen intermediates, and six experts in endodontics) at one dental school performed ten repetitions of three mode tasks of endodontic cavity preparation: easy (mandibular premolar with one canal), medium (maxillary premolar with two canals), and hard (mandibular molar with three canals). The virtual instrument's path length was registered by the simulator. The outcomes were assessed by an expert. The error scores in easy and medium modes accurately distinguished the experts from novices and intermediates at the onset of training, when there was a significant difference between groups (ANOVA, p<0.05). The trend was consistent until trial 5. From trial 6 on, the three groups achieved similar scores. No significant difference was found between groups at the end of training. Error score analysis was not able to distinguish any group at the hard level of training. Instrument path length showed a difference in performance according to groups at the onset of training (ANOVA, p<0.05). This study established construct validity for the haptic VR dental simulator by demonstrating its discriminant capabilities between that of experts and non-experts. The experts' error scores and path length were used to define benchmarking criteria for optimal performance.

  11. Myopia progression control lens reverses induced myopia in chicks.

    PubMed

    Irving, Elizabeth L; Yakobchuk-Stanger, Cristina

    2017-09-01

    To determine whether lens induced myopia in chicks can be reversed or reduced by wearing myopia progression control lenses of the same nominal (central) power but different peripheral designs. Newly hatched chicks wore -10D Conventional lenses unilaterally for 7 days. The myopic chicks were then randomly divided into three groups: one fitted with Type 1 myopia progression control lenses, the second with Type 2 myopia progression control lenses and the third continued to wear Conventional lenses for seven more days. All lenses had -10D central power, but Type 1 and Type 2 lenses had differing peripheral designs; +2.75D and +1.32D power rise at pupil edge, respectively. Axial length and refractive error were measured on Days 0, 7 and 14. Analyses were performed on the mean differences between treated and untreated eyes. Refractive error and axial length differences between treated and untreated eyes were insignificant on Day 0. On Day 7 treated eyes were longer (T1; 0.44 ± 0.07 mm, T2; 0.27 ± 0.06 mm, C; 0.40 ± 0.06 mm) and more myopic (T1; -9.61 ± 0.52D, T2; -9.57 ± 0.61D, C; -9.50 ± 0.58D) than untreated eyes with no significant differences between treatment groups. On Day 14 myopia was reversed (+2.91 ± 1.08D), reduced (-3.83 ± 0.94D) or insignificantly increased (-11.89 ± 0.79D) in treated eyes of Type 1, Type 2 and Conventional treated chicks respectively. Relative changes in axial lengths (T1; -0.13 ± 0.09 mm, T2; 0.36 ± 0.09 mm, C; 0.56 ± 0.05 mm) were consistent with changes in refraction. Refractive error differences were significant for all group comparisons (p < 0.001). Type 1 length differences were significantly different from Conventional and Type 2 groups (p < 0.001). Myopia progression control lens designs can reverse lens-induced myopia in chicks. The effect is primarily due to axial length changes. Different lens designs produce different effects indicating that lens design is important in modifying refractive error. © 2017 The Authors. Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of College of Optometrists.

  12. Performance of IUCN proxies for generation length.

    PubMed

    Fung, Han Chi; Waples, Robin S

    2017-08-01

    One of the criteria used by the International Union for Conservation of Nature (IUCN) to assess threat status is the rate of decline in abundance over 3 generations or 10 years, whichever is longer. The traditional method for calculating generation length (T) uses age-specific survival and fecundity, but these data are rarely available. Consequently, proxies that require less information are often used, which introduces potential biases. The IUCN recommends 2 proxies based on adult mortality rate, T̂d = α + 1/d, and reproductive life span, T̂z = α + z * RL, where α is age at first reproduction, d is adult mortality rate, RL is reproductive life span, and z is a coefficient derived from data for comparable species. We used published life tables for 78 animal and plant populations to evaluate precision and bias of these proxies by comparing T̂d and T̂z with true generation length. Mean error rates in estimating T were 31% for T̂d and 20% for T̂z, but error rates for T̂d were 16% when we subtracted 1 year (T̂d(adj) = T̂d - 1), as suggested by theory; T̂d(adj) also provided largely unbiased estimates regardless of the true generation length. Performance of T̂z depends on compilation of detailed data for comparable species, but our results suggest taxonomy is not a reliable indicator of comparability. All 3 proxies depend heavily on a reliable estimate of age at first reproduction, as we illustrated with 2 test species. The relatively large mean errors for all proxies emphasized the importance of collecting the detailed life-history information necessary to calculate true generation length. Unfortunately, publication of such data is less common than it was decades ago. We identified generic patterns of age-specific change in vital rates that can be used to predict expected patterns of bias from applying T̂d(adj). Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
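
    The proxies above translate directly into a few lines of code. The sketch below is our own, using the abstract's notation; the life-table formula for the true T is the standard fecundity-weighted mean age of mothers, not something stated in the abstract.

      def t_hat_d(alpha, d):
          """Mortality-rate proxy: T_d = alpha + 1/d."""
          return alpha + 1.0 / d

      def t_hat_d_adj(alpha, d):
          """Adjusted proxy suggested by theory: T_d(adj) = T_d - 1."""
          return t_hat_d(alpha, d) - 1.0

      def t_hat_z(alpha, z, rl):
          """Reproductive-life-span proxy: T_z = alpha + z * RL."""
          return alpha + z * rl

      def true_generation_length(ages, survivorship, fecundity):
          """True T from a life table: sum(a * l_a * m_a) / sum(l_a * m_a)."""
          num = sum(a * l * m for a, l, m in zip(ages, survivorship, fecundity))
          den = sum(l * m for l, m in zip(ages, survivorship, fecundity))
          return num / den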

  13. Passive Multistatic Detection of Maritime Targets using Opportunistic Radars

    DTIC Science & Technology

    2014-03-01

    % Find the max of the three sources of error and use that for that
    % coordinate position
    for aa = 1:1:length(Err_time1)
        Err_Total1(aa) = max(abs(Err_time1(aa)), abs(Err_L1(aa)) + abs(Err_thetaR1(aa)));
    end
    for aa = 1:1:length(Err_time1)
        Err_Total1(aa) = max(abs(Err_Total1(aa)), abs(Err_thetaR1(aa)));
    end
    %%%%%%%%%%%%%%%%%%%%% Tx 2
    Err_Total2 = zeros(size(Err_time2));
    % Find the max of the three sources of error and use that for that
    % coordinate position
    for aa = 1:1:length

  14. Measurement of lengths and angles by means of a photoelectric direct reading-off microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priver, L.S.

    1995-11-01

    We consider the measurement of lengths and angles over a broad range with error amounting to fractions of a micrometer or angular second using a newly designed mockup of a photoelectric direct reading-off microscope. The microscope implements a pulse-position method of transforming information through application of a scanner in the form of a rotating polyhedral mirror.

  15. Effectiveness of the surgical safety checklist in correcting errors: a literature review applying Reason's Swiss cheese model.

    PubMed

    Collins, Susan J; Newhouse, Robin; Porter, Jody; Talsma, AkkeNeel

    2014-07-01

    Approximately 2,700 patients are harmed by wrong-site surgery each year. The World Health Organization created the surgical safety checklist to reduce the incidence of wrong-site surgery. A project team conducted a narrative review of the literature to determine the effectiveness of the surgical safety checklist in correcting and preventing errors in the OR. Team members used Reason's Swiss cheese model of error to analyze the findings. Analysis of the results indicated the effectiveness of the surgical checklist in reducing the incidence of wrong-site surgeries and other medical errors; however, checklists alone will not prevent all errors. Successful implementation requires perioperative stakeholders to understand the nature of errors, recognize the complex dynamic between systems and individuals, and create a just culture that encourages a shared vision of patient safety. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  16. Nuclear reactor spacer grid and ductless core component

    DOEpatents

    Christiansen, David W.; Karnesky, Richard A.

    1989-01-01

    The invention relates to a nuclear reactor spacer grid member for use in a liquid cooled nuclear reactor and to a ductless core component employing a plurality of these spacer grid members. The spacer grid member is of the egg-shell type and is constructed so that the walls of the cell members of the grid member are formed of a single thickness of metal to avoid tolerance problems. Within each cell member is a hydraulic spring which laterally constrains the nuclear material bearing rod which passes through each cell member against a hardstop in response to coolant flow through the cell member. This hydraulic spring is also suitable for use in a water cooled nuclear reactor. A core component constructed of, among other components, a plurality of these spacer grid members, avoids the use of a full length duct by providing spacer sleeves about the sodium tubes passing through the spacer grid members at locations between the grid members, thereby maintaining a predetermined space between adjacent grid members.

  17. Accuracy of Jump-Mat Systems for Measuring Jump Height.

    PubMed

    Pueo, Basilio; Lipinska, Patrycja; Jiménez-Olmedo, José M; Zmijewski, Piotr; Hopkins, Will G

    2017-08-01

    Vertical-jump tests are commonly used to evaluate lower-limb power of athletes and nonathletes. Several types of equipment are available for this purpose. To compare the error of measurement of 2 jump-mat systems (Chronojump-Boscosystem and Globus Ergo Tester) with that of a motion-capture system as a criterion and to determine the modifying effect of foot length on jump height. Thirty-one young adult men alternated 4 countermovement jumps with 4 squat jumps. Mean jump height and standard deviations representing technical error of measurement arising from each device and variability arising from the subjects themselves were estimated with a novel mixed model and evaluated via standardization and magnitude-based inference. The jump-mat systems produced nearly identical measures of jump height (differences in means and in technical errors of measurement ≤1 mm). Countermovement and squat-jump height were both 13.6 cm higher with motion capture (90% confidence limits ±0.3 cm), but this very large difference was reduced to small unclear differences when adjusted to a foot length of zero. Variability in countermovement and squat-jump height arising from the subjects was small (1.1 and 1.5 cm, respectively, 90% confidence limits ±0.3 cm); technical error of motion capture was similar in magnitude (1.7 and 1.6 cm, ±0.3 and ±0.4 cm), and that of the jump mats was similar or smaller (1.2 and 0.3 cm, ±0.5 and ±0.9 cm). The jump-mat systems provide trustworthy measurements for monitoring changes in jump height. Foot length can explain the substantially higher jump height observed with motion capture.

  18. Accurate typing of short tandem repeats from genome-wide sequencing data and its applications.

    PubMed

    Fungtammasan, Arkarachai; Ananda, Guruprasad; Hile, Suzanne E; Su, Marcia Shu-Wei; Sun, Chen; Harris, Robert; Medvedev, Paul; Eckert, Kristin; Makova, Kateryna D

    2015-05-01

    Short tandem repeats (STRs) are implicated in dozens of human genetic diseases and contribute significantly to genome variation and instability. Yet profiling STRs from short-read sequencing data is challenging because of their high sequencing error rates. Here, we developed STR-FM, short tandem repeat profiling using flank-based mapping, a computational pipeline that can detect the full spectrum of STR alleles from short-read data, can adapt to emerging read-mapping algorithms, and can be applied to heterogeneous genetic samples (e.g., tumors, viruses, and genomes of organelles). We used STR-FM to study STR error rates and patterns in publicly available human and in-house generated ultradeep plasmid sequencing data sets. We discovered that STRs sequenced with a PCR-free protocol have up to ninefold fewer errors than those sequenced with a PCR-containing protocol. We constructed an error correction model for genotyping STRs that can distinguish heterozygous alleles containing STRs with consecutive repeat numbers. Applying our model and pipeline to Illumina sequencing data with 100-bp reads, we could confidently genotype several disease-related long trinucleotide STRs. Utilizing this pipeline, for the first time we determined the genome-wide STR germline mutation rate from a deeply sequenced human pedigree. Additionally, we built a tool that recommends minimal sequencing depth for accurate STR genotyping, depending on repeat length and sequencing read length. The required read depth increases with STR length and is lower for a PCR-free protocol. This suite of tools addresses the pressing challenges surrounding STR genotyping, and thus is of wide interest to researchers investigating disease-related STRs and STR evolution. © 2015 Fungtammasan et al.; Published by Cold Spring Harbor Laboratory Press.

  19. Trauma Non-Technical Training (TNT-2): the development, piloting and multilevel assessment of a simulation-based, interprofessional curriculum for team-based trauma resuscitation.

    PubMed

    Doumouras, Aristithes G; Keshet, Itay; Nathens, Avery B; Ahmed, Najma; Hicks, Christopher M

    2014-10-01

    Medical error is common during trauma resuscitations. Most errors are nontechnical, stemming from ineffective team leadership, nonstandardized communication among team members, lack of global situational awareness, poor use of resources and inappropriate triage and prioritization. We developed an interprofessional, simulation-based trauma team training curriculum for Canadian surgical trainees. Here we discuss its piloting and evaluation.

  20. Mounting Systems for Structural Members, Fastening Assemblies Thereof, and Vibration Isolation Systems Including the Same

    NASA Technical Reports Server (NTRS)

    Young, Ken (Inventor); Hindle, Timothy (Inventor); Barber, Tim Daniel (Inventor)

    2016-01-01

    Mounting systems for structural members, fastening assemblies thereof, and vibration isolation systems including the same are provided. Mounting systems comprise a pair of mounting brackets, each clamped against a fastening assembly forming a mounting assembly. Fastening assemblies comprise a spherical rod end comprising a spherical member having a through opening and an integrally threaded shaft, first and second seating members on opposite sides of the spherical member and each having a through opening that is substantially coaxial with the spherical member through opening, and a partially threaded fastener that threadably engages each mounting bracket forming the mounting assembly. Structural members have axial end portions, each releasably coupled to a mounting bracket by the integrally threaded shaft. Axial end portions are threaded in opposite directions for permitting structural member rotation to adjust a length thereof to a substantially zero strain position. Structural members may be vibration isolator struts in vibration isolation systems.

  1. Post-processing through linear regression

    NASA Astrophysics Data System (ADS)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz 63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
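
    As a point of reference for the simplest scheme compared above, ordinary least-squares post-processing amounts to fitting a linear map from forecasts to observations over a training period and applying it to new forecasts; the other schemes (TDTR, GM, EVMOS, ...) differ in how the coefficients are estimated. A minimal single-predictor sketch, with our own function names:

      import numpy as np

      def fit_ols(train_forecasts, train_observations):
          """Fit y ~ a + b*x by least squares; returns coefficients (a, b)."""
          X = np.column_stack([np.ones_like(train_forecasts), train_forecasts])
          coef, *_ = np.linalg.lstsq(X, train_observations, rcond=None)
          return coef

      def correct(forecasts, coef):
          """Apply the fitted post-processing to new forecasts."""
          return coef[0] + coef[1] * np.asarray(forecasts)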

  2. Professional and Semi-Professional Organizations--A Comparison of the Degree of Participation Desired.

    ERIC Educational Resources Information Center

    Cullers, Benjamin D.

    Two educational organizations, a community junior college and a junior high school, were examined to ascertain the amount of participation desired by members of each organization. It was believed that the higher the degree of professional authority within the organization, i.e., the greater the length of training required of its members, the…

  3. An investigation of error correcting techniques for OMV and AXAF

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
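
    A hedged reconstruction of the kind of error-pattern generator the report describes (Poisson arrivals of bursts, i.e. exponential gaps between them, with Gaussian-distributed burst lengths); the parameter values and names below are illustrative and not taken from the report:

      import random

      def burst_error_positions(n_bits, mean_gap=1000.0, mean_burst=4.0, sd_burst=2.0, seed=1):
          """Return sorted bit positions to flip in a test data stream."""
          rng = random.Random(seed)
          errors, pos = set(), 0
          while pos < n_bits:
              pos += int(rng.expovariate(1.0 / mean_gap))          # exponential gap between bursts
              burst = max(1, int(round(rng.gauss(mean_burst, sd_burst))))
              errors.update(range(pos, min(pos + burst, n_bits)))  # contiguous burst of bit errors
              pos += burst
          return sorted(errors)

    XOR-ing such patterns into encoded data and counting the blocks the decoder fails to correct is one way to exercise an RS(255,223) chip set in the manner the report describes.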

  4. Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes

    NASA Astrophysics Data System (ADS)

    Tavallaei, Saeed Ebadi; Falahati, Abolfazl

    Due to the low level of security in public key cryptosystems based on number theory, and fundamental difficulties such as "key escrow" in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. This is done by modifying the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of general block-code decoding. Using ECC provides the capability of generating public keys of variable length, which is suitable for different applications. It seems that, given the decreasing security of cryptosystems based on number theory and the increasing lengths of their keys, the use of such code-based cryptosystems will be unavoidable in the future.

  5. Cleared for the visual approach: Human factor problems in air carrier operations

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    In the study described herein, a set of 353 ASRS reports of unique aviation occurrences significantly involving visual approaches was examined to identify hazards and pitfalls embedded in the visual approach procedure and to consider operational practices that might help avoid future mishaps. Analysis of the report set identified nine aspects of the visual approach procedure that appeared to be predisposing conditions for inducing or exacerbating the effects of operational errors by flight crew members or controllers. Predisposing conditions, errors, and operational consequences of the errors are discussed. In summary, operational policies that might mitigate the problems are examined.

  6. Continuous correction of differential path length factor in near-infrared spectroscopy

    PubMed Central

    Moore, Jason H.; Diamond, Solomon G.

    2013-01-01

    Abstract. In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method. PMID:23640027
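
    The modified Beer-Lambert law mentioned above reduces, for a two-wavelength CW-NIRS measurement, to a small linear solve. The sketch below is our own illustration only: the extinction coefficients, DPFs and source-detector distance are placeholders, and the paper's EKF refinement of the DPF is not shown.

      import numpy as np

      def mbll_concentration_changes(delta_od, extinction, dpf, distance_cm):
          """delta_od: (2,) optical-density changes at two wavelengths;
          extinction: (2, 2) molar extinction of [HbO2, HbR] at those wavelengths;
          dpf: (2,) differential path length factors; distance_cm: source-detector separation."""
          A = extinction * (dpf[:, None] * distance_cm)   # effective optical path per wavelength
          return np.linalg.solve(A, delta_od)             # [delta HbO2, delta HbR]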

  7. Continuous correction of differential path length factor in near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Talukdar, Tanveer; Moore, Jason H.; Diamond, Solomon G.

    2013-05-01

    In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method.

  8. Speech serial control in healthy speakers and speakers with hypokinetic or ataxic dysarthria: effects of sequence length and practice

    PubMed Central

    Reilly, Kevin J.; Spencer, Kristie A.

    2013-01-01

    The current study investigated the processes responsible for selection of sounds and syllables during production of speech sequences in 10 adults with hypokinetic dysarthria from Parkinson’s disease, five adults with ataxic dysarthria, and 14 healthy control speakers. Speech production data from a choice reaction time task were analyzed to evaluate the effects of sequence length and practice on speech sound sequencing. Speakers produced sequences that were between one and five syllables in length over five experimental runs of 60 trials each. In contrast to the healthy speakers, speakers with hypokinetic dysarthria demonstrated exaggerated sequence length effects for both inter-syllable intervals (ISIs) and speech error rates. Conversely, speakers with ataxic dysarthria failed to demonstrate a sequence length effect on ISIs and were also the only group that did not exhibit practice-related changes in ISIs and speech error rates over the five experimental runs. The exaggerated sequence length effects in the hypokinetic speakers with Parkinson’s disease are consistent with an impairment of action selection during speech sequence production. The absence of length effects in the speakers with ataxic dysarthria is consistent with previous findings that indicate a limited capacity to buffer speech sequences in advance of their execution. In addition, the lack of practice effects in these speakers suggests that learning-related improvements in the production rate and accuracy of speech sequences involve processing by structures of the cerebellum. Together, the current findings inform models of serial control for speech in healthy speakers and support the notion that sequencing deficits contribute to speech symptoms in speakers with hypokinetic or ataxic dysarthria. In addition, these findings indicate that speech sequencing is differentially impaired in hypokinetic and ataxic dysarthria. PMID:24137121

  9. MacWilliams Identity for M-Spotty Weight Enumerator

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Fujiwara, Eiji

    M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
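
    For reference, the Hamming-weight special case mentioned in the last sentence can be written, for a binary linear code C of length n with dual code C^⊥, as

      W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\, W_{C}(x + y,\; x - y),
      \qquad
      W_{C}(x, y) \;=\; \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)},

    where wt(c) is the Hamming weight of codeword c; the paper's contribution is the analogous identity for the m-spotty weight enumerator.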

  10. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    PubMed

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  11. TEST BIAS--VALIDITY OF THE SCHOLASTIC APTITUDE TEST FOR NEGRO AND WHITE STUDENTS IN INTEGRATED COLLEGES.

    ERIC Educational Resources Information Center

    CLEARY, T. ANNE

    For this research, a test was said to be biased for members of a subgroup of the population if, in the prediction of a criterion for which the test was designed, consistent nonzero errors of prediction are made for members of the subgroup. Samples of Negro and white students from three integrated colleges were studied. In the two eastern colleges,…

  12. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  13. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part II—Experimental Implementation

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to the CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features is accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute to the improvement of current standard CMM measuring capabilities. PMID:27754441

  14. Hartman Testing of X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Biskasch, Michael; Zhang, William W.

    2013-01-01

    Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data is measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieves the alignment errors and low order circumferential errors accurately.
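
    One way to picture the centroid analysis described above (a hedged illustration only, not the OSAC algorithm or the paper's equations): low-order circumferential errors appear as low-order Fourier components of the image centroid as a function of slit azimuth, so a small least-squares fit separates a mean term from cos/sin terms.

      import numpy as np

      def fit_low_order(azimuth_rad, centroid):
          """Least-squares fit centroid(theta) ~ a0 + a1*cos(theta) + b1*sin(theta)."""
          A = np.column_stack([np.ones_like(azimuth_rad),
                               np.cos(azimuth_rad), np.sin(azimuth_rad)])
          coef, *_ = np.linalg.lstsq(A, centroid, rcond=None)
          return coef, centroid - A @ coef   # [mean, cos, sin] terms and the residual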

  15. Thermomechanical responses of concrete members strengthened with cfrp sheets

    NASA Astrophysics Data System (ADS)

    Alqurashi, Abdulaziz

    Strengthening structural members means enabling them to carry additional loads. Since the 1990s, many materials and techniques have been established not only to increase member capacity but also to counter deterioration. Deterioration has become one of the largest drivers of maintenance cost. According to the ASCE, 27.1% of all bridges in the United States are deficient. This is because heavy traffic adversely affects structural members and causes their deterioration, a problem that has cost a great deal of money. FRP has been shown to increase member capacity and to overcome some disadvantages such as deterioration, and CFRP sheet has therefore become widely used. However, high temperatures negatively affect the performance of externally bonded CFRP sheets, so the relaxation and flexural performance of members under different temperatures should be investigated. This thesis therefore focuses on analyzing and investigating the performance of strengthened members exposed to elevated temperatures (25 to 175 °C). The experimental program was divided into two main parts. First, 144 strengthened concrete blocks (100 mm x 150 mm x 75 mm) were exposed to elevated temperatures. These blocks fall into two main categories: different CFRP sheet widths and different CFRP sheet lengths. The width category has three types: type 0.25B (25 mm x 100 mm), type 0.5B (50 mm x 100 mm) and type 0.75B (75 mm x 100 mm). The length category has three types: type Le (bonded area of 50 mm by 90 mm), type 1.25Le (50 mm by 125 mm) and type 1.5Le (50 mm by 137 mm). Second, the performance of RC beams exposed to elevated temperatures was studied.

  16. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
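
    A toy version of the "reasonableness check" idea in the paragraph above; the thresholds, units and function name are ours, purely for illustration:

      def flag_flight_plan_edit(old_route_nm, new_route_nm, course_change_deg,
                                max_extra_nm=50.0, max_course_change_deg=90.0):
          """Return warnings if a flight-plan modification looks suspicious."""
          alerts = []
          if new_route_nm - old_route_nm > max_extra_nm:
              alerts.append("route length grew by more than %.0f nm" % max_extra_nm)
          if abs(course_change_deg) > max_course_change_deg:
              alerts.append("course change exceeds %.0f deg" % max_course_change_deg)
          return alerts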

  17. Physician attitudes and practices related to voluntary error and near-miss reporting.

    PubMed

    Smith, Koren S; Harris, Kendra M; Potters, Louis; Sharma, Rajiv; Mutic, Sasa; Gay, Hiram A; Wright, Jean; Samuels, Michael; Ye, Xiaobu; Ford, Eric; Terezakis, Stephanie

    2014-09-01

    Incident learning systems are important tools to improve patient safety in radiation oncology, but physician participation in these systems is poor. To understand reporting practices and attitudes, a survey was sent to staff members of four large academic radiation oncology centers, all of which have in-house reporting systems. Institutional review board approval was obtained to send a survey to employees including physicians, dosimetrists, nurses, physicists, and radiation therapists. The survey evaluated barriers to reporting, perceptions of errors, and reporting practices. The responses of physicians were compared with those of other professional groups. There were 274 respondents to the survey, with a response rate of 81.3%. Physicians and other staff agreed that errors and near-misses were happening in their clinics (93.8% v 88.7%, respectively) and that they have a responsibility to report (97% overall). Physicians were significantly less likely to report minor near-misses (P = .001) and minor errors (P = .024) than other groups. Physicians were significantly more concerned about getting colleagues in trouble (P = .015), liability (P = .009), effect on departmental reputation (P = .006), and embarrassment (P < .001) than their colleagues. Regression analysis identified embarrassment among physicians as a critical barrier. If not embarrassed, participants were 2.5 and 4.5 times more likely to report minor errors and major near-miss events, respectively. All members of the radiation oncology team observe errors and near-misses. Physicians, however, are significantly less likely to report events than other colleagues. There are important, specific barriers to physician reporting that need to be addressed to encourage reporting and create a fair culture around reporting. Copyright © 2014 by American Society of Clinical Oncology.

  18. An Evaluation of the Outcomes of Mutual Health Organizations in Benin

    PubMed Central

    Haddad, Slim; Ridde, Valery; Yacoubou, Ismaelou; Mák, Geneviève; Gbetié, Michel

    2012-01-01

    Background Mutual health organizations (MHO) have been seen as a promising alternative to the fee-based funding model but scientific foundations to support their generalization are still limited. Very little is known about the extent of the impact of MHOs on health-seeking behaviours, quality and costs. Methodology/Principal Findings We present the results of an evaluation of the effects attributable to membership in an MHO in a rural region of Benin. Two prospective studies of users (parturients and hospitalized patients) were conducted on the territory of an inter-mutual consisting of 10 MHOs and as many healthcare centres (one, Ouessé, serving as a referral hospital) and one hospital (Papané). Members and non-members were matched (142 pairs of parturients and 109 triads of hospitalized patients) and multilevel multiple regression was used. Results show that member parturients went to healthcare centres sooner (p = 0.049) and were discharged more quickly after delivery (p = 0.001) than non-members. Length of stay in some cases was longer for hospitalized member parturients (+41%). Being a member did not shorten hospital stay, total length of episode of care, or time between appearance of symptoms and recourse to care. Regarding expenses, member parturients paid one-third less than non-members for a delivery. For hospitalized patients, the average savings for members was around $35 US. Total expenses incurred by patients hospitalized at Papané Hospital were higher than at Ouessé but the two hospitals’ relative advantages were comparable at −36% and −39%, respectively. Conclusion/Significance These results confirm mutual health organizations’ capacity to protect households financially, even if benefits for the poor have not been clearly determined. The search for scientific evidence should continue, to understand their impacts with regard to services obtained by their members. PMID:23077556

  19. Community Health Coalitions in Context: Associations between Geographic Context, Member Type and Length of Membership with Coalition Functions

    ERIC Educational Resources Information Center

    Sánchez, V.; Sanders, M.; Andrews, M. L.; Hale, R.; Carrillo, C.

    2014-01-01

    The coalition literature recognizes context (geography, demographics and history) as a variable of interest, yet few coalition evaluation studies have focused on it. This study explores the association between geographic context and structures (e.g. member type) with functional characteristics (e.g. decision making or levels of conflict) in a…

  20. REACTOR CONTROL DEVICE

    DOEpatents

    Kaufman, H.B.; Weiss, A.A.

    1959-08-18

    A shadow control device for controlling a nuclear reactor is described. The device comprises a series of hollow neutron-absorbing elements arranged in groups, each element having a cavity for substantially housing an adjoining element and a longitudinal member for commonly supporting the groups of elements. Longitudinal actuation of the longitudinal member distributes the elements along its entire length in which position maximum worth is achieved.

  1. Relationship between axial length of the emmetropic eye and the age, body height, and body weight of schoolchildren.

    PubMed

    Selović, Alen; Juresa, Vesna; Ivankovic, Davor; Malcic, Davor; Selović Bobonj, Gordana

    2005-01-01

    This report assesses the relationship of the axial length of emmetropic (without refractive error) eyes to age, height, and weight in 1,600 Croatian schoolchildren. Axial eye lengths were determined by ultrasonic eye biometry (A-scan). Axial length of both eyes increases with age, height, and weight but correlates more closely with height and weight than with age. Boys have a significantly longer axial eye length than girls (P < 0.01). Boys or girls of similar body height and body weight with emmetropic eyes have similar linear measures of anatomic eye structures within their sex, regardless of their age. Body height shows the closest correlation with the growth and development of the emmetropic eye. Copyright 2005 Wiley-Liss, Inc.

  2. The structure of the regulatory region of the rat L1 (L1Rn, long interspersed repeated) DNA family of transposable elements.

    PubMed Central

    Furano, A V; Robb, S M; Robb, F T

    1988-01-01

    Here we report the DNA structure of the left 1.5 kb of two newly isolated full-length members of the rat L1 DNA family (L1Rn, long interspersed repeated DNA). In contrast to earlier isolated rat L1 members, both of these contain promoter-like regions that are most likely full length. In addition, the promoter-like region of both members has undergone a partial tandem duplication. A second internal region of the left end of one of the reported members is also tandemly duplicated. The propensity of the left end of rat L1 elements to undergo this form of genetic rearrangement, as well as other structural features revealed by the present work, is discussed in light of the fact that during evolution the otherwise conserved mammalian L1 DNA families have each acquired completely different promoter-like regions. In an accompanying paper [Nur, I., Pascale, E., and Furano, A. V. (1988) Nucleic Acids Res. 16, submitted], we report that one of the rat promoter-like regions can function as a promoter in rat cells when fused to the Escherichia coli chloramphenicol acetyltransferase gene. PMID:2845369

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jared A.; Hacker, Joshua P.; Monache, Luca Delle

    A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over the open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over the ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this paper we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts. Combining two datasets that provide lateral forcing for the SCM and two methods for determining z0, the time-varying sea-surface roughness length, we conduct four WRF-SCM/DART experiments over the October-December 2006 period. The two methods for determining z0 are the default Fairall-adjusted Charnock formulation in WRF, and using parameter estimation techniques to estimate z0 in DART. Using DART to estimate z0 is found to reduce 1-h forecast errors of wind speed over the Charnock-Fairall z0 ensembles by 4%–22%. However, parameter estimation of z0 does not simultaneously reduce turbulent flux forecast errors, indicating limitations of this approach and the need for new marine ABL parameterizations.
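
    For orientation, a minimal sketch (not the WRF or DART code) of a Charnock-type sea-surface roughness length with the smooth-flow adjustment used in Fairall-style formulations; the Charnock constant, kinematic viscosity, and friction-velocity values are illustrative assumptions:

```python
import numpy as np

def charnock_fairall_z0(u_star, alpha=0.011, g=9.81, nu=1.5e-5):
    """Sea-surface roughness length z0 [m] from friction velocity u_star [m/s].

    z0 = alpha * u_star**2 / g + 0.11 * nu / u_star
    The first term is the classic Charnock relation; the second is the
    smooth-flow correction used in COARE/Fairall-style formulations.
    """
    u_star = np.asarray(u_star, dtype=float)
    return alpha * u_star**2 / g + 0.11 * nu / u_star

# Example: roughness length for a range of friction velocities
for u in (0.1, 0.3, 0.6):
    print(f"u* = {u:.1f} m/s -> z0 = {charnock_fairall_z0(u):.2e} m")
```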

  4. Structural Turnbuckle Bears Compressive or Tensile Loads

    NASA Technical Reports Server (NTRS)

    Bateman, W. A.; Lang, C. H.

    1985-01-01

    Column length adjuster based on turnbuckle principle. Device consists of internally and externally threaded bushing, threaded housing and threaded rod. Housing attached to one part and threaded rod attached to other part of structure. Turning double threaded bushing contracts or extends rod in relation to housing. Once adjusted, bushing secured with jamnuts. Device used for axially loaded members requiring length adjustment during installation.

  5. Fish measurement using Android smart phone: the example of swamp eel

    NASA Astrophysics Data System (ADS)

    Chen, Baisong; Fu, Zhuo; Ouyang, Haiying; Sun, Yingze; Ge, Changshui; Hu, Jing

    Body length and weight are critical physiological parameters for fishes, especially eel-like fishes such as the swamp eel (Monopterus albus). Fast and accurate measurement of body length is important for swamp eel culture as well as for resource investigation and protection. This paper presents an Android smart phone-based photogrammetry technology for measuring and estimating the length and weight of swamp eel. The method exploits the fact that the ratio of the lengths of two objects within an image equals the corresponding ratio in reality in order to measure the length of swamp eels. It then estimates the weight via a pre-built length-weight regression model. Analysis and experimental results indicate that this is a fast and accurate method for length and weight measurement of swamp eel. The cross-validation results show that the RMSE (root-mean-square error) of the total length measurement of swamp eel is 0.4 cm, and the RMSE of the weight estimation is 11 grams.
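
    A minimal sketch of the ratio-and-regression idea described above; the reference length, pixel measurements, and the length-weight coefficients a and b are illustrative assumptions rather than the paper's calibrated values:

```python
def length_from_image(ref_len_cm, ref_len_px, fish_len_px):
    """Estimate real fish length from pixel lengths, assuming the in-image
    length ratio equals the real-world length ratio."""
    return ref_len_cm * fish_len_px / ref_len_px

def weight_from_length(length_cm, a=0.0012, b=3.1):
    """Allometric length-weight model W = a * L**b (coefficients assumed)."""
    return a * length_cm ** b

fish_len = length_from_image(ref_len_cm=10.0, ref_len_px=250.0, fish_len_px=1375.0)
print(f"estimated length: {fish_len:.1f} cm, "
      f"estimated weight: {weight_from_length(fish_len):.0f} g")
```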

  6. Testing of Error-Correcting Sparse Permutation Channel Codes

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill, V.; Orlov, Sergei S.

    2008-01-01

    A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.

  7. Testing the accuracy of redshift-space group-finding algorithms

    NASA Astrophysics Data System (ADS)

    Frederic, James J.

    1995-04-01

    Using simulated redshift surveys generated from a high-resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find 'fingers of god,' while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm as well as the ideal algorithm parameters depends on the purpose for which groups are to be studied. The Huchra & Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius & White algorithm misses high velocity dispersion groups and members but is less likely to include interlopers in its group assignments. Adjusting the parameters of either algorithm results in a trade-off between group accuracy and completeness. In a companion paper we investigate the accuracy of virial mass estimates and clustering properties of groups identified using these algorithms.
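
    As an illustration of the friends-of-friends idea (not either paper's exact implementation), a minimal sketch that links galaxies whose transverse and line-of-sight separations both fall below chosen linking lengths and then extracts the connected groups; the linking-length values are placeholders:

```python
import numpy as np

def friends_of_friends(perp_pos, los_vel, d_link, v_link):
    """Group galaxies with a friends-of-friends criterion.

    perp_pos : (N, 2) projected positions (e.g. Mpc)
    los_vel  : (N,)   line-of-sight velocities (e.g. km/s)
    Two galaxies are 'friends' if their transverse separation <= d_link
    and their velocity separation <= v_link; groups are the connected sets.
    """
    n = len(los_vel)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            d_perp = np.linalg.norm(perp_pos[i] - perp_pos[j])
            if d_perp <= d_link and abs(los_vel[i] - los_vel[j]) <= v_link:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

pos = np.array([[0.0, 0.0], [0.3, 0.1], [5.0, 5.0], [5.2, 5.1]])
vel = np.array([100.0, 180.0, 2000.0, 2050.0])
print(friends_of_friends(pos, vel, d_link=0.5, v_link=300.0))
```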

  8. Objective determination of image end-members in spectral mixture analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.

    1993-01-01

    Spectral mixture analysis has been shown to be a powerful, multifaceted tool for the analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members (image end-members) is selected from an image cube that best accounts for its spectral variance within a constrained, linear least-squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial-and-error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
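
    A minimal sketch of the linear mixing step referenced above (not the authors' objective end-member selection method): per-pixel abundances are estimated by non-negative least squares and renormalized as a rough stand-in for the sum-to-one constraint; the end-member spectra are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel_spectrum, endmembers):
    """Estimate fractional abundances for one pixel.

    endmembers : (n_bands, n_endmembers) matrix of end-member spectra
    Returns abundances that are non-negative and normalized to sum to 1
    (a simple approximation of a fully constrained least-squares model).
    """
    abundances, _residual = nnls(endmembers, pixel_spectrum)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances

# Synthetic example: two end-members, five bands, a 60/40 mixed pixel
E = np.array([[0.1, 0.8],
              [0.2, 0.7],
              [0.4, 0.5],
              [0.6, 0.3],
              [0.9, 0.1]])
pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1] + np.random.normal(0, 0.01, 5)
print(unmix_pixel(pixel, E))
```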

  9. Hand-assisted versus straight laparoscopic sigmoid colectomy on a training simulator: what is the difference? A stepwise comparison of hand-assisted versus straight laparoscopic sigmoid colectomy performance on an augmented reality simulator.

    PubMed

    Leblanc, Fabien; Delaney, Conor P; Ellis, Clyde N; Neary, Paul C; Champagne, Bradley J; Senagore, Anthony J

    2010-12-01

    We hypothesized that simulator-generated metrics and intraoperative errors may be able to capture the technical differences between hand-assisted laparoscopic (HAL) and straight laparoscopic (SL) approaches. Thirty-eight trainees performed two laparoscopic sigmoid colectomies on an augmented reality simulator, randomly starting with either the SL (n = 19) or the HAL (n = 19) approach. The two approaches were compared according to simulator-generated metrics, and intraoperative errors were collected by faculty. Sixty-four percent of surgeons were experienced (>50 procedures) with open colon surgery. Fifty-five percent and 69% of surgeons were inexperienced (<10 procedures) with SL and HAL colon surgery, respectively. Time (P < 0.001), path length (P < 0.001), and smoothness (P < 0.001) were lower with the HAL approach. Operative times for sigmoid and splenic flexure mobilization and for the colorectal anastomosis were significantly shorter with the HAL approach. Time to control the vascular pedicle was similar between the two approaches. Error rates were similar between the two approaches. Operative time, path length, and smoothness correlated directly with the error rate for the HAL approach. In contrast, error rate inversely correlated with operative time for the SL approach. The HAL approach to sigmoid colectomy accelerated colonic mobilization and anastomosis. The difference in how the two laparoscopic approaches correlate with error rates suggests that different skills are needed to perform HAL and SL sigmoid colectomy. These findings may explain the preference of some surgeons for a HAL approach early in the learning of laparoscopic colorectal surgery.

  10. Magnetic antenna using metallic glass

    NASA Technical Reports Server (NTRS)

    Desch, Michael D. (Inventor); Farrell, William M. (Inventor); Houser, Jeffrey G. (Inventor)

    1996-01-01

    A lightweight search-coil antenna or sensor assembly for detecting magnetic fields and including a multi-turn electromagnetic induction coil wound on a spool type coil form through which is inserted an elongated coil loading member comprised of metallic glass material wrapped around a dielectric rod. The dielectric rod consists of a plastic or a wooden dowel having a length which is relatively larger than its thickness so as to provide a large length-to-diameter ratio. A tri-axial configuration includes a housing in which is located three substantially identical mutually orthogonal electromagnetic induction coil assemblies of the type described above wherein each of the assemblies include an electromagnetic coil wound on a dielectric spool with an elongated metallic glass coil loading member projecting therethrough.

  11. Improved cutback method measuring beat-length for high-birefringence optical fiber by fitting data of photoelectric signal

    NASA Astrophysics Data System (ADS)

    Shi, Zhi-Dong; Lin, Jian-Qiang; Bao, Huan-Huan; Liu, Shu; Xiang, Xue-Nong

    2008-03-01

    A photoelectric measurement system for measuring the beat length of birefringent fiber was set up, including a rotating-wave-plate polarimeter using a single photodiode. Two improved cutback methods, suitable for measuring beat lengths in the millimeter range in high-birefringence fiber, are proposed through data-processing techniques. The cut length need not be restricted to less than one centimeter, so an automatic cleaving machine can be used freely, avoiding the careful operation of a manual cleaving blade with its low efficiency and poor success rate. The first method fits the data to a sawtooth function of the trial beat length by the criterion of minimum squared deviation, without special limitation on the cut length. The second method applies linear fitting within divided length ranges; the only restriction is that the increment between different cut lengths be less than one beat length. Experiments were carried out on a section of holey high-birefringence fiber with both methods. The detection error of the beat length is discussed and the advantages of the methods are compared.
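
    A minimal sketch of the first (sawtooth-fitting) idea under simplifying assumptions: the detected quantity is modeled as a sawtooth of the cut length with period equal to the beat length, and a grid search over trial beat lengths minimizes the squared deviation; the data are synthetic and the model is only a stand-in for the authors' signal processing:

```python
import numpy as np

def sawtooth(cut_lengths, beat_length):
    """Normalized sawtooth in [0, 1): fractional position within one beat."""
    return np.mod(cut_lengths / beat_length, 1.0)

def fit_beat_length(cut_lengths, measured, trial_beats):
    """Return the trial beat length minimizing the summed squared deviation."""
    errors = [np.sum((measured - sawtooth(cut_lengths, b)) ** 2)
              for b in trial_beats]
    return trial_beats[int(np.argmin(errors))]

# Synthetic data: true beat length 3.2 mm, cut lengths in mm
cuts = np.linspace(0.0, 40.0, 60)
data = sawtooth(cuts, 3.2) + np.random.normal(0, 0.02, cuts.size)
trials = np.arange(2.0, 5.0, 0.005)
print(f"estimated beat length: {fit_beat_length(cuts, data, trials):.3f} mm")
```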

  12. Zn Coordination Chemistry:  Development of Benchmark Suites for Geometries, Dipole Moments, and Bond Dissociation Energies and Their Use To Test and Validate Density Functionals and Molecular Orbital Theory.

    PubMed

    Amin, Elizabeth A; Truhlar, Donald G

    2008-01-01

    We present nonrelativistic and relativistic benchmark databases (obtained by coupled cluster calculations) of 10 Zn-ligand bond distances, 8 dipole moments, and 12 bond dissociation energies in Zn coordination compounds with O, S, NH3, H2O, OH, SCH3, and H ligands. These are used to test the predictions of 39 density functionals, Hartree-Fock theory, and seven more approximate molecular orbital theories. In the nonrelativistic case, the M05-2X, B97-2, and mPW1PW functionals emerge as the most accurate ones for this test data, with unitless balanced mean unsigned errors (BMUEs) of 0.33, 0.38, and 0.43, respectively. The best local functionals (i.e., functionals with no Hartree-Fock exchange) are M06-L and τ-HCTH with BMUEs of 0.54 and 0.60, respectively. The popular B3LYP functional has a BMUE of 0.51, only slightly better than the value of 0.54 for the best local functional, which is less expensive. Hartree-Fock theory itself has a BMUE of 1.22. The M05-2X functional has a mean unsigned error of 0.008 Å for bond lengths, 0.19 D for dipole moments, and 4.30 kcal/mol for bond energies. The X3LYP functional has a smaller mean unsigned error (0.007 Å) for bond lengths but has mean unsigned errors of 0.43 D for dipole moments and 5.6 kcal/mol for bond energies. The M06-2X functional has a smaller mean unsigned error (3.3 kcal/mol) for bond energies but has mean unsigned errors of 0.017 Å for bond lengths and 0.37 D for dipole moments. The best of the semiempirical molecular orbital theories are PM3 and PM6, with BMUEs of 1.96 and 2.02, respectively. The ten most accurate functionals from the nonrelativistic benchmark analysis are then tested in relativistic calculations against new benchmarks obtained with coupled-cluster calculations and a relativistic effective core potential, resulting in M05-2X (BMUE = 0.895), PW6B95 (BMUE = 0.90), and B97-2 (BMUE = 0.93) as the top three functionals. We find significant relativistic effects (∼0.01 Å in bond lengths, ∼0.2 D in dipole moments, and ∼4 kcal/mol in Zn-ligand bond energies) that cannot be neglected for accurate modeling, but the same density functionals that do well in all-electron nonrelativistic calculations do well with relativistic effective core potentials. Although most tests are carried out with augmented polarized triple-ζ basis sets, we also carried out some tests with an augmented polarized double-ζ basis set, and we found, on average, that with the smaller basis set DFT has no loss in accuracy for dipole moments and only ∼10% less accurate bond lengths.

  13. Large diameter lasing tube cooling arrangement

    DOEpatents

    Hall, Jerome P [Livermore, CA; Alger, Terry W [Tracy, CA; Anderson, Andrew T [Livermore, CA; Arnold, Phillip A [Livermore, CA

    2004-05-18

    A cooling structure (16) for use inside a ceramic cylindrical tube (11) of a metal vapor laser (10) to cool the plasma in the tube (11), the cooling structure (16) comprising a plurality of circular metal members (17, 31) and mounting members (18, 34) that position the metal members (17, 31) coaxially in the tube (11) to form an annular lasing volume, with the metal members (17, 31) being axially spaced from each other along the length of the tube (11) to prevent the metal members from shorting out the current flow through the plasma in the tube (11) and to provide spaces through which the heat from localized hot spots in the plasma may radiate to the other side of the tube (11).

  15. On the Quality of Point-Clouds Derived from Sfm-Photogrammetry Applied to UAS Imagery

    NASA Astrophysics Data System (ADS)

    Carbonneau, P.; James, T.

    2014-12-01

    Structure from Motion photogrammetry (SfM-photogrammetry) has recently appeared in the environmental sciences as an impressive tool allowing for the creation of topographic data from unstructured imagery. Several authors have tested the performance of SfM-photogrammetry against that of TLS or dGPS. Whilst the initial results were very promising, there is currently a growing awareness that systematic deformations occur in DEMs and point-clouds derived from SfM-photogrammetry. Notably, some authors have identified a systematic doming, manifest as an error increasing with distance from the model centre. Simulation studies have confirmed that this error is due to errors in the calibration of camera distortions. This work aims to further investigate these effects in the presence of real data. We start with a dataset of 220 images acquired from a sUAS. After obtaining an initial self-calibration of the camera lens with Agisoft Photoscan, our method consists in applying systematic perturbations to two key lens parameters: the focal length and the k1 distortion parameter. For each perturbation, a point-cloud was produced and compared to LiDAR data. After deriving the mean and standard deviation of the error residuals (ɛ), a second-order polynomial surface was fitted to the error point-cloud and the peak ɛ defined as the mathematical extremum of this surface. The results are presented in figure 1, which shows that lens perturbations can induce a range of errors with systematic behaviours. Peak ɛ is primarily controlled by k1, with a secondary control exerted by the focal length. These results allow us to state that, to limit the peak ɛ to 10 cm, the k1 parameter must be calibrated to within 0.00025 and the focal length to within 2.5 pixels (≈10 µm). This level of calibration accuracy can only be achieved with proper design of the image acquisition and control network geometry. Our main point is therefore that SfM is not a bypass to a rigorous and well-informed photogrammetric approach. Users of SfM-photogrammetry will still require basic training and knowledge in the fundamentals of photogrammetry. This is especially true for applications where very small topographic changes need to be detected or where gradient-sensitive processes need to be modelled.
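
    A minimal sketch of fitting a second-order polynomial surface to spatially distributed error residuals and locating its extremum analytically; the synthetic residuals and the doming shape are assumptions for illustration only:

```python
import numpy as np

def fit_quadratic_surface(x, y, eps):
    """Least-squares fit of eps ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, eps, rcond=None)
    return coeffs

def surface_extremum(coeffs):
    """Solve grad = 0 for the fitted surface; returns (x0, y0, peak value)."""
    a, b, c, d, e, f = coeffs
    x0, y0 = np.linalg.solve([[2 * d, e], [e, 2 * f]], [-b, -c])
    peak = a + b * x0 + c * y0 + d * x0**2 + e * x0 * y0 + f * y0**2
    return x0, y0, peak

# Synthetic "doming" residuals peaking near the model centre
rng = np.random.default_rng(0)
x = rng.uniform(-100, 100, 500)
y = rng.uniform(-100, 100, 500)
eps = 0.12 - 1e-5 * (x**2 + y**2) + rng.normal(0, 0.01, 500)
print(surface_extremum(fit_quadratic_surface(x, y, eps)))
```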

  16. Verifying and Postprocessing the Ensemble Spread-Error Relationship

    NASA Astrophysics Data System (ADS)

    Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli

    2013-04-01

    With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Secondly, does the variable dispersion of an ensemble relate to a variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit provides a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member Western US temperature forecasts for the U.S. Army Test and Evaluation Command, and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems utilize a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework, leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships. We describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecasts, and present verification statistics using error-spread metrics, along with figures from operational ensemble forecasts before and after calibration.
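
    A minimal sketch of the basic spread-error diagnostic discussed above (not the authors' new metrics): for each forecast case the ensemble spread is compared with the magnitude of the ensemble-mean error, and their Pearson correlation is computed; the data are synthetic:

```python
import numpy as np

def spread_error_correlation(ensemble, observations):
    """Pearson correlation between ensemble spread and ensemble-mean error.

    ensemble     : (n_cases, n_members) forecast array
    observations : (n_cases,) verifying observations
    """
    spread = ensemble.std(axis=1, ddof=1)
    error = np.abs(ensemble.mean(axis=1) - observations)
    return np.corrcoef(spread, error)[0, 1]

rng = np.random.default_rng(1)
n_cases, n_members = 200, 30
true_sigma = rng.uniform(0.5, 3.0, n_cases)           # flow-dependent uncertainty
obs = rng.normal(0.0, true_sigma)                      # "truth" drawn per case
ens = rng.normal(0.0, true_sigma[:, None], (n_cases, n_members))
print(f"spread-error correlation: {spread_error_correlation(ens, obs):.2f}")
```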

  17. Behaviour of wrapped cold-formed steel columns under different loading conditions

    NASA Astrophysics Data System (ADS)

    Baabu, B. Hari; Sreenath, S.

    2017-07-01

    The use of Cold Formed Steel (CFS) sections as structural members is widely accepted because of their light weight. However, the load-carrying capacity of these sections is lower than that of hot-rolled sections. This study analyzes the possibility of strengthening cold-formed members by wrapping them with Glass Fiber Reinforced Polymer (GFRP) laminates. Light gauge steel columns of cross-sectional dimensions 100 mm x 50 mm x 3.15 mm were taken for this study. The effective length of the section is about 750 mm. A total of 8 specimens, including the control specimen, were tested under axial and eccentric loading. The columns were tested with both ends hinged. For both loading cases, the buckling behaviour, ultimate load-carrying capacity and load-deflection characteristics of the CFS columns were analyzed. The GFRP laminates were wrapped on the columns in three different ways: wrapping the outer surface of the web and flange throughout the length of the specimen, wrapping the outer surface of the web alone throughout the length of the specimen, and wrapping the outer surface of the web and flange over the upper half length of the specimen where buckling is expected. For both loading cases, the results indicated that the column wrapped on the outer surface of the web and flange throughout the length of the specimen provided the best strength.

  18. Interprofessional conflict and medical errors: results of a national multi-specialty survey of hospital residents in the US.

    PubMed

    Baldwin, Dewitt C; Daugherty, Steven R

    2008-12-01

    Clear communication is considered the sine qua non of effective teamwork. Breakdowns in communication resulting from interprofessional conflict are believed to potentiate errors in the care of patients, although there is little supportive empirical evidence. In 1999, we surveyed a national, multi-specialty sample of 6,106 residents (64.2% response rate). Three questions inquired about "serious conflict" with another staff member. Residents were also asked whether they had made a "significant medical error" (SME) during their current year of training, and whether this resulted in an "adverse patient outcome" (APO). Just over 20% (n = 722) reported "serious conflict" with another staff member. Ten percent involved another resident, 8.3% supervisory faculty, and 8.9% nursing staff. Of the 2,813 residents reporting no conflict with other professional colleagues, 669, or 23.8%, recorded having made an SME, with 3.4% APOs. By contrast, the 523 residents who reported conflict with at least one other professional had 36.4% SMEs and 8.3% APOs. For the 187 reporting conflict with two or more other professionals, the SME rate was 51%, with 16% APOs. The empirical association between interprofessional conflict and medical errors is both alarming and intriguing, although the exact nature of this relationship cannot currently be determined from these data. Several theoretical constructs are advanced to assist our thinking about this complex issue.

  19. Continuous equal channel angular pressing

    DOEpatents

    Zhu, Yuntian T.; Lowe, Terry C.; Valiev, Ruslan Z.; Raab, Georgy J.

    2006-12-26

    An apparatus that continuously processes a metal workpiece without substantially altering its cross section includes a wheel member having an endless circumferential groove, and a stationary constraint die that surrounds the wheel member, covers most of the length of the groove, and forms a passageway with the groove. The passageway has a rectangular shaped cross section. An abutment member projects from the die into the groove and blocks one end of the passageway. The wheel member rotates relative to the die in the direction toward the abutment member. An output channel in the die adjacent the abutment member has substantially the same cross section as the passageway. A metal workpiece is fed through an input channel into the passageway and carried in the groove by frictional drag in the direction towards the abutment member, and is extruded through the output channel without any substantial change in cross section.

  20. Saturation of the anisoplanatic error in horizontal imaging scenarios

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-09-01

    We evaluate the piston-removed anisoplanatic error for smaller apertures imaging over long horizontal paths. Previous works have shown that the piston and tilt compensated anisoplanatic error saturates to values less than one squared radian. Under these conditions the definition of the isoplanatic angle is unclear. These works focused on nadir pointing telescope systems with aperture sizes between five meters and one half meter. We directly extend this work to horizontal imaging scenarios with aperture sizes smaller than one half meter. We assume turbulence is constant along the imaging path and that the ratio of the aperture size to the atmospheric coherence length is on the order of unity.

  1. Medication errors versus time of admission in a subpopulation of stroke patients undergoing inpatient rehabilitation: complications and considerations.

    PubMed

    Pitts, Eric P

    2011-01-01

    This study looked at the medication ordering error frequency and the length of inpatient hospital stay in a subpopulation of stroke patients (n = 60) as a function of time of patient admission to an inpatient rehabilitation hospital service. A total of 60 inpatient rehabilitation patients, 30 arriving before 4 pm and 30 arriving after 4 pm, with an admitting diagnosis of stroke, were randomly selected from a larger sample (N = 426). There was a statistically significant increase in medication ordering errors and in the number of inpatient rehabilitation hospital days in the group of patients who arrived after 4 pm.

  2. My copilot is a nurse--using crew resource management in the OR.

    PubMed

    Powell, Stephen M; Hill, Ruth Kimberly

    2006-01-01

    Crew resource management (CRM) has been used for more than 20 years in the aviation industry to teach individual error countermeasures by developing nontechnical (ie, cognitive, social) skills based on the observed traits of successful individuals and crews. The health care industry began to investigate aviation CRM after the Institute of Medicine's report, To Err is Human: Building a Safer Health System, recommended that medicine adopt aviation's approach to safety and error management. Initial results of implementing CRM in health care arenas have demonstrated reduced adverse outcomes, reduced errors, reduced length of stay, improved nurse retention, and changed attitudes and behaviors toward teamwork.

  3. Connector Mechanism Has Smaller Stroke

    NASA Technical Reports Server (NTRS)

    Milam, M. Bruce

    1992-01-01

    System for connecting electrical and/or fluid lines includes mechanism reducing length of stroke necessary to make or break connections. Feature enables connection and disconnection in confined space, and compensates for misalignment between connectors. Connector in active member moves upward at twice the speed of downward stroke of passive member. Stroke amplified within connector system. Applications include connections between modular electronic units, coupled vehicles, and hydraulic systems.

  4. Influence of OPD in wavelength-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Tian, Ailing; Liu, Bingcai; Dang, Juanjuan

    2009-12-01

    Phase-shifting interferometry is a powerful tool for high-accuracy optical measurement. It operates by changing the optical path length in the reference arm or the test arm, which in practice is done by moving an optical element; this becomes problematic when the element is very large and heavy. To solve this problem, wavelength-shifting interferometry was put forward. In wavelength-shifting interferometry, the phase-shifting angle is achieved by changing the wavelength of the optical source, and it is determined by the wavelength and by the OPD (Optical Path Difference) between the test and reference wavefronts. The OPD is therefore an important factor in the measurement results. In practice, because positional errors and profile errors of the optical element under test exist, the phase-shifting angle differs from test point to test point during wavelength scanning; this introduces phase-shifting angle errors and, in turn, errors in the measured optical surface. To analyse the influence of the OPD on the surface error, the relation between surface error and OPD was studied. By simulation, the relation between phase-shifting error and OPD was established, and from this analysis an error compensation method was put forward. After error compensation, the measurement results can be improved to a great extent.
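
    A minimal sketch of the relation underlying the discussion above, under the usual assumption that the interferometric phase is phi = 2*pi*OPD/lambda, so a wavelength shift produces a phase shift proportional to the OPD; all numerical values are illustrative:

```python
import numpy as np

def phase_shift(opd_m, lam_m, d_lam_m):
    """Phase shift (radians) produced by a wavelength change d_lam at a given OPD.

    phi = 2*pi*OPD/lambda  =>  d_phi ~= 2*pi*OPD*d_lam / lambda**2
    """
    return 2 * np.pi * opd_m * d_lam_m / lam_m**2

lam = 632.8e-9           # nominal source wavelength (assumed)
d_lam = 0.001e-9         # wavelength scan step chosen for ~pi/2 steps (assumed)
nominal_opd = 0.10       # nominal OPD between arms, in metres (assumed)
surface_error = 50e-9    # local OPD deviation caused by a surface/position error

nominal = phase_shift(nominal_opd, lam, d_lam)
actual = phase_shift(nominal_opd + surface_error, lam, d_lam)
print(f"nominal phase step: {nominal:.4f} rad")
print(f"phase-step error at this point: {actual - nominal:.2e} rad")
```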

  6. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then introduce a method that uses the solutions of these equations to check for errors in the input nodes of the decoder. To further correct the checked errors, we modify the probability messages of the error nodes with constant values according to the maximization principle. Because the error-checking equations admit multiple solutions, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than some existing decoding algorithms with the same code length. PMID:25540813

  7. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

    Capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) discussed in 16-page report. Because data transmission prevalently uses bytes in multiples of 8 bits, report primarily concerned with cases in which both block length and number of redundant bits (check bits used for error detection) included in each block are multiples of 8 bits.
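
    For illustration, a minimal bitwise CRC sketch; the CRC-16/CCITT-FALSE parameters used here are an assumption (the report does not prescribe a particular polynomial) and simply show check bits being appended and verified in multiples of 8 bits:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with CCITT-FALSE parameters (poly 0x1021, init 0xFFFF)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Sender appends 16 check bits (two bytes, a multiple of 8) to the data block
block = b"example data block"
sent = block + crc16_ccitt(block).to_bytes(2, "big")

# Receiver recomputes the CRC over the data portion and compares
data, check = sent[:-2], int.from_bytes(sent[-2:], "big")
print("no error detected" if crc16_ccitt(data) == check else "error detected")
```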

  8. Investigation of methods for estimating hand bone dimensions using X-ray hand anthropometric data.

    PubMed

    Kong, Yong-Ku; Freivalds, Andris; Kim, Dae-Min; Chang, Joonho

    2017-06-01

    This study examined two conversion methods, M1 and M2, for predicting finger/phalange bone lengths from finger/phalange surface lengths. Forty-one Korean college students (25 males and 16 females) were recruited, and their finger/phalange surface lengths, bone lengths and grip strengths were measured using a vernier caliper, an X-ray generator and a double-handle force measurement system, respectively. M1 and M2 were defined as formulas that estimate finger/phalange bone lengths from one dimension (surface hand length) and from four finger dimensions (surface finger lengths), respectively. After conversion, the mean estimation error of M1 was 1.22 mm, smaller than that of M2 (1.29 mm). The bone lengths estimated by M1 (mean r = 0.81) were more highly correlated with the measured bone lengths than those estimated by M2 (0.79). The M1 method is therefore recommended in the present study, based on its simplicity and accuracy of conversion.

  9. A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.

    PubMed

    Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W

    2012-09-01

    In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. ©2012 The British Psychological Society.

  10. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. Mathematically, the number of samples can be minimized by eliminating the redundant equations among those configured from the experimental data array. The section lengths of the artefact are measured at the arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer; in this paper, the latter is selected. With spline interpolation, the error compensation curve can then be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
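
    A minimal sketch of the final interpolation step: given measurement errors determined at a set of calibration positions, a spline yields a continuous error-compensation curve that can be subtracted from subsequent readings; the positions and error values are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Calibration positions along the axis (mm) and the errors determined there (um)
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
errors_um = np.array([0.0, 1.2, 2.1, 1.8, 0.9, 0.3])

compensation = CubicSpline(positions, errors_um)

# Compensate a raw CMM reading at an arbitrary position
raw_position_mm = 235.0
err_um = float(compensation(raw_position_mm))
corrected_mm = raw_position_mm - err_um * 1e-3
print(f"estimated error at {raw_position_mm} mm: {err_um:.2f} um")
print(f"corrected reading: {corrected_mm:.5f} mm")
```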

  11. Sleep and errors in a group of Australian hospital nurses at work and during the commute.

    PubMed

    Dorrian, Jillian; Tolley, Carolyn; Lamond, Nicole; van den Heuvel, Cameron; Pincombe, Jan; Rogers, Ann E; Drew, Dawson

    2008-09-01

    There is a paucity of information regarding Australian nurses' sleep and fatigue levels, and whether these result in impairment. Forty-one Australian hospital nurses completed daily logbooks for one month, recording work hours, sleep, sleepiness, stress, errors, near errors and observed errors (made by others). Nurses reported exhaustion, stress and struggling to remain awake (STR) at work during one in three shifts. Sleep was significantly reduced on workdays in general, and on workdays when an error was reported, relative to days off. The primary predictor of error was STR, followed by stress. The primary predictor of extreme drowsiness during the commute was also STR, followed by exhaustion and consecutive shifts. In turn, STR was predicted by exhaustion, prior sleep and shift length. These findings highlight the need for further attention to these issues to optimise the safety of nurses and patients in our hospitals, and of the community at large on our roads.

  12. How the credit assignment problems in motor control could be solved after the cerebellum predicts increases in error.

    PubMed

    Verduzco-Flores, Sergio O; O'Reilly, Randall C

    2015-01-01

    We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations.

  13. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis, from the radar system to topographic height errors, for bi-static single-pass SAR interferometry with a satellite tandem pair. Because orbital dynamics cause the baseline length and baseline orientation to evolve spatially and temporally, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations for the height and planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and an X-band SAR. Results from our model indicate that global DTED level 3 accuracy can be achieved.

  14. How the credit assignment problems in motor control could be solved after the cerebellum predicts increases in error

    PubMed Central

    Verduzco-Flores, Sergio O.; O'Reilly, Randall C.

    2015-01-01

    We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations. PMID:25852535

  15. At least some errors are randomly generated (Freud was wrong)

    NASA Technical Reports Server (NTRS)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three-digit numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness-of-fit tests for a Poisson distribution of the number of errors per 50-trial interval and for an exponential distribution of the lengths of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task-driven factors and producing exogenous errors. Some errors, at least, are the result of constant-probability generating mechanisms with an error rate idiosyncratically determined for each subject.
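
    A minimal sketch of the kind of goodness-of-fit checks described above, applied to synthetic data: a chi-square test of Poisson block counts and a Kolmogorov-Smirnov test of exponential inter-error intervals (with the scale estimated from the data, so the p-value is only approximate); nothing here reproduces the paper's actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
counts = rng.poisson(2.0, size=20)          # errors per 50-trial block (synthetic)
intervals = rng.exponential(25.0, size=60)  # trials between successive errors

# Chi-square goodness of fit against a Poisson with the sample mean
lam = counts.mean()
k = np.arange(0, counts.max() + 1)
expected = stats.poisson.pmf(k, lam) * counts.size
observed = np.array([(counts == v).sum() for v in k])
expected[-1] += counts.size - expected.sum()   # lump the tail mass into the last bin
chi2, p_pois = stats.chisquare(observed, expected, ddof=1)

# KS test of the intervals against an exponential fitted by its mean
p_exp = stats.kstest(intervals, "expon", args=(0, intervals.mean())).pvalue

print(f"Poisson fit p = {p_pois:.3f}, exponential fit p = {p_exp:.3f}")
```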

  16. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors.

    PubMed

    Wang, Shuang; Geng, Yunhai; Jin, Rongyu

    2015-12-12

    In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of the optical system resulting from the on-orbit thermal environment were studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the Extended Kalman Filter (EKF) and Least Squares Method (LSM), is presented. The former is used to calibrate the principal point drift, focal length error and distortions of the optical system, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of the star image point position as influenced by the above errors is greatly improved, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is also improved markedly.

  17. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) Model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution, short-range ensemble forecasts using meteorological data (temperature) generated by nine configurations of the WRF model. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive Probability Density Function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test we chose a case with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in hospitalizations of patients with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and led to injuries and a direct threat to life. A comparison of the meteorological data from the ensemble system with data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. Then, the data obtained from the single ensemble members and the median from the WRF BMA system are evaluated on the basis of deterministic statistical errors: the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). To evaluate the probabilistic data, the Brier Score (BS) and the Continuous Ranked Probability Score (CRPS) are used. Finally, a comparison between the BMA-calibrated data and the data from the ensemble members is displayed.
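
    A minimal sketch of the BMA predictive PDF described above: the calibrated forecast density is a weighted mixture of member-specific densities, here taken as Gaussians centred on bias-corrected member forecasts; the weights, biases and spread are illustrative assumptions rather than values estimated by the usual EM training:

```python
import numpy as np
from scipy.stats import norm

def bma_pdf(y, member_forecasts, weights, biases, sigma):
    """BMA predictive density: p(y) = sum_k w_k * N(y; f_k - b_k, sigma^2)."""
    y = np.atleast_1d(y)
    dens = np.zeros_like(y, dtype=float)
    for f_k, w_k, b_k in zip(member_forecasts, weights, biases):
        dens += w_k * norm.pdf(y, loc=f_k - b_k, scale=sigma)
    return dens

forecasts = np.array([29.5, 31.0, 30.2, 28.8])   # member temperatures (deg C)
weights = np.array([0.4, 0.3, 0.2, 0.1])         # member skill weights (sum to 1)
biases = np.array([0.5, -0.2, 0.1, 0.0])         # additive member biases
temps = np.linspace(25, 35, 5)
print(bma_pdf(temps, forecasts, weights, biases, sigma=1.2))
```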

  18. Minimum probe length for unique identification of all open reading frames in a microbial genome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokhansanj, B A; Ng, J; Fitch, J P

    2000-03-05

    In this paper, we determine the minimum hybridization probe length needed to uniquely identify at least 95% of the open reading frames (ORFs) in an organism. We analyze the whole genome sequences of 17 species: 11 bacteria, 4 archaea, and 2 eukaryotes. We also present a mathematical model for the minimum probe length based on the assumption that all ORFs are random, of constant length, and contain an equal distribution of bases. The model accurately predicts the minimum probe length for all species, but it incorrectly predicts that all ORFs may be uniquely identified. However, a probe length of just 9 bases is adequate to identify over 95% of the ORFs for all 15 prokaryotic species we studied. Using a minimum probe length, while accepting that some ORFs may not be identified and that data will be lost due to hybridization error, may result in significant savings in microarray and oligonucleotide probe design.
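
    A minimal sketch of the random-ORF model in that spirit, at toy scale: generate random ORFs, index all probe-length k-mers, and count how many ORFs contain at least one k-mer unique to them; the number of ORFs, ORF length and probe length are placeholders, not the paper's values:

```python
import random
from collections import Counter

def fraction_uniquely_identifiable(n_orfs=500, orf_len=900, k=9, seed=0):
    """Fraction of random ORFs containing at least one k-mer found in no other ORF."""
    rng = random.Random(seed)
    orfs = ["".join(rng.choice("ACGT") for _ in range(orf_len)) for _ in range(n_orfs)]

    kmer_counts = Counter()
    per_orf_kmers = []
    for orf in orfs:
        kmers = {orf[i:i + k] for i in range(orf_len - k + 1)}
        per_orf_kmers.append(kmers)
        kmer_counts.update(kmers)  # count each distinct k-mer once per ORF

    unique = sum(1 for kmers in per_orf_kmers
                 if any(kmer_counts[km] == 1 for km in kmers))
    return unique / n_orfs

print(f"identifiable fraction (k=9, toy genome): {fraction_uniquely_identifiable():.2f}")
```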

  19. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  20. Medical students' experiences with medical errors: an analysis of medical student essays.

    PubMed

    Martinez, William; Lo, Bernard

    2008-07-01

    This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.

  1. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄  =  −8.34, SE  =  2.39) and white perch (x̄  = 14.48, SE  =  3.99) but not striped bass (x̄  =  3.71, SE  =  2.58) or channel catfish (x̄  = 3.97, SE  =  5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of bias are apparent when files are processed manually and can be filtered out when producing automated software estimates. Multibeam sonar estimates of fish size should be useful for research and management if these potential sources of bias and imprecision are addressed.

  2. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    NASA Astrophysics Data System (ADS)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses of sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques can prevent overfitting, although the best performance is achieved when correlation is added to the super-ensemble structure through a simple spatial filter applied after the linear regression. Our results show that super-ensemble performance depends on the selection of an unbiased operator and on the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the Root Mean Square Error (RMSE) of the MMSE analysis evaluated against observed satellite SST. The lowest analysis RMSE results from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset containing the higher-quality ensemble members), and the least-squares algorithm with a posteriori spatial filtering.
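
    A minimal sketch (Python) of the multi-linear regression step only is given below; the EOF filtering and the a posteriori spatial filter are omitted, and the ensemble-member fields and satellite observations are random placeholders rather than the MMSE dataset:

        import numpy as np

        rng = np.random.default_rng(0)
        n_train, n_members = 15, 6   # 15-day training period, 6 ensemble members (illustrative)
        members = rng.normal(20.0, 1.0, (n_train, n_members))            # member SST at one grid point
        observed = members.mean(axis=1) + rng.normal(0.0, 0.2, n_train)  # stand-in for satellite SST

        # Least-squares weights (plus intercept) for the super-ensemble combination.
        A = np.column_stack([members, np.ones(n_train)])
        weights, *_ = np.linalg.lstsq(A, observed, rcond=None)

        # Apply the trained combination to a new day of member forecasts.
        new_day = rng.normal(20.0, 1.0, n_members)
        sse_analysis = new_day @ weights[:-1] + weights[-1]
        print(weights.round(3), round(float(sse_analysis), 2))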

  3. Neural markers of errors as endophenotypes in neuropsychiatric disorders

    PubMed Central

    Manoach, Dara S.; Agam, Yigal

    2013-01-01

    Learning from errors is fundamental to adaptive human behavior. It requires detecting errors, evaluating what went wrong, and adjusting behavior accordingly. These dynamic adjustments are at the heart of behavioral flexibility and accumulating evidence suggests that deficient error processing contributes to maladaptively rigid and repetitive behavior in a range of neuropsychiatric disorders. Neuroimaging and electrophysiological studies reveal highly reliable neural markers of error processing. In this review, we evaluate the evidence that abnormalities in these neural markers can serve as sensitive endophenotypes of neuropsychiatric disorders. We describe the behavioral and neural hallmarks of error processing, their mediation by common genetic polymorphisms, and impairments in schizophrenia, obsessive-compulsive disorder, and autism spectrum disorders. We conclude that neural markers of errors meet several important criteria as endophenotypes including heritability, established neuroanatomical and neurochemical substrates, association with neuropsychiatric disorders, presence in syndromally-unaffected family members, and evidence of genetic mediation. Understanding the mechanisms of error processing deficits in neuropsychiatric disorders may provide novel neural and behavioral targets for treatment and sensitive surrogate markers of treatment response. Treating error processing deficits may improve functional outcome since error signals provide crucial information for flexible adaptation to changing environments. Given the dearth of effective interventions for cognitive deficits in neuropsychiatric disorders, this represents a potentially promising approach. PMID:23882201

  4. Neural markers of errors as endophenotypes in neuropsychiatric disorders.

    PubMed

    Manoach, Dara S; Agam, Yigal

    2013-01-01

    Learning from errors is fundamental to adaptive human behavior. It requires detecting errors, evaluating what went wrong, and adjusting behavior accordingly. These dynamic adjustments are at the heart of behavioral flexibility and accumulating evidence suggests that deficient error processing contributes to maladaptively rigid and repetitive behavior in a range of neuropsychiatric disorders. Neuroimaging and electrophysiological studies reveal highly reliable neural markers of error processing. In this review, we evaluate the evidence that abnormalities in these neural markers can serve as sensitive endophenotypes of neuropsychiatric disorders. We describe the behavioral and neural hallmarks of error processing, their mediation by common genetic polymorphisms, and impairments in schizophrenia, obsessive-compulsive disorder, and autism spectrum disorders. We conclude that neural markers of errors meet several important criteria as endophenotypes including heritability, established neuroanatomical and neurochemical substrates, association with neuropsychiatric disorders, presence in syndromally-unaffected family members, and evidence of genetic mediation. Understanding the mechanisms of error processing deficits in neuropsychiatric disorders may provide novel neural and behavioral targets for treatment and sensitive surrogate markers of treatment response. Treating error processing deficits may improve functional outcome since error signals provide crucial information for flexible adaptation to changing environments. Given the dearth of effective interventions for cognitive deficits in neuropsychiatric disorders, this represents a potentially promising approach.

  5. Automatic Alignment of Displacement-Measuring Interferometer

    NASA Technical Reports Server (NTRS)

    Halverson, Peter; Regehr, Martin; Spero, Robert; Alvarez-Salazar, Oscar; Loya, Frank; Logan, Jennifer

    2006-01-01

    A control system strives to maintain the correct alignment of a laser beam in an interferometer dedicated to measuring the displacement or distance between two fiducial corner-cube reflectors. The correct alignment of the laser beam is parallel to the line between the corner points of the corner-cube reflectors: any deviation from parallelism changes the length of the optical path between the reflectors, thereby introducing a displacement or distance measurement error. On the basis of the geometrical optics of corner-cube reflectors, the length of the optical path can be shown to be L = L0 cos θ, where L0 is the distance between the corner points and θ is the misalignment angle. Therefore, the measurement error is given by ΔL = L0(cos θ − 1). In the usual case in which the misalignment is small, this error can be approximated as ΔL ≈ −L0θ²/2. The control system (see figure) is implemented partly in hardware and partly in software. The control system includes three piezoelectric actuators for rapid, fine adjustment of the direction of the laser beam. The voltages applied to the piezoelectric actuators include components designed to scan the beam in a circular pattern so that the beam traces out a narrow cone (60 microradians wide in the initial application) about the direction in which it is nominally aimed. This scan is performed at a frequency (2.5 Hz in the initial application) well below the resonance frequency of any vibration of the interferometer. The laser beam makes a round trip to both corner-cube reflectors and then interferes with the launched beam. The interference is detected on a photodiode. The length of the optical path is measured by a heterodyne technique: a 100-kHz frequency shift between the launched beam and a reference beam imposes, on the detected signal, an interferometric phase shift proportional to the length of the optical path. A phase meter comprising analog filters and specialized digital circuitry converts the phase shift to an indication of displacement, generating a digital signal proportional to the path length.
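
    A short numerical check (Python) of the misalignment error formula above, using the 60-microradian cone width from the text as an illustrative misalignment angle and an assumed 1 m separation between the corner points:

        import math

        L0 = 1.0        # distance between corner points, metres (assumed for illustration)
        theta = 60e-6   # misalignment angle, radians (cone width quoted in the text)

        exact = L0 * (math.cos(theta) - 1.0)   # DeltaL = L0*(cos(theta) - 1)
        approx = -L0 * theta**2 / 2.0          # small-angle approximation
        print(f"exact: {exact:.3e} m, small-angle: {approx:.3e} m")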

  6. Error tracking control for underactuated overhead cranes against arbitrary initial payload swing angles

    NASA Astrophysics Data System (ADS)

    Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin

    2017-02-01

    This paper presents an error tracking control method for overhead crane systems in which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require the initial payload swing angle to be zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness against different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley does not need to be reset for each traveling distance, which makes the method easy to implement in practical applications. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are presented to validate the superior performance of the proposed error tracking control method.

  7. Toward diagnostic and phenotype markers for genetically transmitted speech delay.

    PubMed

    Shriberg, Lawrence D; Lewis, Barbara A; Tomblin, J Bruce; McSweeny, Jane L; Karlsson, Heather B; Scheer, Alison R

    2005-08-01

    Converging evidence supports the hypothesis that the most common subtype of childhood speech sound disorder (SSD) of currently unknown origin is genetically transmitted. We report the first findings toward a set of diagnostic markers to differentiate this proposed etiological subtype (provisionally termed speech delay-genetic) from other proposed subtypes of SSD of unknown origin. Conversational speech samples from 72 preschool children with speech delay of unknown origin from 3 research centers were selected from an audio archive. Participants differed on the number of biological, nuclear family members (0 or 2+) classified as positive for current and/or prior speech-language disorder. Although participants in the 2 groups were found to have similar speech competence, as indexed by their Percentage of Consonants Correct scores, their speech error patterns differed significantly in 3 ways. Compared with children who may have reduced genetic load for speech delay (no affected nuclear family members), children with possibly higher genetic load (2+ affected members) had (a) a significantly higher proportion of relative omission errors on the Late-8 consonants; (b) a significantly lower proportion of relative distortion errors on these consonants, particularly on the sibilant fricatives /s/, /z/, and //; and (c) a significantly lower proportion of backed /s/ distortions, as assessed by both perceptual and acoustic methods. Machine learning routines identified a 3-part classification rule that included differential weightings of these variables. The classification rule had diagnostic accuracy value of 0.83 (95% confidence limits = 0.74-0.92), with positive and negative likelihood ratios of 9.6 (95% confidence limits = 3.1-29.9) and 0.40 (95% confidence limits = 0.24-0.68), respectively. The diagnostic accuracy findings are viewed as promising. The error pattern for this proposed subtype of SSD is viewed as consistent with the cognitive-linguistic processing deficits that have been reported for genetically transmitted verbal disorders.

  8. How can we better capture food away from Home? Lessons from India's linking person-level meal and household-level food data.

    PubMed

    Fiedler, John L; Yadav, Suryakant

    2017-10-01

    Despite acknowledged shortcomings, household consumption and expenditure surveys (HCES) are increasingly being used to proxy food consumption because they are relatively more available and affordable than surveys using more precise dietary assessment methods. One of the most common, significant sources of HCES measurement error is their under-estimation of food away from home (FAFH). In 2011, India's National Survey Sample Organization introduced revisions in its HCES questionnaire that included replacing "cooked meals"-the single item in the food consumption module designed to capture FAFH at the household level-with five more detailed and explicit FAFH sub-categories. The survey also contained a section with seven household member-specific questions about meal patterns during the reference period and included three sources of meals away from home (MAFH) that overlapped three of the new FAFH categories. By providing a conceptual framework with which to organize and consider each household member's meal pattern throughout the reference period, and breaking down the recalling (or estimating) process into household member-specific responses, we assume the MAFH approach makes the key respondent's task less memory- and arithmetically-demanding, and thus more accurate than the FAFH household level approach. We use the MAFH estimates as a reference point, and approximate one portion of FAFH measurement error as the differences in MAFH and FAFH estimates. The MAFH estimates reveal marked heterogeneity in intra-household meal patterns, reflecting the complexity of the HCES's key informant task of reporting household level data, and underscoring its importance as a source of measurement error. We find the household level-based estimates of FAFH increase from just 60.4% of the individual-based estimates in the round prior to the questionnaire modifications to 96.7% after the changes. We conclude that the MAFH-FAFH linked approach substantially reduced FAFH measurement error in India. The approach has wider applicability in global efforts to improve HCES.

  9. Effective one-dimensional images of arterial trees in the cardiovascular system

    NASA Astrophysics Data System (ADS)

    Kozlov, V. A.; Nazarov, S. A.

    2017-03-01

    Exponential smallness of the errors in the one-dimensional model of the Stokes flow in a branching thin vessel with rigid walls is achieved by introducing effective lengths of the one-dimensional image of internodal fragments of vessels. Such lengths are evaluated through the pressure-drop matrix at each node, which describes the boundary-layer phenomenon. The medical interpretation and the accessible generalizations of the result, in particular for the Navier-Stokes equations, are presented.

  10. Evaluation of methods for calculating maximum allowable standing height in amputees competing in Paralympic athletics.

    PubMed

    Connick, M J; Beckman, E; Ibusuki, T; Malone, L; Tweedy, S M

    2016-11-01

    The International Paralympic Committee has a maximum allowable standing height (MASH) rule that limits stature to a pre-trauma estimation. The MASH rule reduces the probability that bilateral lower limb amputees use disproportionately long prostheses in competition. Although there are several methods for estimating stature, the validity of these methods has not been compared. To identify the most appropriate method for the MASH rule, this study aimed to compare the criterion validity of estimations resulting from the current method, the Contini method, and four Canda methods (Canda-1, Canda-2, Canda-3, and Canda-4). Stature, ulna length, demispan, sitting height, thigh length, upper arm length, and forearm length measurements in 31 males and 30 females were used to calculate the respective estimation for each method. Results showed that Canda-1 (based on four anthropometric variables) produced the smallest error and best fitted the data in males and females. The current method was associated with the largest error of the methods tested because it increasingly overestimated height in people of smaller stature. The results suggest that the set of Canda equations provides a more valid MASH estimation in people with a range of upper limb and bilateral lower limb amputations compared with the current method. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Geodesy by radio interferometry - Water vapor radiometry for estimation of the wet delay

    NASA Technical Reports Server (NTRS)

    Elgered, G.; Davis, J. L.; Herring, T. A.; Shapiro, I. I.

    1991-01-01

    An important source of error in VLBI estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. This paper presents and discusses the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. For the most frequently measured baseline in this study, the use of WVR data yielded a 13 percent smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the 'best' minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass.

  12. Cryogenic support system

    DOEpatents

    Nicol, Thomas H.; Niemann, Ralph C.; Gonczy, John D.

    1988-01-01

    A support system is disclosed for restraining large masses at very low or cryogenic temperatures. The support system employs a tie bar that is pivotally connected at opposite ends to an anchoring support member and a sliding support member. The tie bar extends substantially parallel to the longitudinal axis of the cold mass assembly, and comprises a rod that lengthens when cooled and a pair of end attachments that contract when cooled. The rod and end attachments are sized so that when the tie bar is cooled to cryogenic temperature, the net change in tie bar length is approximately zero. Longitudinal force directed against the cold mass assembly is distributed by the tie bar between the anchoring support member and the sliding support member.

  13. Peripheral dysgraphia characterized by the co-occurrence of case substitutions in uppercase and letter substitutions in lowercase writing.

    PubMed

    Di Pietro, M; Schnider, A; Ptak, R

    2011-10-01

    Patients with peripheral dysgraphia due to impairment at the allographic level produce writing errors that affect the letter-form and are characterized by case confusions or the failure to write in a specific case or style (e.g., cursive). We studied the writing errors of a patient with pure peripheral dysgraphia who had entirely intact oral spelling, but produced many well-formed letter errors in written spelling. The comparison of uppercase print and lowercase cursive spelling revealed an uncommon pattern: while most uppercase errors were case substitutions (e.g., A - a), almost all lowercase errors were letter substitutions (e.g., n - r). Analyses of the relationship between target letters and substitution errors showed that errors were neither influenced by consonant-vowel status nor by letter frequency, though word length affected error frequency in lowercase writing. Moreover, while graphomotor similarity did not predict either the occurrence of uppercase or lowercase errors, visuospatial similarity was a significant predictor of lowercase errors. These results suggest that lowercase representations of cursive letter-forms are based on a description of entire letters (visuospatial features) and are not - as previously found for uppercase letters - specified in terms of strokes (graphomotor features). Copyright © 2010 Elsevier Srl. All rights reserved.

  14. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  15. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE PAGES

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.; ...

    2017-08-01

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  16. Measurement effects of seasonal and monthly variability on pedometer-determined data.

    PubMed

    Kang, Minsoo; Bassett, David R; Barreira, Tiago V; Tudor-Locke, Catrine; Ainsworth, Barbara E

    2012-03-01

    The seasonal and monthly variability of pedometer-determined physical activity and its effects on accurate measurement have not been examined. The purpose of the study was to reduce measurement error in step-count data by controlling a) the length of the measurement period and b) the season or month of the year in which sampling was conducted. Twenty-three middle-aged adults were instructed to wear a Yamax SW-200 pedometer over 365 consecutive days. The step-count measurement periods of various lengths (eg, 2, 3, 4, 5, 6, 7 days, etc.) were randomly selected 10 times for each season and month. To determine accurate estimates of yearly step-count measurement, mean absolute percentage error (MAPE) and bias were calculated. The year-round average was considered as a criterion measure. A smaller MAPE and bias represent a better estimate. Differences in MAPE and bias among seasons were trivial; however, they varied among different months. The months in which seasonal changes occur presented the highest MAPE and bias. Targeting the data collection during certain months (eg, May) may reduce pedometer measurement error and provide more accurate estimates of year-round averages.
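
    A minimal sketch (Python) of the MAPE and bias statistics described above, using made-up daily step counts and randomly placed 7-day measurement periods (the study's consecutive-day sampling and season/month stratification are omitted):

        import random

        random.seed(1)
        year = [random.randint(4000, 12000) for _ in range(365)]   # hypothetical daily step counts
        criterion = sum(year) / len(year)                          # year-round daily average (criterion)

        def sample_mean(days):
            """Mean daily steps over a randomly placed measurement period."""
            period = random.sample(year, days)
            return sum(period) / len(period)

        # 10 randomly selected 7-day measurement periods, echoing the sampling scheme above.
        estimates = [sample_mean(7) for _ in range(10)]
        bias = sum(estimates) / len(estimates) - criterion
        mape = sum(abs(e - criterion) / criterion for e in estimates) / len(estimates) * 100.0
        print(round(mape, 2), round(bias, 1))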

  17. Prediction error and trace dominance determine the fate of fear memories after post-training manipulations

    PubMed Central

    Alfei, Joaquín M.; Ferrer Monti, Roque I.; Molina, Victor A.; Bueno, Adrián M.

    2015-01-01

    Different mnemonic outcomes have been observed when associative memories are reactivated by CS exposure and followed by amnestics. These outcomes include mere retrieval, destabilization–reconsolidation, a transitional period (which is insensitive to amnestics), and extinction learning. However, little is known about the interaction between initial learning conditions and these outcomes during a reinforced or nonreinforced reactivation. Here we systematically combined temporally specific memories with different reactivation parameters to observe whether these four outcomes are determined by the conditions established during training. First, we validated two training regimens with different temporal expectations about US arrival. Then, using Midazolam (MDZ) as an amnestic agent, fear memories in both learning conditions were submitted to retraining either under identical or different parameters to the original training. Destabilization (i.e., susceptibility to MDZ) occurred when reactivation was reinforced, provided that a temporal prediction error about US arrival occurred. In subsequent experiments, both treatments were systematically reactivated by nonreinforced context exposure of different lengths, which allowed us to explore the interaction between training and reactivation lengths. These results suggest that temporal prediction error and trace dominance determine the extent to which reactivation produces the different outcomes. PMID:26179232

  18. Assessment of the pseudo-tracking approach for the calculation of material acceleration and pressure fields from time-resolved PIV: part I. Error propagation

    NASA Astrophysics Data System (ADS)

    van Gent, P. L.; Schrijer, F. F. J.; van Oudheusden, B. W.

    2018-04-01

    Pseudo-tracking refers to the construction of imaginary particle paths from PIV velocity fields and the subsequent estimation of the particle (material) acceleration. In view of the variety of existing and possible alternative ways to perform the pseudo-tracking method, it is not straightforward to select a suitable combination of numerical procedures for its implementation. To address this situation, this paper extends the theoretical framework for the approach. The developed theory is verified by applying various implementations of pseudo-tracking to a simulated PIV experiment. The findings of the investigations allow us to formulate the following insights and practical recommendations: (1) the velocity errors along the imaginary particle track are primarily a function of velocity measurement errors and spatial velocity gradients; (2) the particle path may best be calculated with second-order accurate numerical procedures while ensuring that the CFL condition is met; (3) least-square fitting of a first-order polynomial is a suitable method to estimate the material acceleration from the track; and (4) a suitable track length may be selected on the basis of the variation in material acceleration with track length.
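
    A minimal sketch (Python) of recommendation (3): a least-squares first-order polynomial fit to velocity samples along an imaginary particle track, whose slope estimates the material acceleration. The time step and velocity values below are placeholders, not PIV data:

        import numpy as np

        dt = 1e-4                       # time separation between PIV snapshots, s (assumed)
        t = np.arange(7) * dt           # sample times along a 7-point track
        rng = np.random.default_rng(0)
        u = 12.0 + 350.0 * t + rng.normal(0.0, 0.05, t.size)   # noisy velocity along the track, m/s

        # Least-squares fit of a first-order polynomial; the slope is the
        # material-acceleration estimate for this velocity component.
        slope, intercept = np.polyfit(t, u, 1)
        print(f"estimated material acceleration ~ {slope:.1f} m/s^2")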

  19. Measurement error of Young’s modulus considering the gravity and thermal expansion of thin specimens for in situ tensile testing

    NASA Astrophysics Data System (ADS)

    Ma, Zhichao; Zhao, Hongwei; Ren, Luquan

    2016-06-01

    Most miniature in situ tensile devices compatible with scanning/transmission electron microscopes or optical microscopes adopt a horizontal layout. In order to analyze and calculate the measurement error of the tensile Young’s modulus, the effects of gravity and temperature changes, which would respectively lead to and intensify the bending deformation of thin specimens, are considered as influencing factors. On the basis of a decomposition method of static indeterminacy, equations of simplified deflection curves are obtained and, accordingly, the actual gage length is confirmed. By comparing the effects of uniaxial tensile load on the change of the deflection curve with gravity, the relation between the actual and directly measured tensile Young’s modulus is obtained. Furthermore, the quantitative effects of the ideal gage length l0, temperature change ΔT and the density ρ of the specimen on the modulus difference and modulus ratio are calculated. Specimens with larger l0 and ρ present more obvious measurement errors for Young’s modulus, but the effect of ΔT is not significant. The calculation method of Young’s modulus is particularly suitable for thin specimens.

  20. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

    Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.

  1. Error Budgeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinyard, Natalia Sergeevna; Perry, Theodore Sonne; Usov, Igor Olegovich

    2017-10-04

    We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error is Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U − E)/(V − E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
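
    The sketch below (Python) runs the fractional error budget numerically, using the reconstructed form Δk/k = Δln(T)/|ln T| + Δ(ρL)/(ρL) implied by k = −ln(T)/(ρL); the fractional errors are purely illustrative values, not numbers from the report:

        import math

        T = 0.2                 # transmission (lower end of the measured range quoted above)
        dB_over_B = 0.02        # fractional error in transmitted backlighter signal (illustrative)
        dB0_over_B0 = 0.01      # fractional error in unattenuated backlighter signal (illustrative)
        drhoL_over_rhoL = 0.03  # fractional error in areal density rho*L (illustrative)

        dT_over_T = dB_over_B + dB0_over_B0                # = delta ln(T)
        dk_over_k = dT_over_T / abs(math.log(T)) + drhoL_over_rhoL   # magnitudes added
        print(f"dk/k ~ {dk_over_k:.3f}")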

  2. Calibration Method of an Ultrasonic System for Temperature Measurement

    PubMed Central

    Zhou, Chao; Wang, Yueke; Qiao, Chunjie; Dai, Weihua

    2016-01-01

    System calibration is fundamental to the overall accuracy of ultrasonic temperature measurement, and it essentially involves accurately measuring the path length and the system latency of the ultrasonic system. This paper proposes a high-accuracy system calibration method. By estimating the time delay between the transmitted signal and the received signal at several different temperatures, the calibration equations are constructed, and the calibrated results are determined with the use of the least squares algorithm. Formulas are deduced for calculating the calibration uncertainties, and the possible influential factors are analyzed. The experimental results in distilled water show that the calibrated path length and system latency can achieve uncertainties of 0.058 mm and 0.038 μs, respectively, and the temperature accuracy is significantly improved by using the calibrated results. The temperature error consistently remains within ±0.04°C, and the percentage error is less than 0.15%. PMID:27788252
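
    As a hedged illustration of the calibration idea (not the paper's actual equations), the delay at each calibration temperature can be modelled as t = L/c(T) + τ, so the path length L and system latency τ follow from a least-squares fit; the sound speeds and delays below (Python) are invented placeholders:

        import numpy as np

        # Hypothetical sound speeds c (m/s) at the calibration temperatures, and the
        # corresponding measured delays t (s), generated here from a "true" L and tau.
        c = np.array([1447.0, 1466.0, 1482.0, 1497.0])
        rng = np.random.default_rng(0)
        t = 0.100 / c + 5e-6 + rng.normal(0.0, 2e-9, c.size)

        # Linear model t = L * (1/c) + tau  ->  solve for [L, tau] by least squares.
        A = np.column_stack([1.0 / c, np.ones_like(c)])
        (L, tau), *_ = np.linalg.lstsq(A, t, rcond=None)
        print(f"L ~ {L * 1000:.3f} mm, tau ~ {tau * 1e6:.3f} us")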

  3. Simultaneous Water Vapor and Dry Air Optical Path Length Measurements and Compensation with the Large Binocular Telescope Interferometer

    NASA Technical Reports Server (NTRS)

    Defrere, D.; Hinz, P.; Downey, E.; Boehm, M.; Danchi, W. C.; Durney, O.; Ertel, S.; Hill, J. M.; Hoffmann, W. F.; Mennesson, B.; hide

    2016-01-01

    The Large Binocular Telescope Interferometer uses a near-infrared camera to measure the optical path length variations between the two AO-corrected apertures and provide high-angular resolution observations for all its science channels (1.5-13 microns). There is however a wavelength dependent component to the atmospheric turbulence, which can introduce optical path length errors when observing at a wavelength different from that of the fringe sensing camera. Water vapor in particular is highly dispersive and its effect must be taken into account for high-precision infrared interferometric observations as described previously for VLTI/MIDI or the Keck Interferometer Nuller. In this paper, we describe the new sensing approach that has been developed at the LBT to measure and monitor the optical path length fluctuations due to dry air and water vapor separately. After reviewing the current performance of the system for dry air seeing compensation, we present simultaneous H-, K-, and N-band observations that illustrate the feasibility of our feed forward approach to stabilize the path length fluctuations seen by the LBTI nuller.

  4. Electric circuit breaker comprising a plurality of vacuum interrupters simultaneously operated by a common operator

    DOEpatents

    Barkan, Philip; Imam, Imdad

    1980-01-01

    This circuit breaker comprises a plurality of vacuum-type circuit interrupters, each having a movable contact rod. A common operating device for the interrupters comprises a linearly-movable operating member. The interrupters are mounted at one side of the operating member with their movable contact rods extending in a direction generally toward the operating member. Means is provided for mechanically coupling the operating member to the contact rods, and this means comprises a plurality of insulating operating rods, each connected at one end to the operating member and at its opposite end to one of the movable contact rods. The operating rods are of substantially equal length and have longitudinal axes that converge and intersect at substantially a common point.

  5. Active member vibration control for a 4 meter primary reflector support structure

    NASA Technical Reports Server (NTRS)

    Umland, J. W.; Chen, G.-S.

    1992-01-01

    The design and testing of a new low voltage piezoelectric active member with integrated load cell and displacement sensor is described. This active member is intended for micron level vibration and structural shape control of the Precision Segmented Reflector test-bed. The test-bed is an erectable 4 meter diameter backup support truss for a 2.4 meter focal length parabolic reflector. Active damping of the test-bed is then demonstrated using the newly developed active members. The control technique used is referred to as bridge feedback. With this technique the internal sensors are used in a local feedback loop to match the active member's input impedance to the structure's load impedance, which then maximizes vibrational energy dissipation. The active damping effectiveness is then evaluated from closed loop frequency responses.

  6. Benefits of an ultra large and multiresolution ensemble for estimating available wind power

    NASA Astrophysics Data System (ADS)

    Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik

    2016-04-01

    In this study we investigate the benefits of an ultra-large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall be used as a basis to detect events of extreme errors in wind power forecasting. The forecast quantity is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions serve as input so far, with limited spatial resolution (~10-80 km) and member number (~50). Perturbations related to the specific merits of wind power production are still missing. Thus, infrequent extreme error events occur that are not detected by such ensemble power forecasts. The numerical forecast model used in this study is the Weather Research and Forecasting Model (WRF). Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies, in conjunction with the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter to improve the model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations which are connected to extreme error events are located and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at the Forschungszentrum Juelich.

  7. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.

  8. On Inertial Body Tracking in the Presence of Model Calibration Errors

    PubMed Central

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-01-01

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266

  9. Perfluorooctanoic acid and environmental risks

    EPA Science Inventory

    Perfluorooctanoic acid (PFOA) is a member of the perfluoroalkyl acids (PFAA) family of chemicals, which consist of a carbon backbone typically four to fourteen carbons in length and a charged functional moiety.

  10. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
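
    A minimal sketch (Python) of the quality-score masking idea mentioned above; the threshold, scores and sequence are illustrative, and this is not the authors' SEM pipeline:

        # Mask bases whose Phred quality falls below a cutoff: a simple stand-in for
        # quality-score-based error mitigation in low-coverage assemblies.
        def mask_low_quality(seq, quals, min_q=20):
            """Replace bases with 'N' wherever the quality score is below min_q."""
            return "".join(b if q >= min_q else "N" for b, q in zip(seq, quals))

        seq = "ACGTACGTAC"
        quals = [38, 35, 12, 40, 8, 33, 30, 25, 14, 37]   # hypothetical Phred scores
        print(mask_low_quality(seq, quals))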

  11. A Mobile, Collaborative, Real Time Task List for Inpatient Environments

    PubMed Central

    Ho, T.; Pelletier, A.; Al Ayubi, S.; Bourgeois, F.

    2015-01-01

    Summary Background Inpatient teams commonly track their tasks using paper checklists that are not shared between team members. Team members frequently communicate redundantly in order to prevent errors. Methods We created a mobile, collaborative, real-time task list application on the iOS platform. The application listed tasks for each patient, allowed users to check them off as completed, and transmitted that information to all other team members. In this report, we qualitatively describe our experience designing and piloting the application with an inpatient pediatric ward team at an academic pediatric hospital. Results We successfully created the tasklist application, however team members showed limited usage. Conclusion Physicians described that they preferred the immediacy and familiarity of paper, and did not experience an efficiency benefit when using the electronic tasklist. PMID:26767063

  12. Managing human error in aviation.

    PubMed

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  13. Lenticular accommodation in relation to ametropia: the chick model.

    PubMed

    Choh, Vivian; Sivak, Jacob G

    2005-03-04

    Our goal was to determine whether experimentally induced ametropias have an effect on lenticular accommodation and spherical aberration. Form-deprivation myopia and hyperopia were induced in one eye of hatchling chicks by application of a translucent goggle and +15 D lens, respectively. After 7 days, eyes were enucleated and lenses were optically scanned prior to accommodation, during accommodation, and after accommodation. Accommodation was induced by electrical stimulation of the ciliary nerve. Lenticular focal lengths for form-deprived eyes were significantly shorter than for their controls and accommodation-associated changes in focal length were significantly smaller in myopic eyes compared to their controls. For eyes imposed with +15 D blur, focal lengths were longer than those for their controls and accommodative changes were greater. Spherical aberration of the lens increased with accommodation in both form-deprived and lens-treated birds, but induction of ametropia had no effect on lenticular spherical aberration in general. Nonmonotonicity from lenticular spherical aberration increased during accommodation but effects of refractive error were equivocal. The crystalline lens contributes to refractive error changes of the eye both in the case of myopia and hyperopia. These changes are likely attributable to global changes in the size and shape of the eye.

  14. Analytical model and error analysis of arbitrary phasing technique for bunch length measurement

    NASA Astrophysics Data System (ADS)

    Chen, Qushan; Qin, Bin; Chen, Wei; Fan, Kuanjun; Pei, Yuanji

    2018-05-01

    An analytical model of an RF phasing method using arbitrary phase scanning for bunch length measurement is reported. We set up a statistical model instead of a linear chirp approximation to analyze the energy modulation process. It is found that, assuming a short bunch (σφ/2π → 0) and small relative energy spread (σγ/γr → 0), the energy spread (Y = σγ²) at the exit of the traveling wave linac has a parabolic relationship with the cosine value of the injection phase (X = cos φr|z=0), i.e., Y = AX² + BX + C. Analogous to quadrupole strength scanning for emittance measurement, this phase scanning method can be used to obtain the bunch length by measuring the energy spread at different injection phases. The injection phases can be randomly chosen, which is significantly different from the commonly used zero-phasing method. Further, the systematic error of the reported method, such as the influence of the space charge effect, is analyzed. This technique will be especially useful at low energies when the beam quality is dramatically degraded and is hard to measure using the zero-phasing method.
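
    A minimal numerical sketch (Python) of the scanning idea: the squared energy spread Y is measured at several arbitrary injection phases and fitted with Y = AX² + BX + C in X = cos(φr). The data below are synthetic placeholders, and the mapping from the fitted coefficients to the bunch length follows the paper and is not reproduced here:

        import numpy as np

        phases_deg = np.array([-40.0, -20.0, 0.0, 15.0, 35.0, 60.0])   # arbitrary injection phases
        X = np.cos(np.radians(phases_deg))

        # Synthetic squared-energy-spread data generated from a known parabola plus noise.
        A_true, B_true, C_true = 4.0e-6, -2.0e-6, 1.5e-6
        Y = A_true * X**2 + B_true * X + C_true
        Y += np.random.default_rng(0).normal(0.0, 2e-8, Y.size)

        A_fit, B_fit, C_fit = np.polyfit(X, Y, 2)   # parabolic least-squares fit
        print(A_fit, B_fit, C_fit)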

  15. Effects of Including Misidentified Sharks in Life History Analyses: A Case Study on the Grey Reef Shark Carcharhinus amblyrhynchos from Papua New Guinea

    PubMed Central

    Smart, Jonathan J.; Chin, Andrew; Baje, Leontine; Green, Madeline E.; Appleyard, Sharon A.; Tobin, Andrew J.; Simpfendorfer, Colin A.; White, William T.

    2016-01-01

    Fisheries observer programs are used around the world to collect crucial information and samples that inform fisheries management. However, observer error may misidentify similar-looking shark species. This raises questions about the level of error that species misidentifications could introduce to estimates of species’ life history parameters. This study addressed these questions using the Grey Reef Shark Carcharhinus amblyrhynchos as a case study. Observer misidentification rates were quantified by validating species identifications using diagnostic photographs taken on board supplemented with DNA barcoding. Length-at-age and maturity ogive analyses were then estimated and compared with and without the misidentified individuals. Vertebrae were retained from a total of 155 sharks identified by observers as C. amblyrhynchos. However, 22 (14%) of these sharks were misidentified by the observers and were subsequently re-identified based on photographs and/or DNA barcoding. Of the 22 individuals misidentified as C. amblyrhynchos, 16 (73%) were detected using photographs and a further 6 via genetic validation. If misidentified individuals had been included, substantial error would have been introduced to both the length-at-age and the maturity estimates. Thus, validating the species identification increased the accuracy of estimated life history parameters for C. amblyrhynchos. From the corrected sample a multi-model inference approach was used to estimate growth for C. amblyrhynchos using three candidate models. The model-averaged length-at-age parameters for C. amblyrhynchos with the sexes combined were L∞ = 159 cm TL and L0 = 72 cm TL. Females mature at a greater length (l50 = 136 cm TL) and older age (A50 = 9.1 years) than males (l50 = 123 cm TL; A50 = 5.9 years). The inclusion of techniques to reduce misidentification in observer programs will improve the results of life history studies and ultimately improve management through the use of more accurate data for assessments. PMID:27058734

  16. Effects of Including Misidentified Sharks in Life History Analyses: A Case Study on the Grey Reef Shark Carcharhinus amblyrhynchos from Papua New Guinea.

    PubMed

    Smart, Jonathan J; Chin, Andrew; Baje, Leontine; Green, Madeline E; Appleyard, Sharon A; Tobin, Andrew J; Simpfendorfer, Colin A; White, William T

    2016-01-01

    Fisheries observer programs are used around the world to collect crucial information and samples that inform fisheries management. However, observer error may misidentify similar-looking shark species. This raises questions about the level of error that species misidentifications could introduce to estimates of species' life history parameters. This study addressed these questions using the Grey Reef Shark Carcharhinus amblyrhynchos as a case study. Observer misidentification rates were quantified by validating species identifications using diagnostic photographs taken on board supplemented with DNA barcoding. Length-at-age and maturity ogive analyses were then estimated and compared with and without the misidentified individuals. Vertebrae were retained from a total of 155 sharks identified by observers as C. amblyrhynchos. However, 22 (14%) of these sharks were misidentified by the observers and were subsequently re-identified based on photographs and/or DNA barcoding. Of the 22 individuals misidentified as C. amblyrhynchos, 16 (73%) were detected using photographs and a further 6 via genetic validation. If misidentified individuals had been included, substantial error would have been introduced to both the length-at-age and the maturity estimates. Thus, validating the species identification increased the accuracy of estimated life history parameters for C. amblyrhynchos. From the corrected sample a multi-model inference approach was used to estimate growth for C. amblyrhynchos using three candidate models. The model-averaged length-at-age parameters for C. amblyrhynchos with the sexes combined were L∞ = 159 cm TL and L0 = 72 cm TL. Females mature at a greater length (l50 = 136 cm TL) and older age (A50 = 9.1 years) than males (l50 = 123 cm TL; A50 = 5.9 years). The inclusion of techniques to reduce misidentification in observer programs will improve the results of life history studies and ultimately improve management through the use of more accurate data for assessments.

  17. Estimation of body density based on hydrostatic weighing without head submersion in young Japanese adults.

    PubMed

    Demura, S; Sato, S; Kitabayashi, T

    2006-06-01

    This study examined a method of predicting body density based on hydrostatic weighing without head submersion (HWwithoutHS). Donnelly and Sintek (1984) developed a method to predict body density based on hydrostatic weight without head submersion. This method predicts the difference (D) between HWwithoutHS and hydrostatic weight with head submersion (HWwithHS) from anthropometric variables (head length and head width), and then calculates body density using D as a correction factor. We developed several prediction equations to estimate D based on head anthropometry and differences between the sexes, and compared their prediction accuracy with Donnelly and Sintek's equation. Thirty-two males and 32 females aged 17-26 years participated in the study. Multiple linear regression analysis was performed to obtain the prediction equations, and the systematic errors of their predictions were assessed by Bland-Altman plots. The best prediction equations obtained were: Males: D(g) = -164.12X1 - 125.81X2 - 111.03X3 + 100.66X4 + 6488.63, where X1 = head length (cm), X2 = head circumference (cm), X3 = head breadth (cm), X4 = head thickness (cm) (R = 0.858, R2 = 0.737, adjusted R2 = 0.687, standard error of the estimate = 224.1); Females: D(g) = -156.03X1 - 14.03X2 - 38.45X3 - 8.87X4 + 7852.45, where X1 = head circumference (cm), X2 = body mass (g), X3 = head length (cm), X4 = height (cm) (R = 0.913, R2 = 0.833, adjusted R2 = 0.808, standard error of the estimate = 137.7). The effective predictors in these prediction equations differed from those of Donnelly and Sintek's equation, and head circumference and head length were included in both equations. The prediction accuracy was improved by statistically selecting effective predictors. Since we did not assess cross-validity, the equations cannot be used to generalize to other populations, and further investigation is required.

  18. Heritability of refractive error and ocular biometrics: the Genes in Myopia (GEM) twin study.

    PubMed

    Dirani, Mohamed; Chamberlain, Matthew; Shekar, Sri N; Islam, Amirul F M; Garoufalis, Pam; Chen, Christine Y; Guymer, Robyn H; Baird, Paul N

    2006-11-01

    A classic twin study was undertaken to assess the contribution of genes and environment to the development of refractive errors and ocular biometrics in a twin population. A total of 1224 twins (345 monozygotic [MZ] and 267 dizygotic [DZ] twin pairs) aged between 18 and 88 years were examined. All twins completed a questionnaire consisting of a medical history, education, and zygosity. Objective refraction was measured in all twins, and biometric measurements were obtained using partial coherence interferometry. Intrapair correlations for spherical equivalent and ocular biometrics were significantly higher in the MZ than in the DZ twin pairs (P < 0.05), when refraction was considered as a continuous variable. A significant gender difference in the variation of spherical equivalent and ocular biometrics was found (P < 0.05). A genetic model specifying an additive, dominant, and unique environmental factor that was sex limited was the best fit for all measured variables. Heritability of spherical equivalents of 88% and 75% were found in the men and women, respectively, whereas, that of axial length was 94% and 92%, respectively. Additive genetic effects accounted for a greater proportion of the variance in spherical equivalent, whereas the variance in ocular biometrics, particularly axial length was explained mostly by dominant genetic effects. Genetic factors, both additive and dominant, play a significant role in refractive error (myopia and hypermetropia) as well as in ocular biometrics, particularly axial length. The sex limitation ADE model (additive genetic, nonadditive genetic, and environmental components) provided the best-fit genetic model for all parameters.

  19. Axial Length Variation Impacts on Superficial Retinal Vessel Density and Foveal Avascular Zone Area Measurements Using Optical Coherence Tomography Angiography.

    PubMed

    Sampson, Danuta M; Gong, Peijun; An, Di; Menghini, Moreno; Hansen, Alex; Mackey, David A; Sampson, David D; Chen, Fred K

    2017-06-01

    To evaluate the impact of image magnification correction on superficial retinal vessel density (SRVD) and foveal avascular zone area (FAZA) measurements using optical coherence tomography angiography (OCTA). Participants with healthy retinas were recruited for ocular biometry, refraction, and RTVue XR Avanti OCTA imaging with the 3 × 3-mm protocol. The foveal and parafoveal SRVD and FAZA were quantified with custom software before and after correction for magnification error using the Littman and the modified Bennett formulae. Relative changes between corrected and uncorrected SRVD and FAZA were calculated. Forty subjects were enrolled and the median (range) age of the participants was 30 (18-74) years. The mean (range) spherical equivalent refractive error was -1.65 (-8.00 to +4.88) diopters and mean (range) axial length was 24.42 mm (21.27-28.85). Images from 13 eyes were excluded due to poor image quality leaving 67 for analysis. Relative changes in foveal and parafoveal SRVD and FAZA after correction ranged from -20% to +10%, -3% to +2%, and -20% to +51%, respectively. Image size correction in measurements of foveal SRVD and FAZA was greater than 5% in 51% and 74% of eyes, respectively. In contrast, 100% of eyes had less than 5% correction in measurements of parafoveal SRVD. Ocular biometry should be performed with OCTA to correct image magnification error induced by axial length variation. We advise caution when interpreting interocular and interindividual comparisons of SRVD and FAZA derived from OCTA without image size correction.
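
    A commonly used form of the Bennett scaling relates lateral image scale to axial length; a minimal sketch is given below, assuming the factor q = 0.01306 × (AL − 1.82) and an illustrative instrument-assumed reference axial length. The exact correction applied in the study (Littmann and modified Bennett formulae) may differ in detail.

        def bennett_scaling_factor(axial_length_mm):
            # Relation between axial length and ocular magnification (modified Bennett form).
            return 0.01306 * (axial_length_mm - 1.82)

        def corrected_lateral_size(measured_mm, axial_length_mm, reference_al_mm=24.46):
            # Rescale a lateral OCTA measurement relative to the axial length the
            # instrument assumes; the reference value here is an illustrative assumption.
            scale = bennett_scaling_factor(axial_length_mm) / bennett_scaling_factor(reference_al_mm)
            return measured_mm * scale

        def corrected_area(measured_mm2, axial_length_mm, reference_al_mm=24.46):
            # Areas such as the FAZ scale with the square of the lateral factor.
            scale = bennett_scaling_factor(axial_length_mm) / bennett_scaling_factor(reference_al_mm)
            return measured_mm2 * scale ** 2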

  20. Correction to: The Intensive Care Global Study on Severe Acute Respiratory Infection (IC-GLOSSARI): a multicenter, multinational, 14-day inception cohort study.

    PubMed

    Sakr, Yasser; Ferrer, Ricard; Reinhart, Konrad; Beale, Richard; Rhodes, Andrew; Moreno, Rui; Timsit, Jean Francois; Brochard, Laurent; Thompson, B Taylor; Rezende, Ederlon; Chiche, Jean Daniel

    2018-01-01

    In both the original publication (DOI 10.1007/s00134-015-4206-2) and the first erratum (DOI 10.1007/s00134-016-4317-4), the members of the IC-GLOSSARI Investigators and the ESICM Trials Group were provided in such a way that they could not be indexed as collaborators on PubMed. The publisher apologizes for these errors and is pleased to list the members of the groups here.

  1. cWINNOWER algorithm for finding fuzzy dna motifs

    NASA Technical Reports Server (NTRS)

    Liang, S.; Samanta, M. P.; Biegel, B. A.

    2004-01-01

    The cWINNOWER algorithm detects fuzzy motifs in DNA sequences rich in protein-binding signals. A signal is defined as any short nucleotide pattern having up to d mutations differing from a motif of length l. The algorithm finds such motifs if a clique consisting of a sufficiently large number of mutated copies of the motif (i.e., the signals) is present in the DNA sequence. The cWINNOWER algorithm substantially improves the sensitivity of the winnower method of Pevzner and Sze by imposing a consensus constraint, enabling it to detect much weaker signals. We studied the minimum detectable clique size qc as a function of sequence length N for random sequences. We found that qc increases linearly with N for a fast version of the algorithm based on counting three-member sub-cliques. Imposing consensus constraints reduces qc by a factor of three in this case, which makes the algorithm dramatically more sensitive. Our most sensitive algorithm, which counts four-member sub-cliques, needs a minimum of only 13 signals to detect motifs in a sequence of length N = 12,000 for (l, d) = (15, 4). Copyright Imperial College Press.
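
    The "signal" definition above lends itself to a direct check: a length-l window is a signal if it differs from the motif in at most d positions. The sketch below illustrates only that definition, not the clique-counting machinery of cWINNOWER itself.

        def hamming(a, b):
            # Number of mismatched positions between two equal-length strings.
            return sum(x != y for x, y in zip(a, b))

        def find_signals(sequence, motif, d):
            # Start positions of all length-l windows within d mutations of the motif.
            l = len(motif)
            return [i for i in range(len(sequence) - l + 1)
                    if hamming(sequence[i:i + l], motif) <= d]

        # Example with (l, d) = (15, 4) as quoted in the abstract.
        hits = find_signals("ACGT" * 50, "ACGTACGTACGTACG", 4)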

  2. cWINNOWER Algorithm for Finding Fuzzy DNA Motifs

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan

    2003-01-01

    The cWINNOWER algorithm detects fuzzy motifs in DNA sequences rich in protein-binding signals. A signal is defined as any short nucleotide pattern differing by up to d mutations from a motif of length l. The algorithm finds such motifs if multiple mutated copies of the motif (i.e., the signals) are present in the DNA sequence in sufficient abundance. The cWINNOWER algorithm substantially improves the sensitivity of the winnower method of Pevzner and Sze by imposing a consensus constraint, enabling it to detect much weaker signals. We studied the minimum number of detectable motifs qc as a function of sequence length N for random sequences. We found that qc increases linearly with N for a fast version of the algorithm based on counting three-member sub-cliques. Imposing consensus constraints reduces qc by a factor of three in this case, which makes the algorithm dramatically more sensitive. Our most sensitive algorithm, which counts four-member sub-cliques, needs a minimum of only 13 signals to detect motifs in a sequence of length N = 12000 for (l,d) = (15,4).

  3. Lightning current detector

    NASA Technical Reports Server (NTRS)

    Livermore, S. F. (Inventor)

    1978-01-01

    An apparatus for measuring the intensity of current produced in an elongated electrical conductive member by a lightning strike for determining the intensity of the lightning strike is presented. The apparatus includes an elongated strip of magnetic material that is carried within an elongated tubular housing. A predetermined electrical signal is recorded along the length of said elongated strip of magnetic material. One end of the magnetic material is positioned closely adjacent to the electrically conductive member so that the magnetic field produced by current flowing through said electrically conductive member disturbs a portion of the recorded electrical signal directly proportional to the intensity of the lightning strike.

  4. We are Family: the Conformations of 1-FLUOROALKANES, C_nH2n+1F (n = 2,3,4,5,6,7,8)

    NASA Astrophysics Data System (ADS)

    Obenchain, Daniel A.; Orellana, W.; Cooke, S. A.

    2016-06-01

    The pure rotational spectra of the n = 5, 6, 7, and 8 members of the 1-fluoroalkane family have been recorded between 7 GHz and 14 GHz using chirped pulse Fourier transform microwave spectroscopy. The spectra have been analyzed and results will be presented and compared with previous work on the n = 2, 3, and 4 members. The lowest energy conformer for all family members has the common feature that the fluorine is in a gauche position relative to the alkyl tail, for which all other heavy atom dihedral angles, where appropriate, are 180 degrees. For the n = 3 and higher family members the second lowest energy conformer has all heavy atom dihedral angles equal to 180 degrees. For each family member, transitions carried by both low energy conformers were observed in the collected rotational spectra. Quantum chemical calculations were performed and trends in the energy separations between these two common conformers will be presented as a function of chain length. Furthermore, longer chain lengths have been examined using only quantum chemical calculations and results will be presented. M. Hayashi, M. Fujitake, T. Inagusa, S. Miyazaki, J.Mol.Struct., 216, 9-26, 1990; W. Caminati, A. C. Fantoni, F. Manescalchi, F. Scappini, Mol.Phys., 64, 1089, 1988; L. B. Favero, A. Maris, A. Degli Esposti, P. G. Favero, W. Caminati, G. Pawelke, Chem.Eur.J., 6(16), 3018-3025, 2000

  5. Neural network approximation of nonlinearity in laser nano-metrology system based on TLMI

    NASA Astrophysics Data System (ADS)

    Olyaee, Saeed; Hamedi, Samaneh

    2011-02-01

    In this paper, an approach based on neural network (NN) for nonlinearity modeling in a nano-metrology system using three-longitudinal-mode laser heterodyne interferometer (TLMI) for length and displacement measurements is presented. We model nonlinearity errors that arise from elliptically and non-orthogonally polarized laser beams, rotational error in the alignment of laser head with respect to the polarizing beam splitter, rotational error in the alignment of the mixing polarizer, and unequal transmission coefficients in the polarizing beam splitter. Here we use a neural network algorithm based on the multi-layer perceptron (MLP) network. The simulation results show that multi-layer feed forward perceptron network is successfully applicable to real noisy interferometer signals.
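
    As a rough, generic sketch of the idea (not the authors' network or data), a small multi-layer perceptron can be fitted to map interferometer signal features to the nonlinearity error; the feature construction and target below are placeholders.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Placeholder training data: X would hold TLMI signal features (e.g., quadrature
        # components), y the nonlinearity error obtained from simulation or calibration.
        rng = np.random.default_rng(0)
        X = rng.random((1000, 4))
        y = 0.01 * np.sin(2 * np.pi * X[:, 0])  # stand-in nonlinear target

        model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        model.fit(X, y)
        predicted_error = model.predict(X[:5])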

  6. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  7. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of the conventional JPEG-LS.

  8. The relationship between perceived length and egocentric location in Muller-Lyer figures with one versus two chevrons

    NASA Technical Reports Server (NTRS)

    Welch, Robert B.; Post, Robert B.; Lum, Wayland; Prinzmetal, William

    2004-01-01

    We examined the apparent dissociation of perceived length and perceived position with respect to the Muller-Lyer (M-L) illusion. With the traditional (two-chevron) figure, participants made accurate open-loop pointing responses at the endpoints of the shaft, despite the presence of a strong length illusion. This apparently non-Euclidean outcome replicated that of Mack, Heuer, Villardi, and Chambers (1985) and Gillam and Chambers (1985) and contradicts any theory of the M-L illusion in which mislocalization of shaft endpoints plays a role. However, when one of the chevrons was removed, a constant pointing error occurred in the predicted direction, as well as a strong length illusion. Thus, with one-chevron stimuli, perceived length and location were no longer completely dissociated. We speculated that the presence of two opposing chevrons suppresses the mislocalizing effects of a single chevron, especially for figures with relatively short shafts.

  9. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
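
    The final cyclic redundancy check mentioned above is what flags residual or miscorrected sectors; the sketch below illustrates that kind of end-of-chain check with a generic CRC-32, not the specific EDC polynomial used on CD-ROM sectors.

        import zlib

        def append_crc(sector_data: bytes) -> bytes:
            # Store the data followed by its CRC-32 (generic polynomial for illustration).
            return sector_data + zlib.crc32(sector_data).to_bytes(4, "little")

        def crc_ok(sector_with_crc: bytes) -> bool:
            data, stored = sector_with_crc[:-4], sector_with_crc[-4:]
            return zlib.crc32(data).to_bytes(4, "little") == stored

        # A burst error that survives (or is miscorrected by) the Reed Solomon stages
        # shows up here as a mismatch between the recomputed and stored CRC.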

  10. Analysis of error-correction constraints in an optical disk

    NASA Astrophysics Data System (ADS)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  11. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent of the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  12. Automatic interface measurement and analysis. [shoreline length of Alabama using LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Faller, K. H.

    1975-01-01

    A technique for detecting and measuring the interface between two categories in classified scanner data is described together with two application demonstrations. Measurements were found to be accurate to 1.5% root mean square error on features of known length while comparison of measurements made using the technique on LANDSAT data to opisometer measurements on 1:24,000 scale maps shows excellent agreement. Application of the technique to two frames of LANDSAT data classified using a two channel, two class classifier resulted in a computation of 64 km annual decrease in shoreline length. The tidal shoreline of a portion of Alabama was measured using LANDSAT data. Based on the measurement of this portion, the total tidal shoreline length of Alabama is estimated to be 1313 kilometers.

  13. Is Coefficient Alpha Robust to Non-Normal Data?

    PubMed Central

    Sheng, Yanyan; Sheng, Zhaohui

    2011-01-01

    Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
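
    For reference, the sample coefficient alpha examined in these simulations is computed from an items-by-persons score matrix as k/(k-1) × (1 − sum of item variances / total-score variance); the snippet below is a generic implementation, not the authors' simulation code.

        import numpy as np

        def cronbach_alpha(scores):
            # scores: 2-D array with rows = persons and columns = items.
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var_sum = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var_sum / total_var)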

  14. Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.; Prive, Nikki C.; Gu, Wei

    2014-01-01

    The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
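
    In its basic form the NMC method builds the covariance from differences between pairs of forecasts valid at the same time (e.g., 48-h minus 24-h forecasts); the sketch below shows that estimate with the rescaling the article discusses left as an explicit, user-supplied factor. It is a schematic illustration, not the system used in the study.

        import numpy as np

        def nmc_covariance(f_long, f_short, scale=1.0):
            # f_long, f_short: arrays of shape (n_samples, n_state) holding forecasts of
            # different lead times valid at the same times; their differences serve as
            # proxies for background-error structure.
            d = f_long - f_short
            d = d - d.mean(axis=0)
            cov = d.T @ d / (d.shape[0] - 1)
            return scale * cov  # 'scale' stands in for the tuning the abstract says is needed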

  15. Improved accuracy in Wigner-Ville distribution-based sizing of rod-shaped particle using flip and replication technique

    NASA Astrophysics Data System (ADS)

    Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki

    2018-01-01

    A method for improving accuracy in Wigner-Ville distribution (WVD)-based particle size measurements from inline holograms using flip and replication technique (FRT) is proposed. The FRT extends the length of hologram signals being analyzed, yielding better spatial-frequency resolution of the WVD output. Experimental results verify reduction in measurement error as the length of the hologram signals increases. The proposed method is suitable for particle sizing from holograms recorded using small-sized image sensors.
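
    Based on the description above, the flip and replication technique lengthens the analyzed record by appending a mirrored copy of the signal before the WVD is computed; a minimal sketch of that extension (implementation details of the paper not reproduced) is:

        import numpy as np

        def flip_and_replicate(signal):
            # Extend a 1-D hologram line with its mirror image, doubling the record
            # length available to the Wigner-Ville distribution.
            s = np.asarray(signal)
            return np.concatenate([s, s[::-1]])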

  16. "Fragment errors" in deep dysgraphia: further support for a lexical hypothesis.

    PubMed

    Bormann, Tobias; Wallesch, Claus-W; Blanken, Gerhard

    2008-07-01

    In addition to various lexical errors, the writing of patients with deep dysgraphia may include a large number of segmental spelling errors, which increase towards the end of the word. Frequently, these errors involve deletion of two or more letters resulting in so-called "fragment errors". Different positions have been brought forward regarding their origin, including rapid decay of activation in the graphemic buffer and an impairment of more central (i.e., lexical or semantic) processing. We present data from a patient (M.D.) with deep dysgraphia who showed an increase of segmental spelling errors towards the end of the word. Several tasks were carried out to explore M.D.'s underlying functional impairment. Errors affected word-final positions in tasks like backward spelling and fragment completion. In a delayed copying task, length of the delay had no influence. In addition, when asked to recall three serially presented letters, a task which had not been carried out before, M.D. exhibited a preference for the first and the third letter and poor performance for the second letter. M.D.'s performance on these tasks contradicts the rapid decay account and instead supports a lexical-semantic account of segmental errors in deep dysgraphia. In addition, the results fit well with an implemented computational model of deep dysgraphia and segmental spelling errors.

  17. [Shushu (ancient Chinese numerology) in Lingshu: Gudu (Miraculous Pivot: Bone-Length Measurement)].

    PubMed

    Zhuo, Lian-Shi

    2010-10-01

    Lingshu: Gudu (Miraculous Pivot: Bone-Length Measurement) is compared with the literature concerning the Shushu (ancient Chinese numerology) of the Qin Dynasty (221 B.C.-206 B.C.) and the Han Dynasty (206 B.C.-220 A.D.) in this article. It is discovered that "the number of heaven and earth" in Yijing (The Book of Change) was implied in the bone-length measurement. The theory of Shushu is hidden in the sizes of the head, neck, chest, abdomen, back and four extremities according to the measurement. The meaning of establishing the bone-length measurement, which is found to have universality, lay in setting down the measurement of meridians, and it is the origin of the proportional measurement for locating acupoints. Checked against the theory of Shushu, errors in the description of bone-length measurement can also be found in Lingshu: Gudu (Miraculous Pivot: Bone-Length Measurement) of the present edition, which is helpful for the modern study of the measurement.

  18. Complex Problem Solving in a Workplace Setting.

    ERIC Educational Resources Information Center

    Middleton, Howard

    2002-01-01

    Studied complex problem solving in the hospitality industry through interviews with six office staff members and managers. Findings show it is possible to construct a taxonomy of problem types and that the most common approach can be termed "trial and error." (SLD)

  19. Improving Wind Predictions in the Marine Atmospheric Boundary Layer Through Parameter Estimation in a Single Column Model

    DOE PAGES

    Lee, Jared A.; Hacker, Joshua P.; Monache, Luca Delle; ...

    2016-08-03

    A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this paper we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts. Combining two datasets that provide lateral forcing for the SCM and two methods for determining z0, the time-varying sea-surface roughness length, we conduct four WRF-SCM/DART experiments over the October-December 2006 period. The two methods for determining z0 are the default Fairall-adjusted Charnock formulation in WRF, and using parameter estimation techniques to estimate z0 in DART. Using DART to estimate z0 is found to reduce 1-h forecast errors of wind speed over the Charnock-Fairall z0 ensembles by 4%–22%. However, parameter estimation of z0 does not simultaneously reduce turbulent flux forecast errors, indicating limitations of this approach and the need for new marine ABL parameterizations.

  20. Gender differences in promotion and scholarly impact: an analysis of 1460 academic ophthalmologists.

    PubMed

    Lopez, Santiago A; Svider, Peter F; Misra, Poonam; Bhagat, Neelakshi; Langer, Paul D; Eloy, Jean Anderson

    2014-01-01

    In recent years, gender differences in academic promotion have been documented within surgical fields. To the best of our knowledge, gender discrepancies in association with scholarly productivity have not been well assessed among academic ophthalmologists. Because research productivity is strongly associated with academic career advancement, we sought to determine whether gender differences in scholarly impact, measured by the h-index, exist among academic ophthalmologists. Academic rank and gender were determined using faculty listings from academic ophthalmology departments. h-index and publication experience (in years) of faculty members were determined using the Scopus database. Academic medical center. From assistant professor through professor, the h-index increased with subsequent academic rank (p < 0.001), although between chairpersons and professors no statistical difference was found (p > 0.05). Overall, men had higher h-indices (h = 10.4 ± 0.34 standard error of mean) than women (h = 6.0 ± 0.38 standard error of mean), a finding that was only statistically significant among assistant professors in a subgroup analysis. Women were generally underrepresented among senior positions. When controlling for publication range (i.e., length of time publishing), men had higher h-indices among those with 1 to 10 years of publication experience (p < 0.0001), whereas women had scholarly impact equivalent to and even exceeding that of men later in their careers. Women in academic ophthalmology continue to be underrepresented among senior faculty. Although women surpass men in scholarly productivity during the later stages of their careers, low scholarly impact during the earlier stages may impede academic advancement and partly explain the gender disparity in senior academic positions. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  1. Characteristics of Interruptions During Medication Administration:An Integrative Review of Direct Observational Studies.

    PubMed

    Schroers, Ginger

    2018-06-26

    The purpose of this review was to synthesize and summarize data gathered by direct observation of the characteristics of interruptions in the context of nursing medication administration in hospital settings. Interruptions are prevalent during the medication administration process performed by nurses in hospital settings and have been found to be associated with an increase in frequency and severity of nursing medication administration errors. In addition, interruptions decrease task efficiency, leading to longer medication administration completion times. Integrative review. The electronic databases Cumulative Index of Nursing and Allied Health Literature (CINAHL), PubMed, PsycARTICLES, and Google Scholar were searched using the terms "interruptions" AND "medication administration" AND "direct observation". Nine articles met the inclusion criteria. Interruptions are likely to occur at least once during nursing medication administration processes in hospital settings. This finding applies to medication administered to one patient, termed a medication pass, and medication administered to multiple patients, termed a medication round. Interruptions are most commonly caused by another nurse or staff member, or are self-initiated, and last approximately one minute in length. A raised awareness among staff of the most common sources of interruptions may encourage changes that lead to a decrease in the occurrence of interruptions. In addition, nurse leaders can apply an understanding of the common characteristics of interruptions to guide research, policies, and educational methods aimed at interruption management strategies. The findings from this review can be used to guide the identification and development of targeted interventions and strategies that would have the most substantial impact in reducing and managing interruptions during medication administration. Interruption management strategies have the potential to lead to a decrease in medication errors and an increase in task efficiency. This article is protected by copyright. All rights reserved.

  2. Avoiding the ensemble decorrelation problem using member-by-member post-processing

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2014-05-01

    Forecast calibration or post-processing has become a standard tool in atmospheric and climatological science due to the presence of systematic initial condition and model errors. For ensemble forecasts the most competitive methods derive from the assumption of a fixed ensemble distribution. However, when such 'statistical' methods are applied independently at different locations, at different lead times, or for multiple variables, the correlation structure across individual ensemble members is destroyed. Instead of re-establishing the correlation structure as in Schefzik et al. (2013), we propose a calibration method that avoids this problem by correcting each ensemble member individually. Moreover, we analyse the fundamental mechanisms by which the probabilistic ensemble skill can be enhanced. In terms of continuous ranked probability score, our member-by-member approach yields a skill gain that extends to lead times far beyond the error doubling time and that is as good as that of the most competitive statistical approach, non-homogeneous Gaussian regression (Gneiting et al. 2005). Besides the conservation of the correlation structure, additional benefits arise, including the fact that higher-order ensemble moments such as kurtosis and skewness are inherited from the uncorrected forecasts. Our detailed analysis is performed in the context of the Kuramoto-Sivashinsky equation and different simple models, but the results extend successfully to the ensemble forecast of the European Centre for Medium-Range Weather Forecasts (Van Schaeybroeck and Vannitsem, 2013, 2014). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Schefzik, R., T.L. Thorarinsdottir, and T. Gneiting, 2013: Uncertainty Quantification in Complex Simulation Models Using Ensemble Copula Coupling. To appear in Statistical Science 28. [3] Van Schaeybroeck, B., and S. Vannitsem, 2013: Reliable probabilities through statistical post-processing of ensemble forecasts. Proceedings of the European Conference on Complex Systems 2012, Springer proceedings on complexity, XVI, p. 347-352. [4] Van Schaeybroeck, B., and S. Vannitsem, 2014: Ensemble post-processing using member-by-member approaches: theoretical aspects, under review.
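
    Schematically, a member-by-member correction adjusts each member by shifting the ensemble mean and rescaling each member's deviation from it, which is why the spatial and inter-variable correlation structure of the members is preserved; the coefficients below and their fitting (e.g., by CRPS minimization) are placeholders, not the exact formulation of the paper.

        import numpy as np

        def mbm_correct(members, a, b, c):
            # members: array (n_members, n_points) of raw ensemble forecasts.
            # a, b, c: calibration coefficients fitted on past forecast-observation pairs.
            mean = members.mean(axis=0)
            return a + b * mean + c * (members - mean)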

  3. Improving qPCR telomere length assays: Controlling for well position effects increases statistical power.

    PubMed

    Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey

    2015-01-01

    Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, age, and between mother and offspring are examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single copy gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error as compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well position effects. Since additional information can be gleaned from well position corrections, rerunning analyses of previous results with well position correction could serve as an independent test of the validity of these results. © 2015 Wiley Periodicals, Inc.
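
    One simple way to control for well-position effects is to remove additive row and column effects estimated from the plate layout and work with the adjusted values; the sketch below is an illustrative two-way adjustment, not the authors' exact correction model.

        import pandas as pd

        def correct_well_position(df):
            # df columns: 'ts_ratio' (telomere/single-copy ratio), 'row', 'col' (well position).
            grand = df["ts_ratio"].mean()
            row_eff = df.groupby("row")["ts_ratio"].transform("mean") - grand
            col_eff = df.groupby("col")["ts_ratio"].transform("mean") - grand
            out = df.copy()
            out["ts_corrected"] = df["ts_ratio"] - row_eff - col_eff
            return out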

  4. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.

  5. An ensemble-ANFIS based uncertainty assessment model for forecasting multi-scalar standardized precipitation index

    NASA Astrophysics Data System (ADS)

    Ali, Mumtaz; Deo, Ravinesh C.; Downs, Nathan J.; Maraseni, Tek

    2018-07-01

    Forecasting drought by means of the World Meteorological Organization-approved Standardized Precipitation Index (SPI) is considered a fundamental task in supporting socio-economic initiatives and effectively mitigating climate risk. This study aims to develop a robust drought modelling strategy to forecast multi-scalar SPI in drought-rich regions of Pakistan, where statistically significant lagged combinations of antecedent SPI are used to forecast future SPI. With an ensemble Adaptive Neuro Fuzzy Inference System ('ensemble-ANFIS') executed via a 10-fold cross-validation procedure, a model is constructed from randomly partitioned input-target data. From the resulting 10-member ensemble-ANFIS outputs, judged by mean square error and correlation coefficient in the training period, the optimal forecasts are attained by averaging the simulations, and the model is benchmarked against the M5 Model Tree and Minimax Probability Machine Regression (MPMR). The results show the proposed ensemble-ANFIS model's precision was notably better (in terms of the root mean square and mean absolute error, including the Willmott's, Nash-Sutcliffe and Legates-McCabe's indices) for the 6- and 12-month forecasts compared to the 3-month forecasts, as verified by the largest proportion of errors registering in the smallest error band. Applying the 10-member simulations, the ensemble-ANFIS model was validated for its ability to forecast severity (S), duration (D) and intensity (I) of drought (including the error bound). This enabled uncertainty between the multiple models to be rationalized more efficiently, leading to a reduction in forecast error caused by stochasticity in drought behaviours. Through cross-validations at diverse sites, a geographic signature in the modelled uncertainties was also calculated. Considering the superiority of the ensemble-ANFIS approach and its ability to generate uncertainty-based information, the study advocates the versatility of a multi-model approach for drought-risk forecasting and its prime importance for estimating drought properties over confidence intervals to generate better information for strategic decision-making.

  6. The Relationship between Crystalline Lens Power and Refractive Error in Older Chinese Adults: The Shanghai Eye Study.

    PubMed

    He, Jiangnan; Lu, Lina; He, Xiangui; Xu, Xian; Du, Xuan; Zhang, Bo; Zhao, Huijuan; Sha, Jida; Zhu, Jianfeng; Zou, Haidong; Xu, Xun

    2017-01-01

    To report calculated crystalline lens power and describe the distribution of ocular biometry and its association with refractive error in older Chinese adults. Random clustering sampling was used to identify adults aged 50 years and above in the Xuhui and Baoshan districts of Shanghai. Refraction was determined by subjective refraction that achieved the best corrected vision based on monocular measurement. Ocular biometry was measured by IOL Master. The crystalline lens power of right eyes was calculated using the modified Bennett-Rabbetts formula. We analyzed 6099 normal phakic right eyes. The mean crystalline lens power was 20.34 ± 2.24 D (range: 13.40-36.08). Lens power, spherical equivalent, and anterior chamber depth changed linearly with age; however, axial length, corneal power and AL/CR ratio did not vary with age. The overall prevalence of hyperopia, myopia, and high myopia was 48.48% (95% CI: 47.23%-49.74%), 22.82% (95% CI: 21.77%-23.88%), and 4.57% (95% CI: 4.05%-5.10%), respectively. The prevalence of hyperopia increased linearly with age while lens power decreased with age. In multivariate models, refractive error was strongly correlated with axial length, lens power, corneal power, and anterior chamber depth; refractive error was slightly correlated with best corrected visual acuity, age and sex. Lens power, hyperopia, and spherical equivalent changed linearly with age; moreover, the continuous loss of lens power produced hyperopic shifts in refraction in subjects aged more than 50 years.

  7. Sexuality and the Elderly: A Group Counseling Model.

    ERIC Educational Resources Information Center

    Capuzzi, Dave; Gossman, Larry

    1982-01-01

    Describes a 10-session group counseling model to facilitate awareness of sexuality and the legitimacy of its expression for older adults. Considers member selection, session length and setting, and group leadership. (Author/MCF)

  8. Pump control system for windmills

    DOEpatents

    Avery, Don E.

    1983-01-01

    A windmill control system having lever means, for varying length of stroke of the pump piston, and a control means, responsive to the velocity of the wind to operate the lever means to vary the length of stroke and hence the effective displacement of the pump in accordance with available wind energy, with the control means having a sensing member separate from the windmill disposed in the wind and displaceable thereby in accordance with wind velocity.

  9. Long-term Patterns of Microhabitat Use by Fish in a Southern Appalachian Stream from 1983 to 1992: Effects of Hydrologic Period, Season and Fish Length

    Treesearch

    Gary D. Grossman; Robert E. Ratajczak

    1998-01-01

    We quantified microhabitat use by members of a southern Appalachian stream fish assemblage over a ten-year period that included both floods and droughts. Our study site (37 m in length) encompassed riffle, run and pool habitats. Previous research indicated that species belonged to either benthic or water-column microhabitat guilds. Most species exhibited non-random...

  10. Long Term Mean Local Time of the Ascending Node Prediction

    NASA Technical Reports Server (NTRS)

    McKinley, David P.

    2007-01-01

    Significant error has been observed in the long term prediction of the Mean Local Time of the Ascending Node on the Aqua spacecraft. This error of approximately 90 seconds over a two year prediction is a complication in planning and timing of maneuvers for all members of the Earth Observing System Afternoon Constellation, which use Aqua's MLTAN as the reference for their inclination maneuvers. It was determined that the source of the prediction error was the lack of a solid Earth tide model in the operational force models. The Love Model of the solid Earth tide potential was used to derive analytic corrections to the inclination and right ascension of the ascending node of Aqua's Sun-synchronous orbit. Additionally, it was determined that the resonance between the Sun and orbit plane of the Sun-synchronous orbit is the primary driver of this error. The analytic corrections have been added to the operational force models for the Aqua spacecraft reducing the two-year 90-second error to less than 7 seconds.

  11. Cryogenic support system

    DOEpatents

    Nicol, T.H.; Niemann, R.C.; Gonczy, J.D.

    1988-11-01

    A support system is disclosed for restraining large masses at very low or cryogenic temperatures. The support system employs a tie bar that is pivotally connected at opposite ends to an anchoring support member and a sliding support member. The tie bar extends substantially parallel to the longitudinal axis of the cold mass assembly, and comprises a rod that lengthens when cooled and a pair of end attachments that contract when cooled. The rod and end attachments are sized so that when the tie bar is cooled to cryogenic temperature, the net change in tie bar length is approximately zero. Longitudinal force directed against the cold mass assembly is distributed by the tie bar between the anchoring support member and the sliding support member. 7 figs.

  12. A new species of the genus Capoeta Valenciennes, 1842 from the Caspian Sea basin in Iran (Teleostei, Cyprinidae)

    PubMed Central

    Jouladeh-Roudbar, Arash; Eagderi, Soheil; Ghanavi, Hamid Reza; Doadrio, Ignacio

    2017-01-01

    Abstract A new species of algae-scraping cyprinid of the genus Capoeta Valenciennes, 1842 is described from the Kheyroud River, located in the southern part of the Caspian Sea basin in Iran. The species differs from other members of this genus by a combination of the following characters: one pair of barbels; predorsal length equal to postdorsal length; maxillary barbel slightly smaller than eye’s horizontal diameter and reach to posterior margin of orbit; intranasal length slightly shorter than snout length; lateral line with 46–54 scales; 7–9 scales between dorsal-fin origin and lateral line, and 6–7 scales between anal-fin origin and lateral line. PMID:28769726

  13. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.

  14. DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases

    PubMed Central

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2015-01-01

    Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827

  15. Uncertainties in the cluster-cluster correlation function

    NASA Astrophysics Data System (ADS)

    Ling, E. N.; Frenk, C. S.; Barrow, J. D.

    1986-12-01

    The bootstrap resampling technique is applied to estimate sampling errors and significance levels of the two-point correlation functions determined for a subset of the CfA redshift survey of galaxies and a redshift sample of 104 Abell clusters. The angular correlation function for a sample of 1664 Abell clusters is also calculated. The standard errors in xi(r) for the Abell data are found to be considerably larger than quoted 'Poisson errors'. The best estimate for the ratio of the correlation length of Abell clusters (richness class R greater than or equal to 1, distance class D less than or equal to 4) to that of CfA galaxies is 4.2 (+1.4, -1.0) (68th percentile error). The enhancement of cluster clustering over galaxy clustering is statistically significant in the presence of resampling errors. The uncertainties found do not include the effects of possible systematic biases in the galaxy and cluster catalogs and could be regarded as lower bounds on the true uncertainty range.
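
    The bootstrap error estimate used here amounts to resampling the objects with replacement and recomputing the statistic many times; a generic sketch (not the authors' pipeline) is:

        import numpy as np

        def bootstrap_error(sample, statistic, n_boot=1000, seed=0):
            # sample: array of objects (e.g., cluster redshifts or positions);
            # statistic: callable returning the quantity of interest (e.g., correlation length).
            sample = np.asarray(sample)
            rng = np.random.default_rng(seed)
            n = len(sample)
            values = [statistic(sample[rng.integers(0, n, n)]) for _ in range(n_boot)]
            return float(np.std(values, ddof=1))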

  16. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
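
    The mismatches quoted above come from noise-weighted inner products between waveforms; a schematic frequency-domain version, with a one-sided noise power spectral density and without the time- and phase-maximization a full analysis would include, is:

        import numpy as np

        def inner_product(h1, h2, psd, df):
            # Noise-weighted inner product <h1, h2> = 4 Re sum(h1 * conj(h2) / Sn) df.
            return 4.0 * np.real(np.sum(h1 * np.conj(h2) / psd)) * df

        def mismatch(h1, h2, psd, df):
            overlap = inner_product(h1, h2, psd, df) / np.sqrt(
                inner_product(h1, h1, psd, df) * inner_product(h2, h2, psd, df))
            return 1.0 - overlap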

  17. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  18. Is radiographic measurement of bony landmarks reliable for lateral meniscal sizing?

    PubMed

    Yoon, Jung-Ro; Kim, Taik-Seon; Lim, Hong-Chul; Lim, Hyung-Tae; Yang, Jae-Hyuk

    2011-03-01

    The accuracy of meniscal measurement methods is still under debate. The authors' protocol for radiologic measurements will provide reproducible bony landmarks, and this measurement method of the lateral tibial plateau will correlate with the actual anatomic value. Controlled laboratory study. Twenty-five samples of fresh lateral meniscus with attached proximal tibia were obtained during total knee arthroplasty. Each sample was obtained without damage to the meniscus and bony attachment sites. The inclusion criterion was mild to moderate osteoarthritis in patients with mechanical axis deviation of less than 15°. Knees with lateral compartment osteoarthritic change or injured or degenerated menisci were excluded. For the lateral tibial plateau length measurements, the radiographic beam was angled 10° caudally at neutral rotation, which allowed differentiation of the lateral plateau cortical margins from the medial plateau. The transition points were identified and used for length measurement. The values of length were then compared with the conventional Pollard method and the anatomic values. The width measurement was done according to Pollard's protocol. For each knee, the percentage deviation from the anatomic dimension was recorded. Intraobserver error and interobserver error were calculated. The deviation of the authors' radiographic length measurements from anatomic dimensions was 1.4 ± 1.1 mm. The deviation of Pollard's radiographic length measurements was 4.1 ± 2.0 mm. With respect to accuracy-which represents the frequency of measurements that fall within 10% of the anatomic dimension-the accuracy of the authors' length measurement was 98%, whereas for Pollard's method it was 40%. There was a good correlation between anatomic meniscal dimensions and each radiologic plateau dimension for lateral meniscal width (R(2) = .790) and the authors' lateral meniscal length (R(2) = .823) and fair correlation for Pollard's lateral meniscal length (R(2) = .660). Each radiologic measurement showed good reliability (intraclass correlation coefficients, .823 to .973). The authors tried to determine the best-fit equation for predicting meniscal size from Pollard's method of bone size, as follows: anatomic length = 0.52 × plateau length (according to Pollard's method) + 5.2, not as Pollard suggested (0.7 × Pollard's plateau length). Based on this equation-namely, the modified Pollard method-the percentage difference decreased, and the accuracy increased to 92%. Lateral meniscal length dimension can be accurately predicted from the authors' radiographic tibial plateau measurements. This study may provide valuable information for preoperative sizing of the lateral meniscus in meniscal allograft transplantation.
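
    For convenience, the modified Pollard relation reported above can be written directly as a function; units are as in the abstract (millimetres assumed), and the relation applies only within the population studied.

        def anatomic_lateral_length(pollard_plateau_length_mm):
            # Best-fit relation from the abstract: anatomic length = 0.52 x Pollard plateau length + 5.2.
            return 0.52 * pollard_plateau_length_mm + 5.2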

  19. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.

  20. Integrating team resource management program into staff training improves staff's perception and patient safety in organ procurement and transplantation: the experience in a university-affiliated medical center in Taiwan.

    PubMed

    Hsu, Ya-Chi; Jerng, Jih-Shuin; Chang, Ching-Wen; Chen, Li-Chin; Hsieh, Ming-Yuan; Huang, Szu-Fen; Liu, Yueh-Ping; Hung, Kuan-Yu

    2014-08-11

    The process involved in organ procurement and transplantation is very complex and requires multidisciplinary coordination and teamwork. To prevent errors during these processes, teamwork education and training might play an important role. We wished to evaluate the efficacy of implementing a Team Resource Management (TRM) program on patient safety and on the behaviors of the team members involved in the process. We implemented a TRM training program for the organ procurement and transplantation team members of the National Taiwan University Hospital (NTUH), a teaching medical center in Taiwan. This 15-month intervention included TRM education and training courses for the healthcare workers, focused group skill training for the procurement and transplantation team members, video demonstration and training, and case reviews with feedback. Teamwork culture was evaluated, and all procurement and transplantation cases were reviewed to evaluate the application of TRM skills during the actual processes. During the intervention period, a total of 34 staff members participated in the program, and 67 transplantations were performed. The teamwork framework concept was the dimension showing the most prominent improvement among the training participants. The team members showed a variety of teamwork behaviors during the process of procurement and transplantation during the intervention period. Of note, there were two potential donors with a positive HIV result, for which the procurement process was terminated by the team in a timely and successful manner. None of the recipients received an infected organ. No error in communication or patient identification was noted during review of the case records. Implementation of a Team Resource Management program improves the teamwork culture as well as patient safety in organ procurement and transplantation.

  1. Integrating team resource management program into staff training improves staff’s perception and patient safety in organ procurement and transplantation: the experience in a university-affiliated medical center in Taiwan

    PubMed Central

    2014-01-01

    Background The process involved in organ procurement and transplantation is very complex and requires multidisciplinary coordination and teamwork. To prevent errors during these processes, teamwork education and training might play an important role. We wished to evaluate the efficacy of implementing a Team Resource Management (TRM) program on patient safety and on the behaviors of the team members involved in the process. Methods We implemented a TRM training program for the organ procurement and transplantation team members of the National Taiwan University Hospital (NTUH), a teaching medical center in Taiwan. This 15-month intervention included TRM education and training courses for the healthcare workers, focused group skill training for the procurement and transplantation team members, video demonstration and training, and case reviews with feedback. Teamwork culture was evaluated, and all procurement and transplantation cases were reviewed to evaluate the application of TRM skills during the actual processes. Results During the intervention period, a total of 34 staff members participated in the program, and 67 transplantations were performed. The teamwork framework concept was the dimension showing the most prominent improvement among the training participants. The team members showed a variety of teamwork behaviors during the process of procurement and transplantation during the intervention period. Of note, there were two potential donors with a positive HIV result, for which the procurement process was terminated by the team in a timely and successful manner. None of the recipients received an infected organ. No error in communication or patient identification was noted during review of the case records. Conclusion Implementation of a Team Resource Management program improves the teamwork culture as well as patient safety in organ procurement and transplantation. PMID:25115403

  2. 'Nudging' your patients toward improved oral health.

    PubMed

    Scarbecz, Mark

    2012-08-01

    Behavioral economics combines research from the fields of psychology, neurology and economics to explain how people make choices in complex social and economic environments. The principles of behavioral economics increasingly are being applied in health care. The author describes how dental team members can use behavioral economics principles to improve patients' oral health. Dental patients must make complex choices about care, and dental team members must provide information to patients to help them make those choices. Patients are subject to predictable biases and are prone to making errors. Dental team members can use this information to "nudge" patients in healthy directions by providing an appropriate mix of incentives, default options and feedback. Practice implications: The suggestions the author presents may help dental team members choose strategies that maximize both patient welfare and the success of their practices, while preserving patient autonomy.

  3. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.

  4. Lexical and phonological variability in preschool children with speech sound disorder.

    PubMed

    Macrae, Toby; Tyler, Ann A; Lewis, Kerry E

    2014-02-01

    The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.

  5. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
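    The central quantity here is the RB decay parameter extracted from survival probabilities versus circuit length. A generic sketch of the standard single-exponential fit is given below, assuming the textbook model P(m) = A·p^m + B and r = (1 − p)(d − 1)/d; the survival probabilities are made up for illustration, and this is not the extended theory developed in the record.

```python
# Sketch of a standard randomized-benchmarking fit: survival probability
# P(m) = A * p**m + B versus circuit length m, with r = (1 - p) * (d - 1) / d
# for Hilbert-space dimension d (d = 2 for a single qubit). Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    return A * p**m + B

m = np.array([2, 4, 8, 16, 32, 64, 128])                 # sequence lengths
P = np.array([0.99, 0.985, 0.975, 0.955, 0.92, 0.86, 0.76])  # hypothetical averages

(A, B, p), _ = curve_fit(rb_decay, m, P, p0=[0.5, 0.5, 0.99])
d = 2
r = (1 - p) * (d - 1) / d
print(f"decay parameter p = {p:.4f}, RB error rate r = {r:.2e}")
```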

  7. Precision determination of the πN scattering lengths and the charged πNN coupling constant

    NASA Astrophysics Data System (ADS)

    Ericson, T. E. O.; Loiseau, B.; Thomas, A. W.

    2000-01-01

    We critically evaluate the isovector GMO sum rule for the charged πNN coupling constant using recent precision data from π⁻p and π⁻d atoms and with careful attention to systematic errors. From the π⁻d scattering length we deduce the pion-nucleon scattering lengths (1/2)(a_π⁻p + a_π⁻n) = (−20 ± 6 (statistical) ± 10 (systematic)) × 10⁻⁴ m_πc⁻¹ and (1/2)(a_π⁻p − a_π⁻n) = (903 ± 14) × 10⁻⁴ m_πc⁻¹. From this, a direct evaluation gives g²_c(GMO)/4π = 14.20 ± 0.07 (statistical) ± 0.13 (systematic), or f²_c/4π = 0.0786 ± 0.0008.

  8. Content Validity of a Tool Measuring Medication Errors.

    PubMed

    Tabassum, Nishat; Allana, Saleema; Saeed, Tanveer; Dias, Jacqueline Maria

    2015-08-01

    The objective of this study was to determine the content and face validity of a tool measuring medication errors among nursing students in baccalaureate nursing education. Data were collected from the Aga Khan University School of Nursing and Midwifery (AKUSoNaM), Karachi, from March to August 2014. The tool was developed utilizing the literature and the expertise of team members who were experts in different areas. The developed tool was then sent to five experts from all over Karachi to ensure the content validity of the tool, which was assessed in terms of the relevance and clarity of the questions. The Scale Content Validity Index (S-CVI) for clarity and relevance of the questions was found to be 0.94 and 0.98, respectively. The tool measuring medication errors has excellent content validity. This tool should be used for future studies on medication errors, with different study populations such as medical students, doctors, and nurses.

  9. Peripheral refraction and retinal contour in stable and progressive myopia.

    PubMed

    Faria-Ribeiro, Miguel; Queirós, António; Lopes-Ferreira, Daniela; Jorge, Jorge; González-Méijome, José Manuel

    2013-01-01

    To compare the patterns of relative peripheral astigmatic refraction (tangential and sagittal power errors) and eccentric eye length between progressing and stable young-adult myopes. Sixty-two right eyes of 62 white patients participated in the study, of whom 30 had been nonprogressing myopes (NP group) for the last 2 years and 32 were progressing myopes (P group). Groups were matched for mean spherical refraction, axial length, and age. Peripheral refraction and eye length were measured along the horizontal meridian up to 35 and 30 degrees of eccentricity, respectively. There were statistically significant differences between groups (p < 0.001) in the nasal retina for the astigmatic components of peripheral refraction. The P group presented a hyperopic relative sagittal focus at 35 degrees in the nasal retina of +1.00 ± 0.83 diopters, compared with a myopic relative sagittal focus of -0.10 ± 0.98 diopters observed in the NP group (p < 0.001). Retinal contour in the P group had a steeper shape in the nasal region than that in the NP group (t test, p = 0.001). An inverse correlation was found (r = -0.775; p < 0.001) between retinal contour and peripheral refraction. Thus, steeper retinas presented a more hyperopic trend in the periphery. Stable and progressing myopes of matched age, axial length, and central refraction showed significantly different characteristics in their peripheral retinal shape and astigmatic components of tangential and sagittal power errors. The present findings may help explain the mechanisms that regulate ocular growth in humans.

  10. Assessing the performance of the Oxford Nanopore Technologies MinION

    PubMed Central

    Laver, T.; Harrison, J.; O’Neill, P.A.; Moore, K.; Farbos, A.; Paszkiewicz, K.; Studholme, D.J.

    2015-01-01

    The Oxford Nanopore Technologies (ONT) MinION is a new sequencing technology that potentially offers read lengths of tens of kilobases (kb) limited only by the length of DNA molecules presented to it. The device has a low capital cost, is by far the most portable DNA sequencer available, and can produce data in real-time. It has numerous prospective applications including improving genome sequence assemblies and resolution of repeat-rich regions. Before such a technology is widely adopted, it is important to assess its performance and limitations in respect of throughput and accuracy. In this study we assessed the performance of the MinION by re-sequencing three bacterial genomes, with very different nucleotide compositions ranging from 28.6% to 70.7%; the high G + C strain was underrepresented in the sequencing reads. We estimate the error rate of the MinION (after base calling) to be 38.2%. Mean and median read lengths were 2 kb and 1 kb respectively, while the longest single read was 98 kb. The whole length of a 5 kb rRNA operon was covered by a single read. As the first nanopore-based single molecule sequencer available to researchers, the MinION is an exciting prospect; however, the current error rate limits its ability to compete with existing sequencing technologies, though we do show that MinION sequence reads can enhance contiguity of de novo assembly when used in conjunction with Illumina MiSeq data. PMID:26753127

  11. Organizational role stress among medical school faculty members in Iran: dealing with role conflict

    PubMed Central

    Ahmady, Soleiman; Changiz, Tahereh; Masiello, Italo; Brommels, Mats

    2007-01-01

    Background Little research has been conducted to investigate role stress experienced by faculty members in medical schools in developing countries. This becomes even more important when the process of reform in medical education has already taken place, as in the case of Iran. The objectives of this study were to investigate and assess the level and sources of role-related stress as well as dimensions of conflict among the faculty members of Iranian medical schools. Variables such as the length of academic work, academic rank, employment position, and department of affiliation were also taken into consideration in order to determine potentially related factors. Methods A survey was conducted at three public medical schools of different ranks. The validated Organizational Role Stress Scale was used to investigate the level of role stress and dimensions of role conflict among medical faculty members. The response rate was 66.5%. Results The findings show that role stress was experienced at a high level by almost all faculty members. All three medical schools studied, despite their different ranks, faced relatively similar levels of role stress. Specific differences were found among faculty members from different disciplines and academic ranks. Holding a permanent position and length of service also correlated significantly with the level of role stress. The major role-related stresses and forms of conflict among faculty members were role overload, role expectation conflict, inter-role distance, resource inadequacy, role stagnation, and role isolation. Conclusion The most prominent role-related stressors and forms of conflict among faculty members include too many tasks and everyday workload; conflicting demands from colleagues and superiors; incompatible demands from their different personal and organizational roles; inadequate resources for appropriate performance; insufficient competency to meet the demands of their role; inadequate autonomy to make decisions on different tasks; and a feeling of underutilization. The findings of this study can assist administrators and policy makers in providing an attractive working climate in order to decrease the side effects and consequences of role stress and to increase the productivity of faculty members. Furthermore, understanding this situation can help to develop coping strategies in order to reduce role-related stress. PMID:17535421

  12. Organizational role stress among medical school faculty members in Iran: dealing with role conflict.

    PubMed

    Ahmady, Soleiman; Changiz, Tahereh; Masiello, Italo; Brommels, Mats

    2007-05-29

    Little research has been conducted to investigate role stress experienced by faculty members in medical schools in developing countries. This becomes even more important when the process of reform in medical education has already taken place, as in the case of Iran. The objectives of this study were to investigate and assess the level and sources of role-related stress as well as dimensions of conflict among the faculty members of Iranian medical schools. Variables such as the length of academic work, academic rank, employment position, and department of affiliation were also taken into consideration in order to determine potentially related factors. A survey was conducted at three public medical schools of different ranks. The validated Organizational Role Stress Scale was used to investigate the level of role stress and dimensions of role conflict among medical faculty members. The response rate was 66.5%. The findings show that role stress was experienced at a high level by almost all faculty members. All three medical schools studied, despite their different ranks, faced relatively similar levels of role stress. Specific differences were found among faculty members from different disciplines and academic ranks. Holding a permanent position and length of service also correlated significantly with the level of role stress. The major role-related stresses and forms of conflict among faculty members were role overload, role expectation conflict, inter-role distance, resource inadequacy, role stagnation, and role isolation. The most prominent role-related stressors and forms of conflict among faculty members include too many tasks and everyday workload; conflicting demands from colleagues and superiors; incompatible demands from their different personal and organizational roles; inadequate resources for appropriate performance; insufficient competency to meet the demands of their role; inadequate autonomy to make decisions on different tasks; and a feeling of underutilization. The findings of this study can assist administrators and policy makers in providing an attractive working climate in order to decrease the side effects and consequences of role stress and to increase the productivity of faculty members. Furthermore, understanding this situation can help to develop coping strategies in order to reduce role-related stress.

  13. Current Status of the Development of a Transportable and Compact VLBI System by NICT and GSI

    NASA Technical Reports Server (NTRS)

    Ishii, Atsutoshi; Ichikawa, Ryuichi; Takiguchi, Hiroshi; Takefuji, Kazuhiro; Ujihara, Hideki; Koyama, Yasuhiro; Kondo, Tetsuro; Kurihara, Shinobu; Miura, Yuji; Matsuzaka, Shigeru; hide

    2010-01-01

    MARBLE (Multiple Antenna Radio-interferometer for Baseline Length Evaluation) is under development by NICT and GSI. The main part of MARBLE is a transportable VLBI system with a compact antenna. The aim of this system is to provide precise baseline lengths of about 10 km for calibration baselines. The calibration baselines are used to check and validate surveying instruments such as GPS receivers and EDMs (Electro-optical Distance Meters). It is necessary to examine the calibration baselines regularly to maintain the quality of the validation, and the VLBI technique can examine and evaluate them. On the other hand, the following roles are expected of a compact VLBI antenna in the VLBI2010 project. It is well known that, in order to achieve the challenging measurement precision of VLBI2010, it is necessary to deal with the problem of thermal and gravitational deformation of the antenna. One promising approach may be connected-element interferometry between a compact antenna and a VLBI2010 antenna. By repeatedly measuring the baseline between the small stable antenna and the VLBI2010 antenna, the deformation of the primary antenna can be measured, and thermal and gravitational models of the primary antenna can then be constructed. We built two prototypes of a transportable and compact VLBI system from 2007 to 2009, performed VLBI experiments using these prototypes, and obtained the baseline length between them. The formal error of the measured baseline length was 2.7 mm. We expect that the baseline length error will be reduced by using a high-speed A/D sampler.

  14. Trunk-acceleration based assessment of gait parameters in older persons: a comparison of reliability and validity of four inverted pendulum based estimations.

    PubMed

    Zijlstra, Agnes; Zijlstra, Wiebren

    2013-09-01

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, the ease of a generic adjustment can be favored when merely evaluating intra-individual differences. Further studies should determine the validity of these IP based estimations for assessing gait in daily life. Copyright © 2013 Elsevier B.V. All rights reserved.
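    A minimal sketch of the simple inverted-pendulum step-length estimator referred to above, assuming the standard relation step length ≈ 2·√(2·l·h − h²) between leg length l and vertical center-of-mass excursion h, scaled by a correction factor (the generic value 1.25 is quoted in the abstract); the input values are hypothetical.

```python
# Simple inverted-pendulum step-length estimate: 2 * sqrt(2*l*h - h**2) scaled
# by a correction factor. l = (pendulum) leg length, h = vertical center-of-mass
# excursion during one step. Values are hypothetical.
from math import sqrt

def ip_step_length(leg_length_m: float, com_excursion_m: float,
                   correction: float = 1.25) -> float:
    return correction * 2.0 * sqrt(2.0 * leg_length_m * com_excursion_m
                                   - com_excursion_m**2)

print(f"estimated step length: {ip_step_length(0.92, 0.025):.2f} m")
```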

  15. Evaluation of 4 Commercial Viewing Devices for Radiographic Perceptibility and Working Length Determination.

    PubMed

    Lally, Trent; Geist, James R; Yu, Qingzhao; Himel, Van T; Sabey, Kent

    2015-07-01

    This study compared images displayed on 1 desktop monitor, 1 laptop monitor, and 2 tablets for the detection of contrast and working length interpretation, with a null hypothesis of no differences between the devices. Three aluminum blocks, with milled circles of varying depth, were radiographed at various exposure levels to create 45 images of varying radiographic density. Six observers viewed the images on 4 devices: Lenovo M92z desktop (Lenovo, Beijing, China), Lenovo Z580 laptop (Lenovo), iPad 3 (Apple, Cupertino, CA), and iPad mini (Apple). Observers recorded the number of circles detected for each image, and a perceptibility curve was used to compare the devices. Additionally, 42 extracted teeth were imaged with working length files affixed at various levels (short, flush, and long) relative to the anatomic apex. Observers measured the distance from file tip to tooth apex on each device. The absolute mean measurement error was calculated for each image. Analysis of variance tests compared the devices. Observers repeated their sessions 1 month later to evaluate intraobserver reliability as measured with weighted kappa tests. Interclass correlation coefficients compared interobserver reliability. There was no significant difference in perceptibility detection between the Lenovo M92z desktop, iPad 3, and iPad mini. However, on average, all 3 were significantly better than the Lenovo Z580 laptop (P values ≤.015). No significant difference in the mean absolute error was noted for working length measurements among the 4 viewing devices (P = .3509). Although all 4 viewing devices seemed comparable with regard to working length evaluation, the laptop computer screen had lower overall ability to perceive contrast differences. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  16. Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elgered, G.; Davis, J.L.; Herring, T.A.

    1991-04-10

    An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variation of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for the baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared with the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20°; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35°.
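    The WRMS scatter quoted above is a weighted root-mean-square of repeated baseline-length estimates about their weighted mean. A minimal sketch of that computation is given below; the lengths and uncertainties are invented for illustration.

```python
# Weighted root-mean-square (WRMS) scatter of repeated baseline-length estimates
# about their weighted mean, with weights 1/sigma**2. Values are hypothetical,
# in millimetres (a 919 km baseline measured five times).
import numpy as np

lengths_mm = np.array([919000000.0 + d for d in (3.1, -2.4, 0.8, 4.0, -1.5)])
sigmas_mm  = np.array([2.0, 1.5, 1.8, 2.5, 1.6])

w = 1.0 / sigmas_mm**2
mean = np.sum(w * lengths_mm) / np.sum(w)
wrms = np.sqrt(np.sum(w * (lengths_mm - mean)**2) / np.sum(w))
print(f"WRMS scatter: {wrms:.2f} mm")
```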

  17. Error identification, disclosure, and reporting: practice patterns of three emergency medicine provider types.

    PubMed

    Hobgood, Cherri; Xie, Jipan; Weiner, Bryan; Hooker, James

    2004-02-01

    To gather preliminary data on how the three major types of emergency medicine (EM) providers, physicians, nurses (RNs), and out-of-hospital personnel (EMTs), differ in error identification, disclosure, and reporting. A convenience sample of emergency department (ED) providers completed a brief survey designed to evaluate error frequency, disclosure, and reporting practices as well as error-based discussion and educational activities. One hundred sixteen subjects participated: 41 EMTs (35%), 33 RNs (28%), and 42 physicians (36%). Forty-five percent of EMTs, 56% of RNs, and 21% of physicians identified no clinical errors during the preceding year. When errors were identified, physicians learned of them via dialogue with RNs (58%), patients (13%), pharmacy (35%), and attending physicians (35%). For known errors, all providers were equally unlikely to inform the team caring for the patient. Disclosure to patients was limited and varied by provider type (19% EMTs, 23% RNs, and 74% physicians). Disclosure education was rare, with

  18. Virtual reality robotic surgery warm-up improves task performance in a dry laboratory environment: a prospective randomized controlled study.

    PubMed

    Lendvay, Thomas S; Brand, Timothy C; White, Lee; Kowalewski, Timothy; Jonnadula, Saikiran; Mercer, Laina D; Khorsand, Derek; Andros, Justin; Hannaford, Blake; Satava, Richard M

    2013-06-01

    Preoperative simulation warm-up has been shown to improve performance and reduce errors in novice and experienced surgeons, yet existing studies have only investigated conventional laparoscopy. We hypothesized that a brief virtual reality (VR) robotic warm-up would enhance robotic task performance and reduce errors. In a 2-center randomized trial, 51 residents and experienced minimally invasive surgery faculty in General Surgery, Urology, and Gynecology underwent a validated robotic surgery proficiency curriculum on a VR robotic simulator and on the da Vinci surgical robot (Intuitive Surgical Inc). Once they successfully achieved performance benchmarks, surgeons were randomized to either receive a 3- to 5-minute VR simulator warm-up or read a leisure book for 10 minutes before performing similar and dissimilar (intracorporeal suturing) robotic surgery tasks. The primary outcomes compared were task time, tool path length, economy of motion, technical, and cognitive errors. Task time (-29.29 seconds, p = 0.001; 95% CI, -47.03 to -11.56), path length (-79.87 mm; p = 0.014; 95% CI, -144.48 to -15.25), and cognitive errors were reduced in the warm-up group compared with the control group for similar tasks. Global technical errors in intracorporeal suturing (0.32; p = 0.020; 95% CI, 0.06-0.59) were reduced after the dissimilar VR task. When surgeons were stratified by earlier robotic and laparoscopic clinical experience, the more experienced surgeons (n = 17) demonstrated significant improvements from warm-up in task time (-53.5 seconds; p = 0.001; 95% CI, -83.9 to -23.0) and economy of motion (0.63 mm/s; p = 0.007; 95% CI, 0.18-1.09), whereas improvement in these metrics did not reach statistical significance in the less-experienced cohort (n = 34). We observed significant performance improvement and error reduction rates among surgeons of varying experience after VR warm-up for basic robotic surgery tasks. In addition, the VR warm-up reduced errors on a more complex task (robotic suturing), suggesting the generalizability of the warm-up. Copyright © 2013 American College of Surgeons. All rights reserved.

  19. [Comparative volumetry of human testes using special types of testicular sonography, Prader's orchidometer, Schirren's circle and sliding caliber].

    PubMed

    Dörnberger, V; Dörnberger, G

    1987-01-01

    Comparative volumetry was performed on 99 testes from cadavers (age at death between 26 and 86 years). With the surrounding capsules left in place (without scrotal skin and tunica dartos), the testes were measured by real-time sonography in a water bath (7.5 MHz linear scan); afterwards, length, breadth, and height were measured with a sliding caliper, the largest diameter (the length) of the testis was determined with Schirren's circle, and finally the size of the testis was measured with Prader's orchidometer. Finally, the testes were surgically exposed and their volume was determined by fluid displacement according to Archimedes' principle. Whereas a random mean error of 7% must be accepted for Archimedes' principle, sonographic determination of the volume showed a random mean error of 15%. Although the accuracy of measurement increases with increasing volume, both methods should be used with caution for volumes below 4 ml, since the possibility of error is rather great. With Prader's orchidometer, the measured volumes were higher on average (+27%), with a random mean error of 19.5%. With Schirren's circle, the obtained mean value was even higher (+52%) in comparison with the "real" volume by Archimedes' principle, with a random mean error of 19%. Measurements of the encapsulated testes by sliding caliper can be optimized by applying a correction factor f(sliding caliper) = 0.39 when calculating the testis volume as an ellipsoid. This yields the same mean value as Archimedes' principle, with a standard mean error of only 9%. If the correction factor for real-time sonography of the testis, f(sono) = 0.65, is applied instead, the mean value of the sliding-caliper measurements is 68.8% too high, with a standard mean error of 20.3%. For sliding-caliper measurements, the testis volume should therefore be calculated as an ellipsoid using the smaller factor f(sliding caliper) = 0.39, because this accounts for the retained testicular capsules and the epididymis.
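    A minimal sketch of the ellipsoid-based volume estimate discussed above, V = f × length × breadth × height, with the correction factor f = 0.39 for sliding-caliper measurements of the encapsulated testis or f = 0.65 for real-time sonography; the dimensions used are hypothetical.

```python
# Ellipsoid-based testis volume estimate: V = f * length * breadth * height,
# with f depending on the measurement method (0.39 for sliding caliper on the
# encapsulated testis, 0.65 for real-time sonography). Dimensions in cm are
# hypothetical.
def testis_volume_ml(length_cm: float, breadth_cm: float, height_cm: float,
                     f: float = 0.39) -> float:
    return f * length_cm * breadth_cm * height_cm

print(testis_volume_ml(4.5, 3.0, 2.5, f=0.39))  # caliper-based factor
print(testis_volume_ml(4.5, 3.0, 2.5, f=0.65))  # sonography-based factor
```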

  20. Virtual Reality Robotic Surgery Warm-Up Improves Task Performance in a Dry Lab Environment: A Prospective Randomized Controlled Study

    PubMed Central

    Lendvay, Thomas S.; Brand, Timothy C.; White, Lee; Kowalewski, Timothy; Jonnadula, Saikiran; Mercer, Laina; Khorsand, Derek; Andros, Justin; Hannaford, Blake; Satava, Richard M.

    2014-01-01

    Background Pre-operative simulation “warm-up” has been shown to improve performance and reduce errors in novice and experienced surgeons, yet existing studies have only investigated conventional laparoscopy. We hypothesized that a brief virtual reality (VR) robotic warm-up would enhance robotic task performance and reduce errors. Study Design In a two-center randomized trial, fifty-one residents and experienced minimally invasive surgery faculty in General Surgery, Urology, and Gynecology underwent a validated robotic surgery proficiency curriculum on a VR robotic simulator and on the da Vinci surgical robot. Once successfully achieving performance benchmarks, surgeons were randomized to either receive a 3-5 minute VR simulator warm-up or read a leisure book for 10 minutes prior to performing similar and dissimilar (intracorporeal suturing) robotic surgery tasks. The primary outcomes compared were task time, tool path length, economy of motion, technical and cognitive errors. Results Task time (-29.29sec, p=0.001, 95%CI -47.03,-11.56), path length (-79.87mm, p=0.014, 95%CI -144.48,-15.25), and cognitive errors were reduced in the warm-up group compared to the control group for similar tasks. Global technical errors in intracorporeal suturing (0.32, p=0.020, 95%CI 0.06,0.59) were reduced after the dissimilar VR task. When surgeons were stratified by prior robotic and laparoscopic clinical experience, the more experienced surgeons (n=17) demonstrated significant improvements from warm-up in task time (-53.5sec, p=0.001, 95%CI -83.9,-23.0) and economy of motion (0.63mm/sec, p=0.007, 95%CI 0.18,1.09), whereas improvement in these metrics did not reach statistical significance in the less experienced cohort (n=34). Conclusions We observed a significant performance improvement and error reduction rate among surgeons of varying experience after VR warm-up for basic robotic surgery tasks. In addition, the VR warm-up reduced errors on a more complex task (robotic suturing) suggesting the generalizability of the warm-up. PMID:23583618

  1. QUANTIFYING AN UNCERTAIN FUTURE: HYDROLOGIC MODEL PERFORMANCE FOR A SERIES OF REALIZED "FUTURE" CONDITIONS

    EPA Science Inventory

    A systematic analysis of model performance during simulations based on observed landcover/use change is used to quantify errors associated with simulations of known "future" conditions. Calibrated and uncalibrated assessments of relative change over different lengths of...

  2. Methodological uncertainties in multi-regression analyses of middle-atmospheric data series.

    PubMed

    Kerzenmacher, Tobias E; Keckhut, Philippe; Hauchecorne, Alain; Chanin, Marie-Lise

    2006-07-01

    Multi-regression analyses have often been used recently to detect trends, in particular in ozone or temperature data sets in the stratosphere. The confidence in detecting trends depends on a number of factors which generate uncertainties. Part of these uncertainties comes from random variability, and this is what is usually considered; it can be statistically estimated from residual deviations between the data and the fitting model. However, interference between different sources of variability affecting the data set, such as the Quasi-Biennial Oscillation (QBO), volcanic aerosols, solar flux variability and the trend, can also be a critical source of errors. This type of error has hitherto not been well quantified. In this work an artificial data series has been generated to carry out such estimates. The sources of error considered here are the length of the data series, the dependence on the choice of parameters used in the fitting model, and the time evolution of the trend in the data series. The curves provided here will permit future studies to test the magnitude of the methodological bias expected for a given case, as shown in several real examples. It is found that, if the data series is shorter than a decade, the uncertainties are very large, whatever factors are chosen to identify the sources of the variability. The errors can be limited when dealing with natural variability if a sufficient number of periods (for periodic forcings) are covered by the analysed dataset. However, when analysing the trend, the response to volcanic eruptions induces a bias whatever the length of the data series. The signal-to-noise ratio is a key factor: doubling the noise increases the period for which data are required in order to obtain an error smaller than 10% from 1 to 3-4 decades. Moreover, if non-linear trends are superimposed on the data and the length of the series is longer than five years, a non-linear function has to be used to estimate trends. When applied to real data series, and when a breakpoint in the series occurs, the study reveals that data extending over 5 years are needed to detect a significant change in the slope of the ozone trends at mid-latitudes.
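    A minimal sketch of the kind of multi-regression described above: a synthetic monthly series is built from a linear trend plus QBO-like and solar-like harmonics and noise, and the trend is recovered by ordinary least squares. All periods, amplitudes and noise levels are illustrative assumptions, not values from the study.

```python
# Synthetic monthly series = trend + QBO-like (~28 month) and solar-like
# (~132 month) harmonics + noise, refit by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n_months = 360                                  # 30-year series
t = np.arange(n_months)

trend = -0.002 * t                              # imposed trend per month
qbo   = 0.5 * np.sin(2 * np.pi * t / 28.0)
solar = 0.3 * np.sin(2 * np.pi * t / 132.0)
y = trend + qbo + solar + rng.normal(0.0, 0.4, n_months)

# Design matrix: constant, trend, and sine/cosine terms for each forcing
X = np.column_stack([
    np.ones(n_months), t,
    np.sin(2 * np.pi * t / 28.0),  np.cos(2 * np.pi * t / 28.0),
    np.sin(2 * np.pi * t / 132.0), np.cos(2 * np.pi * t / 132.0),
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"recovered trend: {coef[1]:.4f} per month (true value -0.0020)")
```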

  3. The relationship between organizational culture and family satisfaction in critical care.

    PubMed

    Dodek, Peter M; Wong, Hubert; Heyland, Daren K; Cook, Deborah J; Rocker, Graeme M; Kutsogiannis, Demetrios J; Dale, Craig; Fowler, Robert; Robinson, Sandra; Ayas, Najib T

    2012-05-01

    Family satisfaction with critical care is influenced by a variety of factors. We investigated the relationship between measures of organizational and safety culture, and family satisfaction in critical care. We further explored differences in this relationship depending on intensive care unit survival status and length of intensive care unit stay of the patient. Cross-sectional surveys. Twenty-three tertiary and community intensive care units within three provinces in Canada. One thousand two hundred eighty-five respondents from 2374 intensive care unit clinical staff, and 880 respondents from 1381 family members of intensive care unit patients. None. Intensive care unit staff completed the Organization and Management of Intensive Care Units survey and the Hospital Survey on Patient Safety Culture. Family members completed the Family Satisfaction in the Intensive Care Unit 24, a validated survey of family satisfaction. A priori, we analyzed adjusted relationships between each domain score from the culture surveys and either satisfaction with care or satisfaction with decision-making for each of four subgroups of family members according to patient descriptors: intensive care unit survivors who had a length of intensive care unit stay of <14 days or ≥14 days, and intensive care unit nonsurvivors who had a length of stay of <14 days or ≥14 days. We found strong positive relationships between most domains of organizational and safety culture, and satisfaction with care or decision-making for family members of intensive care unit nonsurvivors who spent at least 14 days in the intensive care unit. For the other three groups, there were only a few weak relationships between domains of organizational and safety culture and family satisfaction. Our findings suggest that the effect of organizational culture on care delivery is most easily detectable by family members of the most seriously ill patients who interact frequently with intensive care unit staff, who are intensive care unit nonsurvivors, and who spend a longer time in the intensive care unit. Positive relationships between measures of organizational and safety culture and family satisfaction suggest that by improving organizational culture, we may also improve family satisfaction.

  4. On the error statistics of Viterbi decoding and the performance of concatenated codes

    NASA Technical Reports Server (NTRS)

    Miller, R. L.; Deutsch, L. J.; Butman, S. A.

    1981-01-01

    Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.

  5. Joint optimization of a partially coherent Gaussian beam for free-space optical communication over turbulent channels with pointing errors.

    PubMed

    Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali

    2013-02-01

    Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.

  6. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    NASA Astrophysics Data System (ADS)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument that provides an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two largest error sources are expected to be laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit of combining ranging data from the LRI with ranging data from the established microwave ranging will be mentioned.

  7. Verification of image orthorectification techniques for low-cost geometric inspection of masonry arch bridges

    NASA Astrophysics Data System (ADS)

    González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro

    2012-07-01

    A low-cost image orthorectification tool based on the utilization of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated on three different bridges by comparison with laser scanning data. The surveying process is very delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. The results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent for 95 percent of the data. A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.

  8. The Gnomon Experiment

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin

    2007-12-01

    A gnomon, or vertical pointed stick, can be used to determine the north-south direction at a site, as well as one's latitude. If one has accurate time and knows one's time zone, it is also possible to determine one's longitude. From observations on the first day of winter and the first day of summer one can determine the obliquity of the ecliptic. Since we can obtain accurate geographical coordinates from Google Earth or a GPS device, analysis of a set of shadow length measurements can be used by students to learn about astronomical coordinate systems, time systems, systematic errors, and random errors. Systematic latitude errors of student datasets are typically 30 nautical miles (0.5 degree) or more, but with care one can achieve systematic and random errors of less than 8 nautical miles. One of the advantages of this experiment is that it can be carried out during the day. Also, it is possible to determine if a student has made up his data.
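    A minimal sketch of the latitude determination from a noon shadow measurement: the sun's noon altitude follows from the gnomon height and shadow length, and for a northern observer with the sun to the south, latitude = 90° − altitude + solar declination. The gnomon height, shadow length and declination below are hypothetical.

```python
# Latitude from a noon shadow measurement. The noon sun altitude is
# atan(gnomon_height / shadow_length); for a northern observer with the sun to
# the south, latitude = 90 deg - altitude + solar declination. The declination
# value here is assumed for the date, not computed.
from math import atan, degrees

gnomon_height_cm = 100.0       # hypothetical gnomon height
shadow_length_cm = 58.0        # hypothetical noon shadow length
declination_deg = 10.5         # assumed solar declination for the date

altitude_deg = degrees(atan(gnomon_height_cm / shadow_length_cm))
latitude_deg = 90.0 - altitude_deg + declination_deg
print(f"noon sun altitude: {altitude_deg:.1f} deg, latitude: {latitude_deg:.1f} deg")
```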

  9. Chromatic dispersive confocal technology for intra-oral scanning: first in-vitro results

    NASA Astrophysics Data System (ADS)

    Ertl, T.; Zint, M.; Konz, A.; Brauer, E.; Hörhold, H.; Hibst, R.

    2015-02-01

    Various test objects, plaster models (partially equipped with extracted teeth), and pig jaws representing various clinical situations of tooth preparation were used for in-vitro scanning tests with an experimental intra-oral scanning system based on chromatic-dispersive confocal technology. Scanning results were compared against data sets of the same objects captured by an industrial μCT measuring system. Compared with the μCT data, an average error of 18-30 μm was achieved for a single-tooth scan area, and an error of less than 40-60 μm was measured over the restoration plus the neighboring teeth and pontic areas, up to 7 units. The mean error for a full jaw is within 100-140 μm. The length error for a 3- to 4-unit bridge situation, from contact point to contact point, is below 100 μm, and excellent interproximal surface coverage and preparation margin clarity were achieved.

  10. Nearby Exo-Earth Astrometric Telescope (NEAT)

    NASA Technical Reports Server (NTRS)

    Shao, M.; Nemati, B.; Zhai, C.; Goullioud, R.

    2011-01-01

    NEAT (Nearby Exo-Earths Astrometric Telescope) is a modest-sized (1 m diameter) telescope. It will be capable of searching approximately 100 nearby stars down to 1 M_Earth planets in the habitable zone, and 200 stars down to 5 M_Earth planets at 1 AU. The concept addresses the major issues for ultra-precise astrometry: (1) photon noise (0.5 deg diameter field of view); (2) optical errors (beam walk), with a long-focal-length telescope; (3) focal plane errors, with laser metrology of the focal plane; and (4) PSF centroiding errors, with measurement of the "true" PSF instead of a "guess" of the true PSF, and correction for intra-pixel QE non-uniformities. The technology is close to complete: focal plane geometry is measured to 2e-5 pixels and centroiding to approximately 4e-5 pixels.

  11. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
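    A minimal sketch of the basic syndrome-source-coding idea, using the parity-check matrix of a (7,4) Hamming code: a 7-bit source block is treated as an error pattern and its 3-bit syndrome forms the compressed data, with decompression via the coset-leader table. This illustrates the principle only, not the universal generalization described in the record.

```python
# Syndrome-source-coding sketch with a (7,4) Hamming code: the 7-bit source
# block x is treated as an error pattern, and its 3-bit syndrome s = H x (mod 2)
# is stored. Decompression maps s back to the coset leader (the most likely
# low-weight pattern), which is exact here because x has weight <= 1.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # (7,4) Hamming parity checks

def syndrome(x):
    return H.dot(x) % 2

# Coset leaders for all 8 syndromes: the all-zero and weight-1 patterns
leaders = {tuple(syndrome(e)): e
           for e in np.vstack([np.zeros(7, int), np.eye(7, dtype=int)])}

x = np.array([0, 0, 0, 0, 1, 0, 0])            # sparse source block
s = syndrome(x)                                # 3-bit compressed representation
x_hat = leaders[tuple(s)]                      # reconstruction
print(s, x_hat, np.array_equal(x, x_hat))
```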

  12. Flow interference in a variable porosity trisonic wind tunnel.

    NASA Technical Reports Server (NTRS)

    Davis, J. W.; Graham, R. F.

    1972-01-01

    Pressure data from a 20-degree cone-cylinder in a variable porosity wind tunnel for the Mach range 0.2 to 5.0 are compared to an interference-free standard in order to determine wall interference effects. Four 20-degree cone-cylinder models, representing an approximate range of percent blockage from one to six, were compared to curve fits of the interference-free standard at each Mach number, and errors were determined at each pressure tap location. The average of the absolute values of the percent error over the length of the model was determined and used as the criterion for evaluating model blockage interference effects. The results are presented in the form of the percent error as a function of model blockage and Mach number.
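    A minimal sketch of the interference criterion described above: the average of the absolute percent errors between tunnel and interference-free pressures over the tap locations along the model. The pressure coefficients below are invented for illustration.

```python
# Mean of the absolute percent errors between measured and interference-free
# pressure coefficients over the tap locations. Values are hypothetical.
def mean_abs_percent_error(measured, reference):
    errs = [abs(m - r) / abs(r) * 100.0 for m, r in zip(measured, reference)]
    return sum(errs) / len(errs)

reference_cp = [0.42, 0.28, 0.15, 0.10, 0.12, 0.14]   # interference-free curve fit
measured_cp  = [0.43, 0.29, 0.16, 0.10, 0.13, 0.15]   # model in the tunnel
print(f"average |percent error|: {mean_abs_percent_error(measured_cp, reference_cp):.1f}%")
```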

  13. Accurate aging of juvenile salmonids using fork lengths

    USGS Publications Warehouse

    Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua

    2017-01-01

    Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time-consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate that greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. The highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa in which multiple age cohorts coexist sympatrically. Where applicable, this method of aging individual fish is relatively quick to implement and can avoid the ager interpretation bias common in scale-based aging.
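    A minimal sketch of the mixture-model approach described above, assuming a two-component Gaussian mixture fitted to fork lengths and a threshold taken where the posterior probability of the younger (smaller-mean) component drops below 0.5; the fork-length data are simulated, not from the study.

```python
# Fit a two-component Gaussian mixture to fork lengths (mm) and take the length
# at which the posterior probability of the smaller-mean component crosses 0.5
# as the age-discriminating threshold. Data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
age0 = rng.normal(55.0, 6.0, 300)              # hypothetical age-0 fork lengths
age1 = rng.normal(90.0, 9.0, 200)              # hypothetical age-1 fork lengths
lengths = np.concatenate([age0, age1]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(lengths)

grid = np.linspace(lengths.min(), lengths.max(), 2000).reshape(-1, 1)
post = gmm.predict_proba(grid)
smaller = np.argmin(gmm.means_.ravel())        # component with the smaller mean
threshold = grid[np.argmax(post[:, smaller] < 0.5), 0]
print(f"age-discriminating fork length threshold: {threshold:.1f} mm")
```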

  14. Quantifying glenoid bone loss in anterior shoulder instability: reliability and accuracy of 2-dimensional and 3-dimensional computed tomography measurement techniques.

    PubMed

    Bois, Aaron J; Fening, Stephen D; Polster, Josh; Jones, Morgan H; Miniaci, Anthony

    2012-11-01

    Glenoid support is critical for stability of the glenohumeral joint. An accepted noninvasive method of quantifying glenoid bone loss does not exist. To perform independent evaluations of the reliability and accuracy of standard 2-dimensional (2-D) and 3-dimensional (3-D) computed tomography (CT) measurements of glenoid bone deficiency. Descriptive laboratory study. Two sawbone models were used; one served as a model for 2 anterior glenoid defects and the other for 2 anteroinferior defects. For each scapular model, predefect and defect data were collected for a total of 6 data sets. Each sample underwent 3-D laser scanning followed by CT scanning. Six physicians measured linear indicators of bone loss (defect length and width-to-length ratio) on both 2-D and 3-D CT and quantified bone loss using the glenoid index method on 2-D CT and using the glenoid index, ratio, and Pico methods on 3-D CT. The intraclass correlation coefficient (ICC) was used to assess agreement, and percentage error was used to compare radiographic and true measurements. With use of 2-D CT, the glenoid index and defect length measurements had the least percentage error (-4.13% and 7.68%, respectively); agreement was very good (ICC, .81) for defect length only. With use of 3-D CT, defect length (0.29%) and the Pico(1) method (4.93%) had the least percentage error. Agreement was very good for all linear indicators of bone loss (range, .85-.90) and for the ratio linear and Pico surface area methods used to quantify bone loss (range, .84-.98). Overall, 3-D CT results demonstrated better agreement and accuracy compared to 2-D CT. None of the methods assessed in this study using 2-D CT was found to be valid, and therefore, 2-D CT is not recommended for these methods. However, the length of glenoid defects can be reliably and accurately measured on 3-D CT. The Pico and ratio techniques are most reliable; however, the Pico(1) method accurately quantifies glenoid bone loss in both the anterior and anteroinferior locations. Future work is required to implement valid imaging techniques of glenoid bone loss into clinical practice. This is one of the only studies to date that has investigated both the reliability and accuracy of multiple indicators and quantification methods that evaluate glenoid bone loss in anterior glenohumeral instability. These data are critical to ensure valid methods are used for preoperative assessment and to determine when a glenoid bone augmentation procedure is indicated.

  15. Weights and measures: a new look at bisection behaviour in neglect.

    PubMed

    McIntosh, Robert D; Schindler, Igor; Birchall, Daniel; Milner, A David

    2005-12-01

    Horizontal line bisection is a ubiquitous task in the investigation of visual neglect. Patients with left neglect typically make rightward errors that increase with line length and for lines at more leftward positions. For short lines, or for lines presented in right space, these errors may 'cross over' to become leftward. We have taken a new approach to these phenomena by employing a different set of dependent and independent variables for their description. Rather than recording bisection error, we record the lateral position of the response within the workspace. We have studied how this varies when the locations of the left and right endpoints are manipulated independently. Across 30 patients with left neglect, we have observed a characteristic asymmetry between the 'weightings' accorded to the two endpoints, such that responses are less affected by changes in the location of the left endpoint than by changes in the location of the right. We show that a simple endpoint weightings analysis accounts readily for the effects of line length and spatial position, including cross-over effects, and leads to an index of neglect that is more sensitive than the standard measure. We argue that this novel approach is more parsimonious than the standard model and yields fresh insights into the nature of neglect impairment.
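    A minimal sketch of the kind of endpoint-weightings analysis described above (an assumed linear form, not the authors' exact model) is to regress the lateral position of each bisection response on the positions of the two line endpoints; the input file and column layout are hypothetical.

      import numpy as np

      # Hypothetical data: columns = left endpoint, right endpoint, response position (mm).
      left, right, response = np.loadtxt("bisections.csv", delimiter=",", unpack=True)

      # Least-squares fit of: response = intercept + w_left * left + w_right * right.
      X = np.column_stack([np.ones_like(left), left, right])
      (intercept, w_left, w_right), *_ = np.linalg.lstsq(X, response, rcond=None)

      # In left neglect one would expect w_right > w_left, i.e. responses track the
      # right endpoint more closely than the left endpoint.
      print(f"left weighting {w_left:.2f}, right weighting {w_right:.2f}")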

  16. Active phase correction of high resolution silicon photonic arrayed waveguide gratings

    DOE PAGES

    Gehl, M.; Trotter, D.; Starbuck, A.; ...

    2017-03-10

    Arrayed waveguide gratings provide flexible spectral filtering functionality for integrated photonic applications. Achieving narrow channel spacing requires long optical path lengths which can greatly increase the footprint of devices. High index contrast waveguides, such as those fabricated in silicon-on-insulator wafers, allow tight waveguide bends which can be used to create much more compact designs. Both the long optical path lengths and the high index contrast contribute to significant optical phase error as light propagates through the device. Thus, silicon photonic arrayed waveguide gratings require active or passive phase correction following fabrication. We present the design and fabrication of compact silicon photonic arrayed waveguide gratings with channel spacings of 50, 10 and 1 GHz. The largest device, with 11 channels of 1 GHz spacing, has a footprint of only 1.1 cm2. Using integrated thermo-optic phase shifters, the phase error is actively corrected. We present two methods of phase error correction and demonstrate state-of-the-art cross-talk performance for high index contrast arrayed waveguide gratings. As a demonstration of possible applications, we perform RF channelization with 1 GHz resolution. In addition, we generate unique spectral filters by applying non-zero phase offsets calculated by the Gerchberg Saxton algorithm.

  17. Active phase correction of high resolution silicon photonic arrayed waveguide gratings.

    PubMed

    Gehl, M; Trotter, D; Starbuck, A; Pomerene, A; Lentine, A L; DeRose, C

    2017-03-20

    Arrayed waveguide gratings provide flexible spectral filtering functionality for integrated photonic applications. Achieving narrow channel spacing requires long optical path lengths which can greatly increase the footprint of devices. High index contrast waveguides, such as those fabricated in silicon-on-insulator wafers, allow tight waveguide bends which can be used to create much more compact designs. Both the long optical path lengths and the high index contrast contribute to significant optical phase error as light propagates through the device. Therefore, silicon photonic arrayed waveguide gratings require active or passive phase correction following fabrication. Here we present the design and fabrication of compact silicon photonic arrayed waveguide gratings with channel spacings of 50, 10 and 1 GHz. The largest device, with 11 channels of 1 GHz spacing, has a footprint of only 1.1 cm2. Using integrated thermo-optic phase shifters, the phase error is actively corrected. We present two methods of phase error correction and demonstrate state-of-the-art cross-talk performance for high index contrast arrayed waveguide gratings. As a demonstration of possible applications, we perform RF channelization with 1 GHz resolution. Additionally, we generate unique spectral filters by applying non-zero phase offsets calculated by the Gerchberg Saxton algorithm.

  18. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    NASA Astrophysics Data System (ADS)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring the Thomsen parameter δ in the laboratory even when the dimensions and orientations of the rock samples are known. More challenges are expected when estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, this method is extremely sensitive to the time-picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
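    For reference, one widely used form of the quartic nonhyperbolic reflection moveout equation for a VTI medium (after Alkhalifah and Tsvankin) is given below; the abstract does not state its exact parameterization, so this is only an assumed illustration, with t_0 the zero-offset time, x the offset, V_nmo the normal-moveout velocity and η the anellipticity parameter.

      \[
        t^{2}(x) = t_{0}^{2} + \frac{x^{2}}{V_{\mathrm{nmo}}^{2}}
          - \frac{2\,\eta\, x^{4}}
                 {V_{\mathrm{nmo}}^{2}\left[\, t_{0}^{2} V_{\mathrm{nmo}}^{2} + (1 + 2\eta)\, x^{2} \right]}
      \]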

  19. Automatic readout micrometer

    DOEpatents

    Lauritzen, Ted

    1982-01-01

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  20. Automatic readout micrometer

    DOEpatents

    Lauritzen, T.

    A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  1. Performance Data Errors in Air Carrier Operations: Causes and Countermeasures

    NASA Technical Reports Server (NTRS)

    Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.

    2012-01-01

    Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider four recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight operations, or the time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.

  2. α-adrenergic agonist brimonidine control of experimentally induced myopia in guinea pigs: A pilot study.

    PubMed

    Liu, Yan; Wang, Yuexin; Lv, Huibin; Jiang, Xiaodan; Zhang, Mingzhou; Li, Xuemin

    2017-01-01

    To investigate the efficacy of α-adrenergic agonist brimonidine either alone or combined with pirenzepine for inhibiting progressing myopia in guinea pig lens-myopia-induced models. Thirty-six guinea pigs were randomly divided into six groups: Group A received 2% pirenzepine, Group B received 0.2% brimonidine, Group C received 0.1% brimonidine, Group D received 2% pirenzepine + 0.2% brimonidine, Group E received 2% pirenzepine + 0.1% brimonidine, and Group F received the medium. Myopia was induced in the right eyes of all guinea pigs using polymethyl methacrylate (PMMA) lenses for 3 weeks. Eye drops were administered accordingly. Intraocular pressure was measured every day. Refractive error and axial length measurements were performed once a week. The enucleated eyeballs were removed for hematoxylin and eosin (H&E) and Van Gieson (VG) staining at the end of the study. The lens-induced myopia model was established after 3 weeks. Treatment with 0.1% brimonidine alone and 0.2% brimonidine alone was capable of inhibiting progressing myopia, as shown by the better refractive error (p=0.024; p=0.006) and shorter axial length (p=0.005; p=0.0017). Treatment with 0.1% brimonidine and 0.2% brimonidine combined with 2% pirenzepine was also effective in suppressing progressing refractive error (p=0.016; p=0.0006) and axial length (p=0.017; p=0.0004). The thickness of the sclera was kept stable in all groups except group F; the sclera was much thinner in the lens-induced myopia eyes compared to the control eyes. Treatment with 0.1% brimonidine alone and 0.2% brimonidine alone, as well as combined with 2% pirenzepine, was effective in inhibiting progressing myopia. The result indicates that intraocular pressure elevation is possibly a promising mechanism and potential treatment for progressing myopia.

  3. How do subvocal rehearsal and general attentional resources contribute to verbal short-term memory span?

    PubMed Central

    Morra, Sergio

    2015-01-01

    Whether rehearsal has a causal role in verbal STM has been controversial in the literature. Recent theories of working memory emphasize a role of attentional resources, but leave unclear how they contribute to verbal STM. Two experiments (with 49 and 102 adult participants, respectively) followed up previous studies with children, aiming to clarify the contributions of attentional capacity and rehearsal to verbal STM. Word length and presentation modality were manipulated. Experiment 1 focused on order errors, Experiment 2 on predicting individual differences in span from attentional capacity and articulation rate. Structural equation modeling showed clearly a major role of attentional capacity as a predictor of verbal STM span; but was inconclusive on whether rehearsal efficiency is an additional cause or a consequence of verbal STM. The effects of word length and modality on STM were replicated; a significant interaction was also found, showing a larger modality effect for long than short words, which replicates a previous finding on children. Item errors occurred more often with long words and correlated negatively with articulation rate. This set of findings seems to point to a role of rehearsal in maintaining item information. The probability of order errors per position increased linearly with list length. A revised version of a neo-Piagetian model was fit to the data of Experiment 2. That model was based on two parameters: attentional capacity (independently measured) and a free parameter representing loss of partly-activated information. The model could partly account for the results, but underestimated STM performance of the participants with smaller attentional capacity. It is concluded that modeling of verbal STM should consider individual and developmental differences in attentional capacity, rehearsal rate, and (perhaps) order representation. PMID:25798114

  4. How much time do drivers need to obtain situation awareness? A laboratory-based study of automated driving.

    PubMed

    Lu, Zhenji; Coster, Xander; de Winter, Joost

    2017-04-01

    Drivers of automated cars may occasionally need to take back manual control after a period of inattentiveness. At present, it is unknown how long it takes to build up situation awareness of a traffic situation. In this study, 34 participants were presented with animated video clips of traffic situations on a three-lane road, from an egocentric viewpoint on a monitor equipped with eye tracker. Each participant viewed 24 videos of different durations (1, 3, 7, 9, 12, or 20 s). After each video, participants reproduced the end of the video by placing cars in a top-down view, and indicated the relative speeds of the placed cars with respect to the ego-vehicle. Results showed that the longer the video length, the lower the absolute error of the number of placed cars, the lower the total distance error between the placed cars and actual cars, and the lower the geometric difference between the placed cars and the actual cars. These effects appeared to be saturated at video lengths of 7-12 s. The total speed error between placed and actual cars also reduced with video length, but showed no saturation up to 20 s. Glance frequencies to the mirrors decreased with observation time, which is consistent with the notion that participants first estimated the spatial pattern of cars after which they directed their attention to individual cars. In conclusion, observers are able to reproduce the layout of a situation quickly, but the assessment of relative speeds takes 20 s or more. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Evolving geometrical heterogeneities of fault trace data

    NASA Astrophysics Data System (ADS)

    Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari

    2010-08-01

    We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
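    As an illustration of the kind of circular statistics used above (a generic sketch, not the authors' implementation), segment strikes can be treated as axial data by doubling the angles before computing the mean resultant length; the input file is hypothetical and the simple n-based standard error is only a crude approximation.

      import numpy as np

      strikes_deg = np.loadtxt("segment_strikes_deg.txt")    # hypothetical segment orientations

      theta = np.deg2rad(2.0 * strikes_deg)                   # double the angles (axial data)
      C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
      R = np.hypot(C, S)                                      # mean resultant length

      mean_direction = np.rad2deg(np.arctan2(S, C)) / 2.0     # undo the angle doubling
      circ_std = np.rad2deg(np.sqrt(-2.0 * np.log(R))) / 2.0  # circular standard deviation
      circ_se = circ_std / np.sqrt(strikes_deg.size)          # crude circular standard error

      print(mean_direction, circ_std, circ_se)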

  6. How do subvocal rehearsal and general attentional resources contribute to verbal short-term memory span?

    PubMed

    Morra, Sergio

    2015-01-01

    Whether rehearsal has a causal role in verbal STM has been controversial in the literature. Recent theories of working memory emphasize a role of attentional resources, but leave unclear how they contribute to verbal STM. Two experiments (with 49 and 102 adult participants, respectively) followed up previous studies with children, aiming to clarify the contributions of attentional capacity and rehearsal to verbal STM. Word length and presentation modality were manipulated. Experiment 1 focused on order errors, Experiment 2 on predicting individual differences in span from attentional capacity and articulation rate. Structural equation modeling showed clearly a major role of attentional capacity as a predictor of verbal STM span; but was inconclusive on whether rehearsal efficiency is an additional cause or a consequence of verbal STM. The effects of word length and modality on STM were replicated; a significant interaction was also found, showing a larger modality effect for long than short words, which replicates a previous finding on children. Item errors occurred more often with long words and correlated negatively with articulation rate. This set of findings seems to point to a role of rehearsal in maintaining item information. The probability of order errors per position increased linearly with list length. A revised version of a neo-Piagetian model was fit to the data of Experiment 2. That model was based on two parameters: attentional capacity (independently measured) and a free parameter representing loss of partly-activated information. The model could partly account for the results, but underestimated STM performance of the participants with smaller attentional capacity. It is concluded that modeling of verbal STM should consider individual and developmental differences in attentional capacity, rehearsal rate, and (perhaps) order representation.

  7. The Age of At Variance.

    ERIC Educational Resources Information Center

    DeMott, Benjamin

    1990-01-01

    A faculty member at Amherst discusses the challenges that have shaken his "self-edifice." He says there was strain "in the scrambling, adjusting, re-doing, remodeling of the mind, and in the constant collisions with past fatuity and obliviousness." Mina Shaughnessy's "Errors and Expectations" is recommended. (MLW)

  8. IRBs, conflict and liability: will we see IRBs in court? Or is it when?

    PubMed

    Icenogle, Daniel L

    2003-01-01

    The entire human research infrastructure is under intense and increasing financial pressure. These pressures may have been responsible for several errors in judgment by those responsible for managing human research and protecting human subjects. The result of these errors has been some terrible accidents, some of which have cost the lives of human research volunteers. This, in turn, is producing both increased liability risk for those who manage the various aspects of human research and increasing scrutiny as to the capability of the human research protection structure as currently constituted. It is the author's contention that the current structure is fully capable of offering sufficient protection for participants in human research-if Institutional Review Board (IRB) staff and members are given sufficient resources and perform their tasks with sufficient responsibility. The status quo alternative is that IRBs and their members will find themselves at great risk of becoming defendants in lawsuits seeking compensation for damages resulting from human experimentation gone awry.

  9. Screening actuator locations for static shape control

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1990-01-01

    Correction of shape distortion due to zero-mean normally distributed errors in structural sizes which are random variables is examined. A bound on the maximum improvement in the expected value of the root-mean-square shape error is obtained. The shape correction associated with the optimal actuators is also characterized. An actuator effectiveness index is developed and shown to be helpful in screening actuator locations in the structure. The results are specialized to a simple form for truss structures composed of nominally identical members. The bound and effectiveness index are tested on a 55-m radiometer antenna truss structure. It is found that previously obtained results for optimum actuators had a performance close to the bound obtained here. Furthermore, the actuators associated with the optimum design are shown to have high effectiveness indices. Since only a small fraction of truss elements tend to have high effectiveness indices, the proposed screening procedure can greatly reduce the number of truss members that need to be considered as actuator sites.

  10. Reliability of levator scapulae index in subjects with and without scapular downward rotation syndrome.

    PubMed

    Lee, Ji-Hyun; Cynn, Heon-Seock; Choi, Woo-Jeong; Jeong, Hyo-Jung; Yoon, Tae-Lim

    2016-05-01

    The objective of this study was to introduce levator scapulae (LS) measurement using a caliper and the levator scapulae index (LSI), and to investigate the intra- and interrater reliability of the LSI in subjects with and without scapular downward rotation syndrome (SDRS). Two raters measured LS length twice in 38 subjects (19 with SDRS and 19 without SDRS). For reliability testing, intraclass correlation coefficients (ICCs), the standard error of measurement (SEM), and the minimal detectable change (MDC) were calculated. Intrarater reliability analysis resulted in ICCs ranging from 0.94 to 0.98 in subjects with SDRS and 0.96 to 0.98 in subjects without SDRS. These results indicate that intrarater reliability for measuring LS length with the LSI was excellent in both groups. Interrater reliability was good (ICC: 0.82) in subjects with SDRS; however, interrater reliability was moderate (ICC: 0.75) in subjects without SDRS. Additionally, the SEM and MDC were 0.13% and 0.36% in subjects with SDRS and 0.35% and 0.97% in subjects without SDRS. In subjects with SDRS, the dispersion of the measurement errors and the MDC were low. This study suggests that the LSI is a reliable method to measure LS length and is more reliable for subjects with SDRS. Copyright © 2015 Elsevier Ltd. All rights reserved.
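    The SEM and MDC reported above follow standard definitions; a minimal sketch using the conventional formulas (with placeholder numbers, not the study's data) is:

      import math

      def sem(sd: float, icc: float) -> float:
          """Standard error of measurement: SD * sqrt(1 - ICC)."""
          return sd * math.sqrt(1.0 - icc)

      def mdc95(sem_value: float) -> float:
          """Minimal detectable change at the 95% confidence level."""
          return 1.96 * math.sqrt(2.0) * sem_value

      s = sem(sd=0.5, icc=0.94)   # hypothetical between-subject SD of the LSI (in %)
      print(f"SEM = {s:.2f}%, MDC95 = {mdc95(s):.2f}%")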

  11. Baseline peripheral refractive error and changes in axial refraction during one year in a young adult population.

    PubMed

    Hartwig, Andreas; Charman, William Neil; Radhakrishnan, Hema

    2016-01-01

    To determine whether the initial characteristics of individual patterns of peripheral refraction relate to subsequent changes in refraction over a one-year period. 54 myopic and emmetropic subjects (mean age: 24.9±5.1 years; median 24 years) with normal vision were recruited and underwent conventional non-cycloplegic subjective refraction. Peripheral refraction was also measured at 5° intervals over the central 60° of horizontal visual field, together with axial length. After one year, measurements of subjective refraction and axial length were repeated on the 43 subjects who were still available for examination. In agreement with earlier studies, higher myopes tended to show greater relative peripheral hyperopia. There was, however, considerable inter-subject variation in the pattern of relative peripheral refractive error (RPRE) at any level of axial refraction. Across the group, mean one-year changes in axial refraction and axial length did not differ significantly from zero. There was no correlation between changes in these parameters for individual subjects and any characteristic of their RPRE. No evidence was found to support the hypothesis that the pattern of RPRE is predictive of subsequent refractive change in this age group. Copyright © 2015 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.

  12. Word learning and the cerebral hemispheres: from serial to parallel processing of written words

    PubMed Central

    Ellis, Andrew W.; Ferreira, Roberto; Cathles-Hagan, Polly; Holt, Kathryn; Jarvis, Lisa; Barca, Laura

    2009-01-01

    Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field. PMID:19933140

  13. Efficiency of the neighbor-joining method in reconstructing deep and shallow evolutionary relationships in large phylogenies.

    PubMed

    Kumar, S; Gadagkar, S R

    2000-12-01

    The neighbor-joining (NJ) method is widely used in reconstructing large phylogenies because of its computational speed and the high accuracy in phylogenetic inference as revealed in computer simulation studies. However, most computer simulation studies have quantified the overall performance of the NJ method in terms of the percentage of branches inferred correctly or the percentage of replications in which the correct tree is recovered. We have examined other aspects of its performance, such as the relative efficiency in correctly reconstructing shallow (close to the external branches of the tree) and deep branches in large phylogenies; the contribution of zero-length branches to topological errors in the inferred trees; and the influence of increasing the tree size (number of sequences), evolutionary rate, and sequence length on the efficiency of the NJ method. Results show that the correct reconstruction of deep branches is no more difficult than that of shallower branches. The presence of zero-length branches in realized trees contributes significantly to the overall error observed in the NJ tree, especially in large phylogenies or slowly evolving genes. Furthermore, the tree size does not influence the efficiency of NJ in reconstructing shallow and deep branches in our simulation study, in which the evolutionary process is assumed to be homogeneous in all lineages.
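    For readers unfamiliar with the method being evaluated, one neighbor-joining step on a small distance matrix looks like the textbook sketch below (toy distances, not the study's simulation data): compute the Q matrix and join the pair with the smallest value.

      import numpy as np

      d = np.array([[0.0, 5.0, 9.0, 9.0],
                    [5.0, 0.0, 10.0, 10.0],
                    [9.0, 10.0, 0.0, 8.0],
                    [9.0, 10.0, 8.0, 0.0]])    # toy distances between four taxa

      n = d.shape[0]
      row_sums = d.sum(axis=1)
      q = (n - 2) * d - row_sums[:, None] - row_sums[None, :]
      np.fill_diagonal(q, np.inf)               # ignore self-pairs

      i, j = np.unravel_index(np.argmin(q), q.shape)
      print(f"join taxa {i} and {j} (Q = {q[i, j]:.1f})")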

  14. The dynamics and optimal control of spinning spacecraft with movable telescoping appendages. Part C: Effect of flexibility during boom deployment

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.; James, P. K.

    1977-01-01

    The dynamics of a spinning symmetrical spacecraft system during the deployment (or retraction) of flexible boom-type appendages were investigated. The effect of flexibility during boom deployment is treated by modelling the deployable members as compound spherical pendula of varying length (according to a control law). The orientation of the flexible booms with respect to the hub is described by a sequence of two Euler angles. The boom members have a flexural stiffness which can be related to an assumed effective linear restoring spring constant, and structural damping which affects the entire system. Linearized equations of motion for this system, when the boom length is constant, involve periodic coefficients with the frequency of the hub spin. A bounded transformation is found which converts this system into a kinematically equivalent one involving only constant coefficients.

  15. Integrating aerodynamics and structures in the minimum weight design of a supersonic transport wing

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Wrenn, Gregory A.; Dovi, Augustine R.; Coen, Peter G.; Hall, Laura E.

    1992-01-01

    An approach is presented for determining the minimum weight design of aircraft wing models which takes into consideration aerodynamics-structure coupling when calculating both the zeroth-order information needed for analysis and the first-order information needed for optimization. When performing sensitivity analysis, coupling is accounted for by using a generalized sensitivity formulation. The results presented show that the aeroelastic effects are calculated properly and noticeably reduce constraint approximation errors. However, for the particular example selected, the errors introduced by ignoring aeroelastic effects are not sufficient to significantly affect the convergence of the optimization process. Trade studies are reported that consider different structural materials, internal spar layouts, and panel buckling lengths. For the formulation, model and materials used in this study, an advanced aluminum material produced the lightest design while satisfying the problem constraints. Also, shorter panel buckling lengths resulted in lower weights by permitting smaller panel thicknesses and, generally, by unloading the wing skins and loading the spar caps. Finally, straight spars required slightly lower wing weights than angled spars.

  16. Two high-density recording methods with run-length limited turbo code for holographic data storage system

    NASA Astrophysics Data System (ADS)

    Nakamura, Yusuke; Hoshizawa, Taku

    2016-09-01

    Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in² is confirmed.
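    As a minimal illustration of the RLL(1,∞) constraint mentioned above (at least one 0 between consecutive 1s), a generic validity check, not the paper's trellis modulator, is simply:

      def satisfies_rll_1_inf(bits: str) -> bool:
          """True if the binary string never contains two adjacent 1s (d = 1, k unbounded)."""
          return "11" not in bits

      print(satisfies_rll_1_inf("01010010"))  # True
      print(satisfies_rll_1_inf("01100100"))  # False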

  17. Are There Lower Repetition Priming Effects in Children with Developmental Dyslexia? Priming Effects in Spanish with the Masked Lexical Decision Task.

    PubMed

    Nievas-Cazorla, Francisco; Soriano-Ferrer, Manuel; Sánchez-López, Pilar

    2016-01-01

    The aim of this study was to compare the reaction times and errors of Spanish children with developmental dyslexia to the reaction times and errors of readers without dyslexia on a masked lexical decision task with identity or repetition priming. A priming paradigm was used to study the role of the lexical deficit in dyslexic children, manipulating the frequency and length of the words, with a short Stimulus Onset Asynchrony (SOA = 150 ms) and degraded stimuli. The sample consisted of 80 participants from 9 to 14 years old, divided equally into a group with a developmental dyslexia diagnosis and a control group without dyslexia. Results show that identity priming is higher in control children (133 ms) than in dyslexic children (55 ms). Thus, the "frequency" and "word length" variables are not the source or origin of this reduction in identity priming reaction times in children with developmental dyslexia compared to control children.

  18. Compensation of power drops in reflective semiconductor optical amplifier-based passive optical network with upstream data rate adjustment

    NASA Astrophysics Data System (ADS)

    Yeh, Chien-Hung; Chow, Chi-Wai; Chiang, Ming-Feng; Shih, Fu-Yuan; Pan, Ci-Ling

    2011-09-01

    In a wavelength division multiplexed-passive optical network (WDM-PON), different fiber lengths and optical components would introduce different power budgets to different optical networking units (ONUs). Besides, the power decay of the distributed optical carrier from the optical line terminal owing to aging of the optical transmitter could also reduce the injected power into the ONU. In this work, we propose and demonstrate a carrier distributed WDM-PON using a reflective semiconductor optical amplifier-based ONU that can adjust its upstream data rate to accommodate different injected optical powers. The WDM-PON is evaluated at standard-reach (25 km) and long-reach (100 km). Bit-error rate measurements at different injected optical powers and transmission lengths show that by adjusting the upstream data rate of the system (622 Mb/s, 1.25 and 2.5 Gb/s), error-free (<10⁻⁹) operation can still be achieved when the power budget drops.

  19. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

  20. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint.

    PubMed

    Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong

    2015-12-02

    For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to reconstruct the objective function rigorously. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and the search is carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Among all vector candidates, some are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix the ambiguity reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and can perform robustly when the angular error is not great.

  1. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    PubMed Central

    Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi

    2015-01-01

    Reliable data transmission over lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
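    A minimal sketch of the idea described above, under an assumed signal model rather than the paper's implementation: model packet loss as random sampling of the time-domain signal and recover a DCT-sparse signal from the surviving samples with orthogonal matching pursuit.

      import numpy as np
      from scipy.fft import idct
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n, k, keep = 256, 8, 128                  # signal length, sparsity, surviving samples

      # Build a DCT-sparse test signal.
      coeffs = np.zeros(n)
      coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      signal = idct(coeffs, norm="ortho")

      # Packet loss modelled as a random sampling mask over the transmitted samples.
      received_idx = np.sort(rng.choice(n, keep, replace=False))
      y = signal[received_idx]

      # Sensing matrix = rows of the inverse-DCT basis corresponding to received samples.
      Psi = idct(np.eye(n), axis=0, norm="ortho")
      A = Psi[received_idx, :]

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
      recovered = idct(omp.coef_, norm="ortho")
      print("relative error:", np.linalg.norm(recovered - signal) / np.linalg.norm(signal))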

  2. High-speed phosphor-LED wireless communication system utilizing no blue filter

    NASA Astrophysics Data System (ADS)

    Yeh, C. H.; Chow, C. W.; Chen, H. Y.; Chen, J.; Liu, Y. L.; Wu, Y. F.

    2014-09-01

    In this paper, we propose and investigate an adaptive 84.44 to 190 Mb/s phosphor-LED visible light communication (VLC) system at a practical transmission distance. We utilize orthogonal-frequency-division-multiplexing quadrature-amplitude-modulation (OFDM-QAM) with a power/bit-loading algorithm in the proposed VLC system. In the experiment, an optimal analog pre-equalization design is applied at the LED-Tx side, and no blue filter is used at the Rx side, extending the modulation bandwidth from 1 MHz to 30 MHz. In addition, the corresponding free-space transmission lengths are between 75 cm and 2 m under the various data rates of the proposed VLC system, and measured bit error rates (BERs) of <3.8×10⁻³ [the forward error correction (FEC) limit] are obtained at the different transmission lengths and data rates. Finally, we believe that the proposed scheme could be an alternative VLC implementation at practical distances, supporting <100 Mb/s, using a commercially available LED and PD (without optical blue filtering), in a compact size.

  3. Optimization of Dish Solar Collectors with and without Secondary Concentrators

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1982-01-01

    Methods for optimizing parabolic dish solar collectors and the consequent effects of various optical, thermal, mechanical, and cost variables are examined. The most important performance optimization is adjusting the receiver aperture to maximize collector efficiency. Other parameters that can be adjusted to optimize efficiency include focal length, and, if a heat engine is used, the receiver temperature. The efficiency maxima associated with focal length and receiver temperature are relatively broad; it may, accordingly, be desirable to design somewhat away from the maxima. Performance optimization is sensitive to the slope and specularity errors of the concentrator. Other optical and thermal variables affecting optimization are the reflectance and blocking factor of the concentrator, the absorptance and losses of the receiver, and, if a heat engine is used, the shape of the engine efficiency versus temperature curve. Performance may sometimes be improved by use of an additional optical element (a secondary concentrator) or a receiver window if the errors of the primary concentrator are large or the receiver temperature is high.

  4. Working memory and inhibitory control across the life span: Intrusion errors in the Reading Span Test.

    PubMed

    Robert, Christelle; Borella, Erika; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2009-04-01

    The aim of this study was to examine to what extent inhibitory control and working memory capacity are related across the life span. Intrusion errors committed by children and younger and older adults were investigated in two versions of the Reading Span Test. In Experiment 1, a mixed Reading Span Test with items of various list lengths was administered. Older adults and children recalled fewer correct words and produced more intrusions than did young adults. Also, age-related differences were found in the type of intrusions committed. In Experiment 2, an adaptive Reading Span Test was administered, in which the list length of items was adapted to each individual's working memory capacity. Age groups differed neither on correct recall nor on the rate of intrusions, but they differed on the type of intrusions. Altogether, these findings indicate that the availability of attentional resources influences the efficiency of inhibition across the life span.

  5. Finding Blackbody Temperature and Emissivity on a Sub-Pixel Scale

    NASA Astrophysics Data System (ADS)

    Bernstein, D. J.; Bausell, J.; Grigsby, S.; Kudela, R. M.

    2015-12-01

    Surface temperature and emissivity provide important insight into the ecosystem being remotely sensed. Dozier (1981) proposed an algorithm to solve for the percent coverage and temperatures of two different surface types (e.g. sea surface, cloud cover, etc.) within a given pixel, with a constant value for emissivity assumed. Here we build on Dozier (1981) by proposing an algorithm that solves for both the temperature and emissivity of a water body within a satellite pixel by assuming known percent coverage of surface types within the pixel. Our algorithm generates thermal infrared (TIR) and emissivity end-member spectra for the two surface types. It then superposes these end-member spectra on emissivity and TIR spectra emitted from four pixels with varying percent coverage of the different surface types. The algorithm was tested preliminarily (48 iterations) using simulated pixels containing more than one surface type, with temperature and emissivity percent errors ranging from 0 to 1.071% and 2.516 to 15.311%, respectively [1]. We then tested the algorithm using a MASTER image collected as part of the NASA Student Airborne Research Program (NASA SARP). Here the temperature of water was calculated to within 0.22 K of in situ data. The algorithm calculated the emissivity of water with 0.13 to 1.53% error for Salton Sea pixels collected with MASTER, also as part of NASA SARP. This method could improve retrievals for the HyspIRI sensor. [1] Percent error for emissivity was generated by averaging percent error across all selected band widths.

  6. On the nature of the excess heat capacity of mixing

    NASA Astrophysics Data System (ADS)

    Benisek, Artur; Dachs, Edgar

    2011-03-01

    The excess vibrational entropy (ΔS_vib^ex) of several silicate solid solutions is found to be linearly correlated with the differences in end-member volumes (ΔV_i) and end-member bulk moduli (Δκ_i). If a substitution produces both larger and elastically stiffer polyhedra, then the substituted ion will find itself in a strong, enlarged structure. The frequency of its vibration is decreased because of the increase in bond lengths. Lowering of frequencies produces larger heat capacities, which give rise to positive excess vibrational entropies. If a substitution produces larger but elastically softer polyhedra, then the increase and decrease of mean bond lengths may be similar in magnitude and their effects on the vibrational entropy tend to compensate. The empirical relationship between ΔS_vib^ex, ΔV_i and Δκ_i, described by ΔS_vib^ex = (ΔV_i + m·Δκ_i)·f, was calibrated on six silicate solid solutions (analbite-sanidine, pyrope-grossular, forsterite-fayalite, analbite-anorthite, anorthite-sanidine, CaTs-diopside), yielding m = 0.0246 and f = 2.926. It allows the prediction of the ΔS_vib^ex behaviour of a solid solution based on its end-member volume and bulk modulus data.
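    Using the calibrated constants quoted above, the relation can be evaluated directly; the ΔV_i and Δκ_i values below are placeholders, not data from the paper.

      def excess_vibrational_entropy(delta_v: float, delta_kappa: float,
                                     m: float = 0.0246, f: float = 2.926) -> float:
          """Empirical relation dS_vib^ex = (dV_i + m * dKappa_i) * f from the abstract."""
          return (delta_v + m * delta_kappa) * f

      # Placeholder end-member differences (units as in the original calibration).
      print(excess_vibrational_entropy(delta_v=1.0, delta_kappa=10.0))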

  7. Surface photovoltage method extended to silicon solar cell junction

    NASA Technical Reports Server (NTRS)

    Wang, E. Y.; Baraona, C. R.; Brandhorst, H. W., Jr.

    1974-01-01

    The conventional surface photovoltage (SPV) method is extended to the measurement of the minority carrier diffusion length in diffused semiconductor junctions of the type used in a silicon solar cell. The minority carrier diffusion values obtained by the SPV method agree well with those obtained by the X-ray method. Agreement within experimental error is also obtained between the minority carrier diffusion lengths in solar cell diffusion junctions and in the same materials with n-regions removed by etching, when the SPV method was used in the measurements.

  8. Heuristic Algorithms for Solving Two Dimensional Loading Problems.

    DTIC Science & Technology

    1981-03-01

    Heuristic algorithms for solving ... Consider the following problem: allocate a set of 'N' boxes, each having a specified length, width and height, to a pallet of length 'L' and width 'W' ... the boxes and then select the best solution. Since these heuristics are essentially a trial-and-error procedure, their formulas become very
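    The report's own algorithms are not recoverable from this truncated abstract; purely as an illustration of the kind of trial-and-error loading heuristic it describes, a generic first-fit "shelf" packing sketch (not the report's method) is:

      def shelf_pack(boxes, pallet_length, pallet_width):
          """Place (length, width) boxes on shelves; returns (sorted index, x, y) or None."""
          placements, shelf_y, shelf_height, cursor_x = [], 0.0, 0.0, 0.0
          for i, (l, w) in enumerate(sorted(boxes, key=lambda b: -b[1])):
              if cursor_x + l > pallet_length:          # start a new shelf
                  shelf_y += shelf_height
                  cursor_x, shelf_height = 0.0, 0.0
              if shelf_y + w > pallet_width or l > pallet_length:
                  return None                           # this box does not fit
              placements.append((i, cursor_x, shelf_y))
              cursor_x += l
              shelf_height = max(shelf_height, w)
          return placements

      print(shelf_pack([(4, 3), (4, 3), (2, 2), (6, 1)], pallet_length=8, pallet_width=6))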

  9. Comparison of photogrammetric and astrometric data reduction results for the wild BC-4 camera

    NASA Technical Reports Server (NTRS)

    Hornbarger, D. H.; Mueller, I., I.

    1971-01-01

    The results of astrometric and photogrammetric plate reduction techniques for a short focal length camera are compared. Several astrometric models are tested on entire and limited plate areas to analyze their ability to remove systematic errors from interpolated satellite directions using a rigorous photogrammetric reduction as a standard. Residual plots are employed to graphically illustrate the analysis. Conclusions are made as to what conditions will permit the astrometric reduction to achieve comparable accuracies to those of photogrammetric reduction when applied for short focal length ballistic cameras.

  10. On the dipole approximation with error estimates

    NASA Astrophysics Data System (ADS)

    Boßmann, Lea; Grummt, Robert; Kolb, Martin

    2018-01-01

    The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.

  11. Effect of orifice length-diameter ratio on the coefficient of discharge of fuel-injection nozzles

    NASA Technical Reports Server (NTRS)

    Gelalles, A G; March, E T

    1931-01-01

    The variation of the coefficient of discharge with the length-diameter ratio of the orifice was determined for nozzles having single orifice 0.008 and 0.020 inch in diameter. Ratios from 0.5 to 10 were investigated at injection pressures from 500 to 5,000 pounds per square inch. The tests showed that, within the error of the observation, the coefficients were the same whether the nozzles were assembled at the end of a constant tube or in an automatic injection valve having a plain stem.

  12. The Sensitivity of Flood Frequency Analysis to Record Length in the Contiguous United States

    NASA Astrophysics Data System (ADS)

    Hu, L.; Nikolopoulos, E. I.; Anagnostou, E. N.

    2017-12-01

    In flood frequency analysis (FFA), sufficiently long data series are important for obtaining reliable results. Relative to the return periods of interest, at-site FFA usually requires large data sets; in general, the precision of at-site estimators and the time-sampling errors are tied to the length of the gauged record. In this work, we quantify how FFA results differ with record length. We fit the generalized extreme value (GEV) and Log-Pearson type III (LP3) distributions, two traditional methods, to annual maximum streamflows, and propose quantitative measures, the relative difference in the median and the interquartile range (IQR), to compare flood frequency estimates across record lengths for 350 selected USGS gauges in the contiguous United States that have more than 70 years of record. We also group the gauges into regions based on the hydrologic unit map and discuss geographic effects. The results indicate that a long record avoids imposing an upper limit on the degree of sophistication of the analysis, and that working with a relatively long record leads to more accurate results than working with a shorter one. Furthermore, the influence of the hydrologic units of the watershed boundary dataset on these gauges is also presented. The California region is the most sensitive to record length, while gauges in the east perform most consistently.
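    An at-site GEV fit of the kind described can be sketched with scipy (a generic illustration, not the study's workflow); the file name is a placeholder, and genextreme.isf(1/T, ...) gives the T-year return level under the fitted distribution.

      import numpy as np
      from scipy.stats import genextreme

      annual_max = np.loadtxt("annual_peak_flows.txt")   # hypothetical gauge record (cfs)

      shape, loc, scale = genextreme.fit(annual_max)     # at-site maximum-likelihood fit
      for T in (10, 50, 100):
          level = genextreme.isf(1.0 / T, shape, loc, scale)
          print(f"{T:>3d}-year flood estimate: {level:.0f} cfs")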

  13. Aligner optimization increases accuracy and decreases compute times in multi-species sequence data.

    PubMed

    Robinson, Kelly M; Hawkins, Aziah S; Santana-Cruz, Ivette; Adkins, Ricky S; Shetty, Amol C; Nagaraj, Sushma; Sadzewicz, Lisa; Tallon, Luke J; Rasko, David A; Fraser, Claire M; Mahurkar, Anup; Silva, Joana C; Dunning Hotopp, Julie C

    2017-09-01

    As sequencing technologies have evolved, the tools to analyze these sequences have made similar advances. However, for multi-species samples, we observed important and adverse differences in alignment specificity and computation time for bwa-mem (Burrows-Wheeler aligner maximal exact matches) relative to bwa-aln. Therefore, we sought to optimize bwa-mem for alignment of data from multi-species samples in order to reduce alignment time and increase the specificity of alignments. In the multi-species cases examined, there was one majority member (i.e. Plasmodium falciparum or Brugia malayi) and one minority member (i.e. human or the Wolbachia endosymbiont wBm) of the sequence data. Increasing the bwa-mem seed length from the default value reduced the number of read pairs from the majority sequence member that incorrectly aligned to the reference genome of the minority sequence member. Combining both source genomes into a single reference genome increased the specificity of mapping, while also reducing the central processing unit (CPU) time. In Plasmodium, at a seed length of 18 nt, 24.1% of reads mapped to the human genome using 1.7±0.1 CPU hours, while 83.6% of reads mapped to the Plasmodium genome using 0.2±0.0 CPU hours (total: 107.7% reads mapping; in 1.9±0.1 CPU hours). In contrast, 97.1% of the reads mapped to a combined Plasmodium-human reference in only 0.7±0.0 CPU hours. Overall, the results suggest that combining all references into a single reference database and using a 23 nt seed length reduces the computational time while maximizing specificity. Similar results were found for simulated sequence reads from a mock metagenomic data set. We found similar improvements to computation time in a publicly available human-only data set.
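    A minimal sketch of the strategy described above, assuming bwa is installed and on PATH (file names and the seed-length value are placeholders): concatenate the two reference genomes, index the combined reference, and raise the minimum seed length with bwa mem's -k option.

      import subprocess

      # Build and index a combined two-species reference (hypothetical file names).
      subprocess.run("cat plasmodium.fa human.fa > combined.fa", shell=True, check=True)
      subprocess.run(["bwa", "index", "combined.fa"], check=True)

      # Align paired-end reads with a 23 nt minimum seed length (-k).
      with open("aligned.sam", "w") as sam:
          subprocess.run(["bwa", "mem", "-k", "23", "combined.fa",
                          "reads_1.fq", "reads_2.fq"],
                         stdout=sam, check=True)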

  14. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS), and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members, however the parameters of the regression equations are retrieved by exploiting the second order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias-correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
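    In the spirit of the adaptive (Kalman-filter) correction described above, and not the authors' exact formulation, a minimal sketch treats the forecast bias as a scalar random-walk state that is updated sequentially as each observation arrives; the same correction is then applied to every ensemble member.

      import numpy as np

      def kalman_bias_correction(ens_forecasts, observations, q=0.05, r=1.0):
          """ens_forecasts: array (time, members); observations: array (time,).

          q = assumed bias random-walk variance, r = assumed observation-error variance.
          Returns the bias-corrected ensemble."""
          bias, p = 0.0, 1.0                        # initial bias estimate and its variance
          corrected = np.empty_like(ens_forecasts)
          for t, (ens, obs) in enumerate(zip(ens_forecasts, observations)):
              corrected[t] = ens - bias             # correct with the current bias estimate
              p += q                                # prediction step of the random-walk model
              innovation = ens.mean() - obs - bias  # observed minus predicted bias
              gain = p / (p + r)                    # Kalman gain
              bias += gain * innovation             # update the bias estimate
              p *= (1.0 - gain)
          return corrected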

  15. Toward developing a standardized Arabic continuous text reading chart.

    PubMed

    Alabdulkader, Balsam; Leat, Susan Jennifer

    Near visual acuity is an essential measurement during an oculo-visual assessment. Short duration continuous text reading charts measure reading acuity and other aspects of reading performance. There is no standardized version of such a chart in Arabic. The aim of this study is to create sentences of equal readability to use in the development of a standardized Arabic continuous text reading chart. Initially, 109 Arabic pairs of sentences were created for use in constructing a chart with a similar layout to the Colenbrander chart. They were created to have the same grade level of difficulty and physical length. Fifty-three adults and sixteen children were recruited to validate the sentences. Reading speed in correct words per minute (CWPM) and standard length words per minute (SLWPM) was measured and errors were counted. Criteria based on reading speed and errors made in each sentence pair were used to exclude sentence pairs with more outlying characteristics, and to select the final group of sentence pairs. Forty-five sentence pairs were selected according to the elimination criteria. For adults, the average reading speed for the final sentences was 166 CWPM and 187 SLWPM, and the average number of errors per sentence pair was 0.21. Children's average reading speed for the final group of sentences was 61 CWPM and 72 SLWPM. Their average error rate was 1.71. The reliability analysis showed that the final 45 sentence pairs are highly comparable. They will be used in constructing an Arabic short duration continuous text reading chart. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  16. The Relationship between Crystalline Lens Power and Refractive Error in Older Chinese Adults: The Shanghai Eye Study

    PubMed Central

    He, Jiangnan; Lu, Lina; He, Xiangui; Xu, Xian; Du, Xuan; Zhang, Bo; Zhao, Huijuan; Sha, Jida; Zhu, Jianfeng; Zou, Haidong; Xu, Xun

    2017-01-01

    Purpose To report calculated crystalline lens power and describe the distribution of ocular biometry and its association with refractive error in older Chinese adults. Methods Random clustering sampling was used to identify adults aged 50 years and above in the Xuhui and Baoshan districts of Shanghai. Refraction was determined by subjective refraction that achieved the best corrected vision based on monocular measurement. Ocular biometry was measured by IOL Master. The crystalline lens power of right eyes was calculated using the modified Bennett-Rabbetts formula. Results We analyzed 6099 normal phakic right eyes. The mean crystalline lens power was 20.34 ± 2.24D (range: 13.40–36.08). Lens power, spherical equivalent, and anterior chamber depth changed linearly with age; however, axial length, corneal power and AL/CR ratio did not vary with age. The overall prevalence of hyperopia, myopia, and high myopia was 48.48% (95% CI: 47.23%–49.74%), 22.82% (95% CI: 21.77%–23.88%), and 4.57% (95% CI: 4.05%–5.10%), respectively. The prevalence of hyperopia increased linearly with age while lens power decreased with age. In multivariate models, refractive error was strongly correlated with axial length, lens power, corneal power, and anterior chamber depth; refractive error was slightly correlated with best corrected visual acuity, age and sex. Conclusion Lens power, hyperopia, and spherical equivalent changed linearly with age; moreover, the continuous loss of lens power produced hyperopic shifts in refraction in subjects aged more than 50 years. PMID:28114313

  17. Performance-limiting factors for x-ray free electron laser oscillator as a highly coherent, high spectral purity x-ray source

    NASA Astrophysics Data System (ADS)

    Park, Gunn Tae

    The X-ray free electron laser (XFEL) is a source of coherent X-rays that exploits radiation from relativistic electrons and the interaction between the two. In particular, the XFEL oscillator (XFELO) uses an optical cavity to repeatedly return the radiation to the electron beam for this interaction. Its optimal performance, maximum single-pass gain and minimum round-trip loss, depends critically on the cavity optics. In the ideal case, optimal performance is achieved when the periodic radiation mode overlaps maximally with the electron beam, the radiation impinges on the curved focusing mirror below the critical angle, and the angular divergence is kept small enough at each crystal for the Bragg scattering used for near-normal reflection. In reality, various performance-degrading factors exist in the cavity, such as heat load on the crystal surface, misalignments of crystals and mirrors, and mirror surface errors. In this thesis, we study, via both analytic computation and numerical simulation, the optimal design and performance of the XFELO cavity in the presence of these factors. For the optimal design, we implement asymmetric crystals in the cavity to enhance performance; in general this has the undesirable effect of pulse dilation, and we present a configuration that avoids pulse length dilation. The effects of misalignments, focal length errors and mirror surface errors are then evaluated and their tolerances estimated. In particular, the simulation demonstrates that the effect of mirror surface errors on gain and round-trip loss is well within the desired performance of the XFELO.

  18. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  19. Direct comparisons of Illumina vs. Roche 454 sequencing technologies on the same microbial community DNA sample.

    PubMed

    Luo, Chengwei; Tsementzi, Despina; Kyrpides, Nikos; Read, Timothy; Konstantinidis, Konstantinos T

    2012-01-01

    Next-generation sequencing (NGS) is commonly used in metagenomic studies of complex microbial communities but whether or not different NGS platforms recover the same diversity from a sample and their assembled sequences are of comparable quality remain unclear. We compared the two most frequently used platforms, the Roche 454 FLX Titanium and the Illumina Genome Analyzer (GA) II, on the same DNA sample obtained from a complex freshwater planktonic community. Despite the substantial differences in read length and sequencing protocols, the platforms provided a comparable view of the community sampled. For instance, derived assemblies overlapped in ~90% of their total sequences and in situ abundances of genes and genotypes (estimated based on sequence coverage) correlated highly between the two platforms (R² > 0.9). Evaluation of base-call error, frameshift frequency, and contig length suggested that Illumina offered equivalent, if not better, assemblies than Roche 454. The results from metagenomic samples were further validated against DNA samples of eighteen isolate genomes, which showed a range of genome sizes and G+C% content. We also provide quantitative estimates of the errors in gene and contig sequences assembled from datasets characterized by different levels of complexity and G+C% content. For instance, we noted that homopolymer-associated, single-base errors affected ~1% of the protein sequences recovered in Illumina contigs of 10× coverage and 50% G+C; this frequency increased to ~3% when non-homopolymer errors were also considered. Collectively, our results should serve as a useful practical guide for choosing proper sampling strategies and data processing protocols for future metagenomic studies.

  20. CFD modeling using PDF approach for investigating the flame length in rotary kilns

    NASA Astrophysics Data System (ADS)

    Elattar, H. F.; Specht, E.; Fouda, A.; Bin-Mahfouz, Abdullah S.

    2016-12-01

    Numerical simulations using computational fluid dynamics (CFD) are performed to investigate the flame length characteristics in rotary kilns using a probability density function (PDF) approach. A commercial CFD package (ANSYS-Fluent) is employed for this objective. A 2-D axisymmetric model is applied to study the effect of both operating and geometric parameters of the rotary kiln on the characteristics of the flame length. Three types of gaseous fuel are used in the present work: methane (CH4), carbon monoxide (CO) and biogas (50 % CH4 + 50 % CO2). A preliminary comparison study of 2-D modeling outputs of free jet flames with available experimental data is carried out to choose and validate the proper turbulence model for the present numerical simulations. The results showed that the excess air number, diameter of the kiln air entrance, radiation modeling consideration and fuel type have remarkable effects on the flame length characteristics. Numerical correlations for the rotary kiln flame length are presented in terms of the studied kiln operating and geometric parameters within acceptable error.

  1. Family members' unique perspectives of the family: examining their scope, size, and relations to individual adjustment.

    PubMed

    Jager, Justin; Bornstein, Marc H; Putnick, Diane L; Hendricks, Charlene

    2012-06-01

    Using the McMaster Family Assessment Device (Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's "unique perspective" or nonshared, idiosyncratic view of the family. We used a modified multitrait-multimethod confirmatory factor analysis that (a) isolated for each family member's 6 reports of family dysfunction the nonshared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by 1 or more family members and (b) extracted common variance across each family member's set of nonshared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. In addition, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these "unique perspectives" reflect about the family are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  2. Family Members' Unique Perspectives of the Family: Examining their Scope, Size, and Relations to Individual Adjustment

    PubMed Central

    Jager, Justin; Bornstein, Marc H.; Putnick, Diane L.; Hendricks, Charlene

    2012-01-01

    Using the Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's “unique perspective” or non-shared, idiosyncratic view of the family. To do so we used a modified multitrait-multimethod confirmatory factor analysis that (1) isolated for each family member's six reports of family dysfunction the non-shared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by one or more family members and (2) extracted common variance across each family member's set of non-shared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. Additionally, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these “unique perspectives” reflect about the family are discussed. PMID:22545933

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rottmann, Joerg; Berbeco, Ross

    Purpose: Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. Methods: The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal–external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Results: Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. A minimum adaptive retraining data length of 8 s and history vector length of 3 s achieve maximal performance. Sampling frequency appears to have little impact on performance confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). Conclusions: A linear prediction model can reduce latency induced tracking errors by an average of about 50% in real-time image guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.

  4. Using an external surrogate for predictor model training in real-time motion management of lung tumors.

    PubMed

    Rottmann, Joerg; Berbeco, Ross

    2014-12-01

    Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal-external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. A minimum adaptive retraining data length of 8 s and history vector length of 3 s achieve maximal performance. Sampling frequency appears to have little impact on performance confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). A linear prediction model can reduce latency induced tracking errors by an average of about 50% in real-time image guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.
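
    A minimal sketch of such a lookahead predictor is given below, using a toy surrogate trace; the 15 Hz sampling, 3 s history vector and 250 ms lookahead follow the abstract, while the data and the ridge parameter lam are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of a linear ridge-regression lookahead predictor on a toy signal.
import numpy as np

def build_dataset(signal, history_len, lookahead):
    X, y = [], []
    for t in range(history_len, len(signal) - lookahead):
        X.append(signal[t - history_len:t])   # history vector ending at time t
        y.append(signal[t + lookahead])       # position one lookahead interval ahead
    return np.asarray(X), np.asarray(y)

def train_ridge(X, y, lam=1.0):
    # w = (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

fs = 15                                       # Hz, sampling frequency
history_len = 3 * fs                          # 3 s history vector
lookahead = round(0.25 * fs)                  # ~250 ms lookahead
t = np.arange(0, 120, 1 / fs)
surrogate = np.sin(2 * np.pi * t / 4.0)       # toy breathing-like surrogate signal

X, y = build_dataset(surrogate, history_len, lookahead)
w = train_ridge(X, y)
rmse = np.sqrt(np.mean((X @ w - y) ** 2))     # in-sample error of the predictor
```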

  5. Effect of window length on performance of the elbow-joint angle prediction based on electromyography

    NASA Astrophysics Data System (ADS)

    Triwiyanto; Wahyunggoro, Oyas; Adi Nugroho, Hanung; Herianto

    2017-05-01

    The high performance of elbow joint angle prediction is essential to the development of devices based on electromyography (EMG) control. The performance of the prediction depends on the feature extraction parameters, such as window length. In this paper, we evaluated the effect of the window length on the performance of the elbow-joint angle prediction. The prediction algorithm consists of zero-crossing feature extraction and a second-order Butterworth low-pass filter. The feature was used to extract the EMG signal while varying the window length. The EMG signal was collected from the biceps muscle while the elbow was moved in flexion and extension. The subject performed the elbow motion while holding a 1-kg load and moved the elbow over different periods (12 seconds, 8 seconds and 6 seconds). The results indicated that the window length affected the performance of the prediction. A window length of 250 samples yielded the best performance of the prediction algorithm, with (mean±SD) root mean square error = 5.68%±1.53% and Pearson’s correlation = 0.99±0.0059.
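
    A minimal sketch of the zero-crossing feature computed over non-overlapping EMG windows is shown below; the 250-sample window follows the abstract, while the amplitude threshold and the synthetic signal are assumptions.

```python
# Sketch of the zero-crossing (ZC) feature over non-overlapping EMG windows.
import numpy as np

def zero_crossings(window, thresh=0.01):
    """Count sign changes whose amplitude step exceeds a small noise threshold."""
    s = np.sign(window)
    changes = (s[:-1] * s[1:] < 0) & (np.abs(np.diff(window)) >= thresh)
    return int(np.count_nonzero(changes))

def zc_feature(emg, window_len=250):
    n_windows = len(emg) // window_len
    return np.array([zero_crossings(emg[i * window_len:(i + 1) * window_len])
                     for i in range(n_windows)])

# Toy usage on a synthetic burst-like signal.
rng = np.random.default_rng(0)
emg = rng.normal(scale=0.2, size=5000) * (1 + np.sin(np.linspace(0, 6 * np.pi, 5000)))
feature = zc_feature(emg, window_len=250)     # one ZC count per window
```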

  6. 2 GHz clock quantum key distribution over 260 km of standard telecom fiber.

    PubMed

    Wang, Shuang; Chen, Wei; Guo, Jun-Fu; Yin, Zhen-Qiang; Li, Hong-Wei; Zhou, Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2012-03-15

    We report a demonstration of quantum key distribution (QKD) over a standard telecom fiber exceeding 50 dB in loss and 250 km in length. The differential phase shift QKD protocol was chosen and implemented with a 2 GHz system clock rate. By careful optimization of the 1 bit delayed Faraday-Michelson interferometer and the use of the superconducting single photon detector (SSPD), we achieved a quantum bit error rate below 2% when the fiber length was no more than 205 km, and of 3.45% for a 260 km fiber with 52.9 dB loss. We also improved the quantum efficiency of SSPD to obtain a high key rate for 50 km length.

  7. Phylogenomics of Lophotrochozoa with Consideration of Systematic Error.

    PubMed

    Kocot, Kevin M; Struck, Torsten H; Merkel, Julia; Waits, Damien S; Todt, Christiane; Brannock, Pamela M; Weese, David A; Cannon, Johanna T; Moroz, Leonid L; Lieb, Bernhard; Halanych, Kenneth M

    2017-03-01

    Phylogenomic studies have improved understanding of deep metazoan phylogeny and show promise for resolving incongruences among analyses based on limited numbers of loci. One region of the animal tree that has been especially difficult to resolve, even with phylogenomic approaches, is relationships within Lophotrochozoa (the animal clade that includes molluscs, annelids, and flatworms among others). Lack of resolution in phylogenomic analyses could be due to insufficient phylogenetic signal, limitations in taxon and/or gene sampling, or systematic error. Here, we investigated why lophotrochozoan phylogeny has been such a difficult question to answer by identifying and reducing sources of systematic error. We supplemented existing data with 32 new transcriptomes spanning the diversity of Lophotrochozoa and constructed a new set of Lophotrochozoa-specific core orthologs. Of these, 638 orthologous groups (OGs) passed strict screening for paralogy using a tree-based approach. In order to reduce possible sources of systematic error, we calculated branch-length heterogeneity, evolutionary rate, percent missing data, compositional bias, and saturation for each OG and analyzed increasingly stricter subsets of only the most stringent (best) OGs for these five variables. Principal component analysis of the values for each factor examined for each OG revealed that compositional heterogeneity and average patristic distance contributed most to the variance observed along the first principal component while branch-length heterogeneity and, to a lesser extent, saturation contributed most to the variance observed along the second. Missing data did not strongly contribute to either. Additional sensitivity analyses examined effects of removing taxa with heterogeneous branch lengths, large amounts of missing data, and compositional heterogeneity. Although our analyses do not unambiguously resolve lophotrochozoan phylogeny, we advance the field by reducing the list of viable hypotheses. Moreover, our systematic approach for dissection of phylogenomic data can be applied to explore sources of incongruence and poor support in any phylogenomic data set. [Annelida; Brachiopoda; Bryozoa; Entoprocta; Mollusca; Nemertea; Phoronida; Platyzoa; Polyzoa; Spiralia; Trochozoa.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
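
    The screening strategy, scoring each orthologous group on the five variables and examining their variance structure with principal component analysis, can be sketched as below; the data are synthetic placeholders and the subset rule is an illustrative assumption, not the authors' exact procedure.

```python
# Sketch: score each orthologous group (OG) on five metrics and inspect which
# ones dominate the variance with PCA; values are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
cols = ["branch_length_heterogeneity", "evolutionary_rate",
        "percent_missing", "compositional_bias", "saturation"]
df = pd.DataFrame(rng.random((638, len(cols))), columns=cols)   # one row per OG

Z = StandardScaler().fit_transform(df)
pca = PCA(n_components=2).fit(Z)
loadings = pd.DataFrame(pca.components_, columns=cols, index=["PC1", "PC2"])
print(pca.explained_variance_ratio_)    # variance explained by PC1 and PC2
print(loadings)                         # which metrics load on each component

# Illustrative stricter subset: OGs in the better (lower) half for every metric.
strict_subset = df[(df.rank(pct=True) <= 0.5).all(axis=1)]
```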

  8. Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report

    PubMed Central

    Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo

    2013-01-01

    Context Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective To provide recommendations to improve the accuracy of physical activity derived from self report. Process We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings We identified a conceptual framework for reducing errors using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are behavioral parameters of type, intensity, frequency, and duration of physical activities performed, activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451

  9. Displacement-length relationship of normal faults in Acheron Fossae, Mars: new observations with HRSC.

    NASA Astrophysics Data System (ADS)

    Charalambakis, E.; Hauber, E.; Knapmeyer, M.; Grott, M.; Gwinner, K.

    2007-08-01

    For Earth, data sets and models have shown that for a fault loaded by a constant remote stress, the maximum displacement on the fault is linearly related to its length by d = gamma · l [1]. The scaling and structure is self-similar through time [1]. The displacement-length relationship can provide useful information about the tectonic regime. We intend to use it to estimate the seismic moment released during the formation of Martian fault systems and to improve the seismicity model [2]. Only a few data sets have been measured for extraterrestrial faults. One reason is the limited number of reliable topographic data sets. We used high-resolution Digital Elevation Models (DEM) [3] derived from HRSC image data taken from Mars Express orbit 1437. This orbit covers an area in the Acheron Fossae region, a rift-like graben system north of Olympus Mons with a "banana"-shaped topography [4]. It has a fault trend which runs approximately WNW-ESE. With an interactive IDL-based software tool [5] we measured the fault length and the vertical offset for 34 faults. We evaluated the height profile by plotting the fault lengths l vs. their observed maximum displacement (dmax model). Additionally, we computed the maximum displacement of an elliptical fault scarp whose plane has the same area as in the observed case (elliptical model). The integration over the entire fault length necessary for the computation of the area suppresses the "noise" introduced by local topographic effects like erosion or cratering. We should also mention that fault planes dipping 60 degrees are usually assumed for Mars [e.g., 6], and even shallower dips have been found for normal fault planes [7]. This dip angle is used to compute displacement from vertical offset via d = h/sin(α), where h is the observed topographic step height and α is the fault dip angle. If fault dip angles of 30 degrees are considered, the displacement differs by 40% from that obtained with dip angles of 60 degrees. Depending on the data quality, especially the lighting conditions in the region, different errors can be made in determining the various values. Based on our experience, we estimate that the error in measuring the fault length is smaller than 10% and that the measurement error of the offset is smaller than 5%. Furthermore, the horizontal resolution of the HRSC images is 12.5 m/pixel or 25 m/pixel, and that of the DEM derived from HRSC images is 50 m/pixel because of re-sampling. That means that image resolution does not introduce a significant error for fault lengths in the kilometer range. For the case of Mars it is known that linkage is an essential process in the growth of fault populations [8]. We obtained the d/l values from selected examples of faults that were connected via a relay ramp. The error of ignoring an existing fault linkage is 20% to 50% if the elliptical fault model is used and 30% to 50% if only the dmax value is used to determine d/l. This shows an advantage of the elliptic model. The error increases if more faults are linked, because the underestimation of the relevant length gets worse the longer the linked system is. We obtained a value of gamma = d/l of about 2 · 10⁻² for the elliptic model and a value of approximately 2.7 · 10⁻² for the dmax model. The data show a relatively large scatter, but they can be compared to data from terrestrial faults (d/l ≈ 1 · 10⁻² to 5 · 10⁻²; [9] and references therein). In a first inspection of the Acheron Fossae 2 region in orbit 1437 we could confirm our first observations [10].
    If we consider fault linkage, the d/l values shift towards lower d/l ratios, since linkage means that d remains essentially constant, but l increases significantly. We will continue to measure other faults and obtain values for linked faults and relay ramps. References: [1] Cowie, P. A. and Scholz, C. H. (1992) JSG, 14, 1133-1148. [2] Knapmeyer, M. et al. (2006) JGR, 111, E11006. [3] Neukum, G. et al. (2004) ESA SP-1240, 17-35. [4] Kronberg, P. et al. (2007) J. Geophys. Res., 112, E04005, doi:10.1029/2006JE002780. [5] Hauber, E. et al. (2007) LPSC, XXXVIII, abstract 1338. [6] Wilkins, S. J. et al. (2002) GRL, 29, 1884, doi: 10.1029/2002GL015391. [7] Fueten, F. et al. (2007) LPSC, XXXVIII, abstract 1388. [8] Schultz, R. A. (2000) Tectonophysics, 316, 169-193. [9] Schultz, R. A. et al. (2006) JSG, 28, 2182-2193. [10] Hauber, E. et al. (2007) 7th Mars Conference, submitted.
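
    A worked example of the conversion from vertical offset to displacement and of the resulting d/l ratio is given below, using made-up values for the offset and fault length and the two dip angles discussed above.

```python
# Worked example: displacement d from vertical offset h via d = h / sin(alpha),
# and the resulting d/l ratio; h and the fault length are hypothetical values.
import math

h = 120.0          # vertical offset in metres (hypothetical)
length = 6000.0    # fault length in metres (hypothetical)

for dip_deg in (60.0, 30.0):
    d = h / math.sin(math.radians(dip_deg))
    print(f"dip {dip_deg:4.0f} deg: d = {d:6.1f} m, d/l = {d / length:.4f}")
```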

  10. Limitations of Surface Mapping Technology in Accurately Identifying Critical Errors in Dental Students' Crown Preparations.

    PubMed

    Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G

    2018-01-01

    The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared an ideal crown preparation, made by a faculty member on a dentoform, against modified preparations. Two types of preparation errors were created by the addition of flowable composite to the occlusal surface of identical dies of the preparations to represent the underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged together. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.

  11. An Agent-Based Intervention to Assist Drivers Under Stereotype Threat: Effects of In-Vehicle Agents' Attributional Error Feedback.

    PubMed

    Joo, Yeon Kyoung; Lee-Won, Roselyn J

    2016-10-01

    For members of a group negatively stereotyped in a domain, making mistakes can aggravate the influence of stereotype threat because negative stereotypes often blame target individuals and attribute the outcome to their lack of ability. Virtual agents offering real-time error feedback may influence performance under stereotype threat by shaping the performers' attributional perception of errors they commit. We explored this possibility with female drivers, considering the prevalence of the "women-are-bad-drivers" stereotype. Specifically, we investigated how in-vehicle voice agents offering error feedback based on responsibility attribution (internal vs. external) and outcome attribution (ability vs. effort) influence female drivers' performance under stereotype threat. In addressing this question, we conducted an experiment in a virtual driving simulation environment that provided moment-to-moment error feedback messages. Participants performed a challenging driving task and made mistakes preprogrammed to occur. Results showed that the agent's error feedback with outcome attribution moderated the stereotype threat effect on driving performance. Participants under stereotype threat had a smaller number of collisions when the errors were attributed to effort than to ability. In addition, outcome attribution feedback moderated the effect of responsibility attribution on driving performance. Implications of these findings are discussed.

  12. A concept for a visual computer interface to make error taxonomies useful at the point of primary care.

    PubMed

    Singh, Ranjit; Pace, Wilson; Singh, Sonjoy; Singh, Ashok; Singh, Gurdev

    2007-01-01

    Evidence suggests that the quality of care delivered by the healthcare industry currently falls far short of its capabilities. Whilst most patient safety and quality improvement work to date has focused on inpatient settings, some estimates suggest that outpatient settings are equally important, with up to 200,000 avoidable deaths annually in the United States of America (USA) alone. There is currently a need for improved error reporting and taxonomy systems that are useful at the point of care. This provides an opportunity to harness the benefits of computer visualisation to help structure and illustrate the 'stories' behind errors. In this paper we present a concept for a visual taxonomy of errors, based on visual models of the healthcare system at both macrosystem and microsystem levels (previously published in this journal), and describe how this could be used to create a visual database of errors. In an alpha test in a US context, we were able to code a sample of 20 errors from an existing error database using the visual taxonomy. The approach is designed to capture and disseminate patient safety information in an unambiguous format that is useful to all members of the healthcare team (including the patient) at the point of care as well as at the policy-making level.

  13. 50 CFR 260.97 - Conditions for providing fishery products inspection service at official establishments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CERTIFICATION Inspection and Certification of Establishments and Fishery Products for Human Consumption... control systems and cooperation. The inspection effort requirement may be reevaluated when the contracting...; or (2) For production errors, such as processing temperatures, length of process, or misbranding of...

  14. 50 CFR 260.97 - Conditions for providing fishery products inspection service at official establishments.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... CERTIFICATION Inspection and Certification of Establishments and Fishery Products for Human Consumption... control systems and cooperation. The inspection effort requirement may be reevaluated when the contracting...; or (2) For production errors, such as processing temperatures, length of process, or misbranding of...

  15. 78 FR 40823 - Reports, Forms, and Record Keeping Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-08

    ... at time of approval. Title: National Survey of Principal Drivers of Vehicles with a Rear Seat Belt... from both groups and information on their passengers seat belt usage habits, as well as the... use computer-assisted telephone interviewing to reduce interview length and minimize recording errors...

  16. Assimilating every-30-second 100-m-mesh radar observations for convective weather: implications to non-Gaussian PDF

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.

    2016-12-01

    Convective weather is known to be highly nonlinear and chaotic, and it is hard to predict the location and timing of convective storms precisely. Our Big Data Assimilation (BDA) effort has been exploring the use of dense and frequent observations to avoid non-Gaussian probability density functions (PDFs) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into the 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed significant non-Gaussian PDFs with every-30-second updates. With a 10240-member ensemble Kalman filter with a global NWP model at 112-km grid spacing, we found roughly 1000 members satisfactory to capture the non-Gaussian error structures. With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model on the non-Gaussianity with dense and frequent observations.
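
    For readers unfamiliar with the update the abstract assumes, the sketch below shows a generic stochastic ensemble Kalman filter analysis step on a toy state under the Gaussian error assumption; it is illustrative only and not the BDA system's implementation.

```python
# Sketch of a stochastic ensemble Kalman filter (EnKF) analysis step.
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, r):
    """X: (n_state, n_members) forecast ensemble; y: observations;
    H: linear observation operator; r: observation error variance."""
    n_obs, n_mem = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)                          # anomalies
    P = A @ A.T / (n_mem - 1)                                      # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(n_obs))   # Kalman gain
    Y = y[:, None] + rng.normal(0.0, np.sqrt(r), (n_obs, n_mem))   # perturbed obs
    return X + K @ (Y - H @ X)                                     # analysis ensemble

# Toy example: 3-variable state, 100 members, the first two variables observed.
X = rng.normal(size=(3, 100))
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
y = np.array([0.5, -0.2])
Xa = enkf_update(X, y, H, r=0.1)
```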

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanyushenkov, Y.; Doose, C.; Fuerst, J.

    Development of superconducting undulator (SCU) technology continues at the Advanced Photon Source (APS). The experience of building and successfully operating the first short-length, 16-mm period length superconducting undulator, SCU0, paved the way for a 1-m long, 18-mm period device, SCU18-1, which has been in operation since May 2015. The APS SCU team has also built and tested a 1.5-m long, 21-mm period length undulator as a part of the LCLS SCU R&D program, aimed at demonstrating the availability of SCU technology for free electron lasers. This undulator successfully achieved all the requirements, including a phase error of 5° RMS. Our team has recently completed one more 1-m long, 18-mm period length undulator, SCU18-2, that is replacing the SCU0. We are also working on a helical SCU for the APS. The status of these projects will be presented.

  18. Study on effective MOSFET channel length extracted from gate capacitance

    NASA Astrophysics Data System (ADS)

    Tsuji, Katsuhiro; Terada, Kazuo; Fujisaka, Hisato

    2018-01-01

    The effective channel length (L_GCM) of metal-oxide-semiconductor field-effect transistors (MOSFETs) is extracted from the gate capacitances of actual-size MOSFETs, which are measured by charge-injection-induced-error-free charge-based capacitance measurement (CIEF CBCM). To accurately evaluate the capacitances between the gate and the channel of test MOSFETs, the parasitic capacitances are removed by using test MOSFETs having various channel sizes and a source/drain reference device. A strong linear relationship between the gate-channel capacitance and the design channel length is obtained, from which L_GCM is extracted. It is found that L_GCM is slightly less than the effective channel length (L_CRM) extracted from the measured MOSFET drain current. The reason for this is discussed, and it is found that the capacitance between the gate electrode and the source and drain regions affects this extraction.
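
    The extraction step, fitting the measured gate-channel capacitance against the design channel length and reading the length offset from the intercept, can be sketched as follows; the capacitance values are made-up illustrations, not measured CIEF CBCM data.

```python
# Sketch: extract an effective-length offset dL from the linear relationship
# between gate-channel capacitance and design channel length.
import numpy as np

L_design = np.array([0.10, 0.18, 0.35, 0.50, 1.00])   # design lengths, um (hypothetical)
C_gc     = np.array([0.95, 1.90, 3.95, 5.75, 11.80])  # gate-channel capacitance, fF (hypothetical)

# Linear model: C_gc = slope * (L_design - dL)  =>  dL = -intercept / slope
slope, intercept = np.polyfit(L_design, C_gc, 1)
dL = -intercept / slope
L_eff = L_design - dL          # effective channel length (analogous to L_GCM) per device
```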

  19. The impact of different background errors in the assimilation of satellite radiances and in-situ observational data using WRFDA for three rainfall events over Iran

    NASA Astrophysics Data System (ADS)

    Zakeri, Zeinab; Azadi, Majid; Ghader, Sarmad

    2018-01-01

    Satellite radiances and in-situ observations are assimilated through the Weather Research and Forecasting Data Assimilation (WRFDA) system into the Advanced Research WRF (ARW) model over Iran and its neighboring area. A domain-specific background error, based on the x and y components of wind speed (UV) as control variables, is calculated for the WRFDA system, and some sensitivity experiments are carried out to compare the impact of the global background error and the domain-specific background error on both the precipitation and 2-m temperature forecasts over Iran. Three precipitation events that occurred over the country during January, September and October 2014 are simulated in three different experiments, and the results for precipitation and 2-m temperature are verified against surface observations. Results show that using the domain-specific background error improves the 2-m temperature and 24-h accumulated precipitation forecasts consistently, while the global background error may even degrade the forecasts compared to the experiments without data assimilation. The improvement in 2-m temperature is more evident during the first forecast hours and decreases significantly as the forecast length increases.

  20. The dependence of crowding on flanker complexity and target-flanker similarity

    PubMed Central

    Bernard, Jean-Baptiste; Chung, Susana T.L.

    2013-01-01

    We examined the effects of the spatial complexity of flankers and target-flanker similarity on the performance of identifying crowded letters. On each trial, observers identified the middle character of random strings of three characters (“trigrams”) briefly presented at 10° below fixation. We tested the 26 lowercase letters of the Times-Roman and Courier fonts, a set of 79 characters (letters and non-letters) of the Times-Roman font, and the uppercase letters of two highly complex ornamental fonts, Edwardian and Aristocrat. Spatial complexity of characters was quantified by the length of the morphological skeleton of each character, and target-flanker similarity was defined based on a psychometric similarity matrix. Our results showed that (1) letter identification error rate increases with flanker complexity up to a certain value, beyond which error rate becomes independent of flanker complexity; (2) the increase of error rate is slower for high-complexity target letters; (3) error rate increases with target-flanker similarity; and (4) mislocation error rate increases with target-flanker similarity. These findings, combined with the current understanding of the faulty feature integration account of crowding, provide some constraints of how the feature integration process could cause perceptual errors. PMID:21730225
