Sample records for individual error bars

  1. Bandwagon effects and error bars in particle physics

    NASA Astrophysics Data System (ADS)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
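The quoted tail behavior is easy to illustrate: an exponential law retains vastly more probability beyond a few standard deviations than a Gaussian does. A minimal sketch, standard library only (the unit exponential scale is an illustrative assumption, not a fit to the particle data):

```python
import math

def gauss_tail(t):
    # P(|z| > t) for a standard normal deviate
    return math.erfc(t / math.sqrt(2))

def exp_tail(t):
    # P(|x| > t) for a two-sided exponential law with unit scale
    return math.exp(-t)

# exponential tails dwarf Gaussian tails once t exceeds a few sigma
for t in (1, 3, 5, 10):
    print(t, gauss_tail(t), exp_tail(t))
```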

  2. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different machine learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate way to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble-based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.
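The Gaussian Process error bars mentioned above come from the GP predictive variance, which grows for query points far from the training data and can thus flag compounds outside the domain of applicability. A minimal numpy sketch of that idea on toy 1-D data (the RBF kernel, length-scale, and noise level are illustrative assumptions, not the models of the study):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # squared-exponential kernel between 1-D point sets a and b
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    # standard GP regression equations; the predictive std serves as a
    # per-point error bar and widens far from the training data (the DOA idea)
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v * v, axis=0) + noise
    return mean, np.sqrt(var)

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)
# 0.5 lies inside the training region, 5.0 far outside it
mean, std = gp_predict(x, y, np.array([0.5, 5.0]))
```

The far-away test point gets a much wider error bar, which is exactly the behavior used to delimit the domain of applicability.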

  6. Influence of survey strategy and interpolation model on DEM quality

    NASA Astrophysics Data System (ADS)

    Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.

    2009-11-01

    Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five common interpolation algorithms. Each resultant DEM was differenced against a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a specific survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
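The link reported between DEM error and local surface roughness can be mimicked on a toy grid: difference a crudely "interpolated" surface against the truth, then compare the vertical error with a moving-window standard deviation. A sketch under stated assumptions (a synthetic 40 x 40 surface with one sharp bank edge, and a 7 x 7 cell window standing in for the study's 0.2-m-diameter window; np.roll wraps at the grid edges, so only interior columns are compared):

```python
import numpy as np

def local_std(z, w=3):
    # standard deviation of elevations in a (2w+1) x (2w+1) moving window,
    # a simple proxy for local topographic roughness
    out = np.zeros_like(z)
    rows, cols = z.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = z[max(0, i - w):i + w + 1, max(0, j - w):j + w + 1].std()
    return out

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
# toy "true" surface: flat bar with a sharp slope break down the middle
truth = np.where(x[None, :] < 0.5, 0.0, 1.0) + 0.01 * rng.standard_normal((40, 40))
# crude "DEM": truth smoothed along rows, mimicking interpolation loss
dem = (truth + np.roll(truth, 1, axis=1) + np.roll(truth, -1, axis=1)) / 3.0
err = np.abs(dem - truth)
rough = local_std(truth)
# vertical error concentrates at the slope break, where roughness is high
```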

  7. Publisher Correction: Unravelling the immune signature of Plasmodium falciparum transmission-reducing immunity.

    PubMed

    Stone, Will J R; Campo, Joseph J; Ouédraogo, André Lin; Meerstein-Kessel, Lisette; Morlais, Isabelle; Da, Dari; Cohuet, Anna; Nsango, Sandrine; Sutherland, Colin J; van de Vegte-Bolmer, Marga; Siebelink-Stoter, Rianne; van Gemert, Geert-Jan; Graumans, Wouter; Lanke, Kjerstin; Shandling, Adam D; Pablo, Jozelyn V; Teng, Andy A; Jones, Sophie; de Jong, Roos M; Fabra-García, Amanda; Bradley, John; Roeffen, Will; Lasonder, Edwin; Gremo, Giuliana; Schwarzer, Evelin; Janse, Chris J; Singh, Susheel K; Theisen, Michael; Felgner, Phil; Marti, Matthias; Drakeley, Chris; Sauerwein, Robert; Bousema, Teun; Jore, Matthijs M

    2018-04-11

    The original version of this Article contained errors in Fig. 3. In panel a, bars from a chart depicting the percentage of antibody-positive individuals in non-infectious and infectious groups were inadvertently included in place of bars depicting the percentage of infectious individuals, as described in the Article and figure legend. However, the p values reported in the Figure and the resulting conclusions were based on the correct dataset. The corrected Fig. 3a now shows the percentage of infectious individuals in antibody-negative and -positive groups, in both the PDF and HTML versions of the Article. The incorrect and correct versions of Figure 3a are also presented for comparison in the accompanying Publisher Correction as Figure 1. The HTML version of the Article also omitted a link to Supplementary Data 6. The error has now been fixed and Supplementary Data 6 is available to download.

  8. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate), a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) © 2010 Massachusetts Medical Society.

  9. Time trend of injection drug errors before and after implementation of bar-code verification system.

    PubMed

    Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki

    2015-01-01

    Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors of injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p < 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.

  10. The opercular mouth-opening mechanism of largemouth bass functions as a 3D four-bar linkage with three degrees of freedom.

    PubMed

    Olsen, Aaron M; Camp, Ariel L; Brainerd, Elizabeth L

    2017-12-15

    The planar, one degree of freedom (1-DoF) four-bar linkage is an important model for understanding the function, performance and evolution of numerous biomechanical systems. One such system is the opercular mechanism in fishes, which is thought to function like a four-bar linkage to depress the lower jaw. While anatomical and behavioral observations suggest some form of mechanical coupling, previous attempts to model the opercular mechanism as a planar four-bar have consistently produced poor model fits relative to observed kinematics. Using newly developed, open source mechanism fitting software, we fitted multiple three-dimensional (3D) four-bar models with varying DoF to in vivo kinematics in largemouth bass to test whether the opercular mechanism functions instead as a 3D four-bar with one or more DoF. We examined link position error, link rotation error and the ratio of output to input link rotation to identify a best-fit model at two different levels of variation: for each feeding strike and across all strikes from the same individual. A 3D, 3-DoF four-bar linkage was the best-fit model for the opercular mechanism, achieving link rotational errors of less than 5%. We also found that the opercular mechanism moves with multiple degrees of freedom at the level of each strike and across multiple strikes. These results suggest that active motor control may be needed to direct the force input to the mechanism by the axial muscles and achieve a particular mouth-opening trajectory. Our results also expand the versatility of four-bar models in simulating biomechanical systems and extend their utility beyond planar or single-DoF systems. © 2017. Published by The Company of Biologists Ltd.
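For reference, the classical planar 1-DoF four-bar that this study generalizes can be solved in closed form with Freudenstein's equation. The sketch below returns both assembly branches of the output-link angle; the a-input / b-coupler / c-output / d-ground naming is the common textbook convention, not the paper's fitting software:

```python
import math

def fourbar_theta4(a, b, c, d, theta2):
    # Output-link angles of a planar 1-DoF four-bar via Freudenstein's
    # equation: a = input crank, b = coupler, c = output link,
    # d = ground link, theta2 = input crank angle in radians.
    k1, k2 = d / a, d / c
    k3 = (a * a - b * b + c * c + d * d) / (2.0 * a * c)
    A = math.cos(theta2) - k1 - k2 * math.cos(theta2) + k3
    B = -2.0 * math.sin(theta2)
    C = k1 - (k2 + 1.0) * math.cos(theta2) + k3
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return []  # the linkage cannot be assembled at this input angle
    root = math.sqrt(disc)
    # two branches: the open and crossed assembly configurations
    return [2.0 * math.atan2(-B + s, 2.0 * A) for s in (root, -root)]

# parallelogram linkage (a = c, b = d): the output must track the input angle
sols = fourbar_theta4(1.0, 2.0, 1.0, 2.0, 1.0)
```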

  11. Visual short-term memory deficits associated with GBA mutation and Parkinson's disease.

    PubMed

    Zokaei, Nahid; McNeill, Alisdair; Proukakis, Christos; Beavan, Michelle; Jarman, Paul; Korlipara, Prasad; Hughes, Derralynn; Mehta, Atul; Hu, Michele T M; Schapira, Anthony H V; Husain, Masud

    2014-08-01

    Individuals with mutation in the lysosomal enzyme glucocerebrosidase (GBA) gene are at significantly high risk of developing Parkinson's disease with cognitive deficit. We examined whether visual short-term memory impairments, long associated with patients with Parkinson's disease, are also present in GBA-positive individuals-both with and without Parkinson's disease. Precision of visual working memory was measured using a serial order task in which participants observed four bars, each of a different colour and orientation, presented sequentially at screen centre. Afterwards, they were asked to adjust a coloured probe bar's orientation to match the orientation of the bar of the same colour in the sequence. An additional attentional 'filtering' condition tested patients' ability to selectively encode one of the four bars while ignoring the others. A sensorimotor task using the same stimuli controlled for perceptual and motor factors. There was a significant deficit in memory precision in GBA-positive individuals-with or without Parkinson's disease-as well as GBA-negative patients with Parkinson's disease, compared to healthy controls. Worst recall was observed in GBA-positive cases with Parkinson's disease. Although all groups were impaired in visual short-term memory, there was a double dissociation between sources of error associated with GBA mutation and Parkinson's disease. The deficit observed in GBA-positive individuals, regardless of whether they had Parkinson's disease, was explained by a systematic increase in interference from features of other items in memory: misbinding errors. In contrast, impairments in patients with Parkinson's disease, regardless of GBA status, was explained by increased random responses. Individuals who were GBA-positive and also had Parkinson's disease suffered from both types of error, demonstrating the worst performance. 
These findings provide evidence for dissociable signature deficits within the domain of visual short-term memory associated with GBA mutation and with Parkinson's disease. Identification of the specific pattern of cognitive impairment in GBA mutation versus Parkinson's disease is potentially important as it might help to identify individuals at risk of developing Parkinson's disease. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain.

  12. The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.

    PubMed

    Hutton, Kevin; Ding, Qian; Wellman, Gregory

    2017-02-24

    Adoption of bar-coding technology has risen drastically in U.S. health systems in the past decade. However, few studies have addressed its impact with strong prospective methodologies, and the research that has been conducted spans both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors, and which types of medication errors it may prevent, in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified, and 10 studies, which used prospective before-and-after study designs, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining one was conducted in the United Kingdom. One article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly wrong dose, wrong drug, wrong patient, unauthorized drug, and wrong route errors.

  13. Six1-Eya-Dach Network in Breast Cancer

    DTIC Science & Technology

    2009-05-01

    Ctrl: scramble controls. Responsiveness was tested using luciferase activity of the 3TP reporter construct and normalized to Renilla luciferase activity. Data points show the mean of two individual clones from two experiments and error bars represent…

  14. Rate Constants for Fine-Structure Excitations in O - H Collisions with Error Bars Obtained by Machine Learning

    NASA Astrophysics Data System (ADS)

    Vieira, Daniel; Krems, Roman

    2017-04-01

    Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
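The error-bar construction described, propagating a ±20% variation of the interaction potential through to the rate coefficient, can be sketched with a toy stand-in for the scattering calculation. The paper fits a Gaussian Process surrogate to avoid repeated coupled-channel runs; here a cheap analytic rate function is sampled directly, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_rate(scale):
    # stand-in for an expensive coupled-channel calculation: maps a
    # potential scaling factor to a rate coefficient (entirely illustrative)
    return 1e-10 * np.exp(-0.5 / scale)

# sample +/-20% variations of the potential, as in the abstract
scales = rng.uniform(0.8, 1.2, size=200)
rates = toy_rate(scales)
lo, hi = rates.min(), rates.max()
# (lo, hi) brackets the rate coefficient; its half-width is the error bar
```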

  15. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    NASA Astrophysics Data System (ADS)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    Aiming at the mechanism error caused by joint clearance in the planar 2-DOF five-bar linkage, the method of treating the clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of joint clearance on the output error of the mechanism is studied, and the calculation method and basis for the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, providing a new way to analyze planar parallel mechanism error caused by joint clearance.

  16. Haptic spatial matching in near peripersonal space.

    PubMed

    Kaas, Amanda L; van Mier, Hanneke I

    2006-04-01

    Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.

  17. Error analysis of mechanical system and wavelength calibration of monochromator

    NASA Astrophysics Data System (ADS)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
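The sine-drive geometry underlying this error model is simple: the scanned wavelength is proportional to sin θ = x/L, where x is the screw displacement and L the sine-bar length, so a length error ΔL maps to a relative wavelength error of -ΔL/L. A sketch with assumed numbers (the proportionality constant k lumps the grating constant, diffraction order, and mounting geometry, and is illustrative only):

```python
def wavelength(x, L, k=1200.0):
    # sine-drive monochromator: wavelength proportional to sin(theta) = x / L;
    # k lumps grating constant, order, and mounting geometry (illustrative)
    return k * x / L

L_nominal = 100.0   # mm, assumed design length of the sine bar
L_actual = 100.05   # mm, sine bar slightly too long
x = 50.0            # mm, screw position
err = wavelength(x, L_actual) - wavelength(x, L_nominal)
# the relative wavelength error equals -dL/L_actual exactly
```

This first-order proportionality is why adjusting the sine bar's length, as in the abstract, can calibrate out most of the wavelength error.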

  18. The Impact of Bar Code Medication Administration Technology on Reported Medication Errors

    ERIC Educational Resources Information Center

    Holecek, Andrea

    2011-01-01

    The use of bar-code medication administration technology is on the rise in acute care facilities in the United States. The technology is purported to decrease medication errors that occur at the point of administration. How significantly this technology affects actual rate and severity of error is unknown. This descriptive, longitudinal research…

  19. Numerical modeling of the divided bar measurements

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Keehm, Y.

    2011-12-01

    The divided-bar technique has been used to measure thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors, which have not yet been systematically quantified. We used an FEM and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected for lateral heat loss on the sides of rock samples and for the thermal resistance at the contacts between the rock sample and the bar. We first investigated through numerical modeling how the amount of these corrections changes with the thickness and thermal conductivity of rock samples. When we fixed the sample thickness at 10 mm and varied thermal conductivity, errors in the measured thermal conductivity range from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. When we fixed thermal conductivity at 1.38 W/m/K and varied the sample thickness, we found that the error ranges from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. Thermal conductivity of the two thin standard disks (2 mm in thickness) located at the top and the bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of the two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error can reach up to -2.29% for the same sample when the thermal conductivity of the two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements strongly depends on the thermal conductivity and thickness of the thermal compound that is applied to reduce thermal resistance at contacts between the rock sample and the bar. When the thickness of the thermal compound (0.29 W/m/K) is 0.03 mm, we found that the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. We then fixed the thickness (0.03 mm) and varied the thermal conductivity of the thermal compound. We found that the relative error with a 1.0 W/m/K compound is 1.28%, and the relative error with a 0.29 W/m/K compound is 4.06%. When we repeated this test with a different thickness of the thermal compound (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and that with a 0.29 W/m/K compound is 12.2%. In addition, the cell technique by Sass et al. (1971), which is widely used to measure thermal conductivity of rock fragments, was evaluated using the FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to numerically test the accuracy of the cell technique. The result shows a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and cell techniques for thermal conductivity measurements of rocks and fragments. We found that FEM modeling can accurately mimic these measurement techniques and can help us estimate measurement errors quantitatively.
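The bias from the thermal compound can be estimated to zeroth order with a 1-D series-resistance model of the divided-bar stack. The sample and compound values below are taken from the abstract, the geometry is otherwise assumed; the FEM in the study also captures lateral heat loss, so its percentages differ from this sketch:

```python
def apparent_conductivity(k_sample, t_sample, t_compound, k_compound):
    # 1-D series-resistance model of a divided-bar stack: the thermal
    # compound at the two contacts adds resistance that gets attributed
    # to the sample, biasing the measured conductivity low
    r_true = t_sample / k_sample
    r_meas = r_true + 2.0 * t_compound / k_compound
    return t_sample / r_meas

# 3.0 W/m/K sample, 10 mm thick, with 0.1 mm of 0.29 W/m/K compound
k_app = apparent_conductivity(3.0, 0.010, 0.0001, 0.29)
rel_err = (k_app - 3.0) / 3.0   # fractional underestimate of conductivity
```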

  20. Spirality: A Novel Way to Measure Spiral Arm Pitch Angle

    NASA Astrophysics Data System (ADS)

    Shields, Douglas W.; Boe, Benjamin; Henderson, Casey L.; Hartley, Matthew; Davis, Benjamin L.; Pour Imani, Hamed; Kennefick, Daniel; Kennefick, Julia D.

    2015-01-01

    We present the MATLAB code Spirality, a novel method for measuring spiral arm pitch angles by fitting galaxy images to spiral templates of known pitch. For a given pitch angle template, the mean pixel value is found along each of typically 1000 spiral axes. The fitting function, which shows a local maximum at the best-fit pitch angle, is the variance of these means. Error bars are found by varying the inner radius of the measurement annulus and finding the standard deviation of the best-fit pitches. Computation time is typically on the order of 2 minutes per galaxy, assuming at least 8 GB of working memory. We tested the code using 128 synthetic spiral images of known pitch. These spirals varied in the number of spiral arms, pitch angle, degree of logarithmicity, radius, SNR, inclination angle, bar length, and bulge radius. A correct result is defined as a result that matches the true pitch within the error bars, with error bars no greater than ±7°. For the non-logarithmic spiral sample, the correct answer is similarly defined, with the mean pitch as a function of radius in place of the true pitch. For all synthetic spirals, correct results were obtained so long as SNR > 0.25, the bar length was no more than 60% of the spiral's diameter (when the bar was included in the measurement), the input center of the spiral was no more than 6% of the spiral radius away from the true center, and the inclination angle was no more than 30°. The synthetic spirals were not deprojected prior to measurement. The code produced the correct result for all barred spirals when the measurement annulus was placed outside the bar. Additionally, we compared the code's results against 2DFFT results for 203 visually selected spiral galaxies in GOODS North and South. Among the entire sample, Spirality's error bars overlapped 2DFFT's error bars 64% of the time. Source code is available by email request from the primary author.
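The template-fitting scheme described above can be reproduced in miniature: paint a synthetic two-armed logarithmic spiral of known pitch, then evaluate the variance-of-means fitting function over a grid of candidate pitches. A toy sketch (grid size, arm count, axis count, and step sizes are arbitrary choices; the real code fits galaxy images and derives error bars by varying the inner annulus radius):

```python
import math
import numpy as np

N = 101
img = np.zeros((N, N))
c = N // 2
true_pitch = math.radians(20)

# paint a two-armed logarithmic spiral r = r0 * exp(theta * tan(pitch))
for arm in (0.0, math.pi):
    theta = 0.0
    while True:
        r = 3.0 * math.exp(theta * math.tan(true_pitch))
        if r > c - 1:
            break
        img[int(round(c + r * math.sin(theta + arm))),
            int(round(c + r * math.cos(theta + arm)))] = 1.0
        theta += 0.01

def fitness(pitch, n_axes=60):
    # Spirality's fitting function: variance, across spiral axes, of the
    # mean pixel value along each axis; peaks when the template matches
    means = []
    for k in range(n_axes):
        psi = 2.0 * math.pi * k / n_axes
        vals = []
        theta = 0.0
        while True:
            r = 3.0 * math.exp(theta * math.tan(pitch))
            if r > c - 1:
                break
            vals.append(img[int(round(c + r * math.sin(theta + psi))),
                            int(round(c + r * math.cos(theta + psi)))])
            theta += 0.01
        means.append(sum(vals) / len(vals))
    return float(np.var(means))

cands = [math.radians(d) for d in (10, 15, 20, 25, 30)]
best = max(cands, key=fitness)  # recovers the painted pitch
```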

  1. Evaluation of Key Factors Impacting Feeding Safety in the Neonatal Intensive Care Unit: A Systematic Review.

    PubMed

    Matus, Bethany A; Bridges, Kayla M; Logomarsino, John V

    2018-06-21

    Individualized feeding care plans and safe handling of milk (human or formula) are critical in promoting growth, immune function, and neurodevelopment in the preterm infant. Feeding errors and disruptions or limitations to feeding processes in the neonatal intensive care unit (NICU) are associated with negative safety events. Feeding errors include contamination of milk and delivery of incorrect or expired milk and may result in adverse gastrointestinal illnesses. The purpose of this review was to evaluate the effect(s) of centralized milk preparation, use of trained technicians, use of bar code-scanning software, and collaboration between registered dietitians and registered nurses on feeding safety in the NICU. A systematic review of the literature was completed, and 12 articles were selected as relevant to search criteria. Study quality was evaluated using the Downs and Black scoring tool. An evaluation of human studies indicated that the use of centralized milk preparation, trained technicians, bar code-scanning software, and possible registered dietitian involvement decreased feeding-associated error in the NICU. A state-of-the-art NICU includes a centralized milk preparation area staffed by trained technicians, care supported by bar code-scanning software, and utilization of a registered dietitian to improve patient safety. These resources will provide nurses more time to focus on nursing-specific neonatal care. Further research is needed to evaluate the impact of factors related to feeding safety in the NICU as well as potential financial benefits of these quality improvement opportunities.

  2. Computer-assisted bar-coding system significantly reduces clinical laboratory specimen identification errors in a pediatric oncology hospital.

    PubMed

    Hayden, Randall T; Patterson, Donna J; Jay, Dennis W; Cross, Carl; Dotson, Pamela; Possel, Robert E; Srivastava, Deo Kumar; Mirro, Joseph; Shenep, Jerry L

    2008-02-01

    To assess the ability of a bar code-based electronic positive patient and specimen identification (EPPID) system to reduce identification errors in a pediatric hospital's clinical laboratory. An EPPID system was implemented at a pediatric oncology hospital to reduce errors in patient and laboratory specimen identification. The EPPID system included bar-code identifiers and handheld personal digital assistants supporting real-time order verification. System efficacy was measured in 3 consecutive 12-month time frames, corresponding to periods before, during, and immediately after full EPPID implementation. A significant reduction in the median percentage of mislabeled specimens was observed in the 3-year study period. A decline from 0.03% to 0.005% (P < .001) was observed in the 12 months after full system implementation. On the basis of the pre-intervention detected error rate, it was estimated that EPPID prevented at least 62 mislabeling events during its first year of operation. EPPID decreased the rate of misidentification of clinical laboratory samples. The diminution of errors observed in this study provides support for the development of national guidelines for the use of bar coding for laboratory specimens, paralleling recent recommendations for medication administration.

  3. Analysis of the technology acceptance model in examining hospital nurses' behavioral intentions toward the use of bar code medication administration.

    PubMed

    Song, Lunar; Park, Byeonghwa; Oh, Kyeung Mi

    2015-04-01

Serious medication errors continue to occur in hospitals, even though technology that could potentially eliminate them, such as bar code medication administration, is available. Little is known about the degree to which the culture of patient safety is associated with behavioral intention to use bar code medication administration. Based on the Technology Acceptance Model, this study evaluated the relationships among patient safety culture, perceived usefulness, perceived ease of use, and behavioral intention to use bar code medication administration technology among nurses in hospitals. Cross-sectional surveys with a convenience sample of 163 nurses using bar code medication administration were conducted. Feedback and communication about errors had a positive impact in predicting perceived usefulness (β=.26, P<.01) and perceived ease of use (β=.22, P<.05). In a multiple regression model predicting behavioral intention, age had a negative impact (β=-.17, P<.05); however, teamwork within hospital units (β=.20, P<.05) and perceived usefulness (β=.35, P<.01) both had a positive impact on behavioral intention. The overall bar code medication administration behavioral intention model explained 24% (P<.001) of the variance. The identified factors influencing bar code medication administration behavioral intention can help hospitals develop tailored interventions for RNs to reduce medication administration errors and increase patient safety through use of this technology.

  4. Minimizing human error in radiopharmaceutical preparation and administration via a bar code-enhanced nuclear pharmacy management system.

    PubMed

    Hakala, John L; Hung, Joseph C; Mosman, Elton A

    2012-09-01

    The objective of this project was to ensure correct radiopharmaceutical administration through the use of a bar code system that links patient and drug profiles with on-site information management systems. This new combined system would minimize the amount of manual human manipulation, which has proven to be a primary source of error. The most common reason for dosing errors is improper patient identification when a dose is obtained from the nuclear pharmacy or when a dose is administered. A standardized electronic transfer of information from radiopharmaceutical preparation to injection will further reduce the risk of misadministration. Value stream maps showing the flow of the patient dose information, as well as potential points of human error, were developed. Next, a future-state map was created that included proposed corrections for the most common critical sites of error. Transitioning the current process to the future state will require solutions that address these sites. To optimize the future-state process, a bar code system that links the on-site radiology management system with the nuclear pharmacy management system was proposed. A bar-coded wristband connects the patient directly to the electronic information systems. The bar code-enhanced process linking the patient dose with the electronic information reduces the number of crucial points for human error and provides a framework to ensure that the prepared dose reaches the correct patient. Although the proposed flowchart is designed for a site with an in-house central nuclear pharmacy, much of the framework could be applied by nuclear medicine facilities using unit doses. An electronic connection between information management systems to allow the tracking of a radiopharmaceutical from preparation to administration can be a useful tool in preventing the mistakes that are an unfortunate reality for any facility.

  5. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics to aid plan assessment. Additionally, an example of how to use the resulting robustness database clinically is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up errors in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in setting plan robustness aims in these volumes, resulting in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement proved preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases, the use of different beam start conditions may improve plan robustness to set-up and range uncertainties.
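As a rough illustration of the error-bar dose distribution idea, one can compute, per voxel, half the spread of dose across a set of error scenarios. This generic construction and the robustness metric below are assumptions made here for illustration; the paper's exact ebDD definition may differ:

```python
import numpy as np

def error_bar_dose(scenario_doses):
    """Per-voxel error-bar dose: half the spread between the highest and
    lowest dose each voxel receives across the error scenarios (e.g.
    systematic range shifts and random set-up shifts).  A robust plan has
    small values in the target and in organs at risk."""
    d = np.stack(scenario_doses)      # shape (n_scenarios, *voxel_grid)
    return 0.5 * (d.max(axis=0) - d.min(axis=0))

def fraction_robust(ebdd, structure_mask, tolerance):
    """Fraction of a structure's voxels whose error-bar dose stays below
    a tolerance; one possible metric for a robustness protocol."""
    vals = ebdd[structure_mask]
    return float((vals <= tolerance).mean())
```

A site-specific protocol could then be phrased as thresholds on such per-structure fractions, in the spirit of the database described above.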

  6. Partial entrainment of gravel bars during floods

    USGS Publications Warehouse

    Konrad, Christopher P.; Booth, Derek B.; Burges, Stephen J.; Montgomery, David R.

    2002-01-01

    Spatial patterns of bed material entrainment by floods were documented at seven gravel bars using arrays of metal washers (bed tags) placed in the streambed. The observed patterns were used to test a general stochastic model that bed material entrainment is a spatially independent, random process where the probability of entrainment is uniform over a gravel bar and a function of the peak dimensionless shear stress τ0* of the flood. The fraction of tags missing from a gravel bar during a flood, or partial entrainment, had an approximately normal distribution with respect to τ0* with a mean value (50% of the tags entrained) of 0.085 and standard deviation of 0.022 (root‐mean‐square error of 0.09). Variation in partial entrainment for a given τ0* demonstrated the effects of flow conditioning on bed strength, with lower values of partial entrainment after intermediate magnitude floods (0.065 < τ0*< 0.08) than after higher magnitude floods. Although the probability of bed material entrainment was approximately uniform over a gravel bar during individual floods and independent from flood to flood, regions of preferential stability and instability emerged at some bars over the course of a wet season. Deviations from spatially uniform and independent bed material entrainment were most pronounced for reaches with varied flow and in consecutive floods with small to intermediate magnitudes.
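The stochastic model above reduces to a one-line formula: the expected fraction of tags entrained is the normal CDF of the peak dimensionless shear stress, using the fitted mean 0.085 and standard deviation 0.022. A minimal sketch:

```python
import math

def partial_entrainment(tau_star, mean=0.085, sd=0.022):
    """Expected fraction of bed tags entrained by a flood with peak
    dimensionless shear stress tau_star, modeled as a normal CDF with the
    study's fitted parameters (mean 0.085, standard deviation 0.022)."""
    z = (tau_star - mean) / (sd * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))
```

At τ₀* = 0.085 half the tags are expected to move; two standard deviations higher, nearly all of them.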

  7. On the Bar Pattern Speed Determination of NGC 3367

    NASA Astrophysics Data System (ADS)

    Gabbasov, R. F.; Repetto, P.; Rosado, M.

    2009-09-01

    An important dynamic parameter of barred galaxies is the bar pattern speed, Ω P . Among several methods that are used for the determination of Ω P , the Tremaine-Weinberg method has the advantage of model independence and accuracy. In this work, we apply the method to a simulated bar including gas dynamics and study the effect of two-dimensional spectroscopy data quality on robustness of the method. We added white noise and a Gaussian random field to the data and measured the corresponding errors in Ω P . We found that a signal to noise ratio in surface density ~5 introduces errors of ~20% for the Gaussian noise, while for the white noise the corresponding errors reach ~50%. At the same time, the velocity field is less sensitive to contamination. On the basis of the performed study, we applied the method to the NGC 3367 spiral galaxy using Hα Fabry-Pérot interferometry data. We found Ω P = 43 ± 6 km s-1 kpc-1 for this galaxy.
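For reference, the Tremaine-Weinberg estimate can be sketched from two-dimensional maps of surface density and line-of-sight velocity. The row-per-slit layout and exact-alignment geometry below are simplifying assumptions, not the authors' pipeline:

```python
import numpy as np

def tremaine_weinberg(sigma, v_los, x, inclination_deg):
    """Tremaine-Weinberg pattern-speed estimate.  Each row of the maps is
    one 'slit' parallel to the disk major axis; Omega_P * sin(i) is the
    slope of the intensity-weighted mean velocity <V> against the
    intensity-weighted mean position <X> across slits."""
    w = sigma.sum(axis=1)
    x_bar = (sigma * x).sum(axis=1) / w       # <X> per slit
    v_bar = (sigma * v_los).sum(axis=1) / w   # <V> per slit
    slope, _ = np.polyfit(x_bar, v_bar, 1)    # <V> = Omega_P sin(i) <X> + c
    return slope / np.sin(np.radians(inclination_deg))
```

The method's appeal, as the abstract notes, is that only observables enter: no dynamical model of the bar is required.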

  8. OR14-V-Uncertainty-PD2La Uncertainty Quantification for Nuclear Safeguards and Nondestructive Assay Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis

    2017-08-01

The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.

  9. The company they keep: drinking group attitudes and male bar aggression.

    PubMed

    Dumas, Tara M; Graham, Kathryn; Wells, Samantha

    2015-05-01

    The purpose of this study was to assess (a) similarities in self-reported bar-aggression-related attitudes and behaviors among members of young male groups recruited on their way to bars and (b) group-level variables associated with individual members' self-reported likelihood of perpetrating physical bar aggression in the past year, controlling for individual attitudes. Young, male, natural drinking groups recruited on their way to a bar district Thursday, Friday, and Saturday nights (n = 167, 53 groups) completed an online survey that measured whether they had perpetrated physical aggression at a bar in the past year and constructs associated with bar aggression, including attitudes toward male bar aggression and frequency of heavy episodic drinking in the past year. Intraclass correlations and chi-square tests demonstrated significant within-group similarity on bar-aggression-related attitudes and behaviors (ps < .01). Hierarchical linear modeling revealed that group attitudes toward bar aggression were significantly associated with individuals' likelihood of perpetrating physical bar aggression, controlling for individual attitudes (p < .01); however, the link between group heavy episodic drinking and self-reported bar aggression was nonsignificant in the full model. This study suggests that the most important group influence on young men's bar aggression is the attitudes of other group members. These attitudes were associated with group members' likelihood of engaging in bar aggression over and above individuals' own attitudes. A better understanding of how group attitudes and behavior affect the behavior of individual group members is needed to inform aggression-prevention programming.

  10. On the formulation of gravitational potential difference between the GRACE satellites based on energy integral in Earth fixed frame

    NASA Astrophysics Data System (ADS)

    Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.

    2015-09-01

Two methods for computing the gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on energy integral: one in the geocentric inertial frame (GIF) and another in the Earth-fixed frame (EFF). Here we present a rigorous theoretical formulation in the EFF with particular emphasis on the necessary approximations, provide a computational approach to mitigate the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored without verification in all former work should be retained. In our simulations, 2 cycle-per-revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower-frequency errors can improve the precision of Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude, despite the fact that the result without removing these errors is already accurate enough. Furthermore, the relation between data errors and their influence on the GPD is analysed, and a formal examination is made of the possible precision that real GRACE data may attain. The result of removing 2 CPR errors may imply that, if not handled properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing errors from possibly very large 2 CPR errors, based on two facts: (1) errors of C̄_{2,0} manifest as 2 CPR errors in the GPD, and (2) errors of C̄_{2,0} in GRACE data (the differences between the CSR monthly values of C̄_{2,0} independently determined using GRACE and SLR are a reasonable measure of their magnitude) are very large.
Our simulations show that, if 2 CPR errors in the GPD vary from day to day as much as those corresponding to errors of C̄_{2,0} from month to month, the aliasing errors of degree 15 and above SCs computed using a month's GPD data may reach a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, we conclude that aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled; we therefore propose an approach to reduce aliasing errors from 2 CPR and lower-frequency errors when computing SCs above degree 2.
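The empirical removal of 2 CPR and lower-frequency errors can be illustrated by least-squares fitting and subtracting a slow trend plus sinusoids at 1 and 2 cycles per revolution. This is a generic sketch; the paper's actual parameterization is not specified here:

```python
import numpy as np

def remove_cpr_errors(orbit_phase, gpd, max_cpr=2, poly_deg=1):
    """Fit and subtract a low-order polynomial trend plus sin/cos terms
    at 1..max_cpr cycles per revolution from a GPD series.  orbit_phase
    is measured in revolutions; frequencies at or below max_cpr (and the
    trend) are absorbed, while higher-frequency content is preserved."""
    cols = [orbit_phase ** k for k in range(poly_deg + 1)]
    for n in range(1, max_cpr + 1):
        cols.append(np.sin(2.0 * np.pi * n * orbit_phase))
        cols.append(np.cos(2.0 * np.pi * n * orbit_phase))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, gpd, rcond=None)
    return gpd - A @ coef
```

Because the fitted columns are nearly orthogonal to higher-frequency signal over whole revolutions, the geophysical content above 2 CPR passes through essentially untouched.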

  11. Metacontrast masking and attention do not interact.

    PubMed

    Agaoglu, Sevda; Breitmeyer, Bruno; Ogmen, Haluk

    2016-07-01

    Visual masking and attention have been known to control the transfer of information from sensory memory to visual short-term memory. A natural question is whether these processes operate independently or interact. Recent evidence suggests that studies that reported interactions between masking and attention suffered from ceiling and/or floor effects. The objective of the present study was to investigate whether metacontrast masking and attention interact by using an experimental design in which saturation effects are avoided. We asked observers to report the orientation of a target bar randomly selected from a display containing either two or six bars. The mask was a ring that surrounded the target bar. Attentional load was controlled by set-size and masking strength by the stimulus onset asynchrony between the target bar and the mask ring. We investigated interactions between masking and attention by analyzing two different aspects of performance: (i) the mean absolute response errors and (ii) the distribution of signed response errors. Our results show that attention affects observers' performance without interacting with masking. Statistical modeling of response errors suggests that attention and metacontrast masking exert their effects by independently modulating the probability of "guessing" behavior. Implications of our findings for models of attention are discussed.

  12. Uncertainty analysis technique for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Widmann, K.; Sorce, C.

    2010-10-15

The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
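The Monte Carlo parameter variation can be sketched as follows; `unfold` stands in for the actual Dante unfold algorithm, which is not reproduced here:

```python
import numpy as np

def monte_carlo_flux_errors(voltages, sigma, unfold, n_trials=1000, seed=0):
    """Perturb each channel voltage with its combined one-sigma Gaussian
    error (calibration plus unfold uncertainty), run the unfold on every
    trial set, and report the mean flux with a one-sigma error bar taken
    from the spread of the trial fluxes."""
    rng = np.random.default_rng(seed)
    trials = rng.normal(voltages, sigma, size=(n_trials, len(voltages)))
    fluxes = np.array([unfold(v) for v in trials])
    return fluxes.mean(), fluxes.std(ddof=1)
```

With a toy linear unfold (a plain sum over channels), the recovered error bar matches the analytic quadrature sum of the channel errors, which is a useful sanity check before applying the technique to a real unfold.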

  13. Uncertainty Analysis Technique for OMEGA Dante Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M J; Widmann, K; Sorce, C

    2010-05-07

The Dante is an 18 channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g. hohlraums, etc.) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.

  14. An infrared image based methodology for breast lesions screening

    NASA Astrophysics Data System (ADS)

    Morais, K. C. C.; Vargas, J. V. C.; Reisemberger, G. G.; Freitas, F. N. P.; Oliari, S. H.; Brioschi, M. L.; Louveira, M. H.; Spautz, C.; Dias, F. G.; Gasperin, P.; Budel, V. M.; Cordeiro, R. A. G.; Schittini, A. P. P.; Neto, C. D.

    2016-05-01

The objective of this paper is to evaluate the potential of a structured methodology for breast lesion screening, based on infrared imaging temperature measurements of a healthy control group, used to establish expected normality ranges, and of breast cancer patients previously diagnosed through biopsies of the affected regions. An analysis of the systematic error of the infrared camera skin temperature measurements was conducted in several different regions of the body, by direct comparison to high-precision thermistor temperature measurements, showing that infrared camera temperatures are consistently around 2 °C above the thermistor temperatures. Therefore, a method of conjugated gradients is proposed to eliminate the imprecision of direct infrared camera temperature measurement, by calculating the temperature difference between two points to cancel out the error. The method exploits the approximate bilateral symmetry of the human body and compares measured dimensionless temperature differences (Δθ̄) between two symmetric regions of the patient's breast; the dimensionless form accounts for the breast region, the surrounding ambient, and the individual core temperatures, so the interpretation of results for different individuals becomes simple and non-subjective. The range of normal whole-breast average dimensionless temperature differences was determined for 101 healthy individuals, and, assuming that the breast temperatures exhibit a unimodal normal distribution, the healthy normal range for each region was taken to be the mean dimensionless temperature difference plus or minus twice the standard deviation of the measurements, Δθ̄ ± 2σ_Δθ̄, in order to represent 95% of the population. Forty-seven patients with breast cancer previously diagnosed through biopsies were examined with the method, which detected breast abnormalities in 45 cases (96%).
Therefore, the conjugated gradients method was considered effective in breast lesions screening through infrared imaging in order to recommend a biopsy, even with the use of a low optical resolution camera (160 × 120 pixels) and a thermal resolution of 0.1 °C, whose results were compared to the results of a higher resolution camera (320 × 240 pixels). The main conclusion is that the results demonstrate that the method has potential for utilization as a noninvasive screening exam for individuals with breast complaints, indicating whether the patient should be submitted to a biopsy or not.
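The screening rule above reduces to a band test: compute the healthy band as mean ± 2σ of the control group's dimensionless differences, and recommend follow-up when a patient falls outside it. This is a sketch; the function names and the handling of individual regions are assumptions:

```python
import numpy as np

def healthy_band(control_deltas):
    """Normality band from the healthy control group: mean of the
    dimensionless left-right temperature differences plus or minus twice
    their standard deviation, intended to cover ~95% of a normally
    distributed population."""
    m = np.mean(control_deltas)
    s = np.std(control_deltas, ddof=1)
    return m - 2.0 * s, m + 2.0 * s

def recommend_biopsy(delta_theta, band):
    """Flag a breast region as abnormal (candidate for biopsy) when its
    dimensionless temperature difference falls outside the healthy band."""
    lo, hi = band
    return not (lo <= delta_theta <= hi)
```

Because the test operates on a dimensionless difference between symmetric regions, the camera's constant ~2 °C offset cancels, which is the point of the conjugated-gradients construction.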

  15. Chemical Evolution and History of Star Formation in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Gustafsson, Bengt

    1995-07-01

Large scale processes controlling star formation and nucleosynthesis are fundamental but poorly understood. This is especially true for external galaxies. A detailed study of individual main sequence stars in the LMC Bar is proposed. The LMC is close enough to allow this, has considerable spread in stellar ages and a structure permitting identification of stellar populations and their structural features. The Bar presumably plays a dominant role in the chemical and dynamical evolution of the galaxy. Our knowledge is, at best, based on educated guesses. Still, the major population of the Bar is quite old, and many member stars are relatively evolved. The Bar seems to contain stars similar to those of Intermediate to Extreme Pop II in the Galaxy. We want to study the history of star formation, chemical evolution and initial mass function of the population dominating the Bar. We will use field stars close to the turnoff point in the HR diagram. From earlier studies, we know that 250-500 such stars are available for uvby photometry in the PC field. We aim at an accuracy of 0.1-0.2 dex in [Me/H] and 25% or better in relative ages. This requires an accuracy of about 0.02 mag in the uvby indices, which can be reached, taking into account errors in calibration, flat fielding, guiding and problems due to crowding. For a study of the luminosity function, fainter stars will be included as well. Calibration fields are available in Omega Cen and M 67.

  16. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    PubMed

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.

  17. High-resolution smile measurement and control of wavelength-locked QCW and CW laser diode bars

    NASA Astrophysics Data System (ADS)

    Rosenkrantz, Etai; Yanson, Dan; Klumel, Genady; Blonder, Moshe; Rappaport, Noam; Peleg, Ophir

    2018-02-01

    High-power linewidth-narrowed applications of laser diode arrays demand high beam quality in the fast, or vertical, axis. This requires very high fast-axis collimation (FAC) quality with sub-mrad angular errors, especially where laser diode bars are wavelength-locked by a volume Bragg grating (VBG) to achieve high pumping efficiency in solid-state and fiber lasers. The micron-scale height deviation of emitters in a bar against the FAC lens causes the so-called smile effect with variable beam pointing errors and wavelength locking degradation. We report a bar smile imaging setup allowing FAC-free smile measurement in both QCW and CW modes. By Gaussian beam simulation, we establish optimum smile imaging conditions to obtain high resolution and accuracy with well-resolved emitter images. We then investigate the changes in the smile shape and magnitude under thermal stresses such as variable duty cycles in QCW mode and, ultimately, CW operation. Our smile measurement setup provides useful insights into the smile behavior and correlation between the bar collimation in QCW mode and operating conditions under CW pumping. With relaxed alignment tolerances afforded by our measurement setup, we can screen bars for smile compliance and potential VBG lockability prior to assembly, with benefits in both lower manufacturing costs and higher yield.

  18. Some conservative estimates in quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2006-08-15

A relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
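Because H(Q) increases monotonically on [0, 1/2], the transcendental equation can be solved by simple bisection; taking C̄(ρ) = 1 (one classical bit per transmitted qubit, an assumption made here purely for illustration) reproduces the ≈ 11% bound:

```python
import math

def binary_entropy(q):
    """H(q) = -q log2 q - (1 - q) log2 (1 - q)."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log2(q) - (1.0 - q) * math.log2(1.0 - q)

def critical_qber(c_bar=1.0):
    """Solve H(Q_c) = c_bar / 2 for Q_c on (0, 1/2) by bisection; H is
    monotonically increasing there, so the root is unique."""
    lo, hi = 1e-9, 0.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < c_bar / 2.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For c_bar = 1 the root sits at Q_c ≈ 0.110, matching the 11% figure quoted in the abstract.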

  19. Instrument Reflections and Scene Amplitude Modulation in a Polychromatic Microwave Quadrature Interferometer

    NASA Technical Reports Server (NTRS)

    Dobson, Chris C.; Jones, Jonathan E.; Chavers, Greg

    2003-01-01

A polychromatic microwave quadrature interferometer has been characterized using several laboratory plasmas. Reflections between the transmitter and the receiver have been observed, and the effects of including reflection terms in the data reduction equation have been examined. An error analysis which includes the reflections, modulation of the scene beam amplitude by the plasma, and simultaneous measurements at two frequencies has been applied to the empirical database, and the results are summarized. For reflection amplitudes around 10%, the reflection terms were found to reduce the calculated error bars for electron density measurements by about a factor of 2. The impact of amplitude modulation is also quantified. In the complete analysis, the mean error bar for high-density measurements is 7.5%, and the mean phase shift error for low-density measurements is 1.2°.

  20. Effects of pressure on aqueous chemical equilibria at subzero temperatures with applications to Europa

    USGS Publications Warehouse

    Marion, G.M.; Kargel, J.S.; Catling, D.C.; Jakubowski, S.D.

    2005-01-01

    Pressure plays a critical role in controlling aqueous geochemical processes in deep oceans and deep ice. The putative ocean of Europa could have pressures of 1200 bars or higher on the seafloor, a pressure not dissimilar to the deepest ocean basin on Earth (the Mariana Trench at 1100 bars of pressure). At such high pressures, chemical thermodynamic relations need to explicitly consider pressure. A number of papers have addressed the role of pressure on equilibrium constants, activity coefficients, and the activity of water. None of these models deal, however, with processes at subzero temperatures, which may be important in cold environments on Earth and other planetary bodies. The objectives of this work were to (1) incorporate a pressure dependence into an existing geochemical model parameterized for subzero temperatures (FREZCHEM), (2) validate the model, and (3) simulate pressure-dependent processes on Europa. As part of objective 1, we examined two models for quantifying the volumetric properties of liquid water at subzero temperatures: one model is based on the measured properties of supercooled water, and the other model is based on the properties of liquid water in equilibrium with ice. The relative effect of pressure on solution properties falls in the order: equilibrium constants(K) > activity coefficients (??) > activity of water (aw). The errors (%) in our model associated with these properties, however, fall in the order: ?? > K > aw. The transposition between K and ?? is due to a more accurate model for estimating K than for estimating ??. Only activity coefficients are likely to be significantly in error. However, even in this case, the errors are likely to be only in the range of 2 to 5% up to 1000 bars of pressure. 
Evidence based on the pressure/temperature melting of ice and salt solution densities argues in favor of the equilibrium water model, which depends on extrapolations, for characterizing the properties of liquid water in electrolyte solutions at subzero temperatures, rather than the supercooled water model. Model-derived estimates of mixed salt solution densities and chemical equilibria as a function of pressure are in reasonably good agreement with experimental measurements. To demonstrate the usefulness of this low-temperature, high-pressure model, we examined two hypothetical cases for Europa. Case 1 dealt with the ice cover of Europa, where we asked the question: How far above the putative ocean in the ice layer could we expect to find thermodynamically stable brine pockets that could serve as habitats for life? For a hypothetical nonconvecting 20 km icy shell, this potential life zone only extends 2.8 km into the icy shell before the eutectic is reached. For the case of a nonconvecting icy shell, the cold surface of Europa precludes stable aqueous phases (habitats for life) anywhere near the surface. Case 2 compared chemical equilibria at 1 bar (based on previous work) with a more realistic 1460 bars of pressure at the base of a 100 km Europan ocean. A pressure of 1460 bars, compared to 1 bar, caused a 12 K decrease in the temperature at which ice first formed and an 11 K increase in the temperature at which MgSO4·12H2O first formed. Remarkably, there was only a 1.2 K decrease in the eutectic temperatures between 1 and 1460 bars of pressure. Chemical systems and their response to pressure depend, ultimately, on the volumetric properties of individual constituents, which makes every system response highly individualistic. Copyright © 2005 Elsevier Ltd.
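    The pressure corrections to equilibrium constants described above follow the standard thermodynamic relation (∂ln K/∂P)T = -ΔV/(RT). A minimal sketch of the leading-order correction, using an illustrative reaction volume change rather than any value from the paper:

```python
import math

def ln_k_ratio(delta_V_cm3_mol, P_bar, T_K, P0_bar=1.0):
    """ln(K_P / K_0) from the pressure dependence of an equilibrium
    constant, neglecting the compressibility term:
        (d ln K / dP)_T = -deltaV / (R T)
    delta_V is the molar volume change of reaction in cm^3/mol."""
    R = 83.145  # gas constant in cm^3 bar / (mol K)
    return -delta_V_cm3_mol * (P_bar - P0_bar) / (R * T_K)

# Illustrative only: deltaV = -20 cm^3/mol at 263 K and 1000 bar.
ratio = math.exp(ln_k_ratio(-20.0, 1000.0, 263.15))
print(ratio)  # K increases when the reaction volume change is negative
```

A negative ΔV (products more compact than reactants) makes pressure favor the products, which is why K responds more strongly to pressure than the activity terms in the order quoted above.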

  1. Mechanical design of deformation compensated flexural pivots structured for linear nanopositioning stages

    DOEpatents

    Shu, Deming; Kearney, Steven P.; Preissner, Curt A.

    2015-02-17

    A method and deformation-compensated flexural pivots structured for precision linear nanopositioning stages are provided. A deformation-compensated flexural linear guiding mechanism includes a basic parallel mechanism comprising a U-shaped member and a pair of parallel bars linked to respective pairs of I-link bars, each I-bar coupled by a respective pair of flexural pivots. The basic parallel mechanism includes substantially evenly distributed flexural pivots, minimizing center-shift dynamic errors.

  2. Use the Bar Code System to Improve Accuracy of the Patient and Sample Identification.

    PubMed

    Chuang, Shu-Hsia; Yeh, Huy-Pzu; Chi, Kun-Hung; Ku, Hsueh-Chen

    2018-01-01

    Timely and correct sample collection is highly related to patient safety. The sample error rate was 11.1% during January to April 2016, owing to mislabeled patient information and wrong sample containers. We developed a barcode-based "Specimen Identification System" through process reengineering of TRM, using bar code scanners, added sample container instructions, and a mobile app. In conclusion, the bar code system improved patient safety and created a greener environment.

  3. Validation of instrumentation to monitor dynamic performance of olympic weightlifters.

    PubMed

    Bruenger, Adam J; Smith, Sarah L; Sands, William A; Leigh, Michael R

    2007-05-01

    The purpose of this study was to validate the accuracy and reliability of the Weightlifting Video Overlay System (WVOS) used by coaches and sport biomechanists at the United States Olympic Training Center. Static trials with the bar set at specific positions and dynamic trials of a power snatch were performed. Static and dynamic values obtained by the WVOS were compared with values obtained by tape measure and standard video kinematic analysis. Coordinate positions (horizontal [X] and vertical [Y]) were compared on both ends (left and right) of the bar. Absolute technical error of measurement between WVOS and kinematic values were calculated (0.97 cm [left X], 0.98 cm [right X], 0.88 cm [left Y], and 0.53 cm [right Y]) for the static data. Pearson correlations for all dynamic trials exceeded r = 0.88. The greatest discrepancies between the 2 measuring systems were found to occur when there was twisting of the bar during the performance. This error was probably due to the location on the bar where the coordinates were measured. The WVOS appears to provide accurate position information when compared with standard kinematics; however, care must be taken in evaluating position measurements if there is a significant amount of twisting in the movement. The WVOS appears to be reliable and valid within reasonable error limits for the determination of weightlifting movement technique.

  4. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
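    The abstract's point that variance is poorly estimated from small samples can be illustrated with the normal-theory standard error of a sample variance, SE(s²) ≈ s²·√(2/(n−1)); the authors' exact derivation may differ, but the scaling with n is the same:

```python
import math

def variance_standard_error(sample_var, n):
    """Approximate standard error of a sample variance for normally
    distributed data: SE(s^2) = s^2 * sqrt(2 / (n - 1))."""
    if n < 2:
        raise ValueError("need at least two measurements")
    return sample_var * math.sqrt(2.0 / (n - 1))

# With only 10 measurements the error bar is ~47% of the variance itself;
# with 50 measurements it shrinks to ~20%.
for n in (10, 50):
    print(n, variance_standard_error(1.0, n))
```

These two numbers mirror the abstract's conclusion: below ~20 measurements the variance error bar is so wide that comparisons are unreliable, and ~50 measurements are needed to resolve a 2-fold difference.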

  5. Predicting Error Bars for QSAR Models

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
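    For readers unfamiliar with Gaussian Process error bars, the following is a generic sketch in plain NumPy (not the authors' model) of how the GP predictive standard deviation acts as a per-compound error bar that widens away from the training data:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length_scale=1.0, noise=0.1):
    """Gaussian Process regression with an RBF kernel. Returns the
    predictive mean and per-point standard deviation; the standard
    deviation serves as an error bar that grows far from the training
    data (a generic sketch, not the paper's exact model)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)

    K = rbf(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    Kss = rbf(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                       # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v ** 2).sum(0) + noise ** 2
    return mean, np.sqrt(var)

X = np.linspace(-3, 3, 20)[:, None]
y = np.sin(X).ravel()
mean, std = gp_predict(X, y, np.array([[0.0], [10.0]]))
# The error bar at x=10 (far outside the training data) is much wider
# than at x=0, which is the behavior used to flag out-of-domain compounds.
```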

  6. Three-dimensional accuracy of different correction methods for cast implant bars

    PubMed Central

    Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom

    2014-01-01

    PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and then each group of ten specimens was corrected by gas-air torch soldering, laser welding, and additional casting technique. Three-dimensional evaluation including horizontal, vertical, and twisting measurements was based on measurement and comparison of (1) gap distances of the right abutment replica-gold cylinder interface at buccal, distal, lingual side, (2) changes of bar length, and (3) axis angle changes of the right gold cylinders at the step of the post-correction measurements on the three groups with a contact and non-contact coordinate measuring machine. One-way analysis of variance (ANOVA) and paired t-test were performed at the significance level of 5%. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant difference among groups. Changes in bar length between pre-casting and post-correction measurement were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistical significance among the three techniques in horizontal, vertical, and axial errors. However, the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error, while the laser welding technique showed a large mean and standard deviation in vertical and twisting measurements and might be a technique-sensitive method. PMID:24605205

  7. Error-Detecting Identification Codes for Algebra Students.

    ERIC Educational Resources Information Center

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)

  8. A Substantive Process Analysis of Responses to Items from the Multistate Bar Examination

    ERIC Educational Resources Information Center

    Bonner, Sarah M.; D'Agostino, Jerome V.

    2012-01-01

    We investigated examinees' cognitive processes while they solved selected items from the Multistate Bar Exam (MBE), a high-stakes professional certification examination. We focused on ascertaining those mental processes most frequently used by examinees, and the most common types of errors in their thinking. We compared the relationships between…

  9. ENDF/B-IV fission-product files: summary of major nuclide data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    England, T.R.; Schenter, R.E.

    1975-09-01

    The major fission-product parameters [σ_th, RI, τ_1/2, Ē_β, Ē_γ, Ē_α, decay and (n,γ) branching, Q, and AWR] abstracted from ENDF/B-IV files for 824 nuclides are summarized. These data are most often requested by users concerned with reactor design, reactor safety, dose, and other sundry studies. The few known file errors are corrected to date. Tabular data are listed by increasing mass number. (auth)

  10. Reading color barcodes using visual snakes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaub, Hanspeter

    2004-05-01

    Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal if the target is right-side-up or up-side-down.
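    The orientation trick described above works because "3 of 9" symbols are asymmetric nine-element patterns (exactly three elements wide), so a reversed scan matches the mirrored table instead of the forward one. A toy sketch with made-up patterns (these are NOT the real Code 39 tables):

```python
# Each symbol is 9 elements, exactly 3 of them wide ('W'); patterns are
# asymmetric, so reading a symbol backwards never matches the forward
# table. Patterns below are illustrative only, not the Code 39 spec.
SYMBOLS = {
    "A": "WNNNNWNNW",
    "B": "NWNNNWNNW",
    "*": "NWNNWNWNN",  # start/stop symbol
}
REVERSED = {pattern[::-1]: char for char, pattern in SYMBOLS.items()}

def decode(pattern):
    """Return (char, upside_down) or None if the pattern is unknown."""
    for char, p in SYMBOLS.items():
        if pattern == p:
            return char, False
    if pattern in REVERSED:
        return REVERSED[pattern], True
    return None

print(decode("WNNNNWNNW"))   # ('A', False): read right-side-up
print(decode("WNNWNNNNW"))   # ('A', True): same symbol scanned reversed
```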

  11. Patient safety with blood products administration using wireless and bar-code technology.

    PubMed

    Porcella, Aleta; Walker, Kristy

    2005-01-01

    Supported by a grant from the Agency for Healthcare Research and Quality, a University of Iowa Hospitals and Clinics interdisciplinary research team created an online data-capture-response tool utilizing wireless mobile devices and bar code technology to track and improve the blood products administration process. The tool captures 1) sample collection, 2) sample arrival in the blood bank, 3) blood product dispense from the blood bank, and 4) administration. At each step, the scanned patient wristband ID bar code is automatically compared to the scanned identification bar code on the requisition, sample, and/or product, and the system presents either a confirmation or an error message to the user. Following an eight-month, five-unit staged pilot, a 'big bang' hospital-wide implementation occurred on February 7, 2005. Preliminary results from pilot data indicate that the new barcode process captures errors 3 to 10 times better than the old manual process.

  12. Computerized bar code-based blood identification systems and near-miss transfusion episodes and transfusion errors.

    PubMed

    Nuttall, Gregory A; Abenstein, John P; Stubbs, James R; Santrach, Paula; Ereth, Mark H; Johnson, Pamela M; Douglas, Emily; Oliver, William C

    2013-04-01

    To determine whether the use of a computerized bar code-based blood identification system resulted in a reduction in transfusion errors or near-miss transfusion episodes. Our institution instituted a computerized bar code-based blood identification system in October 2006. After institutional review board approval, we performed a retrospective study of transfusion errors from January 1, 2002, through December 31, 2005, and from January 1, 2007, through December 31, 2010. A total of 388,837 U were transfused during the 2002-2005 period. There were 6 misidentification episodes of a blood product being transfused to the wrong patient during that period (incidence of 1 in 64,806 U or 1.5 per 100,000 transfusions; 95% CI, 0.6-3.3 per 100,000 transfusions). There was 1 reported near-miss transfusion episode (incidence of 0.3 per 100,000 transfusions; 95% CI, <0.1-1.4 per 100,000 transfusions). A total of 304,136 U were transfused during the 2007-2010 period. There was 1 misidentification episode of a blood product transfused to the wrong patient during that period when the blood bag and patient's armband were scanned after starting to transfuse the unit (incidence of 1 in 304,136 U or 0.3 per 100,000 transfusions; 95% CI, <0.1-1.8 per 100,000 transfusions; P=.14). There were 34 reported near-miss transfusion errors (incidence of 11.2 per 100,000 transfusions; 95% CI, 7.7-15.6 per 100,000 transfusions; P<.001). Institution of a computerized bar code-based blood identification system was associated with a large increase in discovered near-miss events. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
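    The incidences quoted above are straightforward to reproduce from the unit counts:

```python
def per_100k(events, units):
    """Incidence per 100,000 transfused units."""
    return 1e5 * events / units

pre = per_100k(6, 388_837)        # misidentifications, 2002-2005
post = per_100k(1, 304_136)       # misidentifications, 2007-2010
near_miss_post = per_100k(34, 304_136)  # near misses after the system
print(round(pre, 1), round(post, 1), round(near_miss_post, 1))
```

The near-miss rate rising from well under 1 to about 11 per 100,000 units is the study's key point: the bar code system surfaces identification errors before the transfusion rather than after.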

  13. Weak charge form factor and radius of 208Pb through parity violation in electron scattering

    DOE PAGES

    Horowitz, C. J.; Ahmed, Z.; Jen, C. -M.; ...

    2012-03-26

    We use distorted wave electron scattering calculations to extract the weak charge form factor F_W(q̄), the weak charge radius R_W, and the point neutron radius R_n of 208Pb from the PREX parity violating asymmetry measurement. The form factor is the Fourier transform of the weak charge density at the average momentum transfer q̄ = 0.475 fm⁻¹. We find F_W(q̄) = 0.204 ± 0.028(exp) ± 0.001(model). We use the Helm model to infer the weak radius from F_W(q̄). We find R_W = 5.826 ± 0.181(exp) ± 0.027(model) fm. Here the exp error includes PREX statistical and systematic errors, while the model error describes the uncertainty in R_W from uncertainties in the surface thickness σ of the weak charge density. The weak radius is larger than the charge radius, implying a 'weak charge skin' where the surface region is relatively enriched in weak charges compared to (electromagnetic) charges. We extract the point neutron radius R_n = 5.751 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm from R_W. Here there is only a very small error (strange) from possible strange quark contributions. We find R_n to be slightly smaller than R_W because of the nucleon's size. As a result, we find a neutron skin thickness of R_n − R_p = 0.302 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm, where R_p is the point proton radius.
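    As a numerical illustration of the Helm-model step, one can solve F(q) = 3 j₁(qR₀)/(qR₀)·exp(−q²σ²/2) = F_W(q̄) for the diffraction radius R₀ and convert to an rms radius via R²_rms = (3/5)R₀² + 3σ². The surface thickness σ = 1.02 fm below is an assumed illustrative value (the paper varies σ to obtain its model error), and with it one recovers R_W ≈ 5.83 fm:

```python
import math

def helm_form_factor(q, R0, sigma):
    """Helm model: F(q) = 3 j1(q R0)/(q R0) * exp(-q^2 sigma^2 / 2)."""
    x = q * R0
    j1 = math.sin(x) / x**2 - math.cos(x) / x  # spherical Bessel j1
    return 3.0 * j1 / x * math.exp(-0.5 * (q * sigma) ** 2)

def solve_R0(q, F_target, sigma, lo=5.0, hi=9.0):
    """Bisection for R0; F decreases with R0 over this bracket."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if helm_form_factor(q, mid, sigma) > F_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q, F_W, sigma = 0.475, 0.204, 1.02   # fm^-1, measured F_W, assumed sigma
R0 = solve_R0(q, F_W, sigma)
R_rms = math.sqrt(0.6 * R0**2 + 3.0 * sigma**2)
print(R_rms)  # ~5.83 fm, consistent with the quoted R_W
```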

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buras, Andrzej J.; /Munich, Tech. U.; Gorbahn, Martin

    The authors calculate the complete next-to-next-to-leading order QCD corrections to the charm contribution of the rare decay K⁺ → π⁺νν̄. They encounter several new features, which were absent in lower orders. They discuss them in detail and present the results for the two-loop matching conditions of the Wilson coefficients, the three-loop anomalous dimensions, and the two-loop matrix elements of the relevant operators that enter the next-to-next-to-leading order renormalization group analysis of the Z-penguin and the electroweak box contribution. The inclusion of the next-to-next-to-leading order QCD corrections leads to a significant reduction of the theoretical uncertainty from ±9.8% down to ±2.4% in the relevant parameter P_c(X), implying the leftover scale uncertainties in B(K⁺ → π⁺νν̄) and in the determination of |V_td|, sin 2β, and γ from the K → πνν̄ system to be ±1.3%, ±1.0%, ±0.006, and ±1.2°, respectively. For the charm quark MS-bar mass m_c(m_c) = (1.30 ± 0.05) GeV and |V_us| = 0.2248, the next-to-leading order value P_c(X) = 0.37 ± 0.06 is modified to P_c(X) = 0.38 ± 0.04 at the next-to-next-to-leading order level, with the latter error fully dominated by the uncertainty in m_c(m_c). They present tables for P_c(X) as a function of m_c(m_c) and α_s(M_Z) and a very accurate analytic formula that summarizes these two dependences as well as the dominant theoretical uncertainties. Adding the recently calculated long-distance contributions, they find B(K⁺ → π⁺νν̄) = (8.0 ± 1.1) × 10⁻¹¹, with the present uncertainties in m_c(m_c) and the Cabibbo-Kobayashi-Maskawa elements being the dominant individual sources in the quoted error. They also emphasize that improved calculations of the long-distance contributions to K⁺ → π⁺νν̄ and of the isospin breaking corrections in the evaluation of the weak current matrix elements from K⁺ → π⁰e⁺ν would be valuable in order to increase the potential of the two golden K → πνν̄ decays in the search for new physics.

  15. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    PubMed

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate, and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, construction status and perception of systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting, and bar-code system) and medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electronic pharmacopoeias had been constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems were in use in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving perceptions of IT-based systems would encourage further system construction, and a positive error management climate would be more easily promoted.

  16. Bayesian aerosol retrieval algorithm for MODIS AOD retrieval over land

    NASA Astrophysics Data System (ADS)

    Lipponen, Antti; Mielonen, Tero; Pitkänen, Mikko R. A.; Levy, Robert C.; Sawyer, Virginia R.; Romakkaniemi, Sami; Kolehmainen, Ville; Arola, Antti

    2018-03-01

    We have developed a Bayesian aerosol retrieval (BAR) algorithm for the retrieval of aerosol optical depth (AOD) over land from the Moderate Resolution Imaging Spectroradiometer (MODIS). In the BAR algorithm, we simultaneously retrieve all dark land pixels in a granule, utilize spatial correlation models for the unknown aerosol parameters, use a statistical prior model for the surface reflectance, and take into account the uncertainties due to fixed aerosol models. The retrieved parameters are total AOD at 0.55 µm, fine-mode fraction (FMF), and surface reflectances at four different wavelengths (0.47, 0.55, 0.64, and 2.1 µm). The accuracy of the new algorithm is evaluated by comparing the AOD retrievals to Aerosol Robotic Network (AERONET) AOD. The results show that the BAR significantly improves the accuracy of AOD retrievals over the operational Dark Target (DT) algorithm. A reduction of about 29 % in the AOD root mean square error and decrease of about 80 % in the median bias of AOD were found globally when the BAR was used instead of the DT algorithm. Furthermore, the fraction of AOD retrievals inside the ±(0.05+15 %) expected error envelope increased from 55 to 76 %. In addition to retrieving the values of AOD, FMF, and surface reflectance, the BAR also gives pixel-level posterior uncertainty estimates for the retrieved parameters. The BAR algorithm always results in physical, non-negative AOD values, and the average computation time for a single granule was less than a minute on a modern personal computer.
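    The "±(0.05 + 15 %) expected error envelope" statistic used above is simply the fraction of retrievals whose error lies inside an AOD-dependent band. A sketch on synthetic data (the AOD values and error spread below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
aeronet = rng.uniform(0.05, 1.0, 10_000)            # synthetic "truth" AODs
retrieval = aeronet + rng.normal(0, 0.08, 10_000)   # synthetic retrievals

# MODIS-style expected error envelope: +/-(0.05 + 15% of the AERONET AOD)
envelope = 0.05 + 0.15 * aeronet
fraction_in_ee = np.mean(np.abs(retrieval - aeronet) <= envelope)
print(fraction_in_ee)
```

Because the envelope widens with AOD, the same absolute retrieval error counts as "inside" for hazy scenes but "outside" for clean ones, which is why the metric is reported as a fraction rather than an RMSE.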

  17. VizieR Online Data Catalog: R absolute magnitudes of Kuiper Belt objects (Peixinho+, 2012)

    NASA Astrophysics Data System (ADS)

    Peixinho, N.; Delsanti, A.; Guilbert-Lepoutre, A.; Gafeira, R.; Lacerda, P.

    2012-06-01

    Compilation of the absolute magnitudes HRα, B-R colors, and spectral features used in this work. For each object, we computed the average color index from the different papers presenting data obtained simultaneously in B and R bands (e.g. contiguous observations within the same night). When an individual R apparent magnitude and date were available, we computed HRα = R - 5log(rΔ), where R is the R-band magnitude and r and Δ are the helio- and geocentric distances at the time of observation in AU, respectively. When V and V-R colors were available, we derived an R and then an HRα value. We did not correct for the phase-angle α effect. This table also includes spectral information on the presence of water ice, methanol, methane, or confirmed featureless spectra, as available in the literature. We highlight only the cases with clear bands in the spectrum, which were reported/confirmed by some other work. The 1st column indicates the object identification number and name or provisional designation; the 2nd column indicates the dynamical class; the 3rd column indicates the average HRα value and 1-σ error bars; the 4th column indicates the average B-R color and 1-σ error bars; the 5th column indicates the most important spectral features detected; and the 6th column points to the bibliographic references used for each object. (3 data files).
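    The HRα formula quoted in the catalog description can be applied directly; the observation numbers below are illustrative, not taken from the catalog:

```python
import math

def absolute_magnitude_R(R_mag, r_au, delta_au):
    """HRalpha = R - 5 log10(r * Delta), with r and Delta the helio- and
    geocentric distances in AU (no phase-angle correction, as in the
    catalog description)."""
    return R_mag - 5.0 * math.log10(r_au * delta_au)

# Illustrative only: an object observed at R = 22.0 with
# r = 43.1 AU and Delta = 42.2 AU.
print(round(absolute_magnitude_R(22.0, 43.1, 42.2), 2))
```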

  18. Predicting Error Bars for QSAR Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeter, Timon; Technische Universitaet Berlin, Department of Computer Science, Franklinstrasse 28/29, 10587 Berlin; Schwaighofer, Anton

    2007-09-18

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.

  19. A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.

    PubMed

    Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf

    2014-10-27

    We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
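    For reference, a BAR estimate comes from solving Bennett's self-consistent equation for the free-energy difference. A minimal sketch on synthetic Gaussian work distributions (equal forward and reverse sample sizes, works in kT units; not the authors' simulation pipeline or the pymbar implementation):

```python
import math
import random

def fermi(x):
    return 1.0 / (1.0 + math.exp(x))

def bar_delta_f(w_forward, w_reverse, lo=-50.0, hi=50.0):
    """Solve Bennett's self-consistent equation for equal sample sizes:
        sum_F fermi(wF - dF) = sum_R fermi(wR + dF)
    The left side increases and the right side decreases with dF, so
    bisection finds the unique root."""
    def g(dF):
        return (sum(fermi(w - dF) for w in w_forward)
                - sum(fermi(w + dF) for w in w_reverse))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic Gaussian work distributions consistent with Crooks' theorem:
# true dF = 3 kT, with dissipation sigma^2/2 for sigma = 2 kT.
random.seed(1)
true_dF, sigma = 3.0, 2.0
wf = [random.gauss(true_dF + sigma**2 / 2, sigma) for _ in range(5000)]
wr = [random.gauss(-true_dF + sigma**2 / 2, sigma) for _ in range(5000)]
print(bar_delta_f(wf, wr))  # close to the true 3 kT
```

In an alchemical perturbation, wf and wr would instead be the energy differences sampled at the two (or more) intermediate states, one BAR solve per adjacent pair.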

  20. 78 FR 36635 - Additional Designations, Foreign Narcotics Kingpin Designation Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-18

    ...) (individual) [SDNTK] (Linked To: RESTAURANT BAR LOS ANDARIEGOS, S.A. DE C.V.). 2. BUENROSTRO VILLA, Denisse... (Mexico) (individual) [SDNTK] (Linked To: RESTAURANT BAR LOS ANDARIEGOS, S.A. DE C.V.). 8. CORTES...]. 15. RESTAURANT BAR LOS ANDARIEGOS, S.A. DE C.V. (a.k.a. BARBARESCO RESTAURANT), ...

  1. Fabricating CAD/CAM Implant-Retained Mandibular Bar Overdentures: A Clinical and Technical Overview.

    PubMed

    Goo, Chui Ling; Tan, Keson Beng Choon

    2017-01-01

    This report describes the clinical and technical aspects of the oral rehabilitation of an edentulous patient with a knife-edge ridge at the mandibular anterior edentulous region, using implant-retained overdentures. The application of computer-aided design and computer-aided manufacturing (CAD/CAM) in the fabrication of the overdenture framework simplifies the laboratory process of the implant prostheses. The Nobel Procera CAD/CAM System was utilised to produce a lightweight titanium overdenture bar with locator attachments. It is proposed that the digital workflow of a CAD/CAM-milled implant overdenture bar avoids numerous technical steps and the possibility of casting errors involved in the conventional casting of such bars.

  2. Publisher Correction: Role of outer surface probes for regulating ion gating of nanochannels.

    PubMed

    Li, Xinchun; Zhai, Tianyou; Gao, Pengcheng; Cheng, Hongli; Hou, Ruizuo; Lou, Xiaoding; Xia, Fan

    2018-02-08

    The original version of this Article contained an error in Fig. 3. The scale bars in Figs 3c and 3d were incorrectly labelled as 50 μA. In the correct version, the scale bars are labelled as 0.5 μA. This has now been corrected in both the PDF and HTML versions of the Article.

  3. An Insertable Passive LC Pressure Sensor Based on an Alumina Ceramic for In Situ Pressure Sensing in High-Temperature Environments.

    PubMed

    Xiong, Jijun; Li, Chen; Jia, Pinggang; Chen, Xiaoyong; Zhang, Wendong; Liu, Jun; Xue, Chenyang; Tan, Qiulin

    2015-08-31

    Pressure measurements in high-temperature applications, including compressors, turbines, and others, have become increasingly critical. This paper proposes an implantable passive LC pressure sensor based on an alumina ceramic material for in situ pressure sensing in high-temperature environments. The inductance and capacitance elements of the sensor were designed independently and separated by a thermally insulating material, which is conducive to reducing the influence of the temperature on the inductance element and improving the quality factor of the sensor. In addition, the sensor was fabricated using thick film integrated technology from high-temperature materials that ensure stable operation of the sensor in high-temperature environments. Experimental results showed that the sensor accurately monitored pressures from 0 bar to 2 bar at temperatures up to 800 °C. The sensitivity, linearity, repeatability error, and hysteretic error of the sensor were 0.225 MHz/bar, 95.3%, 5.5%, and 6.2%, respectively.
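    The readout of such a passive LC sensor follows the tank resonance f = 1/(2π√(LC)): pressure deflects the capacitive membrane, changing C and hence the frequency picked up by the external antenna. A sketch with illustrative component values (not the paper's):

```python
import math

def resonant_freq_mhz(L_uH, C_pF):
    """f = 1 / (2*pi*sqrt(L*C)) for a passive LC tank, in MHz."""
    L = L_uH * 1e-6   # henries
    C = C_pF * 1e-12  # farads
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C)) / 1e6

# Illustrative values only: a 2 uH coil whose pressure-sensitive
# capacitance rises from 5.0 pF at 0 bar to 5.3 pF at 2 bar.
f0 = resonant_freq_mhz(2.0, 5.0)
f2 = resonant_freq_mhz(2.0, 5.3)
sensitivity = (f0 - f2) / 2.0  # MHz/bar over the 0-2 bar span
print(f0, f2, sensitivity)
```

The frequency falls as pressure increases C, so the sensitivity is reported as a (positive) frequency shift per bar, as in the abstract's 0.225 MHz/bar figure.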

  4. An Insertable Passive LC Pressure Sensor Based on an Alumina Ceramic for In Situ Pressure Sensing in High-Temperature Environments

    PubMed Central

    Xiong, Jijun; Li, Chen; Jia, Pinggang; Chen, Xiaoyong; Zhang, Wendong; Liu, Jun; Xue, Chenyang; Tan, Qiulin

    2015-01-01

    Pressure measurements in high-temperature applications, including compressors, turbines, and others, have become increasingly critical. This paper proposes an implantable passive LC pressure sensor based on an alumina ceramic material for in situ pressure sensing in high-temperature environments. The inductance and capacitance elements of the sensor were designed independently and separated by a thermally insulating material, which is conducive to reducing the influence of the temperature on the inductance element and improving the quality factor of the sensor. In addition, the sensor was fabricated using thick film integrated technology from high-temperature materials that ensure stable operation of the sensor in high-temperature environments. Experimental results showed that the sensor accurately monitored pressures from 0 bar to 2 bar at temperatures up to 800 °C. The sensitivity, linearity, repeatability error, and hysteretic error of the sensor were 0.225 MHz/bar, 95.3%, 5.5%, and 6.2%, respectively. PMID:26334279

  5. Using Modified-ISS Model to Evaluate Medication Administration Safety During Bar Code Medication Administration Implementation in Taiwan Regional Teaching Hospital.

    PubMed

    Ma, Pei-Luen; Jheng, Yan-Wun; Jheng, Bi-Wei; Hou, I-Ching

    2017-01-01

    Bar code medication administration (BCMA) can reduce medical errors and promote patient safety. This research uses a modified information systems success model (M-ISS model) to evaluate nurses' acceptance of BCMA. The results showed moderate correlations between medication administration safety (MAS) and system quality, information quality, service quality, user satisfaction, and limited satisfaction.

  6. fMRI identifies chronotype-specific brain activation associated with attention to motion--why we need to know when subjects go to bed.

    PubMed

    Reske, Martina; Rosenberg, Jessica; Plapp, Sabrina; Kellermann, Thilo; Shah, N Jon

    2015-05-01

    Human cognition relies on attentional capacities which, among others, are influenced by factors like tiredness or mood. Based on their inherent preferences in sleep and wakefulness, individuals can be classified as specific "chronotypes". The present study investigated how early, intermediate and late chronotypes (EC, IC, LC) differ neurally on an attention-to-motion task. Twelve EC, 18 IC and 17 LC were included in the study. While undergoing functional magnetic resonance imaging (fMRI) scans, subjects looked at vertical bars in an attention-to-motion task. In the STATIONARY condition, subjects focused on a central fixation cross. During Fix-MOVING and Attend-MOVING, the bars moved horizontally. Only during Attend-MOVING were subjects required to attend to changes in the velocity of the bars and indicate them by button presses. A two-way repeated-measures ANOVA probed group-by-attentional-load effects. The dorsolateral prefrontal cortex (DLPFC), insula and anterior cingulate cortex showed group-by-attention specific activations. Specifically, EC and LC presented attenuated DLPFC activation under high attentional load (Attend-MOVING), while EC showed less anterior insula activation than IC. LC compared to IC exhibited attenuated superior parietal cortex activation. Our study reveals that individual sleep preferences are associated with characteristic brain activation in areas crucial for attention and bodily awareness. These results imply that considering sleep preferences is crucial when administering cognitive tasks in neuroimaging studies. Our study also has socio-economic implications: task performance at non-optimal times of day (e.g., for shift workers) may result in cognitive impairments, leading to increased error rates and slower reaction times. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Tracking control of a closed-chain five-bar robot with two degrees of freedom by integration of an approximation-based approach and mechanical design.

    PubMed

    Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J

    2012-10-01

    The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the transient and steady-state phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. A radial basis function neural network (RBFNN) is used to compensate for the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.
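    The error-transformation idea can be sketched as follows. The abstract does not give the paper's exact function, so this assumes a common prescribed-performance form: the tracking error must stay inside an exponentially shrinking funnel, and mapping the normalized error through `atanh` produces a transformed error that grows without bound near the funnel edge.

```python
import math

# Sketch of a prescribed-performance error transformation (assumed form,
# not the paper's exact function). Bounding the transformed error keeps
# e(t) inside the funnel rho(t), which is how overshoot, settling time,
# and final error become tunable via the funnel parameters.
def rho(t: float, rho0: float = 1.0, rho_inf: float = 0.05,
        decay: float = 2.0) -> float:
    """Exponentially shrinking performance bound on |e(t)|."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transformed_error(e: float, t: float) -> float:
    """Map the normalized error e/rho(t) in (-1, 1) onto the real line."""
    z = e / rho(t)
    if not -1.0 < z < 1.0:
        raise ValueError("tracking error left the prescribed funnel")
    return math.atanh(z)
```

    Tightening `rho0`, `rho_inf`, or `decay` tightens the allowed overshoot, settling time, and final tracking error, mirroring the adjustability claimed in the abstract.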

  8. Identification of Drivers of Liking for Bar-Type Snacks Based on Individual Consumer Preference.

    PubMed

    Kim, Mina K; Greve, Patrick; Lee, Youngseung

    2016-01-01

    Understanding consumer hedonic responses to food products is of great interest to the global food industry. Global partial least squares regression (GPLSR) has been a well-accepted method for understanding consumer preferences. Recently, individual partial least squares regression (IPLSR) has been accepted as an alternative method for predicting consumer preferences for a given food product, because it utilizes individual differences in product acceptability. To improve the understanding of what constitutes bar-type snack preference, the relationship between sensory attributes and consumer overall liking for 12 bar-type snacks was determined. Sensory attributes that drive consumer product liking were analyzed using averaged consumer data by GPLSR. To facilitate the interpretation of individual consumer liking, a dummy matrix of the significant weighted regression coefficients of each consumer derived from IPLSR was created. From the application of GPLSR and IPLSR, the current study revealed that chocolate and cereal-flavored bars were preferred over fruit-flavored bars. Attributes connected to chocolate flavor positively influenced consumer overall liking at both the global and individual consumer levels. Textural attributes affected liking only at the individual level. To fully capture the importance of sensory attributes for consumer preference, the use of GPLSR in conjunction with IPLSR is recommended. © 2015 Institute of Food Technologists®

  9. Measurement of the τ Michel parameters η̄ and ξκ in the radiative leptonic decay τ⁻ → ℓ⁻ ν_τ ν̄_ℓ γ

    NASA Astrophysics Data System (ADS)

    Shimizu, N.; Aihara, H.; Epifanov, D.; Adachi, I.; Al Said, S.; Asner, D. M.; Aulchenko, V.; Aushev, T.; Ayad, R.; Babu, V.; Badhrees, I.; Bakich, A. M.; Bansal, V.; Barberio, E.; Bhardwaj, V.; Bhuyan, B.; Biswal, J.; Bobrov, A.; Bozek, A.; Bračko, M.; Browder, T. E.; Červenkov, D.; Chang, M.-C.; Chang, P.; Chekelian, V.; Chen, A.; Cheon, B. G.; Chilikin, K.; Cho, K.; Choi, S.-K.; Choi, Y.; Cinabro, D.; Czank, T.; Dash, N.; Di Carlo, S.; Doležal, Z.; Dutta, D.; Eidelman, S.; Fast, J. E.; Ferber, T.; Fulsom, B. G.; Garg, R.; Gaur, V.; Gabyshev, N.; Garmash, A.; Gelb, M.; Goldenzweig, P.; Greenwald, D.; Guido, E.; Haba, J.; Hayasaka, K.; Hayashii, H.; Hedges, M. T.; Hirose, S.; Hou, W.-S.; Iijima, T.; Inami, K.; Inguglia, G.; Ishikawa, A.; Itoh, R.; Iwasaki, M.; Jaegle, I.; Jeon, H. B.; Jia, S.; Jin, Y.; Joo, K. K.; Julius, T.; Kang, K. H.; Karyan, G.; Kawasaki, T.; Kiesling, C.; Kim, D. Y.; Kim, J. B.; Kim, S. H.; Kim, Y. J.; Kinoshita, K.; Kodyž, P.; Korpar, S.; Kotchetkov, D.; Križan, P.; Kroeger, R.; Krokovny, P.; Kulasiri, R.; Kuzmin, A.; Kwon, Y.-J.; Lange, J. S.; Lee, I. S.; Li, L. K.; Li, Y.; Li Gioi, L.; Libby, J.; Liventsev, D.; Masuda, M.; Merola, M.; Miyabayashi, K.; Miyata, H.; Mohanty, G. B.; Moon, H. K.; Mori, T.; Mussa, R.; Nakano, E.; Nakao, M.; Nanut, T.; Nath, K. J.; Natkaniec, Z.; Nayak, M.; Niiyama, M.; Nisar, N. K.; Nishida, S.; Ogawa, S.; Okuno, S.; Ono, H.; Pakhlova, G.; Pal, B.; Park, C. W.; Park, H.; Paul, S.; Pedlar, T. K.; Pestotnik, R.; Piilonen, L. E.; Popov, V.; Ritter, M.; Rostomyan, A.; Sakai, Y.; Salehi, M.; Sandilya, S.; Sato, Y.; Savinov, V.; Schneider, O.; Schnell, G.; Schwanda, C.; Seino, Y.; Senyo, K.; Sevior, M. E.; Shebalin, V.; Shibata, T.-A.; Shiu, J.-G.; Shwartz, B.; Sokolov, A.; Solovieva, E.; Starič, M.; Strube, J. F.; Sumisawa, K.; Sumiyoshi, T.; Tamponi, U.; Tanida, K.; Tenchini, F.; Trabelsi, K.; Uchida, M.; Uglov, T.; Unno, Y.; Uno, S.; Usov, Y.; Van Hulse, C.; Varner, G.; Vorobyev, V.; Vossen, A.; Wang, C. 
H.; Wang, M.-Z.; Wang, P.; Watanabe, M.; Widmann, E.; Won, E.; Yamashita, Y.; Ye, H.; Yuan, C. Z.; Zhang, Z. P.; Zhilich, V.; Zhukova, V.; Zhulanov, V.; Zupanc, A.

    2018-02-01

    We present a measurement of the Michel parameters of the τ lepton, η̄ and ξκ, in the radiative leptonic decay τ⁻ → ℓ⁻ ν_τ ν̄_ℓ γ using 711 fb⁻¹ of collision data collected with the Belle detector at the KEKB e⁺e⁻ collider. The Michel parameters are measured in an unbinned maximum likelihood fit to the kinematic distribution of e⁺e⁻ → τ⁺τ⁻ → (π⁺π⁰ν̄_τ)(ℓ⁻ν_τν̄_ℓγ) (ℓ = e or μ). The measured values of the Michel parameters are η̄ = -1.3 ± 1.5 ± 0.8 and ξκ = 0.5 ± 0.4 ± 0.2, where the first error is statistical and the second is systematic. This is the first measurement of these parameters. These results are consistent with the Standard Model predictions within their uncertainties, and constrain the coupling constants of the generalized weak interaction.

  10. The p-Value You Can't Buy.

    PubMed

    Demidenko, Eugene

    2016-01-02

    There is growing frustration with the concept of the p-value. Besides having an ambiguous interpretation, the p-value can be made as small as desired by increasing the sample size, n. The p-value is outdated and does not make sense with big data: everything becomes statistically significant. The root of the problem with the p-value is in the mean comparison. We argue that statistical uncertainty should be measured on the individual, not the group, level. Consequently, standard deviation (SD), not standard error (SE), error bars should be used to graphically present the data on two groups. We introduce a new measure based on the discrimination of individuals/objects from two groups, and call it the D-value. The D-value can be viewed as the n-of-1 p-value because it is computed in the same way as p while letting n equal 1. We show how the D-value is related to the discrimination probability and the area above the receiver operating characteristic (ROC) curve. The D-value has a clear interpretation as the proportion of patients who get worse after the treatment, and as such makes it easier to weigh up the likelihood of events under different scenarios.
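    Two of the abstract's points can be made concrete with a short sketch. First, the SE shrinks with n while the SD does not, which is why SE bars understate individual-level spread. Second, for two independent normal groups the probability that a random member of one group exceeds a random member of the other is Φ((m1 − m2)/√(s1² + s2²)), the discrimination probability the D-value is related to. This is an illustrative sketch, not Demidenko's exact D-value formula.

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def standard_error(sd: float, n: int) -> float:
    """SE = SD / sqrt(n): shrinks with n, unlike the SD itself."""
    return sd / math.sqrt(n)

def discrimination_probability(m1: float, s1: float,
                               m2: float, s2: float) -> float:
    """P(X1 > X2) for independent normals: an n-of-1 comparison,
    since the SDs are not divided by sqrt(n)."""
    return phi((m1 - m2) / math.sqrt(s1**2 + s2**2))

print(standard_error(10.0, 25))                  # SE for SD=10, n=25
print(discrimination_probability(0, 1, 0, 1))    # identical groups
```

    Identical groups give a discrimination probability of 0.5 regardless of n, whereas a p-value for the same comparison would shrink toward 0 as n grows.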

  11. Implant-supported overdenture with prefabricated bar attachment system in mandibular edentulous patient

    PubMed Central

    Ha, Seung-Ryong; Song, Seung-Il; Hong, Seong-Tae; Kim, Gy-Young

    2012-01-01

    Implant-supported overdenture is a reliable treatment option for the patients with edentulous mandible when they have difficulty in using complete dentures. Several options have been used for implant-supported overdenture attachments. Among these, bar attachment system has greater retention and better maintainability than others. SFI-Bar® is prefabricated and can be adjustable at chairside. Therefore, laboratory procedures such as soldering and welding are unnecessary, which leads to fewer errors and lower costs. A 67-year-old female patient presented, complaining of mobility of lower anterior teeth with old denture. She had been wearing complete denture in the maxilla and removable partial denture in the mandible with severe bone loss. After extracting the teeth, two implants were placed in front of mental foramen, and SFI-Bar® was connected. A tube bar was seated to two adapters through large ball joints and fixation screws, connecting each implant. The length of the tube bar was adjusted according to inter-implant distance. Then, a female part was attached to the bar beneath the new denture. This clinical report describes two-implant-supported overdenture using the SFI-Bar® system in a mandibular edentulous patient. PMID:23236580

  12. The influence of graphic display format on the interpretations of quantitative risk information among adults with lower education and literacy: a randomized experimental study.

    PubMed

    McCaffery, Kirsten J; Dixon, Ann; Hayen, Andrew; Jansen, Jesse; Smith, Sian; Simpson, Judy M

    2012-01-01

    To test optimal graphic risk communication formats for presenting small probabilities using graphics with a denominator of 1000 to adults with lower education and literacy. A randomized experimental study, which took place in adult basic education classes in Sydney, Australia. The participants were 120 adults with lower education and literacy. An experimental computer-based manipulation compared 1) pictographs in 2 forms, shaded "blocks" and unshaded "dots"; and 2) bar charts across different orientations (horizontal/vertical) and numerator size (small <100, medium 100-499, large 500-999). Accuracy (size of error) and ease of processing (reaction time) were assessed on a gist task (estimating the larger chance of survival) and a verbatim task (estimating the size of difference). Preferences for different graph types were also assessed. Accuracy on the gist task was very high across all conditions (>95%) and not tested further. For the verbatim task, optimal graph type depended on the numerator size. For small numerators, pictographs resulted in fewer errors than bar charts (blocks: odds ratio [OR] = 0.047, 95% confidence interval [CI] = 0.023-0.098; dots: OR = 0.049, 95% CI = 0.024-0.099). For medium and large numerators, bar charts were more accurate (e.g., medium dots: OR = 4.29, 95% CI = 2.9-6.35). Pictographs were generally processed faster for small numerators (e.g., blocks: 14.9 seconds v. bars: 16.2 seconds) and bar charts for medium or large numerators (e.g., large blocks: 41.6 seconds v. 26.7 seconds). Vertical formats were processed slightly faster than horizontal graphs with no difference in accuracy. Most participants preferred bar charts (64%); however, there was no relationship with performance. For adults with low education and literacy, pictographs are likely to be the best format to use when displaying small numerators (<100/1000) and bar charts for larger numerators (>100/1000).

  13. Studying W′ boson contributions in B̄ → D^(*) ℓ⁻ ν̄_ℓ decays

    NASA Astrophysics Data System (ADS)

    Wang, Yi-Long; Wei, Bin; Sheng, Jin-Huan; Wang, Ru-Min; Yang, Ya-Dong

    2018-05-01

    Recently, the Belle collaboration reported the first measurement of the τ lepton polarization P_τ(D*) in B̄ → D* τ⁻ ν̄_τ decay and a new measurement of the ratio of branching ratios R(D*), which are consistent with the Standard Model (SM) predictions. These could be used to constrain New Physics (NP) beyond the SM. In this paper, we probe B̄ → D^(*) ℓ⁻ ν̄_ℓ (ℓ = e, μ, τ) decays in a model-independent way and in the specific G(221) models with lepton flavour universality. Considering the theoretical uncertainties and the experimental errors at the 95% C.L., we obtain quite strong bounds on the model-independent parameters C′_LL, C′_LR, C′_RR, C′_RL, g_V, g_A, g′_V, g′_A and the specific G(221) model parameter ratios. We find that the constrained NP couplings have no obvious effects on the (differential) branching ratios and their ratios; nevertheless, many NP couplings have very large effects on the lepton spin asymmetries of B̄ → D^(*) ℓ⁻ ν̄_ℓ decays and on the forward–backward asymmetries of B̄ → D* ℓ⁻ ν̄_ℓ. We therefore expect precision measurements of these observables from LHCb and Belle-II.

  14. Sine-Bar Attachment For Machine Tools

    NASA Technical Reports Server (NTRS)

    Mann, Franklin D.

    1988-01-01

    Sine-bar attachment for collets, spindles, and chucks helps machinists set up quickly for precise angular cuts that require greater precision than provided by the graduations of machine tools. The machinist uses the attachment to index the head or carriage of a milling machine or lathe relative to the table or turning axis of the tool. The attachment is accurate to 1 minute of arc, depending on the length of the sine bar and the precision of the gauge blocks in the setup. It installs quickly and easily on almost any type of lathe or mill, requires no special clamps or fixtures, and eliminates many trial-and-error measurements. It is more stable than improvised setups and not readily jarred out of position.
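    The sine-bar relationship behind any such setup is the standard one: a bar of length L resting with one roller on a gauge-block stack of height h tilts by angle θ where sin(θ) = h/L. The sketch below uses that generic formula; the dimensions are examples, not taken from the NASA attachment.

```python
import math

def gauge_block_height(angle_deg: float, bar_length: float) -> float:
    """Gauge-block stack height h = L * sin(theta) for a sine bar
    of center-to-center roller length L."""
    return bar_length * math.sin(math.radians(angle_deg))

# A 5-inch sine bar set to 30 degrees needs a 2.5-inch stack:
print(round(gauge_block_height(30.0, 5.0), 6))
```

    The error budget quoted above follows from the same relation: uncertainty in the stack height or bar length maps directly into angular error through the derivative of arcsin.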

  15. Effects of a direct refill program for automated dispensing cabinets on medication-refill errors.

    PubMed

    Helmons, Pieter J; Dalton, Ashley J; Daniels, Charles E

    2012-10-01

    The effects of a direct refill program for automated dispensing cabinets (ADCs) on medication-refill errors were studied. This study was conducted in designated acute care areas of a 386-bed academic medical center. A wholesaler-to-ADC direct refill program, consisting of prepackaged delivery of medications and bar-code-assisted ADC refilling, was implemented in the inpatient pharmacy of the medical center in September 2009. Medication-refill errors in 26 ADCs from the general medicine units, the infant special care unit, the surgical and burn intensive care units, and intermediate units were assessed before and after the implementation of this program. Medication-refill errors were defined as an ADC pocket containing the wrong drug, wrong strength, or wrong dosage form. ADC refill errors decreased by 77%, from 62 errors per 6829 refilled pockets (0.91%) to 8 errors per 3855 refilled pockets (0.21%) (p < 0.0001). The predominant error type detected before the intervention was the incorrect medication (wrong drug, wrong strength, or wrong dosage form) in the ADC pocket. Of the 54 incorrect medications found before the intervention, 38 (70%) were loaded in a multiple-drug drawer. After the implementation of the new refill process, 3 of the 5 incorrect medications were loaded in a multiple-drug drawer. There were 3 instances of expired medications before and only 1 expired medication after implementation of the program. A redesign of the ADC refill process using a wholesaler-to-ADC direct refill program that included delivery of prepackaged medication and bar-code-assisted refill significantly decreased the occurrence of ADC refill errors.
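    The headline figures can be recomputed directly from the counts given in the abstract (62 errors in 6829 refilled pockets before the program, 8 in 3855 after):

```python
# Recomputing the refill-error rates and the reported 77% reduction
# from the raw counts in the abstract.
def error_rate(errors: int, pockets: int) -> float:
    return errors / pockets

before = error_rate(62, 6829)           # ~0.91%
after = error_rate(8, 3855)             # ~0.21%
reduction = 1.0 - after / before        # ~0.77

print(f"{before:.2%} -> {after:.2%}, a {reduction:.0%} reduction")
```

    The arithmetic reproduces the abstract's 0.91%, 0.21%, and 77% figures.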

  16. Heavy flavor decay of Zγ at CDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy M. Harrington-Taber

    2013-01-01

    Diboson production is an important and frequently measured parameter of the Standard Model. This analysis considers the previously neglected p p̄ → Zγ → b b̄ channel, as measured at the Collider Detector at Fermilab. Using the entire Tevatron Run II dataset, the measured result is consistent with Standard Model predictions, but the statistical error associated with this method of measurement limits the strength of this correlation.

  17. Positional reference system for ultraprecision machining

    DOEpatents

    Arnold, Jones B.; Burleson, Robert R.; Pardue, Robert M.

    1982-01-01

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerically controlled multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine-strain-induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  18. Positional reference system for ultraprecision machining

    DOEpatents

    Arnold, J.B.; Burleson, R.R.; Pardue, R.M.

    1980-09-12

    A stable positional reference system for use in improving the cutting tool-to-part contour position in numerically controlled multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine-strain-induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.

  19. Quantitative NO-LIF imaging in high-pressure flames

    NASA Astrophysics Data System (ADS)

    Bessler, W. G.; Schulz, C.; Lee, T.; Shin, D.-I.; Hofmann, M.; Jeffries, J. B.; Wolfrum, J.; Hanson, R. K.

    2002-07-01

    Planar laser-induced fluorescence (PLIF) images of NO concentration are reported in premixed laminar flames from 1 to 60 bar, exciting the A-X(0,0) band. The influence of O2 interference and gas composition, the variation with local temperature, and the effect of laser and signal attenuation by UV light absorption are investigated. Despite choosing a NO excitation and detection scheme with minimal O2-LIF contribution, this interference produces errors of up to 25% in a slightly lean 60 bar flame. The overall dependence of the inferred NO number density on temperature in the relevant range (1200-2500 K) is low (<±15%) because different effects cancel. The attenuation of laser and signal light by the combustion products CO2 and H2O is frequently neglected, yet such absorption yields errors of up to 40% in our experiment despite the small scale (8 mm flame diameter). Understanding the dynamic range of each of these corrections provides guidance for minimizing errors in single-shot imaging experiments at high pressure.
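    The attenuation correction the authors describe is Beer-Lambert absorption along the laser and signal paths. The sketch below shows the generic form; the absorption coefficient used in the example is an illustrative placeholder, not a measured value from the paper.

```python
import math

def transmitted_fraction(absorption_coeff_per_mm: float,
                         path_mm: float) -> float:
    """Beer-Lambert law: I/I0 = exp(-k * L) along an absorbing path."""
    return math.exp(-absorption_coeff_per_mm * path_mm)

def corrected_signal(measured: float, k: float, path_mm: float) -> float:
    """Undo the attenuation by dividing by the transmitted fraction."""
    return measured / transmitted_fraction(k, path_mm)

# Illustration: even across an 8 mm flame, a placeholder coefficient of
# 0.064/mm attenuates the signal by roughly 40%, the magnitude the
# abstract warns about.
print(transmitted_fraction(0.064, 8.0))
```

    In a real correction, k would be built from temperature-dependent CO2 and H2O absorption cross-sections integrated along the actual beam path.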

  20. The cost of implementing inpatient bar code medication administration.

    PubMed

    Sakowski, Julie Ann; Ketchel, Alan

    2013-02-01

    To calculate the costs associated with implementing and operating an inpatient bar-code medication administration (BCMA) system in the community hospital setting and to estimate the cost per harmful error prevented. This is a retrospective, observational study. Costs were calculated from the hospital perspective and a cost-consequence analysis was performed to estimate the cost per preventable adverse drug event averted. Costs were collected from financial records and key informant interviews at 4 not-for profit community hospitals. Costs included direct expenditures on capital, infrastructure, additional personnel, and the opportunity costs of time for existing personnel working on the project. The number of adverse drug events prevented using BCMA was estimated by multiplying the number of doses administered using BCMA by the rate of harmful errors prevented by interventions in response to system warnings. Our previous work found that BCMA identified and intercepted medication errors in 1.1% of doses administered, 9% of which potentially could have resulted in lasting harm. The cost of implementing and operating BCMA including electronic pharmacy management and drug repackaging over 5 years is $40,000 (range: $35,600 to $54,600) per BCMA-enabled bed and $2000 (range: $1800 to $2600) per harmful error prevented. BCMA can be an effective and potentially cost-saving tool for preventing the harm and costs associated with medication errors.
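    The cost-consequence arithmetic can be reproduced in outline. The intercept rate (1.1% of doses) and harmful fraction (9%) come from the abstract; the dose volume and total cost in the example are assumed for illustration only, not the study's data.

```python
# Back-of-envelope version of the cost-per-harmful-error-prevented
# calculation: harmful errors prevented per dose ~ 0.011 * 0.09.
INTERCEPT_RATE = 0.011    # fraction of doses with intercepted errors
HARMFUL_FRACTION = 0.09   # fraction of those that could cause lasting harm

def harmful_errors_prevented(doses: int) -> float:
    return doses * INTERCEPT_RATE * HARMFUL_FRACTION

def cost_per_harmful_error(total_cost: float, doses: int) -> float:
    return total_cost / harmful_errors_prevented(doses)

# Assumed example: 1,000,000 doses administered and $2M in BCMA costs.
print(round(cost_per_harmful_error(2_000_000, 1_000_000)))
```

    With these assumed inputs the sketch lands near the study's ~$2000-per-harmful-error figure, showing how sensitive that headline number is to dose volume and total program cost.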

  1. Star formation suppression and bar ages in nearby barred galaxies

    NASA Astrophysics Data System (ADS)

    James, P. A.; Percival, S. M.

    2018-03-01

    We present new spectroscopic data for 21 barred spiral galaxies, which we use to explore the effect of bars on disc star formation, and to place constraints on the characteristic lifetimes of bar episodes. The analysis centres on regions of heavily suppressed star formation activity, which we term `star formation deserts'. Long-slit optical spectroscopy is used to determine H β absorption strengths in these desert regions, and comparisons with theoretical stellar population models are used to determine the time since the last significant star formation activity, and hence the ages of the bars. We find typical ages of ˜1 Gyr, but with a broad range, much larger than would be expected from measurement errors alone, extending from ˜0.25 to >4 Gyr. Low-level residual star formation, or mixing of stars from outside the `desert' regions, could result in a doubling of these age estimates. The relatively young ages of the underlying populations coupled with the strong limits on the current star formation rule out a gradual exponential decline in activity, and hence support our assumption of an abrupt truncation event.

  2. 17 CFR 201.193 - Applications by barred individuals for consent to associate.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    17 Commodity and Securities Exchanges — Applications by barred individuals for consent to associate. 201.193 Section 201.193 Commodity and Securities Exchanges SECURITIES... securities dealers, government securities brokers, government securities dealers, investment advisers...

  3. Enhancing the sensitivity to new physics in the tt¯ invariant mass distribution

    NASA Astrophysics Data System (ADS)

    Álvarez, Ezequiel

    2012-08-01

    We propose selection cuts on the LHC tt¯ production sample which should enhance the sensitivity to new physics signals in the study of the tt¯ invariant mass distribution. We show that selecting events in which the tt¯ object has little transverse and large longitudinal momentum enlarges the quark-fusion fraction of the sample and therefore increases its sensitivity to new physics which couples to quarks and not to gluons. We find that systematic error bars play a fundamental role and assume a simple model for them. We check how a non-visible new particle would become visible after the selection cuts enhance its resonance bump. A final realistic analysis should be done by the experimental groups with a correct evaluation of the systematic error bars.
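    The proposed selection can be sketched as a simple event filter: keep tt̄ candidates whose combined system has small transverse momentum and large longitudinal momentum, which enriches the quark-fusion fraction. The thresholds below are illustrative, not the paper's optimized values.

```python
# Minimal sketch of the proposed kinematic selection on the ttbar system.
def passes_selection(pt_ttbar: float, pz_ttbar: float,
                     max_pt: float = 30.0, min_pz: float = 400.0) -> bool:
    """Keep events with small transverse and large longitudinal
    momentum of the ttbar object (GeV; illustrative thresholds)."""
    return pt_ttbar < max_pt and abs(pz_ttbar) > min_pz

# Toy events as (pt, pz) pairs: only the first passes both cuts.
events = [(12.0, 550.0), (80.0, 600.0), (10.0, 100.0)]
selected = [e for e in events if passes_selection(*e)]
print(selected)
```

    In the paper's framing, the gain from such cuts has to be weighed against the systematic error bars, which is why the authors stress their treatment of systematics.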

  4. A novel single-ended readout depth-of-interaction PET detector fabricated using sub-surface laser engraving.

    PubMed

    Uchida, H; Sakai, T; Yamauchi, H; Hakamata, K; Shimizu, K; Yamashita, T

    2016-09-21

    We propose a novel scintillation detector design for positron emission tomography (PET), which has depth of interaction (DOI) capability and uses a single-ended readout scheme. The DOI detector contains a pair of crystal bars segmented using sub-surface laser engraving (SSLE). The two crystal bars are optically coupled to each other at their top segments and are coupled to two photo-sensors at their bottom segments. Initially, we evaluated the performance of different designs of single crystal bars coupled to photomultiplier tubes at both ends. We found that segmentation by SSLE results in superior performance compared to the conventional method. As the next step, we constructed a crystal unit composed of a 3 × 3 × 20 mm³ crystal bar pair, with each bar containing four layers segmented using the SSLE. We measured the DOI performance by changing the optical conditions for the crystal unit. Based on the experimental results, we then assessed the detector performance in terms of the DOI capability by evaluating the position error, energy resolution, and light collection efficiency for various crystal unit designs with different bar sizes and a different number of layers (four to seven layers). DOI encoding with small position error was achieved for crystal units composed of a 3 × 3 × 20 mm³ LYSO bar pair having up to seven layers, and with those composed of a 2 × 2 × 20 mm³ LYSO bar pair having up to six layers. The energy resolution of the segment in the seven-layer 3 × 3 × 20 mm³ crystal bar pair was 9.3%-15.5% for 662 keV gamma-rays, where the segments closer to the photo-sensors provided better energy resolution. SSLE provides high geometrical accuracy at low production cost due to the simplicity of the crystal assembly. Therefore, the proposed DOI detector is expected to be an attractive choice for practical small-bore PET systems dedicated to imaging of the brain, breast, and small animals.

  5. Automatic Identification Technology (AIT): The Development of Functional Capability and Card Application Matrices

    DTIC Science & Technology

    1994-09-01

    650 B.C. in Asia Minor, coins were developed and used in acquiring goods and services. In France, during the eighteenth century, paper money made its... counterfeited. [INFO94, p. 23] Other weaknesses of bar code technology include limited data storage capability based on the bar code symbology used when... extremely accurate, with calculated error rates as low as 1 in 100 trillion, and are difficult to counterfeit. Strong magnetic fields cannot erase RF

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miftakov, V

    The BABAR experiment at SLAC provides an opportunity for measurement of the Standard Model parameters describing CP violation. A method of measuring the CKM matrix element |V_cb| using inclusive semileptonic B decays in events tagged by a fully reconstructed decay of one of the B mesons is presented here. This mode is considered one of the most powerful approaches due to its large branching fraction, the simplicity of the theoretical description, and very clean experimental signatures. Using fully reconstructed B mesons to flag B B̄ events, we were able to produce the spectrum and branching fraction for electron momenta P_C.M.S. > 0.5 GeV/c. Extrapolation to lower momenta has been carried out with Heavy Quark Effective Theory. The branching fractions are measured separately for charged and neutral B mesons. For 82 fb⁻¹ of data collected at BABAR we obtain: BR(B± → X e ν̄) = 10.63 ± 0.24 ± 0.29%, BR(B⁰ → X e ν̄) = 10.68 ± 0.34 ± 0.31%, averaged BR(B → X e ν̄) = 10.65 ± 0.19 ± 0.27%, and a ratio of branching fractions BR(B±)/BR(B⁰) = 0.996 ± 0.039 ± 0.015 (errors are statistical and systematic, respectively). We also obtain V_cb = 0.0409 ± 0.00074 ± 0.0010 ± 0.000858 (errors are statistical, systematic and theoretical).
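    The quoted average of the charged and neutral branching fractions is approximately reproduced by a standard inverse-variance combination on the statistical errors. This is a hedged sketch: the real analysis also propagates correlated systematics, which this ignores.

```python
import math

# Inverse-variance weighted average: each measurement is weighted by
# 1/sigma^2, and the combined statistical error is 1/sqrt(sum of weights).
def weighted_average(values, errors):
    weights = [1.0 / e**2 for e in errors]
    mean = sum(v * w for v, w in zip(values, weights)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# Charged and neutral BR measurements with their statistical errors:
mean, err = weighted_average([10.63, 10.68], [0.24, 0.34])
print(round(mean, 2), round(err, 2))
```

    The result (~10.65 ± 0.20 statistical) is close to the quoted 10.65 ± 0.19, with the small difference attributable to the correlations this sketch leaves out.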

  7. Errors of Measurement, Theory, and Public Policy. William H. Angoff Memorial Lecture Series

    ERIC Educational Resources Information Center

    Kane, Michael

    2010-01-01

    The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…

  8. Pioneer-Venus radio occultation (ORO) data reduction: Profiles of 13 cm absorptivity

    NASA Technical Reports Server (NTRS)

    Steffes, Paul G.

    1990-01-01

    In order to characterize possible variations in the abundance and distribution of subcloud sulfuric acid vapor, 13 cm radio occultation signals from 23 orbits that occurred in late 1986 and 1987 (Season 10) and 7 orbits that occurred in 1979 (Season 1) were processed. The data were inverted via inverse Abel transform to produce 13 cm absorptivity profiles. Pressure and temperature profiles obtained with the Pioneer-Venus night probe and the northern probe were used along with the absorptivity profiles to infer upper limits for vertical profiles of the abundance of gaseous H2SO4. In addition to inverting the data, error bars were placed on the absorptivity profiles and H2SO4 abundance profiles using the standard propagation of errors. These error bars were developed by considering the effects of statistical errors only. The profiles show a distinct pattern with regard to latitude which is consistent with latitude variations observed in data obtained during the occultation seasons nos. 1 and 2. However, when compared with the earlier data, the recent occultation studies suggest that the amount of sulfuric acid vapor occurring at and below the main cloud layer may have decreased between early 1979 and late 1986.
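    The "standard propagation of errors" used to place those error bars has the generic form σ_f² = Σ (∂f/∂x_i)² σ_i² for independent inputs. A minimal numerical sketch (the function f below is an arbitrary example, not the Abel-transform chain used in the paper):

```python
import math

# Generic propagation-of-errors sketch: partial derivatives are taken
# by central finite differences, then combined in quadrature with the
# input uncertainties.
def propagate(f, x, sigmas, h=1e-6):
    var = 0.0
    for i, s in enumerate(sigmas):
        xp = list(x); xm = list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)   # numerical partial derivative
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# For f = x + y the errors add in quadrature: sqrt(3^2 + 4^2) = 5.
print(round(propagate(lambda v: v[0] + v[1], [1.0, 2.0], [3.0, 4.0]), 6))
```

    Applying this through each step of the inversion (attenuation, Abel transform, H2SO4 retrieval) is what turns statistical signal errors into the profile error bars described above.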

  9. Prototype Stop Bar System Evaluation at John F. Kennedy International Airport

    DTIC Science & Technology

    1992-09-01

    List of figures (excerpt): Figure 2, Red Stop Bar Visual Presentation (p. 4); Figure 3, Green Stop Bar Visual Presentation (p. 5); Figure 4, Photographs of Red and Green Inset Stop Bar Lights (p. 6); Figure 5, Photographs of… From the report: when a clearance is issued, the stop bar changes to green; this provides pilots with a visual confirmation of the controller's verbal clearance and is intended to prevent runway incursions. … The visual presentation of an individual stop bar appears as either five red lights (see figure 2) or five green lights colocated with the red lights.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van den Bergh, Sidney, E-mail: sidney.vandenbergh@nrc.gc.ca

    Lenticular galaxies with M_B < -21.5 are almost exclusively unbarred, whereas both barred and unbarred objects occur at fainter luminosity levels. This effect is observed both for objects classified in blue light and for those classified in the infrared. This result suggests that the most luminous (massive) S0 galaxies find it difficult to form bars. As a result, the mean luminosity of unbarred lenticular galaxies in both B and IR light is observed to be ≈0.4 mag brighter than that of barred lenticulars. A small contribution to the observed luminosity difference between SA0 and SB0 galaxies may also be due to an asymmetry in the effects of small classification errors on SA0 and SB0 galaxies. An elliptical (E) galaxy might be misclassified as a lenticular (S0), or an S0 as an E. However, an E will never be misclassified as an SB0, nor will an SB0 ever be called an E. This asymmetry is important because E galaxies are typically twice as luminous as S0 galaxies. The present results suggest that the evolution of luminous lenticular galaxies may be closely linked to that of elliptical galaxies, whereas fainter lenticulars might be more closely associated with ram-pressure stripped spiral galaxies. Finally, it is pointed out that fine details of the galaxy formation process might account for some of the differences between the classifications of the same galaxy by individual competent morphologists.

  11. Bar Code Medication Administration Technology: Characterization of High-Alert Medication Triggers and Clinician Workarounds.

    PubMed

    Miller, Daniel F; Fortier, Christopher R; Garrison, Kelli L

    2011-02-01

    Bar code medication administration (BCMA) technology is gaining acceptance for its ability to prevent medication administration errors. However, studies suggest that improper use of BCMA technology can yield unsatisfactory error prevention and introduce new potential medication errors. The objectives were to evaluate the incidence of high-alert medication BCMA triggers and alert types, and to discuss the nursing and pharmacy workarounds occurring with the use of BCMA technology and the electronic medication administration record (eMAR). Medication scanning and override reports from January 1, 2008, through November 30, 2008, for all adult medical/surgical units were retrospectively evaluated for high-alert medication system triggers, alert types, and override reason documentation. An observational study of nursing workarounds on an adult medicine step-down unit was performed, and an analysis of potential pharmacy workarounds affecting BCMA and the eMAR was also conducted. Seventeen percent of scanned medications triggered an error alert, of which 55% were for high-alert medications. Insulin aspart, NPH insulin, hydromorphone, potassium chloride, and morphine were the top 5 high-alert medications that generated alert messages. Clinician override reasons for alerts were documented in only 23% of administrations. Observation of nursing workarounds revealed a median of 3 clinician workarounds per administration; specific nursing workarounds included failure to scan medications or the patient armband, and scanning the bar code after the dose had been removed from its unit-dose packaging. Analysis of pharmacy order entry workarounds revealed the potential for missed doses, duplicate doses, and doses scheduled at the wrong time. BCMA has the potential to prevent high-alert medication errors by alerting clinicians through alert messages. Nursing and pharmacy workarounds can limit the realization of optimal safety outcomes, and workflow processes must therefore be continually analyzed and restructured to yield the intended full benefits of BCMA technology. © 2011 SAGE Publications.

  12. Radial basis function network learns ceramic processing and predicts related strength and density

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.

    1993-01-01

    Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
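    The network described — Gaussian basis functions centered at the data points ("nodes-at-data-points"), with only the output layer trained by gradient descent — can be sketched in a few lines. This is a generic toy version (1-D input, made-up data), not the authors' three-input ceramic-processing model:

```python
import math

def rbf_train(X, y, width=0.3, lr=0.1, epochs=2000):
    """Tiny RBF network: one Gaussian center per training point
    ('nodes-at-data-points'); only the output-layer weights are
    trained, by plain gradient descent on the squared error."""
    phi = [[math.exp(-((xi - c) ** 2) / (2 * width ** 2)) for c in X]
           for xi in X]
    w = [0.0] * len(X)
    for _ in range(epochs):
        for row, target in zip(phi, y):
            err = sum(wj * pj for wj, pj in zip(w, row)) - target
            for j, pj in enumerate(row):
                w[j] -= lr * err * pj  # gradient step on output weights
    return w

def rbf_predict(x, X, w, width=0.3):
    return sum(wj * math.exp(-((x - c) ** 2) / (2 * width ** 2))
               for wj, c in zip(w, X))

# Hypothetical 1-D toy data standing in for the processing inputs:
# learn y = x^2 on five sample points.
X = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [xi ** 2 for xi in X]
w = rbf_train(X, y)
```

    With a center on every training point the network can drive the training error to essentially zero; generalization then depends on the kernel width, which here is an arbitrary choice.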

  13. Laser damage metrology in biaxial nonlinear crystals using different test beams

    NASA Astrophysics Data System (ADS)

    Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille

    2008-01-01

    Laser damage measurements in nonlinear optical crystals, in particular biaxial crystals, may be influenced by several effects specific to these materials or greatly enhanced in them. Before discussing these effects, we address the topic of error bar determination for probability measurements. Error bars on the damage probabilities are important because nonlinear crystals are often small and expensive, so only a few sites are used for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that can modify the maximum intensity in a biaxial nonlinear crystal are focusing aberration, walk-off and self-focusing. Depending on the focusing conditions, propagation direction, polarization of the light and the position of the focal point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing are briefly reviewed for completeness. Finally, parasitic second harmonic generation (SHG) may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
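    The abstract's numerical error-bar construction is not reproduced here; as a simpler stand-in, a standard binomial confidence interval (the Wilson score interval) illustrates how an error bar at a chosen confidence level attaches to a damage probability estimated from k damaged sites out of n tested:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion:
    k damage events observed in n tested sites; z = 1.96 gives ~95%
    confidence. Returns (low, high) bounds on the damage probability."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# 2 damaged sites out of 10 shots at one fluence:
lo, hi = wilson_interval(2, 10)   # roughly (0.057, 0.510)
```

    The width of the interval for small n makes the paper's point directly: with only ten sites per fluence, the 95% error bar on a 20% damage probability spans nearly half the unit interval.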

  14. Comparison of near fusional vergence ranges with rotary prisms and with prism bars.

    PubMed

    Goss, David A; Becker, Emily

    2011-02-01

    Common methods for determining fusional vergence ranges use rotary prisms in the phoropter or prism bars outside the phoropter. This study compared near fusional vergence ranges measured with rotary prisms to those measured with prism bars. Fifty young adults served as subjects. Odd-numbered subjects had rotary prism vergences performed before prism bar vergences; for even-numbered subjects, prism bar vergences were done first. Base-in (BI) vergences were done before base-out (BO) vergences with both rotary prisms and prism bars. A coefficient of agreement was calculated by multiplying the standard deviation of the individual subject differences between rotary prisms and prism bars by 1.96, to approximate the range within which the 2 tests would agree 95% of the time. The lowest coefficient of agreement was 7.3Δ, for the BI recovery. The others were high, ranging from 15.4Δ for the BO recovery to 19.5Δ for the BO break. Fusional vergence ranges determined with prism bars outside the phoropter cannot be used interchangeably with those determined with phoropter rotary prisms, whether for follow-up of individual patients or for comparison with norms. Copyright © 2010 American Optometric Association. Published by Elsevier Inc. All rights reserved.
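    The coefficient of agreement described here is 1.96 times the standard deviation of the per-subject differences between the two methods (the half-width of approximate 95% limits of agreement). A minimal sketch with made-up vergence values:

```python
from statistics import stdev

def coefficient_of_agreement(method_a, method_b):
    """1.96 * SD of the per-subject differences between two methods:
    approximates the range within which the two tests would agree
    95% of the time."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    return 1.96 * stdev(diffs)

# Hypothetical BO break values (prism diopters) for three subjects:
rotary = [18.0, 22.0, 25.0]
prism_bar = [17.0, 20.0, 22.0]
coa = coefficient_of_agreement(rotary, prism_bar)  # diffs 1, 2, 3 -> SD 1.0 -> 1.96
```

    A large coefficient, as found for the break values in the study, means an individual patient's result on one test predicts the other only loosely.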

  15. Development of self-sensing BFRP bars with distributed optic fiber sensors

    NASA Astrophysics Data System (ADS)

    Tang, Yongsheng; Wu, Zhishen; Yang, Caiqian; Shen, Sheng; Wu, Gang; Hong, Wan

    2009-03-01

    In this paper, a new type of self-sensing basalt fiber reinforced polymer (BFRP) bar is developed using the Brillouin scattering-based distributed optic fiber sensing technique. During fabrication, a bare optic fiber (without buffer or sheath) serving as the sensing core is first reinforced by braiding a dry continuous basalt fiber sheath around it, so that it survives the pulling-shoving process used to manufacture the BFRP bars. The optic fiber and its dry basalt fiber sheath, embedded in the BFRP bar, are then well impregnated with epoxy resin during this process. The bond between the optic fiber and the basalt fiber sheath, as well as between the sheath and the FRP bar, can thus be controlled and ensured, reducing the measurement error caused by slippage between the optic fiber core and the coating. Moreover, the epoxy resin is left uncured in the segments of the bar where optic fiber connections will be made, by shielding these parts from heat during manufacture. Consequently, the optic fiber in these segments can easily be taken out, and connections between optic fibers can be made smoothly. Finally, a series of experiments is performed to study the sensing and mechanical properties of the proposed BFRP bars. The experimental results show that the self-sensing BFRP bar offers not only excellent accuracy, repeatability, and linearity in strain measurement but also good mechanical properties.

  16. Brain Arterial Diameters as a Risk Factor for Vascular Events.

    PubMed

    Gutierrez, Jose; Cheung, Ken; Bagci, Ahmet; Rundek, Tatjana; Alperin, Noam; Sacco, Ralph L; Wright, Clinton B; Elkind, Mitchell S V

    2015-08-06

    Arterial luminal diameters are routinely used to assess for vascular disease. Although small diameters are typically considered pathological, arterial dilatation has also been associated with disease. We hypothesize that extreme arterial diameters are biomarkers of the risk of vascular events. Participants in the Northern Manhattan Study who had a time-of-flight magnetic resonance angiography were included in this analysis (N=1034). A global arterial Z-score, called the brain arterial remodeling (BAR) score, was obtained by averaging the measured diameters within each individual. Individuals with a BAR score <-2 SDs were considered to have the smallest diameters, individuals with a BAR score >-2 and <2 SDs had average diameters, and individuals with a BAR score >2 SDs had the largest diameters. All vascular events were recorded prospectively after the brain magnetic resonance imaging. Spline curves and incidence rates were used to test our hypothesis. The association of the BAR score with death (P=0.001), vascular death (P=0.02), any vascular event (P=0.05), and myocardial infarction (P=0.10) was U-shaped except for ischemic stroke (P=0.74). Consequently, incidence rates for death, vascular death, myocardial infarction, and any vascular event were higher in individuals with the largest diameters, whereas individuals with the smallest diameters had a higher incidence of death, vascular death, any vascular event, and ischemic stroke compared with individuals with average diameters. The risk of death, vascular death, and any vascular event increased at both extremes of brain arterial diameters. The pathophysiology linking brain arterial remodeling to systemic vascular events needs further research. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
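    The BAR score construction — per-artery Z-scores averaged within each individual, then thresholded at ±2 SD — can be sketched as follows (artery names and cohort numbers are hypothetical):

```python
def bar_score(person, means, sds):
    """Average the per-artery Z-scores of one individual's diameters
    against the cohort mean and SD for each artery."""
    zs = [(person[a] - means[a]) / sds[a] for a in person]
    return sum(zs) / len(zs)

def classify(score):
    """< -2 SD: smallest diameters; > 2 SD: largest; else average."""
    if score < -2:
        return "smallest"
    if score > 2:
        return "largest"
    return "average"

# Hypothetical cohort statistics (mm) for two arteries:
means = {"MCA": 3.0, "BA": 4.0}
sds = {"MCA": 0.4, "BA": 0.5}
dilated = {"MCA": 4.2, "BA": 5.5}   # z = 3.0 for both arteries
print(classify(bar_score(dilated, means, sds)))  # -> largest
```

    Averaging across arteries is what makes the score a global measure of remodeling rather than a single-vessel finding.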

  17. Revision of laser-induced damage threshold evaluation from damage probability data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas

    2013-04-15

    In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. The widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach is proposed: a novel concept of parametric regression based on a varying kernel and a maximum likelihood fitting technique is introduced and studied. This approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of the parametric fitting and take realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).

  18. Remote Sensing Global Surface Air Pressure Using Differential Absorption BArometric Radar (DiBAR)

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Harrah, Steven; Lawrence, Wes; Hu, Yongxiang; Min, Qilong

    2016-01-01

    Tropical storms and severe weather are listed among the core events needing improved observations and predictions in World Meteorological Organization and NASA Decadal Survey (DS) documents, and they have major impacts on public safety and national security. This effort seeks to observe surface air pressure, especially over open seas, from space using a Differential-absorption BArometric Radar (DiBAR) operating in the 50-55 gigahertz O2 absorption band. Air pressure is among the most important variables affecting atmospheric dynamics, yet it can currently be measured only by limited in situ observations over oceans. Analyses show that with the proposed space radar the errors in instantaneous (averaged) pressure estimates can be as low as approximately 4 millibars (approximately 1 millibar) under all weather conditions. With these sea level pressure measurements, forecasts of severe weather such as hurricanes will be significantly improved. Since the development of the DiBAR concept about a decade ago, the NASA Langley DiBAR research team has made substantial progress in advancing the concept. The feasibility assessment clearly shows the potential of sea surface barometry using existing radar technologies. The team has developed a DiBAR system design, fabricated a prototype DiBAR (P-DiBAR) for proof of concept, and conducted lab, ground, and airborne P-DiBAR tests. The flight test results are consistent with the instrumentation goals. Observing system simulation experiments for space DiBAR performance, based on the existing DiBAR technology and capability, show substantial improvements in tropical storm predictions, not only for hurricane track and position but also for hurricane intensity. DiBAR measurements will lead us to an unprecedented level of prediction of, and knowledge on, global extreme weather and climate conditions.

  19. Needle bar for warp knitting machines

    DOEpatents

    Hagel, Adolf; Thumling, Manfred

    1979-01-01

    Needle bar for warp knitting machines with a number of needles individually set into slits of the bar and having shafts cranked to such an extent that the head section of each needle is in alignment with the shaft section accommodated by the slit. Slackening of the needles will thus not influence the needle spacing.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate over multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the Guide to the Expression of Uncertainty in Measurement (GUM) can be used for NDA. We also propose improvements over the GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
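    The split into "random" and "systematic" components has a standard consequence when error bars are set on a total over multiple items: independent random errors add in quadrature across items, while a shared systematic error adds coherently. A textbook sketch (illustrative numbers, not the case-study data):

```python
import math

def total_mass_uncertainty(n_items, sigma_random, sigma_systematic):
    """Uncertainty of a sum of n item masses when each item carries an
    independent random error and all items share one systematic error:
    Var(total) = n * sigma_r^2 + n^2 * sigma_s^2."""
    return math.sqrt(n_items * sigma_random ** 2
                     + n_items ** 2 * sigma_systematic ** 2)

# 4 items, 1.0 g random error each, 0.5 g common systematic error:
u = total_mass_uncertainty(4, 1.0, 0.5)   # sqrt(4 + 4) = sqrt(8) ~ 2.83 g
```

    The n² factor on the systematic term is why a small shared bias can dominate the error bar of a facility total even when each item is measured precisely.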

  1. Interpolating Spherical Harmonics for Computing Antenna Patterns

    DTIC Science & Technology

    2011-07-01

    … If g_NF denotes the spline computed from the uniform partition of NF + 1 frequency points, the splines converge as O(NF^-4): ‖g_NF − g‖∞ ≤ C0‖g^(4)…‖∞ … There is the possibility of estimating the error ‖g − g_NF‖∞ even though the function g is unknown. Table 1 compares these unknown errors ‖g − g_NF‖∞ to the computable estimates ‖g_NF − g_2NF‖∞. The latter is a strong predictor of the unknown error. The triple bar is the sup-norm error over all the…

  2. Precision modelling of M dwarf stars: the magnetic components of CM Draconis

    NASA Astrophysics Data System (ADS)

    MacDonald, J.; Mullan, D. J.

    2012-04-01

    The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models of appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we first attempt to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundances (Y), heavy element abundances (Z), opacities and mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can inclusion of magnetic effects in stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify.
    An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ ≈ B²/(4πγp_gas) (where B is the strength of the local vertical magnetic field). In the context of δ models in which B is not allowed to exceed a 'ceiling' of 10⁶ G, we find that the revised R and L can also be fitted, within the error bars, in a finite region of the f-δ plane. The permitted values of δ near the surface lead us to estimate that the vertical field strength on the surface of CM Dra A is about 500 G, in good agreement with independent observational evidence for similar low-mass stars. Recent results for another binary with parameters close to those of CM Dra suggest that metallicity differences cannot be the dominant explanation for the bloating of the two components of CM Dra.

  3. A PRELIMINARY JUPITER MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, W. B.; Militzer, B.

    In anticipation of new observational results for Jupiter's axial moment of inertia and gravitational zonal harmonic coefficients from the forthcoming Juno orbiter, we present a number of preliminary Jupiter interior models. We combine results from ab initio computer simulations of hydrogen–helium mixtures, including immiscibility calculations, with a new nonperturbative calculation of Jupiter's zonal harmonic coefficients, to derive a self-consistent model for the planet's external gravity and moment of inertia. We assume helium rain modified the interior temperature and composition profiles. Our calculation predicts zonal harmonic values to which measurements can be compared. Although some models fit the observed (pre-Juno) second- and fourth-order zonal harmonics to within their error bars, our preferred reference model predicts a fourth-order zonal harmonic whose absolute value lies above the pre-Juno error bars. This model has a dense core of about 12 Earth masses and a hydrogen–helium-rich envelope with approximately three times solar metallicity.

  4. Contribution to the theory of propeller vibrations

    NASA Technical Reports Server (NTRS)

    Liebers, F

    1930-01-01

    This report presents a calculation of the torsional frequencies of revolving bars with allowance for the air forces, together with a calculation of the flexural (bending) frequencies of revolving straight or tapered bars in terms of the angular velocity of revolution, carried out on the basis of Rayleigh's variational principle. There is also a discussion of error estimation and the accuracy of the results. The author then applies the theory to airplane propellers and discusses the susceptibility of propellers to damage through vibrations caused by nonuniform loading.

  5. Individual- and School-Level Factors Related to School-Based Salad Bar Use among Children and Adolescents

    ERIC Educational Resources Information Center

    Spruance, Lori Andersen; Myers, Leann; O'Malley, Keelia; Rose, Donald; Johnson, Carolyn C.

    2017-01-01

    Background: Consumption levels of fruits and vegetables (F/V) among children/adolescents are low. Programs like school-based salad bars (SB) provide children/adolescents increased F/V access. Aims: The purpose of this study was to examine the relationship between SB use and individual and school-level factors among elementary and secondary school…

  6. Tutorial: Asteroseismic Stellar Modelling with AIMS

    NASA Astrophysics Data System (ADS)

    Lund, Mikkel N.; Reese, Daniel R.

    The goal of AIMS (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm; interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. AIMS is mostly written in Python with a modular structure to facilitate contributions from the community; only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
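    Inside a single 2-D simplex, the linear barycentric interpolation mentioned above reduces to solving for the barycentric coordinates of the query point and weighting the vertex values accordingly. A minimal triangle sketch (vertices and values are made up; linear functions are reproduced exactly):

```python
def barycentric_interpolate(p, tri, values):
    """Linear interpolation inside a triangle: compute the barycentric
    coordinates (l1, l2, l3) of point p by Cramer's rule, then take the
    weighted average of the three vertex values."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    x, y = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * values[0] + l2 * values[1] + l3 * values[2]

# Vertex values sampled from the linear function f(x, y) = 2x + 3y + 1,
# which barycentric interpolation must reproduce exactly:
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [1.0, 3.0, 4.0]
v = barycentric_interpolate((0.25, 0.25), tri, vals)   # 2*0.25 + 3*0.25 + 1 = 2.25
```

    In a model grid of higher dimension the same formula applies with a simplex of d+1 vertices; the Delaunay tessellation is what picks the matching simplex for each query point.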

  7. A supersensitive silver nanoprobe based aptasensor for low cost detection of malathion residues in water and food samples

    NASA Astrophysics Data System (ADS)

    Bala, Rajni; Mittal, Sherry; Sharma, Rohit K.; Wangoo, Nishima

    2018-05-01

    In the present study, we report highly sensitive, rapid and low cost colorimetric monitoring of malathion (an organophosphate insecticide) employing a basic hexapeptide, a malathion-specific aptamer (oligonucleotide) and silver nanoparticles (AgNPs) as a nanoprobe. The AgNPs interact with the aptamer and peptide to give different optical responses depending on the presence or absence of malathion. In the absence of malathion the nanoparticles remain yellow, owing to the binding of the aptamer with the peptide, which would otherwise aggregate the particles through charge-based interactions. In the presence of malathion, agglomeration of the particles occurs, turning the solution orange. Furthermore, the developed aptasensor was successfully applied to detect malathion in various water samples and in apple. The detection offered high recoveries in the range of 89-120%, with relative standard deviations within 2.98-4.78%. The proposed methodology exhibited excellent selectivity, and a very low limit of detection (0.5 pM) was achieved. The facile, rapid and low cost silver nanoprobe based on an aptamer and a peptide proved to be potentially applicable for highly selective and sensitive colorimetric sensing of trace levels of malathion in complex environmental samples. Supplementary figure captions: Figure S2, HPLC chromatogram of KKKRRR. Figure S3, UV-visible spectra of AgNPs in the presence of increasing peptide concentrations; the inset shows the corresponding color changes for peptide concentrations from 0.1 mM to 100 mM (a to e). Figure S4, UV-visible spectra of AgNPs in the presence of 10 mM peptide and varying aptamer concentrations from 10 nM to 1000 nM (a to e); the inset shows the corresponding color changes. Figure S5, interference studies: ratio A520 nm/A390 nm of AgNPs in the presence of 10 mM peptide, 500 nM aptamer, 0.5 nM malathion and 0.5 mM interfering components (sodium, potassium, calcium, alanine, arginine, aspartic acid, ascorbic acid and glucose). Figures S6-S8, absorbance spectra of AgNPs with increasing malathion concentrations (0.01 nM to 0.75 nM, a to g) and the corresponding calibration plots for spiked lake water, tap water and apple samples, with insets showing the respective color changes; each point represents an average of three individual measurements and error bars indicate standard deviation.

  8. Success and High Predictability of Intraorally Welded Titanium Bar in the Immediate Loading Implants

    PubMed Central

    Fogli, Vaniel; Camerini, Michele; Carinci, Francesco

    2014-01-01

    Implant failure may be caused by micromotion and stress exerted on implants during the phase of bone healing. This is especially true for implants placed in atrophic ridges, so primary stabilization and fixation of the implants are an important goal that can also allow immediate loading and oral rehabilitation on the same day as surgery. This goal may be achieved by welding titanium bars onto the implant abutments: the procedure can be performed directly in the mouth, eliminating the possibility of errors or distortions due to impression taking. This paper describes a case report and the most recent data on the long-term success and high predictability of intraorally welded titanium bars in immediate loading implants. PMID:24963419

  9. Quark fragmentation into spin-triplet S -wave quarkonium

    DOE PAGES

    Bodwin, Geoffrey T.; Chung, Hee Sok; Kim, U-Rae; ...

    2015-04-08

    We compute fragmentation functions for a quark to fragment to a quarkonium through an S-wave spin-triplet heavy quark-antiquark pair. We consider both color-singlet and color-octet heavy quark-antiquark (QQ̄) pairs. We give results for the case in which the fragmenting quark and the quark that is a constituent of the quarkonium have different flavors and for the case in which these quarks have the same flavor. Our results for the sum over all spin polarizations of the QQ̄ pairs confirm previous results. Our results for longitudinally polarized QQ̄ pairs agree with previous calculations for the same-flavor cases and correct an error in a previous calculation for the different-flavor case.

  10. A model for flexi-bar to evaluate intervertebral disc and muscle forces in exercises.

    PubMed

    Abdollahi, Masoud; Nikkhoo, Mohammad; Ashouri, Sajad; Asghari, Mohsen; Parnianpour, Mohamad; Khalaf, Kinda

    2016-10-01

    This study developed and validated a lumped parameter model for the FLEXI-BAR, a popular training instrument that provides vibration stimulation. The model, which can be used in conjunction with musculoskeletal-modeling software for quantitative biomechanical analyses, consists of 3 rigid segments, 2 torsional springs, and 2 torsional dashpots. Two different sets of experiments were conducted to determine the model's key parameters, including the stiffness of the springs and the damping ratio of the dashpots. In the first set of experiments, the free vibration of the FLEXI-BAR with an initial displacement at its end was considered, while in the second set, forced oscillations of the bar were studied. The properties of the mechanical elements in the lumped parameter model were derived utilizing a non-linear optimization algorithm that minimized the difference between the model's predictions and the experimental data. The results showed that the model is valid (8% error) and can be used for simulating exercises with the FLEXI-BAR for excitations in the range of the natural frequency. The model was then validated in combination with the AnyBody musculoskeletal modeling software, where lumbar disc, spinal muscle, and hand muscle forces were determined during different FLEXI-BAR exercise simulations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Bar patronage and motivational predictors of drinking in the San Francisco Bay Area: Gender and sexual identity differences

    PubMed Central

    Trocki, Karen; Drabble, Laurie

    2009-01-01

    Prior research has found differences in heavier drinking by both gender and sexual orientation. Heavier drinking and alcohol-related problems appear to be higher in sexual minority populations, particularly among women. It has been suggested that these differences may be explained in part by socializing in bars and other public drinking venues. This paper explores bar patronage, alcohol consumption, alcohol-related problems, and reasons for going to bars in relation to both gender and sexual orientation based on two different samples: respondents from a random digit dial (RDD) probability study of 1,043 households in Northern California, and 569 individuals surveyed exiting 25 different bars in the same three counties that constituted the RDD sample. Bar patrons, in most instances, regardless of gender or sexual identity, were at much higher risk of excessive consumption and related problems and consequences. On several key variables, women from the bar patron sample exceeded the problem rates of men in the general population. Bisexual women and men were elevated on a majority of the alcohol measures relative to heterosexuals. Measures of heavier drinking and alcohol-related problems were also elevated among lesbians compared with heterosexual women. Lesbian and gay respondents were less likely to endorse various motives as being important to their bar patronage. Finally, two of the bar motive variables, sensation-seeking and mood-change motives, were particularly predictive of heavier drinking and alcohol-related problems; social motives did not predict problems. The findings suggest that bar patrons constitute a population requiring special attention in prevention and intervention, tailored to their interests and taking into consideration their unique motivational needs. PMID:19248392

  12. Path synthesis of four-bar mechanisms using synergy of polynomial neural network and Stackelberg game theory

    NASA Astrophysics Data System (ADS)

    Ahmadi, Bahman; Nariman-zadeh, Nader; Jamali, Ali

    2017-06-01

    In this article, a novel approach based on game theory is presented for multi-objective optimal synthesis of four-bar mechanisms. The multi-objective optimization problem is modelled as a Stackelberg game. The more important objective function, tracking error, is considered as the leader, and the other objective function, deviation of the transmission angle from 90° (TA), is considered as the follower. In a new approach, a group method of data handling (GMDH)-type neural network is also utilized to construct an approximate model for the rational reaction set (RRS) of the follower. Using the proposed game-theoretic approach, the multi-objective optimal synthesis of a four-bar mechanism is then cast into a single-objective optimal synthesis using the leader variables and the obtained RRS of the follower. The superiority of using the synergy game-theoretic method of Stackelberg with a GMDH-type neural network is demonstrated for two case studies on the synthesis of four-bar mechanisms.

  13. The relationship between group size, intoxication and continuing to drink after bar attendance.

    PubMed

    Reed, Mark B; Clapp, John D; Martell, Brandi; Hidalgo-Sotelo, Alexandra

    2013-11-01

    The present study was undertaken to explore multilevel determinants of planning to continue to drink alcohol after leaving public drinking events. We assessed whether individual-level factors, group-related factors, or event-level bar characteristics were associated with post-bar drinking. We recruited a total of 642 participants from 30 participating bars in urban Southern California. Groups arriving to patronize a bar were interviewed upon entrance and exit. Given the nesting of the data, we employed a multilevel modeling approach to the analysis. More than one-third (40%) of our sample reported the intention to continue drinking as they exited the bar. Results of our multilevel model indicated eight individual-level variables significantly associated with intending to continue to drink. Time of night moderated the relationship between BrAC change and intentions to continue to drink. Although none of the group factors was significant in our model, a significant cross-level interaction between BrAC change and number of group members indicated that the effect of intoxication on planning to continue to drink increases as the number of group members increases. At the bar level, the presence of temporary bars and server offers of non-alcoholic drinks significantly decreased intentions to continue to drink. Given the large percentage of participants who reported the intention to continue drinking after exiting a bar, this study draws attention to the fact that field studies of drinking behavior may assess drinking mid-event rather than at the end of a drinking event. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. Critical error fields for locked mode instability in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R.J.; Fitzpatrick, R.; Hender, T.C.

    1992-07-01

    Otherwise stable discharges can become nonlinearly unstable to disruptive locked modes when subjected to a resonant m=2, n=1 error field from irregular poloidal field coils, as in DIII-D [Nucl. Fusion 31, 875 (1991)], or from resonant magnetic perturbation coils, as in COMPASS-C [Proceedings of the 18th European Conference on Controlled Fusion and Plasma Physics, Berlin (EPS, Petit-Lancy, Switzerland, 1991), Vol. 15C, Part II, p. 61]. Experiments in Ohmically heated deuterium discharges with q ≈ 3.5, n̄ ≈ 2×10^19 m^-3, and B_T ≈ 1.2 T show that a much larger relative error field (B_r21/B_T ≈ 1×10^-3) is required to produce a locked mode in the small, rapidly rotating plasma of COMPASS-C (R_0 = 0.56 m, f ≈ 13 kHz) than in the medium-sized plasmas of DIII-D (R_0 = 1.67 m, f ≈ 1.6 kHz), where the critical relative error field is B_r21/B_T ≈ 2×10^-4. This dependence of the instability threshold is explained by a nonlinear tearing theory of the interaction of resonant magnetic perturbations with rotating plasmas, which predicts that the critical error field scales as (fR_0/B_T)^(4/3) n̄^(2/3). Extrapolating from existing devices, the predicted critical field for locked modes in Ohmic discharges on the International Thermonuclear Experimental Reactor (ITER) [Nucl. Fusion 30, 1183 (1990)] (f = 0.17 kHz, R_0 = 6.0 m, B_T = 4.9 T, n̄ = 2×10^19 m^-3) is B_r21/B_T ≈ 2×10^-5.
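
    The quoted ITER extrapolation can be sanity-checked with a few lines using the scaling B_r21/B_T ∝ (fR_0/B_T)^(4/3) n̄^(2/3). One caveat: the abstract does not state the DIII-D toroidal field, so a typical value of about 2.1 T is assumed here; the result should therefore be read as an order-of-magnitude check only.

```python
# Rough check of the ITER extrapolation from the DIII-D point using the quoted
# scaling B_r21/B_T ~ (f*R0/B_T)^(4/3) * nbar^(2/3). The DIII-D toroidal field
# is NOT given in the abstract; 2.1 T is an assumed typical value.
diiid = dict(f=1.6e3, R0=1.67, BT=2.1, nbar=2e19, crit=2e-4)  # crit = B_r21/B_T
iter_ = dict(f=0.17e3, R0=6.0, BT=4.9, nbar=2e19)

def scale(p):
    return (p["f"] * p["R0"] / p["BT"]) ** (4.0 / 3.0) * p["nbar"] ** (2.0 / 3.0)

crit_iter = diiid["crit"] * scale(iter_) / scale(diiid)
print(f"predicted ITER critical B_r21/B_T ~ {crit_iter:.1e}")  # of order 2e-5
```

    The density terms cancel here (both machines are taken at n̄ = 2×10^19 m^-3), so the extrapolation is driven by the slower rotation, larger major radius, and stronger field of ITER.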

  15. Monitoring Changes of Tropical Extreme Rainfall Events Using Differential Absorption Barometric Radar (DiBAR)

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Harrah, Steven; Lawrence, R. Wes; Hu, Yongxiang; Min, Qilong

    2015-01-01

    This work studies the potential of monitoring changes in tropical extreme rainfall events such as tropical storms from space using a Differential-absorption BArometric Radar (DiBAR) operating in the 50-55 gigahertz O2 absorption band to remotely measure sea surface air pressure. Air pressure is among the most important variables affecting atmospheric dynamics, and currently it can only be measured by limited in-situ observations over oceans. Analyses show that with the proposed radar the errors in instantaneous (averaged) pressure estimates can be as low as approximately 5 millibars (approximately 1 millibar) under all weather conditions. With these sea level pressure measurements, the forecasts, analyses and understanding of these extreme events on both short and long time scales can be improved. Severe weather, especially hurricanes, is listed as one of the core areas needing improved observations and predictions in the WCRP (World Climate Research Program) and the NASA Decadal Survey (DS), and has major impacts on public safety and national security through disaster mitigation. Since the development of the DiBAR concept about a decade ago, our team has made substantial progress in advancing the concept. Our feasibility assessment clearly shows the potential of sea surface barometry using existing radar technologies. We have developed a DiBAR system design, fabricated a Prototype DiBAR (P-DiBAR) for proof of concept, and conducted lab, ground and airborne P-DiBAR tests. The flight test results are consistent with our instrumentation goals. Observational system simulation experiments for space DiBAR performance show substantial improvements in tropical storm predictions, not only for hurricane track and position but also for hurricane intensity. DiBAR measurements will lead us to an unprecedented level of prediction and knowledge of tropical extreme rainfall weather and climate conditions.

  16. Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado

    NASA Astrophysics Data System (ADS)

    Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.

    2013-05-01

    The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed. The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards.
Constituent loading within the study reach is dominated by effluent from the Pennsylvania Mine, with over 50% of the Cd, Cu, Fe, Mn, and Zn loads attributable to a collapsed adit near the top of the study reach. These estimates of mass load may underestimate the effect of the Pennsylvania Mine as leakage from underground mine workings may contribute to metal loads that are currently attributed to the wetland area. This potential leakage confounds the evaluation of remedial options and additional research is needed to determine the magnitude and location of the leakage.
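
    The replicate idea above reduces to a simple computation: at each site, load is discharge times concentration, and repeating the synoptic campaign yields a mean load with an error bar that lumps together laboratory, sampling, and temporal variability. A minimal sketch, with hypothetical numbers rather than Peru Creek data:

```python
# Illustrative sketch of the replicate approach: constituent load = Q * C at a
# site; replicate campaigns give a mean load and an aggregate error bar.
# The discharge/concentration values below are hypothetical.
import statistics

# Three replicate campaigns at one site: (Q in L/s, Zn concentration in mg/L)
replicates = [(55.0, 1.9), (52.0, 2.1), (57.0, 2.0)]

loads = [q * c * 86400 / 1e6 for q, c in replicates]  # kg/day per replicate
mean_load = statistics.mean(loads)
sd_load = statistics.stdev(loads)  # aggregate uncertainty across replicates

print(f"Zn load = {mean_load:.1f} +/- {sd_load:.1f} kg/day")
```

    Repeating this at every synoptic site yields a spatial loading profile with error bars, which is what lets source areas be distinguished from sampling noise.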

  17. Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado

    USGS Publications Warehouse

    Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.

    2013-01-01

    The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed. The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards.
Constituent loading within the study reach is dominated by effluent from the Pennsylvania Mine, with over 50% of the Cd, Cu, Fe, Mn, and Zn loads attributable to a collapsed adit near the top of the study reach. These estimates of mass load may underestimate the effect of the Pennsylvania Mine as leakage from underground mine workings may contribute to metal loads that are currently attributed to the wetland area. This potential leakage confounds the evaluation of remedial options and additional research is needed to determine the magnitude and location of the leakage.

  18. Hubble Space Telescope secondary mirror vertex radius/conic constant test

    NASA Technical Reports Server (NTRS)

    Parks, Robert

    1991-01-01

    The Hubble Space Telescope backup secondary mirror was tested to determine the vertex radius and conic constant. Three completely independent tests (to the same procedure) were performed. Similar measurements in the three tests were highly consistent. The values obtained for the vertex radius and conic constant were the nominal design values within the error bars associated with the tests. Visual examination of the interferometric data did not show any measurable zonal figure error in the secondary mirror.

  19. Uncertainty quantification in application of the enrichment meter principle for nondestructive assay of special nuclear material

    DOE PAGES

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    2015-09-05

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantification of SNM at nuclear facilities for safeguards. No assay method is complete without "error bars," which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of "random" and "systematic" components, and then specify error bars for the total mass estimate over multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the Guide to the Expression of Uncertainty in Measurement (GUM) can be used for NDA. We also propose improvements over the GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
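
    The conventional "random plus systematic" error bar mentioned above can be sketched in a few lines: per-item random errors average down across items (they add in quadrature), while a systematic error shared by all items, such as a calibration bias, does not. The masses and uncertainty values below are illustrative assumptions, not the paper's NaI case-study numbers.

```python
# Sketch of a conventional NDA error bar for the total U-235 mass over several
# items: random components add in quadrature, a shared systematic component
# shifts every item the same way and so adds linearly. Values are illustrative.
import math

masses = [201.0, 198.5, 202.3, 199.8]   # assayed U-235 mass per item, g
sigma_r = 2.0   # per-item random uncertainty, g (1-sigma)
sigma_s = 1.5   # systematic uncertainty common to all items, g per item

n = len(masses)
total = sum(masses)
sigma_total = math.sqrt(n * sigma_r**2 + (n * sigma_s) ** 2)
print(f"total = {total:.1f} g +/- {sigma_total:.1f} g (1-sigma)")
```

    Note how the systematic term dominates as n grows; this is exactly why the paper stresses rigorous UQ for calibration and model error rather than treating all errors as random.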

  20. Rapid social network assessment for predicting HIV and STI risk among men attending bars and clubs in San Diego, California.

    PubMed

    Drumright, Lydia N; Frost, Simon D W

    2010-12-01

    To test a rapid assessment tool for determining social network size, and to test whether social networks with a high density of HIV/sexually transmitted infection (STI) or substance-using persons independently predict HIV and STI status among men who have sex with men (MSM). We interviewed 609 MSM from 14 bars in San Diego, California, USA, using an enhanced version of the Priorities for Local AIDS Control Efforts (PLACE) methodology. Social network size was assessed using a series of 19 questions of the form 'How many people do you know that have the name X?', where X included specific male and female names (eg, Keith), use of illicit substances, and HIV status. Generalised linear models were used to estimate average and group-specific network sizes, and their association with HIV status, STI history and methamphetamine use. Despite possible errors in ascertaining network size, average reported network sizes were larger for larger groups. Those who reported having HIV infection or a past STI reported significantly more HIV-infected and methamphetamine- or popper-using individuals in their social networks. There was a dose-dependent effect of the network size of HIV-infected individuals on self-reported HIV status, past STI and use of methamphetamine in the last 12 months, after controlling for age, ethnicity and number of sexual partners in the last year. Relatively simple measures of social networks are associated with HIV/STI risk, and may provide a useful tool for targeting HIV/STI surveillance and prevention.

  1. Predicting vertical jump height from bar velocity.

    PubMed

    García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén

    2015-06-01

    The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration dropped below gravity (-9.81 m·s(-2)). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the National Slovenian swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) explained 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) explained 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity was observed in the differences between FPV and Vtake-off (r(2) = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r(2) = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: (1) Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. (2) The relationship between the point at which bar acceleration is less than -9.81 m·s(-2) and the real take-off is affected by the velocity of movement. (3) Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal for monitoring ballistic exercise performance.
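
    The two regression equations reported above are directly usable: given a bar velocity from a linear transducer, they return an estimated jump height in centimetres. A minimal sketch applying both (the coefficients are the ones quoted in the abstract; the example velocities are made up):

```python
# Applying the reported regressions: jump height (cm) from bar velocity (m/s).
# Coefficients are the ones quoted in the abstract; velocities are examples.
def height_from_vmax(v):   # y = 16.577x - 16.384, SEE ~1.47 cm
    return 16.577 * v - 16.384

def height_from_fpv(v):    # y = 12.828x - 6.504, SEE ~1.66 cm
    return 12.828 * v - 6.504

for v in (1.8, 2.0, 2.2):  # example peak bar velocities in a loaded squat jump
    print(f"Vmax={v:.1f} m/s -> {height_from_vmax(v):.1f} cm "
          f"(FPV-based: {height_from_fpv(v):.1f} cm)")
```

    As the abstract notes, the Vmax-based estimate is the one with homogeneously distributed errors, so it is the safer choice of the two in practice.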

  2. Predicting Vertical Jump Height from Bar Velocity

    PubMed Central

    García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén

    2015-01-01

    The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration dropped below gravity (-9.81 m·s-2). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the National Slovenian swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) explained 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) explained 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity was observed in the differences between FPV and Vtake-off (r2 = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r2 = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: (1) Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. (2) The relationship between the point at which bar acceleration is less than -9.81 m·s-2 and the real take-off is affected by the velocity of movement. (3) Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal for monitoring ballistic exercise performance. PMID:25983572

  3. Influence of Culture and Personality on Determinants of Cognitive Processes Under Conditions of Uncertainty

    DTIC Science & Technology

    2004-05-14

    Tal, Y., Raviv, A., & Spitzer, A., 1999). Janis and Mann (1977) suggested that situational conditions determine how individuals cope with decision...and ignore contrary information relative to non-stressful conditions, which can have disastrous consequences. Bar-Tal, Raviv, and Spitzer (1999...1176. Bar-Tal, Y., Raviv, A., & Spitzer, A. (1999). The need and ability to achieve cognitive structuring: Individual differences that moderate

  4. Prey preference of Myopopone castanea (hymenoptera: formicidae) toward larvae Oryctes rhinoceros Linn (coleoptera: scarabidae)

    NASA Astrophysics Data System (ADS)

    Widihastuty; Tobing, M. C.; Marheni; Kuswardani, R. A.

    2018-02-01

    Myopopone castanea (Hymenoptera: Formicidae) is a predator of the larvae of Oryctes rhinoceros Linn (Coleoptera: Scarabidae), a pest of oil palm. These ants are able to prey on all larval stadia of O. rhinoceros. This study was conducted to determine the prey preference of M. castanea toward O. rhinoceros larvae. The study was conducted using a factorial completely randomized design with two factors (with log and without log) and five replications. Preference was tested with both choice and no-choice tests. In the no-choice test with a log, M. castanea preferred first-instar larvae of O. rhinoceros (X̄ = 2.6 individuals), with a preference index of 0.194; without a log, M. castanea preferred both first- and second-instar larvae (X̄ = 4.6 individuals), with a preference index of 0.197. In the choice test with a log, M. castanea preferred first-instar larvae of O. rhinoceros (X̄ = 2.6 individuals), with a preference index of 0.35; without a log, it preferred second-instar larvae (X̄ = 1.4 individuals), with a preference index of 0.189. Both first- and second-instar larvae of O. rhinoceros were preferred by the predator M. castanea.

  5. Effect of Bar-code Technology on the Incidence of Medication Dispensing Errors and Potential Adverse Drug Events in a Hospital Pharmacy

    PubMed Central

    Poon, Eric G; Cina, Jennifer L; Churchill, William W; Mitton, Patricia; McCrea, Michelle L; Featherstone, Erica; Keohane, Carol A; Rothschild, Jeffrey M; Bates, David W; Gandhi, Tejal K

    2005-01-01

    We performed a direct observation pre-post study to evaluate the impact of barcode technology on medication dispensing errors and potential adverse drug events in the pharmacy of a tertiary-academic medical center. We found that barcode technology significantly reduced the rate of target dispensing errors leaving the pharmacy by 85%, from 0.37% to 0.06%. The rate of potential adverse drug events (ADEs) due to dispensing errors was also significantly reduced by 63%, from 0.19% to 0.069%. In a 735-bed hospital where 6 million doses of medications are dispensed per year, this technology is expected to prevent about 13,000 dispensing errors and 6,000 potential ADEs per year. PMID:16779372

  6. Evaluating the Claims of Network Centric Warfare

    DTIC Science & Technology

    2005-12-01

    judgments (see Bar-Tal, Raviv, & Spitzer, 1999). However, not everyone's reactions to stressors are the same. A 12-month team performance study...45-58. Bar-Tal, Y., Raviv, A., & Spitzer, A. (1999). The need and ability to achieve cognitive structuring: Individual differences that moderate...definiteness, and regularity" (Bar-Tal, 1994, p. 45). Stress is one of the human responses to uncertainty. Stress can be initiated by a distinct event

  7. Measurement of sigma(Lambda(b)0) / sigma(anti-B 0) x B(Lambda0(b) ---> Lambda+(c) pi-) / B(anti-B0 ---> D+ pi-) in p anti-p collisions at S**(1/2) = 1.96-TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abulencia, A.; Acosta, D.; Adelman, Jahred A.

    2006-01-01

    The authors present the first observation of the baryon decay Λ_b^0 → Λ_c^+ π^-, followed by Λ_c^+ → pK^-π^+, in 106 pb^-1 of pp̄ collisions at √s = 1.96 TeV in the CDF experiment. In order to reduce systematic error, the measured rate for the Λ_b^0 decay is normalized to the kinematically similar meson decay B̄^0 → D^+π^-, followed by D^+ → π^+K^-π^+. They report the ratio of production cross sections (σ) times the ratio of branching fractions (B) for the momentum region p_T > 6 GeV/c and pseudorapidity range |η| < 1.3: σ(pp̄ → Λ_b^0 X)/σ(pp̄ → B̄^0 X) × B(Λ_b^0 → Λ_c^+π^-)/B(B̄^0 → D^+π^-) = 0.82 ± 0.08(stat) ± 0.11(syst) ± 0.22(B(Λ_c^+ → pK^-π^+)).

  8. Driving out errors through tight integration between software and automation.

    PubMed

    Reifsteck, Mark; Swanson, Thomas; Dallas, Mary

    2006-01-01

    A clear case has been made for using clinical IT to improve medication safety, particularly bar-code point-of-care medication administration and computerized practitioner order entry (CPOE) with clinical decision support. The equally important role of automation has been overlooked. When the two are tightly integrated, with pharmacy information serving as a hub, the distinctions between software and automation become blurred. A true end-to-end medication management system drives out errors from the dockside to the bedside. Presbyterian Healthcare Services in Albuquerque has been building such a system since 1999, beginning by automating pharmacy operations to support bar-coded medication administration. Encouraged by those results, it then began layering on software to further support clinician workflow and improve communication, culminating with the deployment of CPOE and clinical decision support. This combination, plus a hard-wired culture of safety, has resulted in a dramatically lower mortality and harm rate that could not have been achieved with a partial solution.

  9. Machine learning models for lipophilicity and their domain of applicability.

    PubMed

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-01-01

    Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity have usually been trained on small, neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm--a Gaussian process model--this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from the preceding months (including compounds from new projects), 81% were predicted correctly within 1 log unit, compared with only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model-based, ensemble-based, and distance-based approaches), and investigate how well they quantify the domain of applicability of each model.

  10. Active Sensing Air Pressure Using Differential Absorption Barometric Radar

    NASA Astrophysics Data System (ADS)

    Lin, B.

    2016-12-01

    Tropical storms and other severe weather cause huge losses of life and property, have major impacts on public safety and national security, and their observation and prediction need to be significantly improved. This effort aims to develop a feasible active microwave approach that measures surface air pressure, especially over open seas, from space using a Differential-absorption BArometric Radar (DiBAR) operating in the 50-55 GHz O2 absorption band, in order to constrain the assimilated dynamic fields of numerical weather prediction (NWP) models close to actual conditions. Air pressure is the most important variable driving atmospheric dynamics, and currently it can only be measured by limited in-situ observations over oceans; even over land there is no uniform coverage of surface air pressure measurements. Analyses show that with the proposed space radar the errors in instantaneous (averaged) pressure estimates can be as low as ~4 mb (~1 mb) under all weather conditions. The NASA Langley research team has made substantial progress in advancing the DiBAR concept. The feasibility assessment clearly shows the potential of surface barometry using existing radar technologies. The team has also developed a DiBAR system design, fabricated a Prototype DiBAR (P-DiBAR) for proof of concept, and conducted laboratory, ground and airborne P-DiBAR tests. The flight test results are consistent with the instrumentation goals. The precision and accuracy of radar surface pressure measurements are within the range of the theoretical analysis of the DiBAR concept. Observational system simulation experiments for space DiBAR performance, based on existing DiBAR technology and capability, show substantial improvements in tropical storm predictions, not only for hurricane track and position but also for hurricane intensity. DiBAR measurements will provide an unprecedented level of prediction of, and knowledge about, global extreme weather and climate conditions.

  11. A young person's game: immersion and distancing in bar work.

    PubMed

    Conway, Thomas; MacNeela, Pádraig

    2012-01-01

    Previous research indicates that bar workers report high levels of alcohol consumption, but the bar work experience itself has been little studied as a means to understand health threats associated with this job role. The subjective experience and meaning of bar work was explored in this study by interviewing current and ex-bar workers from a district in an Irish city that had a high density of bars and a busy tourism industry. A total of 12 participants took part in focus groups (FGs) and seven in individual interviews. Four themes were identified in a thematic analysis. The central depiction of bar work was of an initial immersion in an intensive lifestyle characterised by heavy drinking, with subsequent distancing from the extremes of the lifestyle. The participants affiliated strongly with the bar work occupational identity, which included group drinking during work, after work and on time off. The bar work lifestyle was most intense in the 'superpub' environment, characterised by permissive staff drinking norms and reported stress. Although an important identity, bar work was ultimately a transient role. The findings are considered in relation to research on occupation-specific stress and alcohol use, social identity and developmental needs in young adulthood.

  12. The economic impact of a smoke-free bylaw on restaurant and bar sales in Ottawa, Canada.

    PubMed

    Luk, Rita; Ferrence, Roberta; Gmel, Gerhard

    2006-05-01

    On 1 August 2001, the City of Ottawa (Canada's Capital) implemented a smoke-free bylaw that completely prohibited smoking in work-places and public places, including restaurants and bars, with no exemption for separately ventilated smoking rooms. This paper evaluates the effects of this bylaw on restaurant and bar sales. DATA AND MEASURES: We used retail sales tax data from March 1998 to June 2002 to construct two outcome measures: the ratio of licensed restaurant and bar sales to total retail sales and the ratio of unlicensed restaurant sales to total retail sales. Restaurant and bar sales were subtracted from total retail sales in the denominator of these measures. We employed an interrupted time-series design. Autoregressive integrated moving average (ARIMA) intervention analysis was used to test for three possible impacts that the bylaw might have on the sales of restaurants and bars. We repeated the analysis using regression with autoregressive moving average (ARMA) errors method to triangulate our results. Outcome measures showed declining trends at baseline before the bylaw went into effect. Results from ARIMA intervention and regression analyses did not support the hypotheses that the smoke-free bylaw had an impact that resulted in (1) abrupt permanent, (2) gradual permanent or (3) abrupt temporary changes in restaurant and bar sales. While a large body of research has found no significant adverse impact of smoke-free legislation on restaurant and bar sales in the United States, Australia and elsewhere, our study confirms these results in a northern region with a bilingual population, which has important implications for impending policy in Europe and other areas.
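
    The interrupted time-series design described above can be sketched numerically. The fragment below is a minimal ordinary-least-squares version with a level-shift dummy at the intervention date; it deliberately omits the ARIMA/ARMA error modelling the authors used, and all variable names are ours, not the paper's.

```python
import numpy as np

def its_level_shift(y, t0):
    """Fit y_t = b0 + b1*t + b2*step_t by ordinary least squares,
    where step_t = 1 for t >= t0 (the bylaw's effective date).
    b2 then estimates an abrupt, permanent level change."""
    t = np.arange(len(y), dtype=float)
    step = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, step])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, baseline trend, level shift]

# Synthetic monthly sales ratio with a known -0.5 level shift at month 40
rng = np.random.default_rng(0)
t = np.arange(60)
y = 10.0 - 0.02 * t - 0.5 * (t >= 40) + rng.normal(0.0, 0.05, 60)
b0, b1, b2 = its_level_shift(y, 40)
```

    A non-significant b2 (relative to its standard error) corresponds to the paper's finding of no abrupt permanent impact; serially correlated residuals are why the authors used ARIMA intervention analysis instead of plain OLS.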

  13. Nurses' attitudes toward the use of the bar-coding medication administration system.

    PubMed

    Marini, Sana Daya; Hasman, Arie; Huijer, Huda Abu-Saad; Dimassi, Hani

    2010-01-01

    This study determines nurses' attitudes toward bar-coding medication administration system use. Factors underlying the successful use of bar-coding medication administration systems, viewed as a connotative indicator of users' attitudes, were used to gather data describing the attitudinal basis for system adoption and use decisions in terms of subjective satisfaction. Only 67 nurses in the United States had the chance to respond to the e-questionnaire posted on the CARING list server during June and July 2007. Participants rated their satisfaction with bar-coding medication administration system use based on system functionality, usability, and its positive/negative impact on nursing practice. Results showed, to some extent, a positive attitude, but the image profile draws attention to nurses' concerns about improving certain system characteristics. Nurses with high bar-coding medication administration system skills revealed a more negative perception of the system. The reasons underlying dissatisfaction among skillful users are an important source of knowledge that can be helpful for system development as well as system deployment. Accordingly, strengthening system usability by magnifying its ability to eliminate medication errors and their contributing factors, maximizing system functionality by ascertaining its power as an extra eye in the medication administration process, and impacting clinical nursing practice positively (by being helpful to nurses, speeding up the medication administration process, and being user-friendly) can offer a congenial setting for establishing a positive attitude toward system use, which in turn leads to successful bar-coding medication administration system use.

  14. Pleistocene barrier bar seaward of ooid shoal complex near Miami, Florida

    USGS Publications Warehouse

    Halley, Robert B.; Shinn, Eugene A.; Hudson, J. Harold; Lidz, Barbara H.

    1977-01-01

    An ooid sand barrier bar of Pleistocene age was deposited along the seaward side of an ooid shoal complex southwest of Miami, Florida. The bar is 35 km long, about 0.8 km wide, elongate parallel with the trend of the ooid shoal complex and perpendicular to channels between individual shoals. A depression 1.6 km wide, interpreted as a back-barrier channel, isolates the bar from the ooid shoals. During sea-level fall and subaerial exposure of the bar, the ooid sand was cemented in place, preventing migration of the barrier. No Holocene analogue of this sand body is recognized, perhaps because of the relative youthfulness of Holocene ooid shoals. This Pleistocene ooid shoal complex, with its reservoir-size barrier bar, may serve as a refined model for exploration in ancient ooid sand belts.

  15. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    PubMed

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, and absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  16. Experimental and artificial neural network based prediction of performance and emission characteristics of DI diesel engine using Calophyllum inophyllum methyl ester at different nozzle opening pressure

    NASA Astrophysics Data System (ADS)

    Vairamuthu, G.; Thangagiri, B.; Sundarapandian, S.

    2018-01-01

    The present work investigates the effect of varying Nozzle Opening Pressure (NOP) from 220 bar to 250 bar on the performance, emission and combustion characteristics of Calophyllum inophyllum Methyl Ester (CIME) in a constant-speed, Direct Injection (DI) diesel engine using an Artificial Neural Network (ANN) approach. An ANN model has been developed to predict specific fuel consumption (SFC), brake thermal efficiency (BTE), exhaust gas temperature (EGT), unburnt hydrocarbon (UBHC), CO, CO2, NOx and smoke density using load, blend (B0 and B100) and NOP as input data. A standard Back-Propagation Algorithm (BPA) is used in this model, with a Multi-Layer Perceptron (MLP) network providing the nonlinear mapping between the input and output parameters. The ANN model predicts the engine performance and exhaust emissions with correlation coefficients (R2) in the range of 0.98-1. Mean Relative Error (MRE) values are in the range of 0.46-5.8%, while the Mean Squared Errors (MSE) are found to be very low. It is evident that ANN models are reliable tools for the prediction of DI diesel engine performance and emissions. The test results show that the optimum NOP is 250 bar with B100.
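
    The fit statistics quoted above (R2, MRE, MSE) can be computed in a few lines of NumPy. These are the generic textbook definitions, assumed rather than taken from the paper:

```python
import numpy as np

def fit_metrics(y_true, y_pred):
    """Generic goodness-of-fit metrics: coefficient of determination (R^2),
    mean relative error in percent (MRE) and mean squared error (MSE)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mre = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    mse = np.mean((y_true - y_pred) ** 2)
    return r2, mre, mse
```

    For a perfect model R2 = 1 and MRE = MSE = 0; values such as R2 in 0.98-1 and MRE below ~6% correspond to the ranges reported in the abstract.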

  17. Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes

    NASA Astrophysics Data System (ADS)

    Crowell, J.; Gosnold, W. D.

    2012-12-01

    Standard procedure for measuring thermal conductivity using a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column and thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers and using the basic relationship Ksample=(Kstandard*(ΔT1+ΔT2)/2)/(ΔTsample). Sometimes samples are not large enough or of correct proportions to match the surface of the heat sink/source, however using the equations presented here the thermal conductivity of these samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratories stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of -20C to 150C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks and have a surface area of 1,772 mm2 (2.74 in2). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4 inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample Ksample=(Kstandard*Astandard*Thsample*(ΔT1+ΔT3))/(ΔTsample*Asample*2*Thstandard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. 
Measuring samples with surfaces smaller than 50% of the heat sink/source surface resulted in thermal conductivity values that were too high. The cause of the error with the smaller samples is being examined, as is the relationship between the amount of error in the thermal conductivity and the difference in surface areas. As more measurements are made, an equation to mathematically correct for the error is being developed in case a way to physically correct the problem cannot be determined.
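
    The two divided-bar relations quoted in the abstract translate directly into code. A minimal sketch (symbol names are ours; units need only be consistent); note that when the sample matches the heat sink/source in area and thickness, the adjusted form reduces to the simple one:

```python
def k_sample_simple(k_std, dT1, dT2, dT_sample):
    """Full-size sample: K_s = K_std * (dT1 + dT2) / 2 / dT_sample,
    with dT1, dT2 the temperature drops across the two standard layers."""
    return k_std * (dT1 + dT2) / 2.0 / dT_sample

def k_sample_adjusted(k_std, a_std, a_sample, th_std, th_sample,
                      dT1, dT3, dT_sample):
    """Undersized sample, per the extended relation in the abstract:
    K_s = K_std*A_std*Th_s*(dT1 + dT3) / (dT_s * A_s * 2 * Th_std)."""
    return (k_std * a_std * th_sample * (dT1 + dT3)
            / (dT_sample * a_sample * 2.0 * th_std))
```

    Per the abstract, the adjusted relation holds down to sample surfaces about 50% of the heat sink/source area; below that the computed conductivity is biased high.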

  18. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes complex when more than five precision points on the coupler are specified, and thus entails large structural error. The methodology for arriving at a better-precision solution is to use optimization. The work presented here considers methods of optimizing the structural error in a closed kinematic chain with a single degree of freedom, for generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop a five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative structural-error analysis is presented for the error optimized by the least-squares method and by the extended Freudenstein-Chebyshev method.
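
    Freudenstein-based precision-point synthesis reduces to a linear least-squares problem in three link-ratio coefficients. A hedged sketch, using one common sign convention for the Freudenstein equation (conventions vary between texts), with the least-squares residual playing the role of the structural error:

```python
import numpy as np

def freudenstein_lsq(phi, psi):
    """Least-squares estimate of the Freudenstein coefficients K1, K2, K3
    from precision points (phi_i, psi_i) = (input angle, output angle).

    One common convention of the Freudenstein equation is
        K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi).
    With five or more precision points the 3-parameter system is
    overdetermined; the residual is a measure of structural error."""
    phi = np.asarray(phi, dtype=float)
    psi = np.asarray(psi, dtype=float)
    A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones_like(phi)])
    b = np.cos(phi - psi)
    K, *_ = np.linalg.lstsq(A, b, rcond=None)
    return K, A @ K - b  # coefficients, per-point structural error
```

    The coefficients K1, K2, K3 are then inverted for the link-length ratios of the four-bar chain; with exactly three precision points the system is square and the structural error at those points is zero.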

  19. Stress tracking in thin bars by eigenstrain actuation

    NASA Astrophysics Data System (ADS)

    Schoeftner, J.; Irschik, H.

    2016-11-01

    This contribution focuses on stress tracking in slender structures. The axial stress distribution of a linear elastic bar is investigated; in particular, we seek an answer to the following question: in which manner must eigenstrains be distributed such that the axial stress in a bar equals a certain desired stress distribution, even when external forces or support excitations are present? In order to track a certain time- and space-dependent stress function, smart actuators, such as piezoelectric actuators, are needed to realize eigenstrains. Based on the equation of motion and the constitutive relation, which relate stress, strain, displacement and eigenstrains, an analytical solution for the stress tracking problem is derived. The starting point for the derivation is a semi-positive definite integral depending on the error stress, the difference between the actual stress and the desired stress. The derived stress tracking theory is verified by two examples: first, a clamped-free bar under harmonic excitation is investigated, and it is shown under which circumstances the axial stress vanishes at every location and at every time instant. The second example is a support-excited bar with an end mass, where a desired stress profile is prescribed.

  20. Technology and medication errors: impact in nursing homes.

    PubMed

    Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis

    2014-01-01

    The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed by total errors, medication error type, severity and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technology such as electronic prescribers or bar-code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.

  1. Recent Measurement of Flavor Asymmetry of Antiquarks in the Proton by Drell–Yan Experiment SeaQuest at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagai, Kei

    A measurement of the flavor asymmetry of the antiquarks ($$\\bar{d}$$ and $$\\bar{u}$$) in the proton is described in this thesis. The proton consists of three valence quarks, sea quarks, and gluons. Antiquarks in the proton are sea quarks. They are generated from gluon splitting: g → q + $$\\bar{q}$$. According to QCD (Quantum Chromodynamics), gluon splitting is independent of quark flavor, which suggests that the amounts of $$\\bar{d}$$ and $$\\bar{u}$$ in the proton should be the same. However, in 1991 the NMC experiment at CERN found, using deep inelastic scattering, that the amount of $$\\bar{d}$$ is larger than that of $$\\bar{u}$$ in the proton. This result was obtained for $$\\bar{d}$$ and $$\\bar{u}$$ integrated over Bjorken x, the fraction of the momentum of the proton carried by the parton. The NA51 experiment (x ~ 0.2) at CERN and the E866/NuSea experiment (0.015 < x < 0.35) at Fermilab measured the flavor asymmetry of the antiquarks ($$\\bar{d}$$/$$\\bar{u}$$) in the proton as a function of x using the Drell–Yan process. The experiments reported that the flavor symmetry is broken over all measured x values. Understanding the flavor asymmetry of the antiquarks in the proton is a challenge for QCD. Theoretical investigation from the first principles of QCD, such as lattice QCD calculation, is important. In addition, QCD effective models and hadron models such as the meson cloud model can also be tested against the flavor asymmetry of antiquarks. From the experimental side, it is important to measure with higher accuracy and in a wider x range. The SeaQuest (E906) experiment measures $$\\bar{d}$$/$$\\bar{u}$$ at large x (0.15 < x < 0.45) accurately to understand its behavior. The SeaQuest experiment is a Drell–Yan experiment at Fermi National Accelerator Laboratory (Fermilab). 
In the Drell–Yan process of the proton-proton reaction, an antiquark in one proton and a quark in the other proton annihilate and create a virtual photon, which then decays into a muon pair (q$$\\bar{q}$$ → γ* → µ+µ-). The SeaQuest experiment uses a 120 GeV proton beam extracted from Fermilab's Main Injector. The proton beam interacts with hydrogen and deuterium targets. The SeaQuest spectrometer detects the muon pairs from the Drell–Yan process. The $$\\bar{d}$$/$$\\bar{u}$$ ratio at 0.1 < x < 0.58 is extracted from the number of detected Drell–Yan muon pairs. After the detector construction, commissioning run and detector upgrade, the SeaQuest experiment started physics data acquisition in 2013. Three periods of physics data acquisition have been finished so far; the fourth period is in progress. The detector construction, detector performance evaluation, data taking and data analysis for the flavor asymmetry of the antiquarks $$\\bar{d}$$/$$\\bar{u}$$ in the proton are my contributions to SeaQuest. The cross-section ratio of the Drell–Yan process in p-p and p-d reactions is obtained from dimuon yields. In an experiment with high beam intensity, it is important to control the tracking efficiency of charged particles through the magnetic spectrometer. The tracking efficiency depends on the chamber occupancy (the number of hits in the drift chambers), and an appropriate method for the correction is important. A new method of correction for the tracking efficiency is developed based on the occupancy and applied to the data. This method reflects the real response of the drift chambers; therefore, the systematic error is well controlled. The flavor asymmetry of antiquarks is obtained at 0.1 < x < 0.58. At 0.1 < x < 0.45, the result is $$\\bar{d}$$/$$\\bar{u}$$ > 1. The result at 0.1 < x < 0.24 agrees with the E866 result. The result at x > 0.24, however, disagrees with the E866 result. The result at 0.45 < x < 0 the statistical errors. 
The $$\\bar{d}$$/$$\\bar{u}$$ results extracted from experiments are used to investigate the validity of the theoretical models. The present experimental result provides data points in a wide x region. It is useful for understanding the proton structure in the light of QCD and effective hadron models. The present result has a practical application as well. Antiquark distributions are important as inputs to simulations of hadron reactions, such as W± production, in various experiments. The new knowledge of antiquark distributions helps to improve the precision of the simulations.

  2. Sediment Dynamics Over a Stable Point bar of the San Pedro River, Southeastern Arizona

    NASA Astrophysics Data System (ADS)

    Hamblen, J. M.; Conklin, M. H.

    2002-12-01

    Streams of the Southwest receive enormous inputs of sediment during storm events in the monsoon season due to the high intensity rainfall and large percentages of exposed soil in the semi-arid landscape. In the Upper San Pedro River, with a watershed area of approximately 3600 square kilometers, particle size ranges from clays to boulders with large fractions of sand and gravel. This study focuses on the mechanics of scour and fill on a stable point bar. An innovative technique using seven co-located scour chains and liquid-filled, load-cell scour sensors characterized sediment dynamics over the point bar during the monsoon season of July to September 2002. The sensors were set in two transects to document sediment dynamics near the head and toe of the bar. Scour sensors record area-averaged sediment depths while scour chains measure scour and fill at a point. The average area covered by each scour sensor is 11.1 square meters. Because scour sensors have never been used in a system similar to the San Pedro, one goal of the study was to test their ability to detect changes in sediment load with time in order to determine the extent of scour and fill during monsoonal storms. Because of the predominantly unconsolidated nature of the substrate it was hypothesized that dune bedforms would develop in events less than the 1-year flood. The weak 2002 monsoon season produced only two storms that completely inundated the point bar, both less than the 1-year flood event. The first event, 34 cms, produced net deposition in areas where Johnson grass had been present and was now buried. The scour sensor at the lowest elevation, in a depression which serves as a secondary channel during storm events, recorded scour during the rising limb of the hydrograph followed by pulses we interpret to be the passage of dunes. The second event, although smaller at 28 cms, resulted from rain more than 50 km upstream and had a much longer peak and a slowly declining falling limb. 
During the second flood, several areas with buried vegetation were scoured back to their original bed elevations. Pulses of sediment passed over the sensor in the secondary channel and the sensor in the vegetated zone. Scour sensor measurements agree with data from scour chains (error +/- 3 cm) and surveys (error +/- 0.6 cm) performed before and after the two storm events, within the range of error of each method. All load sensor data were recorded at five minute intervals. Use of a smaller interval could give more details about the shapes of sediment waves and aid in bedform determination. Results suggest that dune migration is the dominant mechanism for scour and backfill in the point bar setting. Scour sensors, when coupled with surveying and/or scour chains, are a tremendous addition to the geomorphologist's toolbox, allowing unattended real-time measurements of sediment depth with time.

  3. Information technology and medication safety: what is the benefit?

    PubMed Central

    Kaushal, R; Bates, D

    2002-01-01

    

 Medication errors occur frequently and have significant clinical and financial consequences. Several types of information technologies can be used to decrease rates of medication errors. Computerized physician order entry with decision support significantly reduces serious inpatient medication error rates in adults. Other available information technologies that may prove effective for inpatients include computerized medication administration records, robots, automated pharmacy systems, bar coding, "smart" intravenous devices, and computerized discharge prescriptions and instructions. In outpatients, computerization of prescribing and patient oriented approaches such as personalized web pages and delivery of web based information may be important. Public and private mandates for information technology interventions are growing, but further development, application, evaluation, and dissemination are required. PMID:12486992

  4. Should Persons with Contagious Diseases Be Barred from School?

    ERIC Educational Resources Information Center

    Roe, Richard L.

    1987-01-01

    Reviews recent court decisions regarding whether individuals with contagious diseases may be barred from public schools. Devotes specific attention to the issue of whether certain communicable diseases such as tuberculosis and Acquired Immune Deficiency Syndrome (AIDS) can be classified as handicaps and thereby qualify a person for protection…

  5. 76 FR 3111 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-19

    .... Storage: Maintained in paper and electronic storage media. Retrievability: Retrieved by name and or Social... Program.'' Categories of records in the system: Delete entry and replace with ``Individual's name, Social..., and award records. Statement of good standing before the bar and other State Bar records, law school...

  6. Galaxy Zoo: secular evolution of barred galaxies from structural decomposition of multiband images

    NASA Astrophysics Data System (ADS)

    Kruk, Sandor J.; Lintott, Chris J.; Bamford, Steven P.; Masters, Karen L.; Simmons, Brooke D.; Häußler, Boris; Cardamone, Carolin N.; Hart, Ross E.; Kelvin, Lee; Schawinski, Kevin; Smethurst, Rebecca J.; Vika, Marina

    2018-02-01

    We present the results of two-component (disc+bar) and three-component (disc+bar+bulge) multiwavelength 2D photometric decompositions of barred galaxies in five Sloan Digital Sky Survey (SDSS) bands (ugriz). This sample of ∼3500 nearby (z < 0.06) galaxies with strong bars selected from the Galaxy Zoo citizen science project is the largest sample of barred galaxies to be studied using photometric decompositions that include a bar component. With detailed structural analysis, we obtain physical quantities such as the bar- and bulge-to-total luminosity ratios, effective radii, Sérsic indices and colours of the individual components. We observe a clear difference in the colours of the components, the discs being bluer than the bars and bulges. An overwhelming fraction of bulge components have Sérsic indices consistent with being pseudo-bulges. By comparing the barred galaxies with a mass-matched and volume-limited sample of unbarred galaxies, we examine the connection between the presence of a large-scale galactic bar and the properties of discs and bulges. We find that the discs of unbarred galaxies are significantly bluer compared to the discs of barred galaxies, while there is no significant difference in the colours of the bulges. We find possible evidence of secular evolution via bars that leads to the build-up of pseudo-bulges and to the quenching of star formation in the discs. We identify a subsample of unbarred galaxies with an inner lens/oval and find that their properties are similar to barred galaxies, consistent with an evolutionary scenario in which bars dissolve into lenses. This scenario deserves further investigation through both theoretical and observational work.

  7. Variation and decomposition of the partial molar volume of small gas molecules in different organic solvents derived from molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Klähn, Marco; Martin, Alistair; Cheong, Daniel W.; Garland, Marc V.

    2013-12-01

    The partial molar volumes, $$\\bar{V}_i$$, of the gas solutes H2, CO, and CO2, solvated in acetone, methanol, heptane, and diethylether are determined computationally in the limit of infinite dilution and standard conditions. Solutions are described with molecular dynamics simulations in combination with the OPLS-aa force field for solvents and a customized force field for solutes. $$\\bar{V}_i$$ is determined with the direct method, while the composition of $$\\bar{V}_i$$ is studied with Kirkwood-Buff integrals (KBIs). Subsequently, the amount of unoccupied space and the size of pre-formed cavities in the pure solvents are determined. Additionally, the shape of individual solvent cages is analyzed. Calculated $$\\bar{V}_i$$ deviate only 3.4 cm³ mol⁻¹ (7.1%) from experimental literature values. Experimental $$\\bar{V}_i$$ variations across solutions are reproduced qualitatively, and also quantitatively in most cases. The KBI analysis identifies differences in solute-induced solvent reorganization in the immediate vicinity of H2 (<0.7 nm) and solvent reorganization up to the third solvation shell of CO and CO2 (<1.6 nm) as the origin of $$\\bar{V}_i$$ variations. In all solutions, larger $$\\bar{V}_i$$ are found in solvents that exhibit weak internal interactions, low cohesive energy density and large compressibility. Weak internal interactions facilitate solvent displacement by thermal solute movement, which enhances the size of solvent cages and thus $$\\bar{V}_i$$. Additionally, attractive electrostatic interactions of CO2 and the solvents, which do not depend on internal solvent interactions only, partially reversed the $$\\bar{V}_i$$ trends observed in H2 and CO solutions, where electrostatic interactions with the solvents are absent. More empty space and larger pre-formed cavities are found in solvents with weak internal interactions; however, no evidence is found that solutes in any considered solvent are accommodated in pre-formed cavities. 
Individual solvent cages are found to be elongated in the negative direction of solute movement. This wake behind the moving solute is more pronounced in the case of mobile H2 and in solvents with weaker internal interactions. However, deviations from a spherical solvent-cage shape do not influence solute-solvent radial distribution functions after averaging over all solvent cage orientations and hence do not change $$\\bar{V}_i$$. Overall, the applied methodology reproduces $$\\bar{V}_i$$ and its variations reliably, and the $$\\bar{V}_i$$ decompositions used identify the underlying reasons behind the observed $$\\bar{V}_i$$ variations.
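
    The Kirkwood-Buff integrals (KBIs) mentioned above are radial integrals over the pair correlation function g(r). A minimal numerical sketch under the usual truncation assumption (real analyses need finite-size and tail corrections; function and variable names are ours):

```python
import numpy as np

def kirkwood_buff_integral(r, g):
    """Truncated Kirkwood-Buff integral G = 4*pi * ∫ (g(r) - 1) r^2 dr
    over the tabulated range, evaluated with the trapezoidal rule."""
    r = np.asarray(r, dtype=float)
    y = (np.asarray(g, dtype=float) - 1.0) * r**2
    return 4.0 * np.pi * np.sum((y[1:] + y[:-1]) * np.diff(r)) / 2.0
```

    An ideal-like solution with g ≡ 1 gives G = 0; excess solvent density around the solute (g > 1 on average) gives G > 0, and depletion gives G < 0.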

  8. A neural network for real-time retrievals of PWV and LWP from Arctic millimeter-wave ground-based observations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadeddu, M. P.; Turner, D. D.; Liljegren, J. C.

    2009-07-01

This paper presents a new neural network (NN) algorithm for real-time retrievals of low amounts of precipitable water vapor (PWV) and integrated liquid water from millimeter-wave ground-based observations. Measurements are collected by the 183.3-GHz G-band vapor radiometer (GVR) operating at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility, Barrow, AK. The NN provides the means to explore the nonlinear regime of the measurements and investigate the physical boundaries of the operability of the instrument. A methodology to compute individual error bars associated with the NN output is developed, and a detailed error analysis of the network output is provided. Through the error analysis, it is possible to isolate several components contributing to the overall retrieval errors and to analyze the dependence of the errors on the inputs. The network outputs and associated errors are then compared with results from a physical retrieval and with the ARM two-channel microwave radiometer (MWR) statistical retrieval. When the NN is trained with a seasonal training data set, the retrievals of water vapor yield results that are comparable to those obtained from a traditional physical retrieval, with a retrieval error percentage of ~5% when the PWV is between 2 and 10 mm, but with the advantages that the NN algorithm does not require vertical profiles of temperature and humidity as input and is significantly faster computationally. Liquid water path (LWP) retrievals from the NN have a significantly improved clear-sky bias (mean of ~2.4 g/m²) and a retrieval error varying from 1 to about 10 g/m² when the PWV amount is between 1 and 10 mm. As an independent validation of the LWP retrieval, the longwave downwelling surface flux was computed and compared with observations. The comparison shows a significant improvement with respect to the MWR statistical retrievals, particularly for LWP amounts of less than 60 g/m².
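The idea of attaching an individual error bar to each retrieval output can be illustrated with a generic bootstrap-ensemble sketch. Everything here is a stand-in: synthetic data and a simple polynomial regression play the role of the network; this is not the paper's GVR algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a retrieval problem: map one "brightness
# temperature"-like input to a "PWV"-like output (hypothetical toy data).
x = np.linspace(1.0, 10.0, 200)
y_true = 0.8 * x + 0.5 * np.sin(x)
y_obs = y_true + rng.normal(0.0, 0.2, size=x.size)

# Bootstrap ensemble of simple polynomial "retrievals"; the ensemble spread
# serves as a per-sample error bar, analogous in spirit to attaching an
# individual uncertainty to each network output.
n_members = 50
preds = np.empty((n_members, x.size))
for m in range(n_members):
    idx = rng.integers(0, x.size, size=x.size)   # resample with replacement
    coeffs = np.polyfit(x[idx], y_obs[idx], deg=3)
    preds[m] = np.polyval(coeffs, x)

y_hat = preds.mean(axis=0)   # central retrieval
y_err = preds.std(axis=0)    # individual error bar per sample

print(f"mean error bar: {y_err.mean():.3f}")
```

The spread is largest where the data constrain the fit least, which is the qualitative behavior one wants from per-output error bars.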

  9. Author Correction: Nanoscale control of competing interactions and geometrical frustration in a dipolar trident lattice.

    PubMed

    Farhan, Alan; Petersen, Charlotte F; Dhuey, Scott; Anghinolfi, Luca; Qin, Qi Hang; Saccone, Michael; Velten, Sven; Wuth, Clemens; Gliga, Sebastian; Mellado, Paula; Alava, Mikko J; Scholl, Andreas; van Dijken, Sebastiaan

    2017-12-12

    The original version of this article contained an error in the legend to Figure 4. The yellow scale bar should have been defined as '~600 nm', not '~600 µm'. This has now been corrected in both the PDF and HTML versions of the article.

  10. The Importance of Statistical Modeling in Data Analysis and Inference

    ERIC Educational Resources Information Center

    Rollins, Derrick, Sr.

    2017-01-01

    Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…

  11. LOCATING NEARBY SOURCES OF AIR POLLUTION BY NONPARAMETRIC REGRESSION OF ATMOSPHERIC CONCENTRATIONS ON WIND DIRECTION. (R826238)

    EPA Science Inventory

    The relationship of the concentration of air pollutants to wind direction has been determined by nonparametric regression using a Gaussian kernel. The results are smooth curves with error bars that allow for the accurate determination of the wind direction where the concentrat...
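A minimal sketch of the underlying technique, Nadaraya-Watson regression with a Gaussian kernel on a circular wind-direction variable (synthetic, illustrative data; not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: pollutant concentration peaking when the wind blows
# from ~90 degrees (values are illustrative only).
theta = rng.uniform(0.0, 360.0, 500)                 # wind direction, degrees
conc = 10.0 * np.exp(-0.5 * ((theta - 90.0) / 20.0) ** 2) \
       + rng.normal(0.0, 0.5, theta.size)

def nw_gaussian(theta_obs, y_obs, theta_grid, bw=10.0):
    """Nadaraya-Watson estimate with a Gaussian kernel on the circle."""
    # Angular difference wrapped to [-180, 180) so 359 deg and 1 deg are close.
    d = (theta_grid[:, None] - theta_obs[None, :] + 180.0) % 360.0 - 180.0
    w = np.exp(-0.5 * (d / bw) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

grid = np.arange(0.0, 360.0, 5.0)
smooth = nw_gaussian(theta, conc, grid)
print(f"peak near {grid[np.argmax(smooth)]:.0f} degrees")
```

Pointwise error bars, as in the abstract, could then be obtained by bootstrapping the (theta, conc) pairs and repeating the smooth.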

  12. Measurement of the double differential dijet mass cross section in p$\bar{p}$ collisions at √(s) = 1.96 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rominsky, Mandy Kathleen

    2009-01-01

This thesis presents the analysis of the double differential dijet mass cross section, measured at the D0 detector in Batavia, IL, using p$\bar{p}$ collisions at a center of mass energy of √s = 1.96 TeV. The dijet mass was calculated using the two highest-p_T jets in the event, with approximately 0.7 fb⁻¹ of data collected between 2004 and 2005. The analysis was presented in bins of dijet mass (M_JJ) and rapidity (y), and extends the measurement farther in M_JJ and y than any previous measurement. Corrections due to detector effects were calculated using a Monte Carlo simulation and applied to data. The errors on the measurement consist of statistical and systematic errors, of which the Jet Energy Scale was the largest. The final result was compared to next-to-leading order theory and good agreement was found. These results may be used in the determination of the proton parton distribution functions and to set limits on new physics.

  13. Keep an eye on your hands: on the role of visual mechanisms in processing of haptic space

    PubMed Central

    Zuidhoek, Sander; Noordzij, Matthijs L.; Kappers, Astrid M. L.

    2008-01-01

The present paper reviews research on haptic orientation processing. Central is a task in which a test bar has to be set parallel to a reference bar at another location. Introducing a delay between inspecting the reference bar and setting the test bar leads to a surprising improvement. Moreover, offering visual background information also improves performance. Interestingly, (congenitally) blind individuals do not show this improvement with time, or show it only to a weaker extent, and in parallel they appear to benefit less from spatial imagery processing. Together, this strongly points to an important role for visual processing mechanisms in the perception of haptic inputs. PMID:18196305

  14. Color Histogram Diffusion for Image Enhancement

    NASA Technical Reports Server (NTRS)

    Kim, Taemin

    2011-01-01

Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) for color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images showed that the approach is effective.
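For context, the grayscale histogram equalization (GHE) baseline that the histogram-diffusion method generalizes can be sketched as a standard CDF-remapping on a toy image (the vistogram/diffusion machinery itself is not reproduced here):

```python
import numpy as np

def equalize(img):
    """Classical grayscale histogram equalization (the GHE baseline)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Remap each level so the cumulative distribution becomes ~uniform;
    # clip keeps unused low bins from wrapping under uint8.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(2)
# Low-contrast toy image: values squeezed into [100, 140].
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

The equalized image stretches the occupied range to the full 0-255 interval, which is exactly the behavior a color extension has to reproduce without decoupling the channels.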

  15. Evaluating diffraction based overlay metrology for double patterning technologies

    NASA Astrophysics Data System (ADS)

    Saravanan, Chandra Saru; Liu, Yongdong; Dasari, Prasad; Kritsun, Oleg; Volkman, Catherine; Acheta, Alden; La Fontaine, Bruno

    2008-03-01

    Demanding sub-45 nm node lithographic methodologies such as double patterning (DPT) pose significant challenges for overlay metrology. In this paper, we investigate scatterometry methods as an alternative approach to meet these stringent new metrology requirements. We used a spectroscopic diffraction-based overlay (DBO) measurement technique in which registration errors are extracted from specially designed diffraction targets for double patterning. The results of overlay measurements are compared to traditional bar-in-bar targets. A comparison between DBO measurements and CD-SEM measurements is done to show the correlation between the two approaches. We discuss the total measurement uncertainty (TMU) requirements for sub-45 nm nodes and compare TMU from the different overlay approaches.

  16. Confusion—specimen mix-up in dermatopathology and measures to prevent and detect it

    PubMed Central

    Weyers, Wolfgang

    2014-01-01

    Maintaining patient identity throughout the biopsy pathway is critical for the practice of dermatology and dermatopathology. From the biopsy procedure to the acquisition of the pathology report, a specimen may pass through the hands of more than twenty individuals in several workplaces. The risk of a mix-up is considerable and may account for more serious mistakes than diagnostic errors. To prevent specimen mix-up, work processes should be standardized and automated wherever possible, e.g., by strict order in the operating room and in the laboratory and by adoption of a bar code system to identify specimens and corresponding request forms. Mutual control of clinicians, technicians, histopathologists, and secretaries, both simultaneously and downstream, is essential to detect errors. The most vulnerable steps of the biopsy pathway, namely, labeling of specimens and request forms and accessioning of biopsy specimens in the laboratory, should be carried out by two persons simultaneously. In preceding work steps, clues must be provided that allow a mix-up to be detected later on, such as information about clinical diagnosis, biopsy technique, and biopsy site by the clinician, and a sketch of the specimen by the technician grossing it. Awareness of the danger of specimen mix-up is essential for preventing and detecting it. The awareness can be heightened by documentation of any error in the biopsy pathway. In case of suspicion, a mix-up of specimens from different patients can be confirmed by DNA analysis. PMID:24520511

  17. Confusion-specimen mix-up in dermatopathology and measures to prevent and detect it.

    PubMed

    Weyers, Wolfgang

    2014-01-01

    Maintaining patient identity throughout the biopsy pathway is critical for the practice of dermatology and dermatopathology. From the biopsy procedure to the acquisition of the pathology report, a specimen may pass through the hands of more than twenty individuals in several workplaces. The risk of a mix-up is considerable and may account for more serious mistakes than diagnostic errors. To prevent specimen mix-up, work processes should be standardized and automated wherever possible, e.g., by strict order in the operating room and in the laboratory and by adoption of a bar code system to identify specimens and corresponding request forms. Mutual control of clinicians, technicians, histopathologists, and secretaries, both simultaneously and downstream, is essential to detect errors. The most vulnerable steps of the biopsy pathway, namely, labeling of specimens and request forms and accessioning of biopsy specimens in the laboratory, should be carried out by two persons simultaneously. In preceding work steps, clues must be provided that allow a mix-up to be detected later on, such as information about clinical diagnosis, biopsy technique, and biopsy site by the clinician, and a sketch of the specimen by the technician grossing it. Awareness of the danger of specimen mix-up is essential for preventing and detecting it. The awareness can be heightened by documentation of any error in the biopsy pathway. In case of suspicion, a mix-up of specimens from different patients can be confirmed by DNA analysis.

  18. [Measuring the effect of eyeglasses on determination of squint angle with Purkinje reflexes and the prism cover test].

    PubMed

    Barry, J C; Backes, A

    1998-04-01

The alternating prism and cover test is the conventional test for the measurement of the angle of strabismus. The error induced by the prismatic effect of glasses is typically about 27-30%/10 D. Alternatively, the angle of strabismus can be measured with methods based on Purkinje reflex positions. This study examines the differences between three such options, taking into account the influence of glasses. The studied system comprised the eyes with or without glasses, a fixation object and a device for recording the eye position: in the case of the alternate prism and cover test, a prism bar was required; in the case of a Purkinje reflex based device, light sources for generation of reflexes and a camera for the documentation of the reflex positions were used. Measurements performed on model eyes and computer ray traces were used to analyze and compare the options. When a single corneal reflex is used, the misalignment of the corneal axis can be measured; the error in this measurement due to the prismatic effect of glasses was 7.6%/10 D, the smallest found in this study. The individual Hirschberg ratio can be determined by monocular measurements in three gaze directions. The angle of strabismus can be measured with Purkinje reflex based methods, provided that the fundamental differences between these methods and the alternate prism and cover test, as well as the influence of glasses and other sources of error, are accounted for.

  19. High Bar Swing Performance in Novice Adults: Effects of Practice and Talent

    ERIC Educational Resources Information Center

    Busquets, Albert; Marina, Michel; Irurtia, Alfredo; Ranz, Daniel; Angulo-Barroso, Rosa M.

    2011-01-01

    An individual's a priori talent can affect movement performance during learning. Also, task requirements and motor-perceptual factors are critical to the learning process. This study describes changes in high bar swing performance after a 2-month practice period. Twenty-five novice participants were divided by a priori talent level…

  20. Fiber optic coupling of a microlens conditioned, stacked semiconductor laser diode array

    DOEpatents

    Beach, Raymond J.; Benett, William J.; Mills, Steven T.

    1997-01-01

    The output radiation from the two-dimensional aperture of a semiconductor laser diode array is efficiently coupled into an optical fiber. The two-dimensional aperture is formed by stacking individual laser diode bars on top of another in a "rack and stack" configuration. Coupling into the fiber is then accomplished using individual microlenses to condition the output radiation of the laser diode bars. A lens that matches the divergence properties and wavefront characteristics of the laser light to the fiber optic is used to focus this conditioned radiation into the fiber.

  1. Tobacco related bar promotions: insights from tobacco industry documents.

    PubMed

    Katz, S K; Lavack, A M

    2002-03-01

    To examine the tobacco industry's use of bar promotions, including their target groups, objectives, strategies, techniques, and results. Over 2000 tobacco industry documents available as a result of the Master Settlement Agreement were reviewed on the internet at several key web sites using keyword searches that included "bar", "night", "pub", "party", and "club". The majority of the documents deal with the US market, with a minor emphasis on Canadian and overseas markets. The documents indicate that bar promotions are important for creating and maintaining brand image, and are generally targeted at a young adult audience. Several measures of the success of these promotions are used, including number of individuals exposed to the promotion, number of promotional items given away, and increased sales of a particular brand during and after the promotion. Bar promotions position cigarettes as being part of a glamorous lifestyle that includes attendance at nightclubs and bars, and appear to be highly successful in increasing sales of particular brands.

  2. The Red Edge Problem in asteroid band parameter analysis

    NASA Astrophysics Data System (ADS)

    Lindsay, Sean S.; Dunn, Tasha L.; Emery, Joshua P.; Bowles, Neil E.

    2016-04-01

    Near-infrared reflectance spectra of S-type asteroids contain two absorptions at 1 and 2 μm (band I and II) that are diagnostic of mineralogy. A parameterization of these two bands is frequently employed to determine the mineralogy of S(IV) asteroids through the use of ordinary chondrite calibration equations that link the mineralogy to band parameters. The most widely used calibration study uses a Band II terminal wavelength point (red edge) at 2.50 μm. However, due to the limitations of the NIR detectors on prominent telescopes used in asteroid research, spectral data for asteroids are typically only reliable out to 2.45 μm. We refer to this discrepancy as "The Red Edge Problem." In this report, we evaluate the associated errors for measured band area ratios (BAR = Area BII/BI) and calculated relative abundance measurements. We find that the Red Edge Problem is often not the dominant source of error for the observationally limited red edge set at 2.45 μm, but it frequently is for a red edge set at 2.40 μm. The error, however, is one sided and therefore systematic. As such, we provide equations to adjust measured BARs to values with a different red edge definition. We also provide new ol/(ol+px) calibration equations for red edges set at 2.40 and 2.45 μm.
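The sensitivity of the band area ratio to the red-edge choice can be demonstrated on a synthetic two-band spectrum. The Gaussian absorptions below are illustrative stand-ins for real S-type band shapes, and the published calibration equations are not reproduced:

```python
import numpy as np

# Synthetic continuum-removed reflectance spectrum with absorptions
# near 1 and 2 microns (not real asteroid data).
wl = np.linspace(0.7, 2.50, 500)                      # wavelength, microns
spec = 1.0 - 0.15 * np.exp(-0.5 * ((wl - 0.95) / 0.12) ** 2) \
           - 0.10 * np.exp(-0.5 * ((wl - 1.95) / 0.25) ** 2)

def band_area(wl, spec, lo, hi):
    """Trapezoidal area between a flat unit continuum and the spectrum."""
    m = (wl >= lo) & (wl <= hi)
    depth = 1.0 - spec[m]
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wl[m]))

area_bi = band_area(wl, spec, 0.75, 1.45)              # Band I
for red_edge in (2.40, 2.45, 2.50):
    bar = band_area(wl, spec, 1.45, red_edge) / area_bi
    print(f"red edge {red_edge:.2f} um -> BAR = {bar:.3f}")
```

Because truncating the red edge can only remove Band II area, the BAR shift is one sided, which is why the abstract describes the resulting error as systematic rather than random.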

  3. Characteristic study of flat spray nozzle by using particle image velocimetry (PIV) and ANSYS simulation method

    NASA Astrophysics Data System (ADS)

    Pairan, M. Rasidi; Asmuin, Norzelawati; Isa, Nurasikin Mat; Sies, Farid

    2017-04-01

Water mist sprays are used in a wide range of applications, but the spray characteristics must suit the particular application. This project studies the water droplet velocity and penetration angle generated by a newly developed mist spray nozzle with a flat spray pattern. The research comprised an experimental part and a simulation part. The experiments were conducted using particle image velocimetry (PIV); ANSYS software was used for the simulations, and ImageJ software was used to measure the penetration angle. Three combinations of air and water pressure were tested: 1 bar (case A), 2 bar (case B), and 3 bar (case C). The flat spray generated by the new nozzle was examined along a 9 cm vertical line located 8 cm from the nozzle orifice. The detailed analysis shows that the velocity-versus-distance trends from simulation and experiment agree well for all pressure combinations. As the water and air pressure increased from 1 bar to 2 bar, both the velocity and the penetration angle increased; for case C, run at 3 bar, the droplet velocity increased but the penetration angle decreased. All the data were then validated by calculating the error between experiment and simulation. Comparing the simulation data to the experimental data for all cases, the standard deviations for cases A, B, and C are relatively small: 5.444, 0.8242, and 6.4023, respectively.
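The validation step, computing the standard deviation of the pointwise differences between experiment and simulation, can be sketched with hypothetical velocity data (the numbers below are illustrative, not the paper's PIV measurements):

```python
import numpy as np

# Hypothetical velocity profiles (m/s) at matched positions along the spray.
v_experiment = np.array([12.1, 10.4, 9.0, 7.7, 6.9, 6.1])
v_simulation = np.array([11.8, 10.9, 8.6, 8.0, 6.5, 6.4])

# One common validation metric: the sample standard deviation of the
# pointwise experiment-simulation differences.
diff = v_experiment - v_simulation
std_dev = diff.std(ddof=1)
print(f"standard deviation of differences: {std_dev:.4f} m/s")
```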

  4. Modelling and experimental study of temperature profiles in cw laser diode bars

    NASA Astrophysics Data System (ADS)

    Bezotosnyi, V. V.; Gordeev, V. P.; Krokhin, O. N.; Mikaelyan, G. T.; Oleshchenko, V. A.; Pevtsov, V. F.; Popov, Yu M.; Cheshev, E. A.

    2018-02-01

    Three-dimensional simulation is used to theoretically assess temperature profiles in proposed 10-mm-wide cw laser diode bars packaged in a standard heat spreader of the C - S mount type with the aim of raising their reliable cw output power. We obtain calculated temperature differences across the emitting aperture and along the cavity. Using experimental laser bar samples with up to 60 W of cw output power, the emission spectra of individual clusters are measured at different pump currents. We compare and discuss the simulation results and experimental data.

  5. Health and efficiency in trimix versus air breathing in compressed air workers.

    PubMed

    Van Rees Vellinga, T P; Verhoeven, A C; Van Dijk, F J H; Sterk, W

    2006-01-01

    The Western Scheldt Tunneling Project in the Netherlands provided a unique opportunity to evaluate the effects of trimix usage on the health of compressed air workers and the efficiency of the project. Data analysis addressed 318 exposures to compressed air at 3.9-4.4 bar gauge and 52 exposures to trimix (25% oxygen, 25% helium, and 50% nitrogen) at 4.6-4.8 bar gauge. Results revealed three incidents of decompression sickness all of which involved the use of compressed air. During exposure to compressed air, the effects of nitrogen narcosis were manifested in operational errors and increased fatigue among the workers. When using trimix, less effort was required for breathing, and mandatory decompression times for stays of a specific duration and maximum depth were considerably shorter. We conclude that it might be rational--for both medical and operational reasons--to use breathing gases with lower nitrogen fractions (e.g., trimix) for deep-caisson work at pressures exceeding 3 bar gauge, although definitive studies are needed.

  6. Three methods of presenting flight vector information in a head-up display during simulated STOL approaches

    NASA Technical Reports Server (NTRS)

    Dwyer, J. H., III; Palmer, E. A., III

    1975-01-01

    A simulator study was conducted to determine the usefulness of adding flight path vector symbology to a head-up display designed to improve glide-slope tracking performance during steep 7.5 deg visual approaches in STOL aircraft. All displays included a fixed attitude symbol, a pitch- and roll-stabilized horizon bar, and a glide-slope reference bar parallel to and 7.5 deg below the horizon bar. The displays differed with respect to the flight-path marker (FPM) symbol: display 1 had no FPM symbol; display 2 had an air-referenced FPM, and display 3 had a ground-referenced FPM. No differences between displays 1 and 2 were found on any of the performance measures. Display 3 was found to decrease height error in the early part of the approach and to reduce descent rate variation over the entire approach. Two measures of workload did not indicate any differences between the displays.

  7. Turbulent heat flux measurements in a transitional boundary layer

    NASA Technical Reports Server (NTRS)

    Sohn, K. H.; Zaman, K. B. M. Q.; Reshotko, E.

    1992-01-01

    During an experimental investigation of the transitional boundary layer over a heated flat plate, an unexpected result was encountered for the turbulent heat flux (bar-v't'). This quantity, representing the correlation between the fluctuating normal velocity and the temperature, was measured to be negative near the wall under certain conditions. The result was unexpected as it implied a counter-gradient heat transfer by the turbulent fluctuations. Possible reasons for this anomalous result were further investigated. The possible causes considered for this negative bar-v't' were: (1) plausible measurement error and peculiarity of the flow facility, (2) large probe size effect, (3) 'streaky structure' in the near wall boundary layer, and (4) contributions from other terms usually assumed negligible in the energy equation including the Reynolds heat flux in the streamwise direction (bar-u't'). Even though the energy balance has remained inconclusive, none of the items (1) to (3) appear to be contributing directly to the anomaly.

  8. Application of Gurson–Tvergaard–Needleman Constitutive Model to the Tensile Behavior of Reinforcing Bars with Corrosion Pits

    PubMed Central

    Xu, Yidong; Qian, Chunxiang

    2013-01-01

    Based on meso-damage mechanics and finite element analysis, the aim of this paper is to describe the feasibility of the Gurson–Tvergaard–Needleman (GTN) constitutive model in describing the tensile behavior of corroded reinforcing bars. The orthogonal test results showed that different fracture pattern and the related damage evolution process can be simulated by choosing different material parameters of GTN constitutive model. Compared with failure parameters, the two constitutive parameters are significant factors affecting the tensile strength. Both the nominal yield and ultimate tensile strength decrease markedly with the increase of constitutive parameters. Combining with the latest data and trial-and-error method, the suitable material parameters of GTN constitutive model were adopted to simulate the tensile behavior of corroded reinforcing bars in concrete under carbonation environment attack. The numerical predictions can not only agree very well with experimental measurements, but also simplify the finite element modeling process. PMID:23342140

  9. The DiskMass Survey. II. Error Budget

    NASA Astrophysics Data System (ADS)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ^disk_*), and disk maximality (F^disk_{*,max} ≡ V^disk_{*,max}/V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  10. Patient Safety: Moving the Bar in Prison Health Care Standards

    PubMed Central

    Greifinger, Robert B.; Mellow, Jeff

    2010-01-01

    Improvements in community health care quality through error reduction have been slow to transfer to correctional settings. We convened a panel of correctional experts, which recommended 60 patient safety standards focusing on such issues as creating safety cultures at organizational, supervisory, and staff levels through changes to policy and training and by ensuring staff competency, reducing medication errors, encouraging the seamless transfer of information between and within practice settings, and developing mechanisms to detect errors or near misses and to shift the emphasis from blaming staff to fixing systems. To our knowledge, this is the first published set of standards focusing on patient safety in prisons, adapted from the emerging literature on quality improvement in the community. PMID:20864714

  11. Tobacco related bar promotions: insights from tobacco industry documents

    PubMed Central

    Katz, S; Lavack, A

    2002-01-01

    Design: Over 2000 tobacco industry documents available as a result of the Master Settlement Agreement were reviewed on the internet at several key web sites using keyword searches that included "bar", "night", "pub", "party", and "club". The majority of the documents deal with the US market, with a minor emphasis on Canadian and overseas markets. Results: The documents indicate that bar promotions are important for creating and maintaining brand image, and are generally targeted at a young adult audience. Several measures of the success of these promotions are used, including number of individuals exposed to the promotion, number of promotional items given away, and increased sales of a particular brand during and after the promotion. Conclusion: Bar promotions position cigarettes as being part of a glamorous lifestyle that includes attendance at nightclubs and bars, and appear to be highly successful in increasing sales of particular brands. PMID:11893819

  12. Erratum: Measurement of the electron charge asymmetry in $p\bar{p}\rightarrow W+X \rightarrow e\nu+X$ events at √s = 1.96 TeV

    DOE PAGES

    Abazov, Victor Mukhamedovich

    2015-04-30

    The recent paper on the charge asymmetry for electrons from W boson decay has an error in the Tables VII to XI that show the correlation coefficients of systematic uncertainties. Furthermore, the correlation matrix elements shown in the original publication were the square roots of the calculated values.

  13. Author Correction: Phase-resolved X-ray polarimetry of the Crab pulsar with the AstroSat CZT Imager

    NASA Astrophysics Data System (ADS)

    Vadawale, S. V.; Chattopadhyay, T.; Mithun, N. P. S.; Rao, A. R.; Bhattacharya, D.; Vibhute, A.; Bhalerao, V. B.; Dewangan, G. C.; Misra, R.; Paul, B.; Basu, A.; Joshi, B. C.; Sreekumar, S.; Samuel, E.; Priya, P.; Vinod, P.; Seetha, S.

    2018-05-01

    In the Supplementary Information file originally published for this Letter, in Supplementary Fig. 7 the error bars for the polarization fraction were provided as confidence intervals but instead should have been Bayesian credibility intervals. This has been corrected and does not alter the conclusions of the Letter in any way.

  14. National Centers for Environmental Prediction

    Science.gov Websites

    : Influence of convective parameterization on the systematic errors of Climate Forecast System (CFS) model; Climate Dynamics, 41, 45-61, 2013. Saha, S., S. Pokhrel and H. S. Chaudhari: Influence of Eurasian snow

  15. Author Correction: Circuit dissection of the role of somatostatin in itch and pain.

    PubMed

    Huang, Jing; Polgár, Erika; Solinski, Hans Jürgen; Mishra, Santosh K; Tseng, Pang-Yen; Iwagaki, Noboru; Boyle, Kieran A; Dickie, Allen C; Kriegbaum, Mette C; Wildner, Hendrik; Zeilhofer, Hanns Ulrich; Watanabe, Masahiko; Riddell, John S; Todd, Andrew J; Hoon, Mark A

    2018-06-01

    In the version of this article initially published online, the labels were switched for the right-hand pair of bars in Fig. 4e. The left one of the two should be Chloroquine + veh, the right one Chloroquine + CNO. The error has been corrected in the print, HTML and PDF versions of the article.

  16. Morphology and spacing of river meander scrolls

    NASA Astrophysics Data System (ADS)

    Strick, Robert J. P.; Ashworth, Philip J.; Awcock, Graeme; Lewin, John

    2018-06-01

Many of the world's alluvial rivers are characterised by single or multiple channels that are often sinuous and that migrate to produce a mosaicked floodplain landscape of truncated scroll (or point) bars. Surprisingly little is known about the morphology and geometry of scroll bars despite increasing interest from hydrocarbon geoscientists working with ancient large meandering river deposits. This paper uses remote sensing imagery, LiDAR data-sets of meandering scroll bar topography, and global coverage elevation data to quantify scroll bar geometry, anatomy, relief, and spacing. The analysis focuses on preserved scroll bars in the Mississippi River (USA) floodplain but also compares attributes to 19 rivers of different scale and depositional environments from around the world. Analysis of 10 large scroll bars (median area = 25 km²) on the Mississippi shows that the point bar deposits can be categorised into three different geomorphological units of increasing scale: individual 'scrolls', 'depositional packages', and 'point bar complexes'. Scroll heights and curvatures are greatest near the modern channel and at the terminating boundaries of different depositional packages, confirming the importance of the formative main channel on subsequent scroll bar relief and shape. Fourier analysis shows a periodic variation in signal (scroll bar height) with an average period (spacing) of 167 m (range 150-190 m) for the Mississippi point bars. For other rivers, a strong relationship exists between the period of scroll bars and the adjacent primary channel width, for rivers ranging from 55 to 2042 m in width; the scroll bar period is approximately 50% of the main channel width. The strength of this correlation over nearly two orders of magnitude of channel size indicates a scale independence of scroll bar spacing and suggests a strong link between channel migration and scroll bar construction with apparent regularities despite different flow regimes. 
This investigation of meandering river dynamics and floodplain patterns shows that it is possible to develop a suite of metrics that describe scroll bar morphology and geometry that can be valuable to geoscientists predicting the heterogeneity of subsurface meandering deposits.
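The Fourier approach to extracting a dominant scroll-bar spacing can be sketched on a synthetic transect, with a noisy sinusoid of 167 m spacing standing in for the LiDAR elevation profiles:

```python
import numpy as np

# Synthetic floodplain transect: scroll-bar relief as a sinusoid with
# ~167 m spacing plus noise (illustrative; not the Mississippi LiDAR data).
dx = 1.0                                   # sample spacing, m
x = np.arange(0.0, 2000.0, dx)
period_true = 167.0
rng = np.random.default_rng(3)
h = np.sin(2 * np.pi * x / period_true) + 0.3 * rng.standard_normal(x.size)

# Fourier analysis: dominant spatial period = 1 / frequency of peak power.
freqs = np.fft.rfftfreq(x.size, d=dx)
power = np.abs(np.fft.rfft(h - h.mean())) ** 2
peak = np.argmax(power[1:]) + 1            # skip the zero-frequency bin
print(f"dominant spacing: {1.0 / freqs[peak]:.0f} m")
```

On real transects one would detrend the profile and possibly window it before the FFT, but the peak-period readout is the same idea as the periodicity analysis described above.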

  17. Bar patronage and motivational predictors of drinking in the San Francisco Bay Area: gender and sexual identity differences.

    PubMed

    Trocki, Karen; Drabble, Laurie

    2008-11-01

    Prior research has found heavier drinking and alcohol-related problems to be more prevalent in sexual minority populations, particularly among women. It has been suggested that differences may be explained in part by socializing in bars and other public drinking venues. This study explores gender, sexual orientation and bar patronage in two different samples: respondents from a random digit dial (RDD) probability study of 1,043 households in Northern California and 569 individuals who were surveyed exiting from 25 different bars in the same three counties that constituted the RDD sample. Bar patrons, in most instances, were at much higher risk of excessive consumption and related problems and consequences. On several key variables, women from the bar patron sample exceeded the problem rates of men in the general population. Bisexual women and bisexual men exhibited riskier behavior on many alcohol measures relative to heterosexuals. Measures of heavier drinking and alcohol-related problems were also elevated among lesbians compared to heterosexual women. Two of the bar motive variables, sensation seeking and mood change motives, were particularly predictive of heavier drinking and alcohol-related problems. Social motives did not predict problems.

  18. Verification of image orthorectification techniques for low-cost geometric inspection of masonry arch bridges

    NASA Astrophysics Data System (ADS)

    González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro

    2012-07-01

    A low-cost image orthorectification tool based on the utilization of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated in three different bridges by comparison with laser scanning data. The surveying process is very delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.

  19. Reanalyzing the visible colors of Centaurs and KBOs: what is there and what we might be missing

    NASA Astrophysics Data System (ADS)

    Peixinho, Nuno; Delsanti, Audrey; Doressoundiram, Alain

    2015-05-01

    Since the discovery of the Kuiper belt, broadband surface colors have been thoroughly studied as a first approximation to the object reflectivity spectra. Visible colors (BVRI) have proven to be a reasonable proxy for real spectra, which are rather linear in this range. In contrast, near-IR colors (JHK bands) could be misleading when absorption features of ices are present in the spectra. Although the physical and chemical information provided by colors is rather limited, broadband photometry remains the best tool for establishing the bulk surface properties of Kuiper belt objects (KBOs) and Centaurs. In this work, we explore for the first time general, recurrent effects in the study of visible colors that could affect the interpretation of the scientific results: i) how a correlation could be missed or weakened as a result of the data error bars; ii) the "risk" of missing an existing trend because of low sampling, and the possibility of making quantified predictions on the sample size needed to detect a trend at a given significance level - assuming the sample is unbiased; iii) the use of partial correlations to distinguish the mutual effect of two or more (physical) parameters; and iv) the sensitivity of the "reddening line" tool to the central wavelength of the filters used. To illustrate and apply these new tools, we have compiled the visible colors and orbital parameters of about 370 objects available in the literature - assumed, by default, as unbiased samples - and carried out a traditional analysis per dynamical family. 
Our results show in particular how a) data error bars impose a limit on the detectable correlations regardless of sample size and that therefore, once that limit is achieved, it is important to diminish the error bars, but it is pointless to enlarge the sampling with the same or larger errors; b) almost all dynamical families still require larger samplings to ensure the detection of correlations stronger than ±0.5, that is, correlations that may explain ~25% or more of the color variability; c) the correlation strength between (V - R) vs. (R - I) is systematically lower than the one between (B - V) vs. (V - R) and is not related with error-bar differences between these colors; d) it is statistically equivalent to use any of the different flavors of orbital excitation or collisional velocity parameters regarding the famous color-inclination correlation among classical KBOs - which no longer appears to be a strong correlation - whereas the inclination and Tisserand parameter relative to Neptune cannot be separated from one another; and e) classical KBOs are the only dynamical family that shows neither (B - V) vs. (V - R) nor (V - R) vs. (R - I) correlations. It therefore is the family with the most unpredictable visible surface reflectivities. Tables 4 and 5 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/A35
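    Point a) above - that error bars cap the detectable correlation strength no matter how large the sample grows - can be illustrated with a toy Monte Carlo (all numbers invented, not the compiled color data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    x = rng.standard_normal(n)
    y = x + 0.5 * rng.standard_normal(n)          # intrinsic correlation ~0.9

    def pearson(a, b):
        return float(np.corrcoef(a, b)[0, 1])

    r_clean = pearson(x, y)
    # Add measurement error comparable to the signal: the observable
    # correlation is attenuated, and adding more equally noisy objects
    # cannot recover the intrinsic value - only smaller error bars can.
    y_noisy = y + 1.0 * rng.standard_normal(n)
    r_noisy = pearson(x, y_noisy)
    print(r_clean, r_noisy)
    ```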

  20. Fiber optic coupling of a microlens conditioned, stacked semiconductor laser diode array

    DOEpatents

    Beach, R.J.; Benett, W.J.; Mills, S.T.

    1997-04-01

    The output radiation from the two-dimensional aperture of a semiconductor laser diode array is efficiently coupled into an optical fiber. The two-dimensional aperture is formed by stacking individual laser diode bars one on top of another in a ``rack and stack`` configuration. Coupling into the fiber is then accomplished using individual microlenses to condition the output radiation of the laser diode bars. A lens that matches the divergence properties and wavefront characteristics of the laser light to the fiber optic is used to focus this conditioned radiation into the fiber. 3 figs.

  1. Magnetometry of micro-magnets with electrostatically defined Hall bars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lachance-Quirion, Dany; Camirand Lemyre, Julien; Bergeron, Laurent

    2015-11-30

    Micro-magnets are key components for quantum information processing with individual spins, enabling arbitrary rotations and addressability. In this work, characterization of sub-micrometer sized CoFe ferromagnets is performed with Hall bars electrostatically defined in a two-dimensional electron gas. Due to the ballistic nature of electron transport in the cross junction of the Hall bar, anomalies such as the quenched Hall effect appear near zero external magnetic field, thus hindering the sensitivity of the magnetometer to small magnetic fields. However, it is shown that the sensitivity of the diffusive limit can be almost completely restored at low temperatures using a large current density in the Hall bar of about 10 A/m. Overcoming the size limitation of conventional etched Hall bars with electrostatic gating enables the measurement of magnetization curves of 440 nm wide micro-magnets with a signal-to-noise ratio above 10^3. Furthermore, the inhomogeneity of the stray magnetic field created by the micro-magnets is directly measured using the gate-voltage-dependent width of the sensitive area of the Hall bar.

  2. Sexually transmitted infections and the marriage problem

    NASA Astrophysics Data System (ADS)

    Bouzat, Sebastián; Zanette, Damián H.

    2009-08-01

    We study an SIS epidemiological model for a sexually transmitted infection in a monogamous population where the formation and breaking of couples is governed by individual preferences. The mechanism of couple recombination is based on the so-called bar dynamics for the marriage problem. We compare the results with those of random recombination - where no individual preferences exist - for which we calculate analytically the infection incidence and the endemic threshold. We find that individual preferences give rise to a large dispersion in the average duration of different couples, causing substantial changes in the incidence of the infection and in the endemic threshold. Our analysis yields also new results on the bar dynamics, that may be of interest beyond the field of epidemiological models.
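    A minimal toy version of an SIS process in a monogamous population with *random* couple recombination - the benchmark case the authors solve analytically - can be sketched as follows. All parameters are illustrative, and this is not the authors' preference-based bar dynamics.

    ```python
    import random

    # Toy SIS dynamics: individuals are re-paired at random each step,
    # infection passes within discordant couples, and infected individuals
    # recover independently (beta, gamma invented for illustration).
    random.seed(2)
    N = 1000                      # individuals (paired into N/2 couples)
    beta, gamma = 0.6, 0.2        # per-step transmission / recovery probabilities
    infected = [True] * 50 + [False] * (N - 50)

    for step in range(300):
        random.shuffle(infected)              # random re-pairing of partners
        for i in range(0, N, 2):              # transmission within each couple
            if infected[i] != infected[i + 1] and random.random() < beta:
                infected[i] = infected[i + 1] = True
        infected = [s and random.random() > gamma for s in infected]

    incidence = sum(infected) / N
    print(incidence)
    ```

    With these rates the mean-field endemic balance beta(1 - i) ~ gamma/(1 - gamma) puts the incidence well above zero, i.e. above the endemic threshold.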

  3. Confidence, Concentration, and Competitive Performance of Elite Athletes: A Natural Experiment in Olympic Gymnastics.

    ERIC Educational Resources Information Center

    Grandjean, Burke D.; Taylor, Patricia A.; Weiner, Jay

    2002-01-01

    During the women's all-around gymnastics final at the 2000 Olympics, the vault was inadvertently set 5 cm too low for a random half of the gymnasts. The error was widely viewed as undermining their confidence and subsequent performance. However, data from pretest and posttest scores on the vault, bars, beam, and floor indicated that the vault…

  4. Implementing Material Surfaces with an Adhesive Switch

    DTIC Science & Technology

    2014-02-28

    squares), M15 (solid triangles), M13 (open circles), M11 (solid circles), or NC14 (open triangles) DNA primary targets. Error bars indicating...5’–ATCAGGCGCAA–3’ M13 = 5’–ATCAGCGGCAATC–3’ M15 = 5’–ATCAGCCCCAATCCA–3’ L3M9 = 5’–ATLCACLCCGLC–3’ L3M11 = 5

  5. Kinematic parameter estimation using close range photogrammetry for sport applications

    NASA Astrophysics Data System (ADS)

    Magre Colorado, Luz Alejandra; Martínez Santos, Juan Carlos

    2015-12-01

    In this article, we show the development of a low-cost hardware/software system based on close range photogrammetry to track the movement of a person performing weightlifting. The goal is to reduce costs for the trainers and athletes dedicated to this sport when it comes to analyzing the performance of the sportsman and avoiding injuries or accidents. We used a web-cam as the data acquisition hardware and developed the software stack in Processing using the OpenCV library. Our algorithm extracts size, position, velocity, and acceleration measurements of the bar along the course of the exercise. We present detailed characteristics of the system with their results in a controlled setting. The current work improves the detection and tracking capabilities of a previous version of this system by using the HSV color model instead of RGB. Preliminary results show that the system is able to profile the movement of the bar as well as determine the size, position, velocity, and acceleration values of a marker/target in the scene. The average error in finding the size of an object at four meters of distance is less than 4%, and the error in the acceleration value is 1.01% on average.
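    To make the HSV idea concrete, here is a small self-contained sketch of hue-based marker detection. It is NumPy-only with a hand-rolled RGB-to-HSV conversion and a synthetic frame; the paper's actual system uses a webcam with Processing/OpenCV, and the marker color and thresholds below are invented.

    ```python
    import numpy as np

    def rgb_to_hsv(img):          # img: float array in [0,1], shape (H, W, 3)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        mx, mn = img.max(-1), img.min(-1)
        diff = mx - mn + 1e-12
        h = np.zeros_like(mx)
        h = np.where(mx == r, (g - b) / diff % 6, h)
        h = np.where(mx == g, (b - r) / diff + 2, h)
        h = np.where(mx == b, (r - g) / diff + 4, h)
        return h * 60, diff / (mx + 1e-12), mx   # hue (deg), saturation, value

    # Synthetic frame: black background with a red 10x10 marker at rows 20-29,
    # columns 40-49 (purely illustrative stand-in for a camera frame)
    frame = np.zeros((100, 100, 3))
    frame[20:30, 40:50, 0] = 1.0

    h, s, v = rgb_to_hsv(frame)
    # Thresholding in HSV separates hue from brightness, which is why it is
    # more robust than raw RGB thresholds under changing illumination.
    mask = (s > 0.5) & (v > 0.5) & ((h < 20) | (h > 340))   # "red" band
    ys, xs = np.nonzero(mask)
    print(ys.mean(), xs.mean())    # marker centroid, tracked frame to frame
    ```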

  6. Analysis of D0 -> K anti-K X Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessop, Colin P.

    2003-06-06

    Using data taken with the CLEO II detector, they have studied the decays of the D^0 to K^+K^-, K^0\bar{K}^0, K_S^0K_S^0, K_S^0K_S^0π^0, and K^+K^-π^0. The authors present significantly improved results for B(D^0 → K^+K^-) = (0.454 ± 0.028 ± 0.035)%, B(D^0 → K^0\bar{K}^0) = (0.054 ± 0.012 ± 0.010)%, and B(D^0 → K_S^0K_S^0K_S^0) = (0.074 ± 0.010 ± 0.015)%, where the first errors are statistical and the second errors are the estimate of their systematic uncertainty. They also present a new upper limit B(D^0 → K_S^0K_S^0π^0) < 0.059% at the 90% confidence level and the first measurement of B(D^0 → K^+K^-π^0) = (0.14 ± 0.04)%.

  7. Signatures of the Galactic bar on stellar kinematics unveiled by APOGEE

    NASA Astrophysics Data System (ADS)

    Palicio, Pedro A.; Martinez-Valpuesta, Inma; Prieto, Carlos Allende; Vecchia, Claudio Dalla; Zamora, Olga; Zasowski, Gail; Fernandez-Trincado, J. G.; Masters, Karen L.; García-Hernández, D. A.; Roman-Lopes, Alexandre

    2018-05-01

    Bars are common galactic structures in the local universe that play an important role in the secular evolution of galaxies, including the Milky Way. In particular, the velocity distribution of individual stars in our galaxy is useful to shed light on stellar dynamics, and provides information complementary to that inferred from the integrated light of external galaxies. However, since a wide variety of models reproduce the distribution of velocity and the velocity dispersion observed in the Milky Way, we look for signatures of the bar on higher-order moments of the line-of-sight velocity (V_los) distribution. We make use of two different numerical simulations -one that has developed a bar and one that remains nearly axisymmetric- to compare them with observations in the latest APOGEE data release (SDSS DR14). This comparison reveals three interesting structures that support the notion that the Milky Way is a barred galaxy. A high skewness region found at positive longitudes constrains the orientation angle of the bar, and is incompatible with the orientation of the bar at ℓ = 0° proposed in previous studies. We also analyse the V_los distributions in three regions, and introduce the Hellinger distance to quantify the differences among them. Our results show a strong non-Gaussian distribution both in the data and in the barred model, confirming the qualitative conclusions drawn from the velocity maps. In contrast to earlier work, we conclude it is possible to infer the presence of the bar from the kurtosis distribution.
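    The Hellinger distance used above to quantify differences between V_los distributions has a compact discrete form for binned data. A hypothetical sketch (invented Gaussian samples, not APOGEE velocities):

    ```python
    import numpy as np

    # Hellinger distance between two binned distributions: 0 for identical
    # histograms, 1 for non-overlapping ones.
    def hellinger(p, q):
        p = p / p.sum()
        q = q / q.sum()
        return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    rng = np.random.default_rng(3)
    bins = np.linspace(-300, 300, 41)                         # km/s, illustrative
    vlos_a, _ = np.histogram(rng.normal(0, 60, 5000), bins)   # near-Gaussian
    vlos_b, _ = np.histogram(rng.normal(40, 90, 5000), bins)  # shifted, broader

    d_same = hellinger(vlos_a, vlos_a)
    d_diff = hellinger(vlos_a, vlos_b)
    print(d_same, d_diff)
    ```

    Being bounded in [0, 1], the distance allows distributions from different sky regions to be compared on a common scale.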

  8. Identification and Remediation of Phonological and Motor Errors in Acquired Sound Production Impairment

    PubMed Central

    Gagnon, Bernadine; Miozzo, Michele

    2017-01-01

    Purpose This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044

  9. J.J. O'Keefe's: A Participant-Observation Study of Teachers in a Bar on Friday Afternoon.

    ERIC Educational Resources Information Center

    Mehlenbacher, Sandra; Mehlenbacher, Earl

    This research was undertaken with the idea that it may be possible to learn about teachers' work lives through the investigation of teacher behavior and attitudes in an out-of-work setting. Group and individual behavior of teachers who habitually gathered at the same bar on Friday afternoons was observed in order to examine patterns of interaction…

  10. Clinal variation in the juvenal plumage of American kestrels

    USGS Publications Warehouse

    Smallwood, J.A.; Natale, C.; Steenhof, K.; Meetz, M.; Marti, C.D.; Melvin, R.J.; Bortolotti, G.R.; Robertson, R.; Robertson, S.; Shuford, W.R.; Lindemann, S.A.; Tornwall, B.

    1999-01-01

    The American Kestrel (Falco sparverius) is a sexually dichromatic falcon that exhibits considerable individual plumage variability. For example, the anterior extent of the black dorsal barring in juvenile males has been used throughout North America as one of several aging criteria, but recent data demonstrate that the variability among individual Southeastern American Kestrels (F. s. paulus) exceeds that accounted for by age. The objective of this study was to search for geographic patterns in the variability of juvenal plumage, particularly those characteristics considered indicative of age. Nestling kestrels (n = 610) were examined prior to fledging during the 1997 breeding season at nest box programs across a large portion of the North American breeding range. From south to north (1) the crown patches of both males and females become more completely rufous, and (2) shaft streaks on forehead and crown feathers become more pronounced, especially in males. Male Southeastern American Kestrels differed from other males (F. s. sparverius) in that the anterior extent of dorsal barring averaged less but was more variable. The variability observed in North America appears to be part of a cline extending across the species range in the Western Hemisphere, where tropical subspecies are small and have reduced dorsal barring. Both body size and, especially in males, dorsal barring increase with increasing north and south latitude. We suggest that this geographic pattern is adaptive in terms of thermoregulation, and that differences in the sex roles may explain why males become less barred with maturity while females do not.

  11. Improved inference in Bayesian segmentation using Monte Carlo sampling: application to hippocampal subfield volumetry.

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-10-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.
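    The core idea - sample uncertain parameters with Markov chain Monte Carlo instead of fixing them, then read "error bars" off the posterior - can be sketched in a toy setting. Nothing below is the paper's segmentation model: the data are Gaussian with an unknown mean (the quantity of interest) and an unknown noise level (the nuisance parameter being marginalized).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    data = rng.normal(10.0, 2.0, 50)

    # Log-posterior (flat priors) for mean mu and log noise level log_sigma
    def log_post(mu, log_sigma):
        sigma = np.exp(log_sigma)
        return -data.size * log_sigma - 0.5 * np.sum((data - mu) ** 2) / sigma**2

    mu, ls = 0.0, 0.0
    lp = log_post(mu, ls)
    samples = []
    for i in range(20000):
        # Random-walk Metropolis proposal on both parameters jointly
        mu_p = mu + 0.3 * rng.standard_normal()
        ls_p = ls + 0.1 * rng.standard_normal()
        lp_p = log_post(mu_p, ls_p)
        if np.log(rng.random()) < lp_p - lp:      # accept/reject
            mu, ls, lp = mu_p, ls_p, lp_p
        if i > 5000:                              # discard burn-in
            samples.append(mu)

    samples = np.array(samples)
    mu_hat, mu_err = samples.mean(), samples.std()
    print(mu_hat, mu_err)   # estimate and its marginalized "error bar"
    ```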

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Adam Paul

    The authors present a measurement of the mass of the top quark. The event sample is selected from proton-antiproton collisions, at 1.96 TeV center-of-mass energy, observed with the CDF detector at Fermilab's Tevatron. They consider a 318 pb^-1 dataset collected between March 2002 and August 2004. They select events that contain one energetic lepton, large missing transverse energy, exactly four energetic jets, and at least one displaced vertex b tag. The analysis uses leading-order t$\bar{t}$ and background matrix elements along with parameterized parton showering to construct event-by-event likelihoods as a function of top quark mass. From the 63 events observed with the 318 pb^-1 dataset they extract a top quark mass of 172.0 ± 2.6(stat) ± 3.3(syst) GeV/c^2 from the joint likelihood. The mean expected statistical uncertainty is 3.2 GeV/c^2 for m_t = 178 GeV/c^2 and 3.1 GeV/c^2 for m_t = 172.5 GeV/c^2. The systematic error is dominated by the uncertainty of the jet energy scale.
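    The event-by-event likelihood technique can be caricatured as follows: each event contributes a likelihood curve in the top-quark mass, the joint likelihood is their product (a sum in -log form), and the statistical error comes from the width of the likelihood near its minimum. The Gaussian per-event resolution below is purely illustrative, not the matrix-element likelihoods of the analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    true_mass, resol = 172.0, 15.0            # GeV; per-event resolution (invented)
    events = rng.normal(true_mass, resol, 63) # 63 events, as in the abstract

    masses = np.linspace(150, 200, 501)
    # -log of the joint likelihood: sum of per-event Gaussian -log likelihoods
    nll = np.array([0.5 * np.sum((events - m) ** 2) / resol**2 for m in masses])
    best = masses[np.argmin(nll)]

    # ~1-sigma statistical error: where -log L rises by 0.5 above its minimum
    inside = masses[nll < nll.min() + 0.5]
    err = 0.5 * (inside[-1] - inside[0])
    print(best, err)   # roughly true_mass +/- resol/sqrt(63)
    ```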

  13. A novel automated rat catalepsy bar test system based on a RISC microcontroller.

    PubMed

    Alvarez-Cervera, Fernando J; Villanueva-Toledo, Jairo; Moo-Puc, Rosa E; Heredia-López, Francisco J; Alvarez-Cervera, Margarita; Pineda, Juan C; Góngora-Alfaro, José L

    2005-07-15

    Catalepsy tests performed in rodents treated with drugs that interfere with dopaminergic transmission have been widely used for the screening of drugs with therapeutic potential in the treatment of Parkinson's disease. The basic method for measuring catalepsy intensity is the "standard" bar test. We present here an easy to use microcontroller-based automatic system for recording bar test experiments. The design is simple, compact, and has a low cost. Recording intervals and total experimental time can be programmed within a wide range of values. The resulting catalepsy times are stored, and up to five simultaneous experiments can be recorded. A standard personal computer interface is included. The automated system also permits the elimination of human error associated with factors such as fatigue, distraction, and data transcription, occurring during manual recording. Furthermore, a uniform criterion for timing the cataleptic condition can be achieved. Correlation values between the results obtained with the automated system and those reported by two independent observers ranged between 0.88 and 0.99 (P<0.0001; three treatments, nine animals, 144 catalepsy time measurements).

  14. Three Axis Control of the Hubble Space Telescope Using Two Reaction Wheels and Magnetic Torquer Bars for Science Observations

    NASA Technical Reports Server (NTRS)

    Hur-Diaz, Sun; Wirzburger, John; Smith, Dan

    2008-01-01

    The Hubble Space Telescope (HST) is renowned for its superb pointing, with an absolute pointing error of less than 10 milli-arcseconds. To accomplish this, the HST relies on its complement of four reaction wheel assemblies (RWAs) for attitude control and four magnetic torquer bars (MTBs) for momentum management. As with most satellites with reaction wheel control, the fourth RWA provides for fault tolerance to maintain three-axis pointing capability should a failure occur and a wheel is lost from operations. If an additional failure is encountered, the ability to maintain three-axis pointing is jeopardized. In order to prepare for this potential situation, HST Pointing Control Subsystem (PCS) Team developed a Two Reaction Wheel Science (TRS) control mode. This mode utilizes two RWAs and four magnetic torquer bars to achieve three-axis stabilization and pointing accuracy necessary for a continued science observing program. This paper presents the design of the TRS mode and operational considerations necessary to protect the spacecraft while allowing for a substantial science program.

  15. FastSim: A Fast Simulation for the SuperB Detector

    NASA Astrophysics Data System (ADS)

    Andreassen, R.; Arnaud, N.; Brown, D. N.; Burmistrov, L.; Carlson, J.; Cheng, C.-h.; Di Simone, A.; Gaponenko, I.; Manoni, E.; Perez, A.; Rama, M.; Roberts, D.; Rotondo, M.; Simi, G.; Sokoloff, M.; Suzuki, A.; Walsh, J.

    2011-12-01

    We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using xml files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as an event output. Hadronic B meson pair events can be simulated at roughly 10 Hz.

  16. Characterization of individual stacking faults in a wurtzite GaAs nanowire by nanobeam X-ray diffraction.

    PubMed

    Davtyan, Arman; Lehmann, Sebastian; Kriegner, Dominik; Zamani, Reza R; Dick, Kimberly A; Bahrami, Danial; Al-Hassan, Ali; Leake, Steven J; Pietsch, Ullrich; Holý, Václav

    2017-09-01

    Coherent X-ray diffraction was used to measure the type, quantity and the relative distances between stacking faults along the growth direction of two individual wurtzite GaAs nanowires grown by metalorganic vapour epitaxy. The presented approach is based on the general property of the Patterson function, which is the autocorrelation of the electron density as well as the Fourier transformation of the diffracted intensity distribution of an object. Partial Patterson functions were extracted from the diffracted intensity measured along the [000\\bar{1}] direction in the vicinity of the wurtzite 00\\bar{1}\\bar{5} Bragg peak. The maxima of the Patterson function encode both the distances between the fault planes and the type of the fault planes with the sensitivity of a single atomic bilayer. The positions of the fault planes are deduced from the positions and shapes of the maxima of the Patterson function and they are in excellent agreement with the positions found with transmission electron microscopy of the same nanowire.
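    The relationship the approach relies on - the Patterson function is the Fourier transform of the diffracted intensity and equally the autocorrelation of the density - can be demonstrated in one dimension. This is a toy stacking model (regular planes plus one extra "fault" plane), not the GaAs nanowire data:

    ```python
    import numpy as np

    n = 256
    rho = np.zeros(n)                 # 1-D "electron density" of lattice planes
    rho[::8] = 1.0                    # regular stacking with period 8
    rho[100] = 1.0                    # one extra (fault) plane

    intensity = np.abs(np.fft.fft(rho)) ** 2      # what the detector measures
    patterson = np.real(np.fft.ifft(intensity))   # FT of intensity = autocorrelation

    # Patterson maxima sit at interplane distances: the regular period (lag 8)
    # dominates, while the fault plane introduces maxima at new distances
    # (here lag 4, from the fault's spacing to its neighbours).
    print(patterson[8], patterson[4], patterson[5])
    ```

    Reading fault positions off the maxima of this autocorrelation is, in simplified form, what the partial Patterson analysis in the abstract does along the growth axis.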

  17. Science 101: When Drawing Graphs from Collected Data, Why Don't You Just "Connect the Dots?"

    ERIC Educational Resources Information Center

    Robertson, William C.

    2007-01-01

    Using "error bars" on graphs is a good way to help students see that, within the inherent uncertainty of the measurements due to the instruments used for measurement, the data points do, in fact, lie along the line that represents the linear relationship. In this article, the author explains why connecting the dots on graphs of collected data is…
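    The point of the article - fit the underlying relationship rather than connect the dots - can be shown with a short least-squares example (synthetic data; the "true" law and noise level are invented):

    ```python
    import numpy as np

    # Noisy measurements of a linear law: a best-fit line summarises the data,
    # and the residual scatter plays the role of the measurement error bars.
    rng = np.random.default_rng(6)
    x = np.linspace(0, 10, 12)
    y = 3.0 * x + 2.0 + rng.normal(0, 1.0, x.size)

    slope, intercept = np.polyfit(x, y, 1)
    residual_sd = np.std(y - (slope * x + intercept))
    print(slope, intercept, residual_sd)
    ```

    Connecting the dots would reproduce the noise; the fitted line recovers the relationship to within the residual scatter.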

  18. Ionospheric Modeling: Development, Verification and Validation

    DTIC Science & Technology

    2007-08-15

    The University of Massachusetts (UMass), Lowell, has introduced a new version of their ionogram autoscaling program ARTIST, Version 5. A very...Investigation of the Reliability of the ESIR Ionogram Autoscaling Method (Expert System for Ionogram Reduction) ESIR.book.pdf Dec 06 Quality...Figures and Error Bars for Autoscaled Vertical Incidence Ionograms. Background and User Documentation for QualScan V2007.2 AFRL_QualScan.book.pdf Feb

  19. DataPlus™ - a revolutionary applications generator for DOS hand-held computers

    Treesearch

    David Dean; Linda Dean

    2000-01-01

    DataPlus allows the user to easily design data collection templates for DOS-based hand-held computers that mimic clipboard data sheets. The user designs and tests the application on the desktop PC and then transfers it to a DOS field computer. Other features include: error checking, missing data checks, and sensor input from RS-232 devices such as bar code wands,...

  20. Improving radiopharmaceutical supply chain safety by implementing bar code technology.

    PubMed

    Matanza, David; Hallouard, François; Rioufol, Catherine; Fessi, Hatem; Fraysse, Marc

    2014-11-01

    The aim of this study was to describe and evaluate an approach for improving radiopharmaceutical supply chain safety by implementing bar code technology. We first evaluated the current situation of our radiopharmaceutical supply chain and, by means of the ALARM protocol, analysed two dispensing errors that occurred in our department. Thereafter, we implemented a bar code system to secure selected key stages of the radiopharmaceutical supply chain. Finally, we evaluated the cost of this implementation, from overtime, to overheads, to additional radiation exposure to workers. An analysis of the events that occurred revealed a lack of identification of prepared or dispensed drugs. Moreover, the evaluation of the current radiopharmaceutical supply chain showed that the dispensation and injection steps needed to be further secured. The bar code system was used to reinforce product identification at three selected key stages: at usable stock entry; at preparation-dispensation; and during administration, making it possible to check conformity between the labelling of the delivered product (identity and activity) and the prescription. The extra time needed for all these steps had no impact on the number and successful conduct of examinations. The investment cost was low (2600 euros for new material and 30 euros a year for additional supplies) because of pre-existing computing equipment. With regard to radiation exposure, workers' hands received only an insignificant additional dose under the new organization, owing to the labelling and scanning of radiolabelled preparation vials. Implementation of bar code technology is now an essential part of a global securing approach towards optimum patient management.
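    The conformity check at administration - scanned product must match the prescription on identity and activity - reduces to a simple comparison. The sketch below is purely illustrative: the function name, record fields, drug identifier, and tolerance are all hypothetical, not taken from the system described.

    ```python
    # Hypothetical conformity check between a scanned vial label and the
    # prescription (field names, drug codes, and tolerance are invented).
    def conforms(scanned, prescription, activity_tolerance=0.10):
        same_drug = scanned["drug_id"] == prescription["drug_id"]
        rel_diff = abs(scanned["activity_MBq"] - prescription["activity_MBq"]) \
            / prescription["activity_MBq"]
        return same_drug and rel_diff <= activity_tolerance

    rx = {"drug_id": "99mTc-HDP", "activity_MBq": 700.0}
    ok = conforms({"drug_id": "99mTc-HDP", "activity_MBq": 680.0}, rx)
    bad = conforms({"drug_id": "99mTc-MAG3", "activity_MBq": 700.0}, rx)
    print(ok, bad)   # matching vial passes; wrong product is flagged
    ```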

  1. Dalitz plot analysis of the decay B 0 ( B ¯ 0 ) → K ± π ∓ π 0

    DOE PAGES

    Aubert, B.; Bona, M.; Karyotakis, Y.; ...

    2008-09-12

    Here, we report a Dalitz-plot analysis of the charmless hadronic decays of neutral B mesons to K^±π^∓π^0. With a sample of (231.8 ± 2.6) × 10^6 Υ(4S) → B$\bar{B}$ decays collected by the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC, we measure the magnitudes and phases of the intermediate resonant and nonresonant amplitudes for B^0 and $\bar{B}$^0 decays and determine the corresponding CP-averaged branching fractions and charge asymmetries. Furthermore, we measure the inclusive branching fraction and CP-violating charge asymmetry to be B(B^0 → K^+π^-π^0) = (35.7 +2.6/-1.5 ± 2.2) × 10^-6 and A_CP = -0.030 +0.045/-0.051 ± 0.055, where the first errors are statistical and the second systematic. We observe the decay B^0 → K*^0(892)π^0 with the branching fraction B(B^0 → K*^0(892)π^0) = (3.6 +0.7/-0.8 ± 0.4) × 10^-6. This measurement differs from zero by 5.6 standard deviations (including the systematic uncertainties). The selected sample also contains B^0 → $\bar{D}$^0π^0 decays where $\bar{D}$^0 → K^+π^-, and we measure B(B^0 → $\bar{D}$^0π^0) = (2.93 ± 0.17 ± 0.18) × 10^-4.

  2. LEARNING STRATEGY REFINEMENT REVERSES EARLY SENSORY CORTICAL MAP EXPANSION BUT NOT BEHAVIOR: SUPPORT FOR A THEORY OF DIRECTED CORTICAL SUBSTRATES OF LEARNING AND MEMORY

    PubMed Central

    Elias, Gabriel A.; Bieszczad, Kasia M.; Weinberger, Norman M.

    2015-01-01

    Primary sensory cortical fields develop highly specific associative representational plasticity, notably enlarged area of representation of reinforced signal stimuli within their topographic maps. However, overtraining subjects after they have solved an instrumental task can reduce or eliminate the expansion while the successful behavior remains. As the development of this plasticity depends on the learning strategy used to solve a task, we asked whether the loss of expansion is due to the strategy used during overtraining. Adult male rats were trained in a three-tone auditory discrimination task to bar-press to the CS+ for water reward and refrain from doing so during the CS− tones and silent intertrial intervals; errors were punished by a flashing light and time-out penalty. Groups acquired this task to a criterion within seven training sessions by relying on a strategy that was “bar-press from tone-onset-to-error signal” (“TOTE”). Three groups then received different levels of overtraining: Group ST, none; Group RT, one week; Group OT, three weeks. Post-training mapping of their primary auditory fields (A1) showed that Groups ST and RT had developed significantly expanded representational areas, specifically restricted to the frequency band of the CS+ tone. In contrast, the A1 of Group OT was no different from naïve controls. Analysis of learning strategy revealed this group had shifted strategy to a refinement of TOTE in which they self-terminated bar-presses before making an error (“iTOTE”). Across all animals, the greater the use of iTOTE, the smaller was the representation of the CS+ in A1. Thus, the loss of cortical expansion is attributable to a shift or refinement in strategy. 
    This reversal of expansion was considered in light of a novel theoretical framework (CONCERTO) highlighting four basic principles of brain function that resolve anomalous findings and explain why even a minor change in strategy would involve concomitant shifts of involved brain sites, including reversal of cortical expansion. PMID:26596700

  3. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, with missing values and contamination from land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data by comparing them with a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. 
Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
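The mechanism behind the too-narrow empirical Bayes error bars can be illustrated with a toy version of the comparison (a sketch only, not the authors' hierarchical algorithm): plugging in a point estimate of the noise scale yields a normal credible interval, while treating the scale as unknown under a Jeffreys prior propagates its uncertainty and yields a wider Student-t interval.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(10.0, 2.0, size=15)  # synthetic "sea level" anomalies

n = y.size

# Empirical Bayes flavour: the noise scale is fixed at its point estimate
# (MLE, ddof=0) and treated as known, giving a normal 95% interval.
sigma_hat = y.std()
half_emp = stats.norm.ppf(0.975) * sigma_hat / np.sqrt(n)

# Full Bayes flavour: with a Jeffreys prior the posterior for the mean is
# Student-t, i.e. uncertainty in the scale widens the interval.
s = y.std(ddof=1)
half_full = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)

print(half_emp, half_full)  # the full-Bayes half-width is always larger here
```

The ordering is deterministic: the t quantile exceeds the normal quantile and the unbiased scale estimate exceeds the MLE, so the plug-in interval is always the narrower of the two.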

  4. Learning strategy refinement reverses early sensory cortical map expansion but not behavior: Support for a theory of directed cortical substrates of learning and memory.

    PubMed

    Elias, Gabriel A; Bieszczad, Kasia M; Weinberger, Norman M

    2015-12-01

    Primary sensory cortical fields develop highly specific associative representational plasticity, notably enlarged area of representation of reinforced signal stimuli within their topographic maps. However, overtraining subjects after they have solved an instrumental task can reduce or eliminate the expansion while the successful behavior remains. As the development of this plasticity depends on the learning strategy used to solve a task, we asked whether the loss of expansion is due to the strategy used during overtraining. Adult male rats were trained in a three-tone auditory discrimination task to bar-press to the CS+ for water reward and refrain from doing so during the CS- tones and silent intertrial intervals; errors were punished by a flashing light and time-out penalty. Groups acquired this task to a criterion within seven training sessions by relying on a strategy that was "bar-press from tone-onset-to-error signal" ("TOTE"). Three groups then received different levels of overtraining: Group ST, none; Group RT, one week; Group OT, three weeks. Post-training mapping of their primary auditory fields (A1) showed that Groups ST and RT had developed significantly expanded representational areas, specifically restricted to the frequency band of the CS+ tone. In contrast, the A1 of Group OT was no different from naïve controls. Analysis of learning strategy revealed this group had shifted strategy to a refinement of TOTE in which they self-terminated bar-presses before making an error ("iTOTE"). Across all animals, the greater the use of iTOTE, the smaller was the representation of the CS+ in A1. Thus, the loss of cortical expansion is attributable to a shift or refinement in strategy. 
This reversal of expansion was considered in light of a novel theoretical framework (CONCERTO) highlighting four basic principles of brain function that resolve anomalous findings and explaining why even a minor change in strategy would involve concomitant shifts of involved brain sites, including reversal of cortical expansion. Published by Elsevier Inc.

  5. The dual function of barred plumage in birds: camouflage and communication.

    PubMed

    Gluckman, T L; Cardoso, G C

    2010-11-01

    A commonly held principle in visual ecology is that communication compromises camouflage: while visual signals are often conspicuous, camouflage provides concealment. However, some traits may have evolved for communication and camouflage simultaneously, thereby overcoming this functional compromise. Visual patterns generally provide camouflage, but it was suggested that a particular type of visual pattern – avian barred plumage – could also be a signal of individual quality. Here, we test if the evolution of sexual dimorphism in barred plumage, as well as differences between juvenile and adult plumage, indicate camouflage and/or signalling functions across the class Aves. We found a higher frequency of female- rather than male-biased sexual dimorphism in barred plumage, indicating that camouflage is its most common function. But we also found that, compared to other pigmentation patterns, barred plumage is more frequently biased towards males and its expression more frequently restricted to adulthood, suggesting that barred plumage often evolves or is maintained as a sexual communication signal. This illustrates how visual traits can accommodate the apparently incompatible functions of camouflage and communication, which has implications for our understanding of avian visual ecology and sexual ornamentation.

  6. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    NASA Astrophysics Data System (ADS)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr-1. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and are reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant-branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters, with a typical formal error of about 0.4 mas yr-1, are computed by averaging the proper motions of selected members. The inferred absolute proper motions of clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, Hernquist spheroid, and modified isothermal dark-matter halo (an axisymmetric model without a bar), and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transverse velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find that the bar substantially affects the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.
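At its core, each proper motion is the slope of position versus epoch across the multi-catalog positions. A minimal weighted least-squares sketch (the function name and interface are illustrative, not the authors' pipeline):

```python
import numpy as np

def proper_motion(epochs_yr, pos_mas, sigma_mas=None):
    """Weighted least-squares slope of position (mas) vs epoch (yr),
    i.e. the proper motion in mas/yr, plus its formal error.
    Simplified stand-in for a multi-catalog astrometric reduction."""
    t = np.asarray(epochs_yr, float)
    x = np.asarray(pos_mas, float)
    w = np.ones_like(t) if sigma_mas is None else 1.0 / np.asarray(sigma_mas, float) ** 2
    tbar = np.sum(w * t) / np.sum(w)
    xbar = np.sum(w * x) / np.sum(w)
    S = np.sum(w * (t - tbar) ** 2)
    mu = np.sum(w * (t - tbar) * (x - xbar)) / S
    # the formal error is meaningful only when the weights are 1/sigma^2
    return mu, np.sqrt(1.0 / S)
```

With positions at up to ten epochs over ~65 years, the long time baseline (large S) is what drives the formal error down to the few-tenths mas yr-1 level quoted above.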

  7. Signatures of the Galactic bar on stellar kinematics unveiled by APOGEE

    NASA Astrophysics Data System (ADS)

    Palicio, Pedro A.; Martinez-Valpuesta, Inma; Allende Prieto, Carlos; Dalla Vecchia, Claudio; Zamora, Olga; Zasowski, Gail; Fernandez-Trincado, J. G.; Masters, Karen L.; García-Hernández, D. A.; Roman-Lopes, Alexandre

    2018-07-01

    Bars are common galactic structures in the local universe that play an important role in the secular evolution of galaxies, including the Milky Way. In particular, the velocity distribution of individual stars in our galaxy is useful to shed light on stellar dynamics, and provides information complementary to that inferred from the integrated light of external galaxies. However, since a wide variety of models reproduce the distribution of velocity and the velocity dispersion observed in the Milky Way, we look for signatures of the bar on higher order moments of the line-of-sight velocity (V_{los}) distribution. We use two different numerical simulations - one that has developed a bar and one that remains nearly axisymmetric - to compare them with observations in the latest Apache Point Observatory Galactic Evolution Experiment data release (SDSS DR14). This comparison reveals three interesting structures that support the notion that the Milky Way is a barred galaxy. A high-skewness region found at positive longitudes constrains the orientation angle of the bar, and is incompatible with the orientation of the bar at ℓ = 0° proposed in previous studies. We also analyse the V_{los} distributions in three regions, and introduce the Hellinger distance to quantify the differences among them. Our results show a strong non-Gaussian distribution both in the data and in the barred model, confirming the qualitative conclusions drawn from the velocity maps. In contrast to earlier work, we conclude it is possible to infer the presence of the bar from the kurtosis distribution.
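The higher-order moments and the Hellinger distance used above have compact discrete forms; a minimal sketch for binned V_los distributions (binning and sample choices are illustrative):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions
    (0 = identical, 1 = non-overlapping support)."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def skew_kurt(v):
    """Sample skewness and excess kurtosis of a velocity sample;
    both vanish for a Gaussian, so nonzero values flag non-Gaussianity."""
    z = (np.asarray(v, float) - np.mean(v)) / np.std(v)
    return np.mean(z ** 3), np.mean(z ** 4) - 3.0
```

Comparing the Hellinger distance between observed and model V_los histograms then gives a single number per region, which is how differences among the three regions can be quantified.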

  8. Does legislation to prevent alcohol sales to drunk individuals work? Measuring the propensity for night-time sales to drunks in a UK city.

    PubMed

    Hughes, Karen; Bellis, Mark A; Leckenby, Nicola; Quigg, Zara; Hardcastle, Katherine; Sharples, Olivia; Llewellyn, David J

    2014-05-01

    By measuring alcohol retailers' propensity to illegally sell alcohol to young people who appear highly intoxicated, we examine whether UK legislation is effective at preventing health harms resulting from drunk individuals continuing to access alcohol. 73 randomly selected pubs, bars and nightclubs in a city in North West England were subjected to an alcohol purchase test by pseudo-drunk actors. Observers recorded venue characteristics to identify poorly managed and problematic (PMP) bars. 83.6% of purchase attempts resulted in a sale of alcohol to a pseudo-intoxicated actor. Alcohol sales increased with the number of PMP markers bars had, yet even in those with no markers, 66.7% of purchase attempts resulted in a sale. Bar servers often recognised signs of drunkenness in actors, but still served them. In 18% of alcohol sales, servers attempted to up-sell by suggesting actors purchase double rather than single vodkas. UK law preventing sales of alcohol to drunks is routinely broken in nightlife environments, yet prosecutions are rare. Nightlife drunkenness places enormous burdens on health and health services. Preventing alcohol sales to drunks should be a public health priority, while policy failures on issues, such as alcohol pricing, are revisited.

  9. THE HST/ACS COMA CLUSTER SURVEY. VIII. BARRED DISK GALAXIES IN THE CORE OF THE COMA CLUSTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinova, Irina; Jogee, Shardha; Weinzirl, Tim

    2012-02-20

We use high-resolution (~0.1″) F814W Advanced Camera for Surveys (ACS) images from the Hubble Space Telescope ACS Treasury survey of the Coma cluster at z ~ 0.02 to study bars in massive disk galaxies (S0s), as well as low-mass dwarf galaxies in the core of the Coma cluster, the densest environment in the nearby universe. Our study helps to constrain the evolution of bars and disks in dense environments and provides a comparison point for studies in lower density environments and at higher redshifts. Our results are: (1) we characterize the fraction and properties of bars in a sample of 32 bright (M_V ≲ -18, M* > 10^9.5 M_Sun) S0 galaxies, which dominate the population of massive disk galaxies in the Coma core. We find that the measurement of a bar fraction among S0 galaxies must be handled with special care due to the difficulty in separating unbarred S0s from ellipticals, and the potential dilution of the bar signature by light from a relatively large, bright bulge. The results depend sensitively on the method used: the bar fraction for bright S0s in the Coma core is 50% ± 11%, 65% ± 11%, and 60% ± 11% based on three methods of bar detection, namely, strict ellipse fit criteria, relaxed ellipse fit criteria, and visual classification. (2) We compare the S0 bar fraction across different environments (the Coma core, A901/902, and Virgo) adopting the critical step of using matched samples and matched methods in order to ensure robust comparisons. We find that the bar fraction among bright S0 galaxies does not show a statistically significant variation (within the error bars of ±11%) across environments which span two orders of magnitude in galaxy number density (n ~ 300-10,000 galaxies Mpc^-3) and include rich and poor clusters, such as the core of Coma, the A901/902 cluster, and Virgo. 
We speculate that the bar fraction among S0s is not significantly enhanced in rich clusters compared to low-density environments for two reasons. First, S0s in rich clusters are less prone to bar instabilities as they are dynamically heated by harassment and are gas poor as a result of ram pressure stripping and accelerated star formation. Second, high-speed encounters in rich clusters may be less effective than slow, strong encounters in inducing bars. (3) We also take advantage of the high resolution of the ACS (~50 pc) to analyze a sample of 333 faint (M_V > -18) dwarf galaxies in the Coma core. Using visual inspection of unsharp-masked images, we find only 13 galaxies with bar and/or spiral structure. An additional eight galaxies show evidence for an inclined disk. The paucity of disk structures in Coma dwarfs suggests that either disks are not common in these galaxies or that any disks present are too hot to develop instabilities.
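The ±11% error bars on the bar fractions are of the order expected from counting statistics on a sample of ~32 galaxies. A simple binomial (normal-approximation) sketch, noting that the paper's exact error prescription is not given in this summary:

```python
import math

def fraction_error(n_bar, n_total):
    """Fraction of barred galaxies and its simple binomial error,
    sqrt(f(1-f)/N). Small-sample corrections (Wilson, beta) would
    widen this slightly."""
    f = n_bar / n_total
    return f, math.sqrt(f * (1.0 - f) / n_total)

f, err = fraction_error(16, 32)  # e.g. 16 barred S0s out of 32
```

For f near 0.5 and N = 32 this gives roughly a 9% one-sigma error, so fractions of 50%, 60%, and 65% are statistically indistinguishable, consistent with point (2) above.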

  10. Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.

    PubMed

    Hamilton, Sarah J; Mueller, J L; Alsaker, M

    2017-02-01

    Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise.
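The truncation strategy described above — measured transform data inside the standard low-pass radius, prior-derived values in an extended annulus, zero beyond — can be sketched schematically (this is only the blending step, under assumed array inputs; the D-bar solver and its regularization weights are not reproduced):

```python
import numpy as np

def extend_scattering(t_data, t_prior, k_grid, R, R_prior):
    """Blend scattering-transform samples on a frequency grid:
    keep measured values where |k| < R, use prior-derived values in
    the annulus R <= |k| < R_prior, and truncate to zero elsewhere."""
    out = np.zeros_like(t_data)
    inner = k_grid < R
    annulus = (k_grid >= R) & (k_grid < R_prior)
    out[inner] = t_data[inner]
    out[annulus] = t_prior[annulus]
    return out
```

The point of the extended annulus is that high-frequency components known with high confidence (organ boundaries) survive the low-pass cut, sharpening the reconstruction without amplifying measurement noise.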

  11. SU-F-J-192: A Quick and Effective Method to Validate Patient’s Daily Setup and Geometry Changes Prior to Proton Treatment Delivery Based On Water Equivalent Thickness Projection Imaging (WETPI) for Head Neck Cancer (HNC) Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, G; Qin, A; Zhang, J

Purpose: With the implementation of Cone-Beam Computed Tomography (CBCT) in proton treatment, we introduce a quick and effective tool to verify the patient’s daily setup and geometry changes based on the Water-Equivalent-Thickness Projection Image (WETPI) from individual beam angles. Methods: A bilateral head and neck cancer (HNC) patient previously treated via VMAT was used in this study. The patient received 35 daily CBCT scans during the whole treatment, with no significant weight change. The CT numbers of the daily CBCTs were corrected by mapping the CT numbers from the simulation CT via Deformable Image Registration (DIR). An IMPT plan was generated using 4-field IMPT robust optimization (3.5% range and 3mm setup uncertainties) with beam angles of 60, 135, 300, and 225 degrees. The WETPI within the CTV through all beam directions was calculated. A 3%/3mm gamma index (GI) was used to provide a quantitative comparison between the initial sim-CT and the mapped daily CBCT. To simulate an extreme case involving human error, a couch bar was manually inserted in front of the 225-degree beam angle in one CBCT, and the WETPI was compared in this scenario. Results: The average GI passing rate for this patient across beam angles throughout the treatment course was 91.5 ± 8.6. In the cases with low passing rates, differences in shoulder and neck angle, as well as the head rest, often caused the major deviations. This indicates that the greatest challenge in treating HNC is the setup around the neck area. In the extreme case where a couch bar is accidentally inserted in the beam line, the GI passing rate drops from 95 to 52. Conclusion: WETPI and quantitative gamma analysis give clinicians, therapists, and physicists quick feedback on the patient’s setup accuracy and geometry changes. The tool could effectively help avoid some human errors. Furthermore, it could potentially serve as an initial signal to trigger plan adaptation.
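A brute-force version of the 3%/3mm gamma comparison can be sketched as follows. This is a simplified illustration, not the study's implementation: it assumes global normalization to the reference maximum and does no sub-pixel interpolation, both of which clinical tools handle more carefully.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dd=0.03, dta_mm=3.0):
    """Percent of reference pixels passing a global gamma test:
    a pixel passes if some nearby evaluated pixel is simultaneously
    close in value (dd * max(ref)) and in distance (dta_mm)."""
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = int(np.ceil(2 * dta_mm / spacing_mm))  # search window half-width
    norm = dd * ref.max()
    passed = 0
    for i in range(ny):
        for j in range(nx):
            i0, i1 = max(0, i - r), min(ny, i + r + 1)
            j0, j1 = max(0, j - r), min(nx, j + r + 1)
            dist2 = ((yy[i0:i1, j0:j1] - i) ** 2 +
                     (xx[i0:i1, j0:j1] - j) ** 2) * spacing_mm ** 2
            diff2 = (ev[i0:i1, j0:j1] - ref[i, j]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + diff2 / norm ** 2
            passed += gamma2.min() <= 1.0
    return 100.0 * passed / (ny * nx)
```

Applied to a reference WETPI and a daily WETPI on the same grid, this yields a single passing-rate number per beam angle, which is the quantity tracked over the treatment course above.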

  12. Procedural Error and Task Interruption

    DTIC Science & Technology

    2016-09-30

red for research on errors and individual differences. Results indicate predictive validity for fluid intelligence and specific forms of work... TERMS: procedural error, task interruption, individual differences, fluid intelligence, sleep deprivation... and individual differences. It generates rich data on several kinds of errors, including procedural errors in which steps are skipped or repeated

  13. WHERE THE INDIVIDUAL MEETS THE ECOLOGICAL: A STUDY OF PARENT DRINKING PATTERNS, ALCOHOL OUTLETS AND CHILD PHYSICAL ABUSE

    PubMed Central

    Freisthler, Bridget; Gruenewald, Paul J.

    2012-01-01

    Background Despite well-known associations between heavy drinking and child physical abuse, little is known about specific risks related to drinking different amounts of alcohol in different drinking venues. This study uses a context specific dose-response model to examine how drinking in various venues (e.g., at bars or parties) are related to physically abusive parenting practices while controlling for individual and psychosocial characteristics. Methods Data were collected via a telephone survey of parents in 50 cities in California resulting in 2,163 respondents who reported drinking in the past year. Child physical abuse and corporal punishment were measured using the Conflict Tactics Scale, Parent Child version. Drinking behaviors were measured using continued drinking measures. Data were analyzed using zero inflated Poisson models. Results Drinking at homes, parties or bars more frequently was related to greater frequencies of physically abusive parenting practices. The use of greater amounts of alcohol in association with drinking at bars appeared to increase risks for corporal punishment, a dose-response effect. Dose-response relationships were not found for drinking at homes or parties or drinking at bars for physical abuse nor for drinking at home and parties for corporal punishment. Conclusion Frequencies of using drinking venues, particularly bars and home or parties, are associated with greater use of abusive parenting practices. These findings suggest that a parent’s routine drinking activities place children at different risks for being physically abused. They also suggest that interventions that take into account parents’ alcohol use at drinking venues are an important avenue for secondary prevention efforts. PMID:23316780

  14. Why hard-nosed executives should care about management theory.

    PubMed

    Christensen, Clayton M; Raynor, Michael E

    2003-09-01

    Theory often gets a bum rap among managers because it's associated with the word "theoretical," which connotes "impractical." But it shouldn't. Because experience is solely about the past, solid theories are the only way managers can plan future actions with any degree of confidence. The key word here is "solid." Gravity is a solid theory. As such, it lets us predict that if we step off a cliff we will fall, without actually having to do so. But business literature is replete with theories that don't seem to work in practice or actually contradict each other. How can a manager tell a good business theory from a bad one? The first step is understanding how good theories are built. They develop in three stages: gathering data, organizing it into categories highlighting significant differences, then making generalizations explaining what causes what, under which circumstances. For instance, professor Ananth Raman and his colleagues collected data showing that bar code-scanning systems generated notoriously inaccurate inventory records. These observations led them to classify the types of errors the scanning systems produced and the types of shops in which those errors most often occurred. Recently, some of Raman's doctoral students have worked as clerks to see exactly what kinds of behavior cause the errors. From this foundation, a solid theory predicting under which circumstances bar code systems work, and don't work, is beginning to emerge. Once we forgo one-size-fits-all explanations and insist that a theory describes the circumstances under which it does and doesn't work, we can bring predictable success to the world of management.

  15. PubMed Central

    Wassef, H. H.; Fox, E.; Abbatte, E. A.; Tolédo, J. F.; Rodier, G.

    1989-01-01

    Sexually transmitted diseases (STDs) are an increasing public health problem in Djibouti. The authors have attempted to obtain basic information on the level of knowledge concerning STDs and on the sexual behaviour of highly sexually promiscuous individuals for use in the organization of future STD control programmes; the information was obtained from a population of 213 bar hostesses, 66 unlicensed prostitutes, and 115 male sufferers from STDs. The level of knowledge of these diseases was very high among the prostitutes and the bar hostesses, except that little was known about syphilis by the bar hostesses; the male sufferers were relatively ignorant concerning both syphilis and AIDS. Medical and paramedical personnel do not figure among the sources given for knowledge of STDs. On the other hand, friends play an important role in this knowledge, especially among unlicensed prostitutes. The second most frequently instanced source was radio and TV. The bar hostesses and the unlicensed prostitutes often exhibited distinct social characteristics. Neither education nor marriage appeared to prevent men from contracting STDs. The use of condoms is extremely rare among STD patients and not very common among unlicensed prostitutes. Half the bar hostesses report their frequent use. PMID:2611976

  16. Emotional intelligence and the relationship to resident performance: a multi-institutional study.

    PubMed

    Talarico, Joseph F; Varon, Albert J; Banks, Shawn E; Berger, Jeffrey S; Pivalizza, Evan G; Medina-Rivera, Glorimar; Rimal, Jyotsna; Davidson, Melissa; Dai, Feng; Qin, Li; Ball, Ryan D; Loudd, Cheryl; Schoenberg, Catherine; Wetmore, Amy L; Metro, David G

    2013-05-01

    To test the hypothesis that emotional intelligence, as measured by a BarOn Emotional Quotient Inventory (EQ-i), the 125-item version personal inventory (EQ-i:125), correlates with resident performance. Survey (personal inventory) instrument. Five U.S. academic anesthesiology residency programs. Postgraduate year (PGY) 2, 3, and 4 residents enrolled in university-based anesthesiology residency programs. Residents confidentially completed the BarOn EQ-i:125 personal inventory. The deidentified resident evaluations were sent to the principal investigator of a separate data collection study for data analysis. Data collected from the inventory were correlated with daily evaluations of the residents by residency program faculty. Results of the individual BarOn EQ-i:125 and daily faculty evaluations of the residents were compiled and analyzed. Univariate correlation analysis and multivariate canonical analysis showed that some aspects of the BarOn EQ-i:125 were significantly correlated with, and likely to be predictors of, resident performance. Emotional intelligence, as measured by the BarOn EQ-i personal inventory, has considerable promise as an independent indicator of performance as an anesthesiology resident. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ANDREWS, J.W.

    1998-12-31

One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is overconstrained. This report develops a method for so combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
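The essence of the overconstraint idea — redundant measurements shrink the error bar — is captured by the textbook inverse-variance combination. This is a simplification of the report's algorithm (which also reconciles supply and return rates jointly); the leakage numbers are made up for illustration:

```python
import math

def combine(x1, s1, x2, s2):
    """Inverse-variance weighted combination of two independent estimates
    of the same quantity. The combined error bar is always smaller than
    either input error bar."""
    w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    s = math.sqrt(1.0 / (w1 + w2))
    return x, s

# e.g. two duct-leakage tests giving 120 +/- 30 and 150 +/- 40 (cfm)
x, s = combine(120.0, 30.0, 150.0, 40.0)
```

Here the combined estimate lands between the two inputs, weighted toward the more precise one, with an error bar smaller than either — the same effect the report exploits with its two leakage tests.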

  18. Competitive interactions and resource partitioning between northern spotted owls and barred owls in western Oregon

    USGS Publications Warehouse

    Wiens, J. David; Anthony, Robert G.; Forsman, Eric D.

    2014-01-01

    The federally threatened northern spotted owl (Strix occidentalis caurina) is the focus of intensive conservation efforts that have led to much forested land being reserved as habitat for the owl and associated wildlife species throughout the Pacific Northwest of the United States. Recently, however, a relatively new threat to spotted owls has emerged in the form of an invasive competitor: the congeneric barred owl (S. varia). As barred owls have rapidly expanded their populations into the entire range of the northern spotted owl, mounting evidence indicates that they are displacing, hybridizing with, and even killing spotted owls. The range expansion by barred owls into western North America has made an already complex conservation issue even more contentious, and a lack of information on the ecological relationships between the 2 species has hampered recovery efforts for northern spotted owls. We investigated spatial relationships, habitat use, diets, survival, and reproduction of sympatric spotted owls and barred owls in western Oregon, USA, during 2007–2009. Our overall objective was to determine the potential for and possible consequences of competition for space, habitat, and food between these previously allopatric owl species. Our study included 29 spotted owls and 28 barred owls that were radio-marked in 36 neighboring territories and monitored over a 24-month period. Based on repeated surveys of both species, the number of territories occupied by pairs of barred owls in the 745-km2 study area (82) greatly outnumbered those occupied by pairs of spotted owls (15). Estimates of mean size of home ranges and core-use areas of spotted owls (1,843 ha and 305 ha, respectively) were 2–4 times larger than those of barred owls (581 ha and 188 ha, respectively). 
Individual spotted and barred owls in adjacent territories often had overlapping home ranges, but interspecific space sharing was largely restricted to broader foraging areas in the home range with minimal spatial overlap among core-use areas. We used an information-theoretic approach to rank discrete-choice models representing alternative hypotheses about the influence of forest conditions, topography, and interspecific interactions on species-specific patterns of nighttime resource selection. Spotted owls spent a disproportionate amount of time foraging on steep slopes in ravines dominated by old (>120 yr) conifer trees. Barred owls used available forest types more evenly than spotted owls, and were most strongly associated with patches of large hardwood and conifer trees that occupied relatively flat areas along streams. Spotted and barred owls differed in the relative use of old conifer forest (greater for spotted owls) and slope conditions (steeper slopes for spotted owls), but we found no evidence that the 2 species differed in their use of young, mature, and riparian-hardwood forest types. Mean overlap in proportional use of different forest types between individual spotted owls and barred owls in adjacent territories was 81% (range = 30–99%). The best model of habitat use for spotted owls indicated that the relative probability of a location being used was substantially reduced if the location was within or in close proximity to a core-use area of a barred owl. We used pellet analysis and measures of food-niche overlap to determine the potential for dietary competition between spatially associated pairs of spotted owls and barred owls. We identified 1,223 prey items from 15 territories occupied by spotted owls and 4,299 prey items from 24 territories occupied by barred owls. 
Diets of both species were dominated by nocturnal mammals, but diets of barred owls included many terrestrial, aquatic, and diurnal prey species that were rare or absent in diets of spotted owls. Northern flying squirrels (Glaucomys sabrinus), woodrats (Neotoma fuscipes, N. cinerea), and lagomorphs (Lepus americanus, Sylvilagus bachmani) were primary prey for both owl species, accounting for 81% and 49% of total dietary biomass for spotted owls and barred owls, respectively. Mean dietary overlap between pairs of spotted and barred owls in adjacent territories was moderate (42%; range = 28–70%). Barred owls displayed demographic superiority over spotted owls; annual survival probability of spotted owls from known-fate analyses (0.81, SE = 0.05) was lower than that of barred owls (0.92, SE = 0.04), and pairs of barred owls produced an average of 4.4 times more young than pairs of spotted owls over a 3-year period. We found a strong, positive relationship between seasonal (6-month) survival probabilities of both species and the proportion of old (>120 yr) conifer forest within individual home ranges, which suggested that availability of old forest was a potential limiting factor in the competitive relationship between these 2 species. The annual number of young produced by spotted owls increased linearly with increasing distance from a territory center of a pair of barred owls, and all spotted owls that attempted to nest within 1.5 km of a nest used by barred owls failed to successfully produce young. We identified strong associations between the presence of barred owls and the behavior and fitness potential of spotted owls, as shown by changes in movements, habitat use, and reproductive output of spotted owls exposed to different levels of spatial overlap with territorial barred owls. 
When viewed collectively, our results support the hypothesis that interference competition with barred owls for territorial space can constrain the availability of critical resources required for successful recruitment and reproduction of spotted owls. Availability of old forests and associated prey species appeared to be the most strongly limiting factors in the competitive relationship between these species, indicating that further loss of these conditions can lead to increases in competitive pressure. Our findings have broad implications for the conservation of spotted owls, as they suggest that spatial heterogeneity in vital rates may not arise solely because of differences among territories in the quality or abundance of forest habitat, but also because of the spatial distribution of a newly established competitor. Experimental removal of barred owls could be used to test this hypothesis and determine whether localized control of barred owl numbers is an ecologically practical and socio-politically acceptable management tool to consider in conservation strategies for spotted owls.

  19. A Comparison of Sleep and Performance of Sailors on an Operationally Deployed U.S. Navy Warship

    DTIC Science & Technology

    2013-09-01

The crew’s mission on a deployed warship is inherently dangerous. The nature of the job means navigating restricted waters, conducting underway replenishments with less than 200 feet of lateral separation from... concentration equivalent. Error bars ± s.e. (From Dawson & Reid, 1997). Figure 4. Mean psychomotor vigilance task speed (and...

  20. Metal Ion Sensor with Catalytic DNA in a Nanofluidic Intelligent Processor

    DTIC Science & Technology

    2011-12-01

attributed to decreased diffusion and less active DNAzyme complex because of pore constraints. Uncleavable Alexa546 intensity is shown in gray, cleavable fluorescein in green, and the ratio of Fl/Alexa in red. Error bars represent one standard deviation of four independent... higher concentrations inhibiting cleaved fragment release.

  1. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
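
    The recast two-bar truss problem can be sketched end-to-end. All numbers below (load, span, material, allowable stress) are illustrative assumptions, not values from the SOL paper, and a coarse grid search stands in for a formal optimizer such as CONMIN, ADS, or NPSOL:

    ```python
    import math

    # Illustrative two-bar truss: two thin-walled tubes from supports 2*B apart,
    # meeting at height H, carrying a vertical load P at the apex.
    # Design variables: tube diameter d and truss height H (thickness t fixed).
    P = 33e3           # load, N             (assumed value)
    B = 0.75           # half base span, m   (assumed value)
    t = 0.003          # wall thickness, m   (assumed value)
    E = 70e9           # Young's modulus, Pa (aluminium, assumed)
    rho = 2700.0       # density, kg/m^3
    sigma_max = 100e6  # allowable stress, Pa (assumed)

    def weight(d, H):
        L = math.hypot(B, H)   # member length
        A = math.pi * d * t    # thin-walled tube cross-sectional area
        return 2 * rho * A * L # objective: total truss mass

    def feasible(d, H):
        L = math.hypot(B, H)
        A = math.pi * d * t
        F = P * L / (2 * H)    # axial (compressive) force per member
        sigma = F / A          # member stress
        sigma_buckle = math.pi**2 * E * (d**2 + t**2) / (8 * L**2)  # Euler buckling stress
        return sigma <= sigma_max and sigma <= sigma_buckle

    # Coarse grid search standing in for a formal optimizer.
    best = None
    for i in range(1, 200):
        for j in range(1, 200):
            d, H = 0.01 + 0.001 * i, 0.1 + 0.01 * j
            if feasible(d, H) and (best is None or weight(d, H) < best[0]):
                best = (weight(d, H), d, H)

    print(best)  # (minimum weight in kg, diameter, height)
    ```
    
    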

  2. High-throughput sequencing reveals the core gut microbiome of Bar-headed goose (Anser indicus) in different wintering areas in Tibet.

    PubMed

    Wang, Wen; Cao, Jian; Yang, Fang; Wang, Xuelian; Zheng, Sisi; Sharshov, Kirill; Li, Laixing

    2016-04-01

Elucidating the spatial dynamics and core gut microbiome of the wild bar-headed goose is of crucial importance for probiotics development that may meet the demands of bar-headed goose artificial breeding industries and accelerate the domestication of this species. However, the core microbial communities of wild bar-headed geese remain totally unknown. Here, for the first time, we present a comprehensive survey of bar-headed goose gut microbial communities by Illumina high-throughput sequencing technology using nine individuals from three distinct wintering locations in Tibet. A total of 236,676 sequences were analyzed, and 607 OTUs were identified. We show that the gut microbial communities of bar-headed geese have representatives of 14 phyla and are dominated by Firmicutes, Proteobacteria, Actinobacteria, and Bacteroidetes. The additive abundance of these four most dominant phyla was above 96% across all the samples. At the genus level, the sequences represented 150 genera. A set of 19 genera were present in all samples and were considered the core gut microbiome. The seven most abundant core genera were distributed across the four dominant phyla: four (Lactococcus, Bacillus, Solibacillus, and Streptococcus) belonged to Firmicutes, while the other three phyla each contributed one genus, namely Proteobacteria (Pseudomonas), Actinobacteria (Arthrobacter), and Bacteroidetes (Bacteroides). This broad survey represents the most in-depth assessment, to date, of the gut microbes associated with bar-headed geese. These data create a baseline for future bar-headed goose microbiology research, and make an original contribution to probiotics development for bar-headed goose artificial breeding industries. © 2015 The Authors. MicrobiologyOpen published by John Wiley & Sons Ltd.
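
    The "core gut microbiome" criterion above (genera present in every sample) is a set intersection over per-sample genus tables. A minimal sketch; the genus names echo the abstract but the samples and abundance values are invented:

    ```python
    # Toy per-sample genus abundance tables (values are invented percentages).
    samples = {
        "tibet_site1": {"Lactococcus": 30, "Bacillus": 12, "Pseudomonas": 8, "Arthrobacter": 5},
        "tibet_site2": {"Lactococcus": 25, "Bacillus": 9,  "Pseudomonas": 11, "Arthrobacter": 3},
        "tibet_site3": {"Lactococcus": 28, "Bacillus": 14, "Pseudomonas": 6,  "Streptococcus": 4},
    }

    # Core genera: present in every sample, regardless of abundance.
    core = set.intersection(*(set(genera) for genera in samples.values()))
    print(sorted(core))  # -> ['Bacillus', 'Lactococcus', 'Pseudomonas']
    ```
    
    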

  3. Effects of salt secretion on psychrometric determinations of water potential of cotton leaves.

    PubMed

    Klepper, B; Barrs, H D

    1968-07-01

    Thermocouple psychrometers gave lower estimates of water potential of cotton leaves than did a pressure chamber. This difference was considerable for turgid leaves, but progressively decreased for leaves with lower water potentials and fell to zero at water potentials below about -10 bars. The conductivity of washings from cotton leaves removed from the psychrometric equilibration chambers was related to the magnitude of this discrepancy in water potential, indicating that the discrepancy is due to salts on the leaf surface which make the psychrometric estimates too low. This error, which may be as great as 400 to 500%, cannot be eliminated by washing the leaves because salts may be secreted during the equilibration period. Therefore, a thermocouple psychrometer is not suitable for measuring the water potential of cotton leaves when it is above about -10 bars.

  4. 5 CFR 300.704 - Considering individuals for appointment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Considering individuals for appointment. 300.704 Section 300.704 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS EMPLOYMENT (GENERAL) Statutory Bar to Appointment of Persons Who Fail To Register Under Selective...

  5. Geographic variation in morphology of Alaska-breeding Bar-tailed Godwits (Limosa lapponica) is not maintained on their nonbreeding grounds in New Zealand

    USGS Publications Warehouse

    Conklin, Jesse R.; Battley, Phil F.; Potter, Murray A.; Ruthrauff, Daniel R.

    2011-01-01

    Among scolopacid shorebirds, Bar-tailed Godwits (Limosa lapponica) have unusually high intra- and intersexual differences in size and breeding plumage. Despite historical evidence for population structure among Alaska-breeding Bar-tailed Godwits (L. l. baueri), no thorough analysis, or comparison with the population's nonbreeding distribution, has been undertaken. We used live captures, field photography, museum specimens, and individuals tracked from New Zealand to describe geographic variation in size and plumage within the Alaska breeding range. We found a north-south cline in body size in Alaska, in which the smallest individuals of each sex occurred at the highest latitudes. Extent of male breeding plumage (proportion of nonbreeding contour feathers replaced) also increased with latitude, but female breeding plumage was most extensive at mid-latitudes. This population structure was not maintained in the nonbreeding season: morphometrics of captured birds and timing of migratory departures indicated that individuals from a wide range of breeding latitudes occur in each region and site in New Zealand. Links among morphology, phenology, and breeding location suggest the possibility of distinct Alaska breeding populations that mix freely in the nonbreeding season, and also imply that the strongest selection for size occurs in the breeding season.

  6. Sediment Transport Variability in Global Rivers: Implications for the Interpretation of Paleoclimate Signals

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E. W.

    2001-12-01

A new numerical approach (HydroTrend, v.2) allows the daily flux of sediment to be estimated for any river, whether gauged or not. The model can be driven by actual climate measurements (precipitation, temperature) or with statistical estimates of climate (modeled climate, remotely sensed climate). In both cases, the character (e.g. soil depth, relief, vegetation index) of the drainage terrain is needed to complete the model domain. The HydroTrend approach allows us to examine the effects of climate on the supply of sediment to continental margins, and the nature of supply variability. A new relationship is defined as Qs = Ψ · Qs-bar · (Q/Q-bar)^(c±σ), where Qs-bar is the long-term sediment load, Q-bar is the long-term discharge, c and σ are the mean and standard deviation of the inter-annual variability of the rating coefficient, and Ψ captures the measurement errors associated with Q and Qs and the annual transients affecting the supply of sediment, including sediment and water source and river (flood-wave) dynamics; the spread of Ψ is characterized by a parameter s. Smaller-discharge rivers have larger values of s, and s asymptotes to a small but consistent value for larger-discharge rivers. The coefficient c is directly proportional to the long-term suspended load (Qs-bar) and basin relief (R), and inversely proportional to mean annual temperature (T); σ is directly proportional to the mean annual discharge. The long-term sediment load is given by Qs-bar = a R^1.5 A^0.5 T_T, where a is a global constant, A is basin area, and T_T is a function of mean annual temperature. This new approach provides estimates of sediment flux at the dynamic (daily) level and provides us a means to experiment on the sensitivity of marine sedimentary deposits in recording a paleoclimate signal. In addition, the method provides spatial estimates of the flux of sediment to the coastal zone at the global scale.
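
    The rating relation above reduces to a one-line function. A minimal sketch, assuming Ψ = 1 (no lumped error factor) and a fixed rating exponent C in place of one drawn each year from a normal distribution with mean c and standard deviation σ:

    ```python
    def daily_sediment_load(Q, Q_mean, Qs_mean, C, psi=1.0):
        """Sediment rating relation Qs = psi * Qs_mean * (Q / Q_mean)**C.

        Q        -- daily discharge
        Q_mean   -- long-term mean discharge (Q-bar)
        Qs_mean  -- long-term mean sediment load (Qs-bar)
        C        -- rating exponent; in HydroTrend it varies year to year
        psi      -- lumped error/transient factor (1.0 = none, assumed here)
        """
        return psi * Qs_mean * (Q / Q_mean) ** C

    # At mean discharge the relation returns the long-term mean load:
    print(daily_sediment_load(100.0, 100.0, 2.5, C=1.4))  # -> 2.5
    # A 3x flood with C = 1.4 carries roughly 4.7x the mean sediment load:
    print(daily_sediment_load(300.0, 100.0, 2.5, C=1.4))
    ```
    
    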

  7. A new photometric model of the Galactic bar using red clump giants

    NASA Astrophysics Data System (ADS)

    Cao, Liang; Mao, Shude; Nataf, David; Rattenbury, Nicholas J.; Gould, Andrew

    2013-09-01

We present a study of the luminosity density distribution of the Galactic bar using number counts of red clump (RC) giants from the Optical Gravitational Lensing Experiment (OGLE) III survey. The data were recently published by Nataf et al. for 9019 fields towards the bulge and comprise 2.94 × 10^6 RC stars over a viewing area of 90.25 deg^2. The data include the number counts, mean distance modulus (μ), dispersion in μ and full error matrix, from which we fit the data with several triaxial parametric models. We use the Markov Chain Monte Carlo method to explore the parameter space and find that the best-fitting model is the E3 model, with a distance to the Galactic Centre of 8.13 kpc and axis scalelengths in the ratio x0 : y0 : z0 ≈ 1.00 : 0.43 : 0.40, where x0 and y0 are the semimajor and semiminor bar scalelengths in the Galactic plane and z0 is the vertical scalelength (close to prolate). The scalelength of the stellar density profile along the bar's major axis is ~0.67 kpc, and the bar is inclined at 29.4°, slightly larger than the value obtained from a similar study based on OGLE-II data. The number of estimated RC stars within the field of view is 2.78 × 10^6, systematically lower than the observed value. We subtract the smooth parametric model from the observed counts and find that the residuals are consistent with the presence of an X-shaped structure in the Galactic Centre; the excess over the smooth model is ~5.8 per cent of the estimated mass content. We estimate the total mass of the bar to be ~1.8 × 10^10 M⊙. Our results can be used as a key ingredient to construct new density models of the Milky Way and will have implications for the predicted optical depth to gravitational microlensing and the patterns of hydrodynamical gas flow in the Milky Way.
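
    The fitting machinery described above (Markov Chain Monte Carlo over a parametric density model) can be illustrated generically. The minimal Metropolis sampler below fits the scalelength of a made-up 1-D exponential star-count profile; the data, unit errors, and proposal width are all invented stand-ins for the actual OGLE-III counts and full error matrix:

    ```python
    import math, random

    random.seed(1)

    # Synthetic "star counts" from an exponential profile with scalelength 0.67 kpc.
    true_scale = 0.67
    xs = [0.1 * k for k in range(1, 30)]
    counts = [1000.0 * math.exp(-x / true_scale) for x in xs]

    def log_likelihood(scale):
        # Gaussian log-likelihood of the counts for a trial scalelength
        # (fixed unit errors; a real fit would use the full error matrix).
        if scale <= 0:
            return float("-inf")
        return -0.5 * sum((c - 1000.0 * math.exp(-x / scale)) ** 2
                          for x, c in zip(xs, counts))

    # Metropolis sampling of the scalelength posterior (flat prior).
    scale, ll = 1.0, log_likelihood(1.0)
    chain = []
    for _ in range(5000):
        prop = scale + random.gauss(0, 0.05)          # random-walk proposal
        ll_prop = log_likelihood(prop)
        if ll_prop - ll > math.log(random.random()):  # Metropolis accept rule
            scale, ll = prop, ll_prop
        chain.append(scale)

    posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
    print(round(posterior_mean, 2))  # should sit near the true 0.67 kpc
    ```
    
    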

  8. Meal Replacement Mass Reduction Integration and Acceptability Study

    NASA Technical Reports Server (NTRS)

    Sirmons, T.; Douglas, G.; Schneiderman, J.; Slack, K.; Whitmire, A.; Williams, T.; Young, M.

    2018-01-01

    The Orion Multi-Purpose Crew Vehicle (MPCV) and future exploration missions are mass constrained; therefore we are challenged to reduce the mass of the food system by 10% while maintaining safety, nutrition, and acceptability to support crew health and performance for exploration missions. Meal replacement with nutritionally balanced, 700-900 calorie bars was identified as a method to reduce mass. However, commercially available products do not meet the requirements for a meal replacement in the spaceflight food system. The purpose of this task was to develop a variety of nutritionally balanced, high quality, breakfast replacement bars, which enable a 10% food mass savings. To date, six nutrient-dense meal replacement bars have been developed, all of which meet spaceflight nutritional, microbiological, sensory, and shelf-life requirements. The four highest scoring bars were evaluated based on final product sensory acceptability, nutritional stability, qualitative stability of analytical measurements (i.e. color and texture), and microbiological compliance over a period of two years to predict long-term acceptability. All bars maintained overall acceptability throughout the first year of storage, despite minor changes in color and texture. However, added vitamins C, B1, and B9 degraded rapidly in fortified samples of Banana Nut bars, indicating the need for additional development. In addition to shelf-life testing, four bar varieties were evaluated in the Human Exploration Research Analog (HERA), campaign 3, to assess the frequency with which actual meal replacement options may be implemented, based on impact to satiety and psychosocial measurements. Crewmembers (n=16) were asked to consume meal replacement bars every day for the first fifteen days of the mission and every three days for the second half of the mission. Daily surveys assessed the crew's responses to bar acceptability, mood, food fatigue and perceived stress. 
Preliminary results indicate that the majority of crew members were noncompliant with daily meal replacement during the first half of the mission. Several crew members chose to forgo the meal, resulting in caloric deficits that were higher on skipped-bar days. Body mass loss was significant throughout the mission. Although there was no significant difference in body mass loss overall between the first half and second half of the mission, a higher number of individual crew members lost more body mass in the first half of the mission. Analysis is still ongoing, but current trends suggest that daily involuntary meal replacement can lead to greater individual impacts on body mass and psychological factors, while meal replacement on a more limited basis may be acceptable to most crew for missions up to 30 days. These data should be considered in Orion mass trades with health and human performance.

  9. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals.

    PubMed

    Uy, Raymonde Charles Y; Kury, Fabricio P; Fontelo, Paul A

    2015-01-01

The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometric and pharmacy automation technologies have been shown in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating an optimistic growth in the adoption of these patient safety solutions.

  10. User-centered design of quality of life reports for clinical care of patients with prostate cancer

    PubMed Central

    Izard, Jason; Hartzler, Andrea; Avery, Daniel I.; Shih, Cheryl; Dalkin, Bruce L.; Gore, John L.

    2014-01-01

Background: Primary treatment of localized prostate cancer can result in bothersome urinary, sexual, and bowel symptoms. Yet clinical application of health-related quality-of-life (HRQOL) questionnaires is rare. We employed user-centered design to develop graphic dashboards of questionnaire responses from patients with prostate cancer to facilitate clinical integration of HRQOL measurement. Methods: We interviewed 50 prostate cancer patients and 50 providers, assessed literacy with validated instruments (Rapid Estimate of Adult Literacy in Medicine short form, Subjective Numeracy Scale, Graphical Literacy Scale), and presented participants with prototype dashboards that display prostate cancer-specific HRQOL with graphic elements derived from patient focus groups. We assessed dashboard comprehension and preferences in table, bar, line, and pictograph formats with patient scores contextualized with HRQOL scores of similar patients serving as a comparison group. Results: Health literacy (mean score, 6.8/7) and numeracy (mean score, 4.5/6) of patient participants was high. Patients favored the bar chart (mean rank, 1.8 [P = .12] vs line graph [P < .01] vs table and pictograph); providers demonstrated similar preference for table, bar, and line formats (ranked first by 30%, 34%, and 34% of providers, respectively). Providers expressed unsolicited concerns over presentation of comparison group scores (n = 19; 38%) and impact on clinic efficiency (n = 16; 32%). Conclusion: Based on preferences of prostate cancer patients and providers, we developed the design concept of a dynamic HRQOL dashboard that permits a base patient-centered report in bar chart format that can be toggled to other formats and include error bars that frame comparison group scores. Inclusion of lower literacy patients may yield different preferences. PMID:24787105
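
    The "error bars that frame comparison group scores" can be computed as a group mean with a 95% confidence half-width. A minimal sketch with invented HRQOL scores for the comparison group (the real dashboard's scoring and grouping are not specified here):

    ```python
    import math

    def mean_with_error_bar(scores, z=1.96):
        """Return (mean, half_width), where half_width = z * standard error,
        for drawing an error bar that frames the comparison group's score."""
        n = len(scores)
        mean = sum(scores) / n
        var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
        half_width = z * math.sqrt(var / n)                   # z * standard error
        return mean, half_width

    # Invented urinary-domain HRQOL scores (0-100) for "patients like you":
    comparison = [72, 80, 65, 90, 77, 84, 69, 75, 88, 70]
    m, hw = mean_with_error_bar(comparison)
    print(f"{m:.1f} +/- {hw:.1f}")  # the bar chart draws the bar at m with this error bar
    ```
    
    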

  12. Post-error response inhibition in high math-anxious individuals: Evidence from a multi-digit addition task.

    PubMed

    Núñez-Peña, M Isabel; Tubau, Elisabet; Suárez-Pellicioni, Macarena

    2017-06-01

    The aim of the study was to investigate how high math-anxious (HMA) individuals react to errors in an arithmetic task. Twenty HMA and 19 low math-anxious (LMA) individuals were presented with a multi-digit addition verification task and were given response feedback. Post-error adjustment measures (response time and accuracy) were analyzed in order to study differences between groups when faced with errors in an arithmetical task. Results showed that both HMA and LMA individuals were slower to respond following an error than following a correct answer. However, post-error accuracy effects emerged only for the HMA group, showing that they were also less accurate after having committed an error than after giving the right answer. Importantly, these differences were observed only when individuals needed to repeat the same response given in the previous trial. These results suggest that, for HMA individuals, errors caused reactive inhibition of the erroneous response, facilitating performance if the next problem required the alternative response but hampering it if the response was the same. This stronger reaction to errors could be a factor contributing to the difficulties that HMA individuals experience in learning math and doing math tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
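
    Post-error adjustment measures of the kind analyzed above (response time and accuracy following an error vs following a correct answer) can be computed directly from a trial log. A minimal sketch over an invented trial sequence:

    ```python
    # Each trial: (correct?, response_time_ms). Invented data for illustration.
    trials = [
        (True, 820), (True, 790), (False, 760), (True, 940),
        (True, 800), (False, 770), (False, 980), (True, 930),
        (True, 810), (True, 795),
    ]

    def post_event_stats(trials, after_error):
        """Mean RT and accuracy on trials that follow an error (or a correct trial)."""
        following = [cur for prev, cur in zip(trials, trials[1:])
                     if prev[0] == (not after_error)]
        mean_rt = sum(rt for _, rt in following) / len(following)
        accuracy = sum(ok for ok, _ in following) / len(following)
        return mean_rt, accuracy

    rt_post_err, acc_post_err = post_event_stats(trials, after_error=True)
    rt_post_corr, acc_post_corr = post_event_stats(trials, after_error=False)
    print(rt_post_err - rt_post_corr)  # a positive difference = post-error slowing
    ```
    
    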

  13. Association between workarounds and medication administration errors in bar-code-assisted medication administration in hospitals.

    PubMed

    van der Veen, Willem; van den Bemt, Patricia M L A; Wouters, Hans; Bates, David W; Twisk, Jos W R; de Gier, Johan J; Taxis, Katja; Duyvendak, Michiel; Luttikhuis, Karen Oude; Ros, Johannes J W; Vasbinder, Erwin C; Atrafi, Maryam; Brasse, Bjorn; Mangelaars, Iris

    2018-04-01

To study the association of workarounds with medication administration errors using barcode-assisted medication administration (BCMA), and to determine the frequency and types of workarounds and medication administration errors. A prospective observational study in Dutch hospitals using BCMA to administer medication. Direct observation was used to collect data. The primary outcome measure was the proportion of medication administrations with one or more medication administration errors. The secondary outcome was the frequency and types of workarounds and medication administration errors. Univariate and multivariate multilevel logistic regression analyses were used to assess the association between workarounds and medication administration errors. Descriptive statistics were used for the secondary outcomes. We included 5793 medication administrations for 1230 inpatients. Workarounds were associated with medication administration errors (adjusted odds ratio 3.06 [95% CI: 2.49-3.78]). Most commonly, procedural workarounds were observed, such as not scanning at all (36%), not scanning patients because they did not wear a wristband (28%), incorrect medication scanning, multiple medication scanning, and ignoring alert signals (11%). Common types of medication administration errors were omissions (78%), administration of non-ordered drugs (8.0%), and wrong doses given (6.0%). Workarounds are associated with medication administration errors in hospitals using BCMA. These data suggest that BCMA needs more post-implementation evaluation if it is to achieve the intended benefits for medication safety. In hospitals using barcode-assisted medication administration, workarounds occurred in 66% of medication administrations and were associated with large numbers of medication administration errors.
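
    The study's adjusted odds ratio comes from multilevel regression, but the crude odds ratio behind this kind of association can be sketched from a 2x2 table. The counts below are invented, not the study's data:

    ```python
    import math

    # Invented 2x2 table: administrations with/without a workaround,
    # cross-classified by whether a medication administration error occurred.
    err_with_wa, ok_with_wa = 300, 3500   # workaround present
    err_no_wa, ok_no_wa = 60, 1900        # no workaround

    odds_ratio = (err_with_wa * ok_no_wa) / (ok_with_wa * err_no_wa)

    # 95% CI on the log-odds scale (Woolf method).
    se_log_or = math.sqrt(1/err_with_wa + 1/ok_with_wa + 1/err_no_wa + 1/ok_no_wa)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f} [95% CI: {lo:.2f}-{hi:.2f}]")
    ```
    
    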

  14. A method for velocity signal reconstruction of AFDISAR/PDV based on crazy-climber algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Ying-cheng; Guo, Xian; Xing, Yuan-ding; Chen, Rong; Li, Yan-jie; Bai, Ting

    2017-10-01

The resolution of the continuous wavelet transform (CWT) varies with frequency. Exploiting this property, the time-frequency representation of the coherent signal obtained by the All Fiber Displacement Interferometer System for Any Reflector (AFDISAR) is extracted. The crazy-climber algorithm is used to extract the wavelet ridge, from which the velocity history of the measured object is obtained. A numerical simulation was carried out; the reconstructed signal is fully consistent with the original signal, which verifies the accuracy of the algorithm. Vibration of a loudspeaker and of the free end of a Hopkinson incident bar under impact loading were measured by AFDISAR, and the measured coherent signals were processed to reconstruct the respective velocity signals. Compared with the theoretical calculation, the error in the particle-vibration arrival-time difference at the free end of the Hopkinson incident bar is 2 μs. The results indicate that the algorithm is highly accurate and adapts well to signals with different time-frequency features. It overcomes the limitation of manually adjusting the time window to the signal variation when using the STFT, and is suitable for extracting signals measured by AFDISAR.
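
    Wavelet-ridge extraction amounts to tracing the highest-energy path through a time-frequency map. The crazy-climber algorithm does this with stochastic "climbers"; the sketch below uses a simpler greedy tracker with a continuity constraint to convey the idea, over a tiny hand-made magnitude matrix rather than AFDISAR data:

    ```python
    # Synthetic |CWT| magnitude map: rows = frequency bins, columns = time steps.
    # The "ridge" drifts upward one bin every couple of columns.
    tf_map = [
        [0.1, 0.1, 0.2, 0.1, 0.1, 0.1],
        [0.9, 0.8, 0.3, 0.2, 0.1, 0.1],
        [0.2, 0.3, 0.9, 0.8, 0.3, 0.2],
        [0.1, 0.1, 0.2, 0.3, 0.9, 0.8],
    ]

    def trace_ridge(tf, max_jump=1):
        """Greedily follow the locally strongest frequency bin, allowing the
        ridge to move at most max_jump bins between adjacent time steps."""
        n_freq, n_time = len(tf), len(tf[0])
        # Start at the strongest bin of the first column.
        ridge = [max(range(n_freq), key=lambda r: tf[r][0])]
        for t in range(1, n_time):
            prev = ridge[-1]
            candidates = range(max(0, prev - max_jump),
                               min(n_freq, prev + max_jump + 1))
            ridge.append(max(candidates, key=lambda r: tf[r][t]))
        return ridge  # one frequency-bin index per time step

    print(trace_ridge(tf_map))  # -> [1, 1, 2, 2, 3, 3]
    ```

    A ridge like this, mapped from bin index to instantaneous frequency, is what gets converted into the velocity history.
    
    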

  15. Daily assessment of Alcohol Consumption and Condom Use with Known and Casual Partners among Young Female Bar Drinkers

    PubMed Central

    Parks, Kathleen A.; Hsieh, Ya-Ping; Collins, R. Lorraine; Levonyan-Radloff, Kristina

    2011-01-01

The relationship between alcohol and condom use has been studied extensively over the past several decades. Reviews of event-level studies suggest that alcohol's effects on risky sexual behavior are not due to simple main effects, but appear to depend on individual characteristics and situational or contextual factors. In the current study, we assessed the temporal relationship between daily alcohol consumption and unprotected sexual behavior, taking into account sexual partner type (casual or known) as well as individual and situational characteristics among a group of young female bar drinkers. Greater alcohol consumption was not associated with unprotected sex. However, greater alcohol consumption was associated with an increase in sex (protected and unprotected) with casual partners. Having less HIV knowledge was associated with increased unprotected sex, while greater frequency of drinking in bars was associated with increased protected sex with casual partners. These findings are discussed in terms of possible prevention programs that increase HIV knowledge and decrease alcohol consumption to reduce young women's risky sexual behavior. PMID:20949313

  16. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    PubMed

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist, and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports, which may represent only a small proportion of the medication errors that actually take place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  17. Testing the carotenoid trade-off hypothesis in the polychromatic Midas cichlid, Amphilophus citrinellus.

    PubMed

    Lin, Susan M; Nieves-Puigdoller, Katherine; Brown, Alexandria C; McGraw, Kevin J; Clotfelter, Ethan D

    2010-01-01

    Many animals use carotenoid pigments derived from their diet for coloration and immunity. The carotenoid trade-off hypothesis predicts that, under conditions of carotenoid scarcity, individuals may be forced to allocate limited carotenoids to either coloration or immunity. In polychromatic species, the pattern of allocation may differ among individuals. We tested the carotenoid trade-off hypothesis in the Midas cichlid, Amphilophus citrinellus, a species with two ontogenetic color morphs, barred and gold, the latter of which is the result of carotenoid expression. We performed a diet-supplementation experiment in which cichlids of both color morphs were assigned to one of two diet treatments that differed only in carotenoid content (beta-carotene, lutein, and zeaxanthin). We measured integument color using spectrometry, quantified carotenoid concentrations in tissue and plasma, and assessed innate immunity using lysozyme activity and alternative complement pathway assays. In both color morphs, dietary carotenoid supplementation elevated plasma carotenoid circulation but failed to affect skin coloration. Consistent with observable differences in integument coloration, we found that gold fish sequestered more carotenoids in skin tissue than barred fish, but barred fish had higher concentrations of carotenoids in plasma than gold fish. Neither measure of innate immunity differed between gold and barred fish, or as a function of dietary carotenoid supplementation. Lysozyme activity, but not complement activity, was strongly affected by body condition. Our data show that a diet low in carotenoids is sufficient to maintain both coloration and innate immunity in Midas cichlids. Our data also suggest that the developmental transition from the barred to gold morph is not accompanied by a decrease in innate immunity in this species.

  18. Evidence for specificity of the impact of punishment on error-related brain activity in high versus low trait anxious individuals.

    PubMed

    Meyer, Alexandria; Gawlowska, Magda

    2017-10-01

    A previous study suggests that when participants were punished with a loud noise after committing errors, the error-related negativity (ERN) was enhanced in high trait anxious individuals. The current study sought to extend these findings by examining the ERN in conditions when punishment was related and unrelated to error commission as a function of individual differences in trait anxiety symptoms; further, the current study utilized an electric shock as an aversive unconditioned stimulus. Results confirmed that the ERN was increased when errors were punished among high trait anxious individuals compared to low anxious individuals; this effect was not observed when punishment was unrelated to errors. Findings suggest that the threat-value of errors may underlie the association between certain anxious traits and punishment-related increases in the ERN. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Bioavailability and Methylation Potential of Mercury Sulfides in Sediments

    DTIC Science & Technology

    2014-08-01

    such as size separation (i.e. filtration with a particular pore size or molecular weight cutoff) or metal-ligand complexation from experimentally ...and 6 nM HgS microparticles. The error bars represent ±1 s.d. for duplicate samples. Results of Hg fractionation by filtration and (ultra... results from filtration (Figures S2). These differences in the data indicated that the nHgS dissolution rate could be overestimated by the filtration data

  20. Image decomposition of barred galaxies and AGN hosts

    NASA Astrophysics Data System (ADS)

    Gadotti, Dimitri Alexei

    2008-02-01

    I present the results of multicomponent decomposition of V and R broad-band images of a sample of 17 nearby galaxies, most of them hosting bars and active galactic nuclei (AGN). I use BUDDA v2.1 to produce the fits, allowing the inclusion of bars and AGN in the models. A comparison with previous results from the literature shows a fairly good agreement. It is found that the axial ratio of bars, as measured from ellipse fits, can be severely underestimated if the galaxy axisymmetric component is relatively luminous. Thus, reliable bar axial ratios can only be determined by taking into account the contributions of bulge and disc to the light distribution in the galaxy image. Through a number of tests, I show that neglecting bars when modelling barred galaxies can result in an overestimation of the bulge-to-total luminosity ratio by a factor of 2. Similar effects result when bright, type 1 AGN are not considered in the models. By artificially redshifting the images, I show that the structural parameters of more distant galaxies can in general be reliably retrieved through image fitting, at least up to the point where the physical spatial resolution is ~1.5 kpc. This corresponds, for instance, to images of galaxies at z = 0.05 with a seeing full width at half-maximum (FWHM) of 1.5 arcsec, typical of the Sloan Digital Sky Survey (SDSS). In addition, such a resolution is also similar to what can be achieved with the Hubble Space Telescope (HST), and ground-based telescopes with adaptive optics, at z ~ 1-2. Thus, these results also concern deeper studies such as COSMOS and SINS. This exercise shows that disc parameters are particularly robust, but bulge parameters are prone to errors if the bulge effective radius is small compared to the seeing radius, and might suffer from systematic effects. For instance, the bulge-to-total luminosity ratio is systematically overestimated, on average, by 0.05 (i.e. 5 per cent of the galaxy total luminosity).
In this low-resolution regime, the effects of ignoring bars are still present, but AGN light is smeared out. I briefly discuss the consequences of these results for studies of the structural properties of galaxies, in particular the stellar mass budget in the local Universe. With reasonable assumptions, it is possible to show that the stellar content in bars can be similar to that in classical bulges and elliptical galaxies. Finally, I revisit the cases of NGC 4608 and NGC 5701 and show that the lack of stars in the disc region inside the bar radius is significant. Accordingly, the best-fitting model for the former uses a Freeman type II disc.
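The bulge-to-total ratios discussed in this entry come from integrating the fitted components; as a minimal illustration (not BUDDA itself; the profile parameters below are hypothetical), the B/T of a bulge-plus-disc model can be sketched in Python:

```python
import numpy as np

def total_light(profile, rmax, n=20000):
    """Flux of an axisymmetric profile I(r): numerical integral of 2*pi*r*I(r) dr."""
    r = np.linspace(0.0, rmax, n)
    f = 2.0 * np.pi * r * profile(r)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))  # trapezoid rule

def bulge_to_total(bulge, disc, rmax):
    """B/T luminosity ratio of a two-component (bulge + disc) model."""
    lb, ld = total_light(bulge, rmax), total_light(disc, rmax)
    return lb / (lb + ld)

# Hypothetical components: exponential disc and an n=4 Sersic bulge.
disc = lambda r: 1.0 * np.exp(-r / 1.0)                             # I0=1, h=1
bulge = lambda r: 0.5 * np.exp(-7.669 * ((r / 0.3) ** 0.25 - 1.0))  # Ie=0.5, re=0.3
bt = bulge_to_total(bulge, disc, rmax=30.0)
```

Leaving a bar component out of such a model funnels the bar's light into the bulge term, which is how the factor-of-2 overestimate of B/T quoted above arises.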

  1. Effects of Salt Secretion on Psychrometric Determinations of Water Potential of Cotton Leaves

    PubMed Central

    Klepper, Betty; Barrs, H. D.

    1968-01-01

    Thermocouple psychrometers gave lower estimates of water potential of cotton leaves than did a pressure chamber. This difference was considerable for turgid leaves, but progressively decreased for leaves with lower water potentials and fell to zero at water potentials below about −10 bars. The conductivity of washings from cotton leaves removed from the psychrometric equilibration chambers was related to the magnitude of this discrepancy in water potential, indicating that the discrepancy is due to salts on the leaf surface which make the psychrometric estimates too low. This error, which may be as great as 400 to 500%, cannot be eliminated by washing the leaves because salts may be secreted during the equilibration period. Therefore, a thermocouple psychrometer is not suitable for measuring the water potential of cotton leaves when it is above about −10 bars. PMID:16656895

  2. Relationship between visual binding, reentry and awareness.

    PubMed

    Koivisto, Mika; Silvanto, Juha

    2011-12-01

    Visual feature binding has been suggested to depend on reentrant processing. We addressed the relationship between binding, reentry, and visual awareness by asking the participants to discriminate the color and orientation of a colored bar (presented either alone or simultaneously with a white distractor bar) and to report their phenomenal awareness of the target features. The success of reentry was manipulated with object substitution masking and backward masking. The results showed that late reentrant processes are necessary for successful binding but not for phenomenal awareness of the bound features. Binding errors were accompanied by phenomenal awareness of the misbound feature conjunctions, demonstrating that they were experienced as real properties of the stimuli (i.e., illusory conjunctions). Our results suggest that early preattentive binding and local recurrent processing enable features to reach phenomenal awareness, while later attention-related reentrant iterations modulate the way in which the features are bound and experienced in awareness. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition

    NASA Astrophysics Data System (ADS)

    Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph

    2018-05-01

    This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
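Non-intrusive spectral projection, as used above, evaluates the model at quadrature nodes and projects the output onto orthogonal polynomials of the uncertain parameter. A minimal one-parameter sketch in Python (a toy exponential surrogate, not the n-dodecane mechanism; the Gaussian parameter values are illustrative assumptions):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(model, mu, sigma, order=6, nquad=20):
    """Non-intrusive spectral projection of model(theta), theta ~ N(mu, sigma^2),
    onto probabilists' Hermite polynomials via Gauss-Hermite quadrature."""
    x, w = hermegauss(nquad)            # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)        # normalise: weights now sum to 1
    f = model(mu + sigma * x)           # model evaluated at the mapped nodes
    return np.array([np.sum(w * f * hermeval(x, [0.0] * k + [1.0])) / factorial(k)
                     for k in range(order + 1)])

# Toy ignition-delay surrogate tau(theta) = exp(-theta), theta ~ N(1.0, 0.2^2).
c = pce_coefficients(lambda t: np.exp(-t), 1.0, 0.2)
mean = c[0]                                                     # predictive mean
var = sum(factorial(k) * c[k] ** 2 for k in range(1, len(c)))   # predictive variance
```

The square root of `var` plays the role of the predictive error bar; the paper applies the same propagation to the calibrated 2-step mechanism.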

  4. A wireless passive pressure microsensor fabricated in HTCC MEMS technology for harsh environments.

    PubMed

    Tan, Qiulin; Kang, Hao; Xiong, Jijun; Qin, Li; Zhang, Wendong; Li, Chen; Ding, Liqiong; Zhang, Xiansheng; Yang, Mingliang

    2013-08-02

    A wireless passive high-temperature pressure sensor without an evacuation channel, fabricated in high-temperature co-fired ceramic (HTCC) technology, is proposed. The properties of the HTCC material allow the sensor to be applied in harsh environments, and the absence of an evacuation channel makes it completely gastight. The wireless data are obtained with a reader antenna by mutual inductance coupling. Experimental systems were designed to obtain the frequency-pressure characteristic, the frequency-temperature characteristic, and the coupling distance. The results show that the sensor can be coupled with an antenna at 600 °C, and at a maximum distance of 2.8 cm at room temperature. The sensor sensitivity is about 860 Hz/bar, and both the hysteresis error and the repeatability error are low.

  5. 78 FR 62709 - Tennessee Valley Authority; Watts Bar Nuclear Plant, Unit 2

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... Docket ID NRC-2008-0369. Address questions about NRC dockets to Carol Gallagher; telephone: 301-287- 3422; email: [email protected] . For technical questions, contact the individual(s) listed in the FOR... site. The NRC staff also considered the cumulative impacts from past, present, and reasonably...

  6. Integration and evaluation of a needle-positioning robot with volumetric microcomputed tomography image guidance for small animal stereotactic interventions.

    PubMed

    Waspe, Adam C; McErlain, David D; Pitelka, Vasek; Holdsworth, David W; Lacefield, James C; Fenster, Aaron

    2010-04-01

    Preclinical research protocols often require insertion of needles to specific targets within small animal brains. To target biologically relevant locations in rodent brains more effectively, a robotic device has been developed that is capable of positioning a needle along oblique trajectories through a single burr hole in the skull under volumetric microcomputed tomography (micro-CT) guidance. An x-ray compatible stereotactic frame secures the head throughout the procedure using a bite bar, nose clamp, and ear bars. CT-to-robot registration enables structures identified in the image to be mapped to physical coordinates in the brain. Registration is accomplished by injecting a barium sulfate contrast agent as the robot withdraws the needle from predefined points in a phantom. Registration accuracy is affected by the robot-positioning error and is assessed by measuring the surface registration error for the fiducial and target needle tracks (FRE and TRE). This system was demonstrated in situ by injecting 200 microm tungsten beads into rat brains along oblique trajectories through a single burr hole on the top of the skull under micro-CT image guidance. Postintervention micro-CT images of each skull were registered with preintervention high-field magnetic resonance images of the brain to infer the anatomical locations of the beads. Registration using four fiducial needle tracks and one target track produced a FRE and a TRE of 96 and 210 microm, respectively. Evaluation with tissue-mimicking gelatin phantoms showed that locations could be targeted with a mean error of 154 +/- 113 microm. The integration of a robotic needle-positioning device with volumetric micro-CT image guidance should increase the accuracy and reduce the invasiveness of stereotactic needle interventions in small animals.
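CT-to-robot registration of the kind described reduces to finding the rigid transform that best maps image-space fiducial points to robot-space points, with FRE as the RMS residual of that fit. A minimal sketch in Python (standard Kabsch/SVD alignment; the point coordinates are hypothetical, not the study's data):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error of the fit."""
    resid = dst - (src @ R.T + t)
    return np.sqrt((resid ** 2).sum(axis=1).mean())

# Hypothetical fiducials: rotate known points about z, then recover the transform.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
```

TRE is computed the same way but on target points withheld from the fit, which is why it typically exceeds FRE (210 vs. 96 microm in this study).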

  7. A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration

    NASA Astrophysics Data System (ADS)

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2018-05-01

    Using continuum extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales from about 4 to 128 GeV. The scale is set by the finite space-time volume so that recursive finite-size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L0 ≈ 4 GeV through ḡ²(L0) = 2.012, we quote L0·Λ_MS-bar^(Nf=3) = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to infinite renormalization scale from the different scales 2^n/L0 for n = 0, 1, ..., 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al., Phys. Rev. Lett. 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L0·Λ_MS-bar^(Nf=3), a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = ḡ²/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.
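The recursive finite-size technique referred to above is conventionally expressed through the step-scaling function, which advances the coupling by a factor of two in scale; iterating it connects the scales 2^n/L0. Schematically (textbook forms, with b0 and b1 the universal β-function coefficients; not equations quoted from this record):

```latex
\sigma(u) = \left.\bar g^{2}(2L)\right|_{\bar g^{2}(L)=u},
\qquad u_{k+1} = \sigma(u_k), \quad u_0 = \bar g^{2}(L_0),

\Lambda = \mu\,\bigl(b_0 \bar g^{2}(\mu)\bigr)^{-b_1/(2b_0^{2})}\,
e^{-1/(2 b_0 \bar g^{2}(\mu))}\,
\exp\!\left\{-\int_{0}^{\bar g(\mu)}\!\mathrm{d}x
\left[\frac{1}{\beta(x)}+\frac{1}{b_0 x^{3}}-\frac{b_1}{b_0^{2}x}\right]\right\}
```

The second relation is the exact, renormalization-scheme-dependent definition of the Λ parameter; once the non-perturbative running reaches a high enough scale, the integral can be evaluated with the perturbative β-function, which is where the truncation error discussed above enters.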

  8. Integration and evaluation of a needle-positioning robot with volumetric microcomputed tomography image guidance for small animal stereotactic interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waspe, Adam C.; McErlain, David D.; Pitelka, Vasek

    Purpose: Preclinical research protocols often require insertion of needles to specific targets within small animal brains. To target biologically relevant locations in rodent brains more effectively, a robotic device has been developed that is capable of positioning a needle along oblique trajectories through a single burr hole in the skull under volumetric microcomputed tomography (micro-CT) guidance. Methods: An x-ray compatible stereotactic frame secures the head throughout the procedure using a bite bar, nose clamp, and ear bars. CT-to-robot registration enables structures identified in the image to be mapped to physical coordinates in the brain. Registration is accomplished by injecting a barium sulfate contrast agent as the robot withdraws the needle from predefined points in a phantom. Registration accuracy is affected by the robot-positioning error and is assessed by measuring the surface registration error for the fiducial and target needle tracks (FRE and TRE). This system was demonstrated in situ by injecting 200 μm tungsten beads into rat brains along oblique trajectories through a single burr hole on the top of the skull under micro-CT image guidance. Postintervention micro-CT images of each skull were registered with preintervention high-field magnetic resonance images of the brain to infer the anatomical locations of the beads. Results: Registration using four fiducial needle tracks and one target track produced a FRE and a TRE of 96 and 210 μm, respectively. Evaluation with tissue-mimicking gelatin phantoms showed that locations could be targeted with a mean error of 154 ± 113 μm. Conclusions: The integration of a robotic needle-positioning device with volumetric micro-CT image guidance should increase the accuracy and reduce the invasiveness of stereotactic needle interventions in small animals.

  9. 76 FR 25690 - Ocean Transportation Intermediary License Applicants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-05

    .... Lanuzga, Vice President, Application Type: Business Structure Change. Embarque Bandera Shipping, Inc. (NVO... Road, Suite 222, Diamond Bar, CA 91765, Officer: Rachel Zhu, President, (Qualifying Individual...

  10. Error-Monitoring in Response to Social Stimuli in Individuals with Higher-Functioning Autism Spectrum Disorder

    PubMed Central

    McMahon, Camilla M.; Henderson, Heather A.

    2014-01-01

    Error-monitoring, or the ability to recognize one's mistakes and implement behavioral changes to prevent further mistakes, may be impaired in individuals with Autism Spectrum Disorder (ASD). Children and adolescents (ages 9-19) with ASD (n = 42) and typical development (n = 42) completed two face processing tasks that required discrimination of either the gender or affect of standardized face stimuli. Post-error slowing and the difference in Error-Related Negativity amplitude between correct and incorrect responses (ERNdiff) were used to index error-monitoring ability. Overall, ERNdiff increased with age. On the Gender Task, individuals with ASD had a smaller ERNdiff than individuals with typical development; however, on the Affect Task, there were no significant diagnostic group differences on ERNdiff. Individuals with ASD may have ERN amplitudes similar to those observed in individuals with typical development in more social contexts compared to less social contexts due to greater consequences for errors, more effortful processing, and/or reduced processing efficiency in these contexts. Across all participants, more post-error slowing on the Affect Task was associated with better social cognitive skills. PMID:25066088
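The two indices used in this study, ERNdiff and post-error slowing, are simple trial-level contrasts; a minimal sketch in Python (the array contents in the docstring examples and tests are hypothetical, not study data):

```python
import numpy as np

def ern_diff(amplitude, correct):
    """ERNdiff: mean response-locked ERP amplitude on error trials
    minus mean amplitude on correct trials."""
    amplitude = np.asarray(amplitude, float)
    correct = np.asarray(correct, bool)
    return amplitude[~correct].mean() - amplitude[correct].mean()

def post_error_slowing(rt, correct):
    """Mean reaction time on trials following an error minus mean RT
    on trials following a correct response."""
    rt = np.asarray(rt, float)
    correct = np.asarray(correct, bool)
    return rt[1:][~correct[:-1]].mean() - rt[1:][correct[:-1]].mean()
```

A positive post-error slowing value indicates the expected compensatory slowing after mistakes; a more negative ERNdiff indicates a larger error-related negativity.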

  11. In-situ TEM on (de)hydrogenation of Pd at 0.5-4.5 bar hydrogen pressure and 20-400°C.

    PubMed

    Yokosawa, Tadahiro; Alan, Tuncay; Pandraud, Gregory; Dam, Bernard; Zandbergen, Henny

    2012-01-01

    We have developed a nanoreactor, sample holder and gas system for in-situ transmission electron microscopy (TEM) of hydrogen storage materials up to at least 4.5 bar. The MEMS-based nanoreactor has a microheater, two electron-transparent windows and a gas inlet and outlet. The holder contains various O-rings to have leak-tight connections with the nanoreactor. The system was tested with the (de)hydrogenation of Pd at pressures up to 4.5 bar. The Pd film consisted of islands being 15 nm thick and 50-500 nm wide. In electron diffraction mode we observed reproducibly a crystal lattice expansion and shrinkage owing to hydrogenation and dehydrogenation, respectively. In selected-area electron diffraction and bright/dark-field modes the (de)hydrogenation of individual Pd particles was followed. Some Pd islands are consistently hydrogenated faster than others. When thermally cycled, thermal hysteresis of about 10-16°C between hydrogen absorption and desorption was observed for hydrogen pressures of 0.5-4.5 bar. Experiments at 0.8 bar and 3.2 bar showed that the (de)hydrogenation temperature is not affected by the electron beam. This result shows that this is a fast method to investigate hydrogen storage materials with information at the nanometer scale. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Print media coverage of California's smokefree bar law

    PubMed Central

    Magzamen, S.; Charlesworth, A.; Glantz, S.

    2001-01-01

    OBJECTIVE—To assess the print media coverage of California's smokefree bar law in the state of California.
DESIGN—Content analysis of newspaper, trade journal, and magazine items.
SUBJECTS—Items regarding the smokefree bar law published seven months before and one year following the implementation of the smokefree bar law (June 1997 to December 1998). Items consisted of news articles (n = 446), opinion editorials (n = 31), editorials (n = 104), letters to the editor (n = 240), and cartoons (n = 10).
MAIN OUTCOME MEASURES—Number and timing of publication of items, presence of tobacco industry arguments or public health arguments regarding law, positive, negative, and neutral views of opinion items published.
RESULTS—53% of items published concerning the smokefree bar law were news articles, 47% were opinion items. 45% of items regarding the smokefree bar law were published during the first month of implementation. The tobacco industry dominated coverage in most categories (economics, choice, enforcement, ventilation, legislation, individual quotes), except for categories public health used the most frequently (government role, tactics, organisational quotes). Anti-law editorials and letters to the editor were published more than pro-law editorials and letters. Region of the state, paper size, presence of local clean indoor air legislation, and voting on tobacco related ballot initiatives did not have an impact on the presence of opinion items.
CONCLUSIONS—The tobacco industry succeeded in obtaining more coverage of the smokefree bar law, both in news items and opinion items. The tobacco industry used historical arguments of restricting freedom of choice and economic ramifications in fighting the smokefree bar law, while public health groups focused on the worker protection issue, and exposed tobacco industry tactics. Despite the skewed coverage, public health groups obtained adequate attention to their arguments to keep the law in effect.


Keywords: content analysis; politics; passive smoking; smokefree bar law; California PMID:11387536

  13. Alcohol Mixed with Energy Drink Use as an Event-Level Predictor of Physical and Verbal Aggression in Bar Conflicts.

    PubMed

    Miller, Kathleen E; Quigley, Brian M; Eliseo-Arras, Rebecca K; Ball, Natalie J

    2016-01-01

    Young adult use of alcohol mixed with caffeinated energy drinks (AmEDs) has been globally linked with increased odds of interpersonal aggression, compared with the use of alcohol alone. However, no prior research has linked these behaviors at the event level in bar drinking situations. The present study assessed whether AmED use is associated with the perpetration of verbal and physical aggression in bar conflicts at the event level. In Fall 2014, a community sample of 175 young adult AmED users (55% female) completed a web survey describing a recent conflict experienced while drinking in a bar. Use of both AmED and non-AmED alcoholic drinks in the incident were assessed, allowing calculation of our main predictor variable, the proportion of AmEDs consumed (AmED/total drinks consumed). To measure perpetration of aggression, participants reported on the occurrence of 6 verbal and 6 physical acts during the bar conflict incident. Linear regression analyses showed that the proportion of AmEDs consumed predicted scores for perpetration of both verbal aggression (β = 0.16, p < 0.05) and physical aggression (β = 0.19, p < 0.01) after controlling for gender, age, sensation-seeking and aggressive personality traits, aggressive alcohol expectancies, aggressogenic physical and social bar environments, and total number of drinks. Results of this study suggest that in alcohol-related bar conflicts, higher levels of young adult AmED use are associated with higher levels of aggression perpetration than alcohol use alone and that the elevated risk is not attributable to individual differences between AmED users and nonusers or to contextual differences in bar drinking settings. While future research is needed to identify motivations, dosages, and sequencing issues associated with AmED use, these beverages should be considered a potential risk factor in the escalation of aggressive bar conflicts. Copyright © 2016 by the Research Society on Alcoholism.
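The event-level analysis regresses an aggression score on the proportion of AmEDs consumed plus covariates, reporting standardized coefficients. A minimal sketch of standardized OLS in Python (toy single-predictor data, not the study's dataset; the study's model also includes the covariates listed above):

```python
import numpy as np

def standardized_betas(y, X):
    """OLS on z-scored outcome and predictors; returns standardized coefficients."""
    y, X = np.asarray(y, float), np.asarray(X, float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)       # z-score each predictor
    z = (y - y.mean()) / y.std()                   # z-score the outcome
    A = np.column_stack([np.ones(len(z)), Z])      # add an intercept column
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef[1:]                                # drop the intercept

# Hypothetical data: aggression score vs. proportion of AmEDs consumed.
prop_amed = np.array([[0.00], [0.25], [0.50], [0.75], [1.00]])
aggression = np.array([1.0, 1.4, 2.1, 2.4, 3.1])
beta = standardized_betas(aggression, prop_amed)
```

With a single predictor the standardized beta equals the Pearson correlation; with the full covariate set it becomes a partial association, which is what the reported β = 0.16 and β = 0.19 represent.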

  14. Alcohol and environmental justice: the density of liquor stores and bars in urban neighborhoods in the United States.

    PubMed

    Romley, John A; Cohen, Deborah; Ringel, Jeanne; Sturm, Roland

    2007-01-01

    This study had two purposes: (1) to characterize the density of liquor stores and bars that individuals face according to race, economic status, and age in the urban United States and (2) to assess alternative measures of retailer density based on the road network and population. We used census data on business counts and sociodemographic characteristics to compute the densities facing individuals in 9,361 urban zip codes. Blacks face higher densities of liquor stores than do whites. The density of liquor stores is greater among nonwhites in lower-income areas than among whites in lower- and higher-income areas and nonwhites in higher-income areas. Nonwhite youths face higher densities of liquor stores than white youths. The density of liquor stores and bars is lower in higher-income areas, especially for nonwhites. Mismatches between alcohol demand and the supply of liquor stores within urban neighborhoods constitute an environmental injustice for minorities and lower-income persons, with potential adverse consequences for drinking behavior and other social ills. Our results for bars are sensitive to the measure of outlet density as well as population density. Although neither measure is clearly superior, a measure that accounts for roadway miles may reflect proximity to alcohol retailers and thus serve as a useful refinement to the per-capita measure. If so, alcohol policy might also focus on density per roadway mile. Further research on the existence, causes, and consequences of environmental injustice in alcohol retailing is warranted.
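The two availability measures compared in this entry differ only in their denominator; a minimal helper makes the contrast concrete (the numbers in the example are hypothetical, not the study's census figures):

```python
def outlet_densities(n_outlets, population, roadway_miles):
    """Two availability measures for one area (e.g. a zip code):
    outlets per 1,000 residents and outlets per roadway mile."""
    per_capita = 1000.0 * n_outlets / population
    per_road_mile = n_outlets / roadway_miles
    return per_capita, per_road_mile

# Hypothetical zip code: 12 liquor stores, 24,000 residents, 60 roadway miles.
per_capita, per_road_mile = outlet_densities(12, 24000, 60)
```

A dense, low-population commercial strip scores high on the per-capita measure but can score low per roadway mile, which is why the study's findings for bars are sensitive to the choice of denominator.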

  15. Molecular Analysis of Motility in Metastatic Mammary Adenocarcinoma Cells

    DTIC Science & Technology

    1996-09-01

    elements of epidermoid carcinoma (A431) cells. J. Cell. Biol. 103: 87-94 Winkler, M. (1988). Translational regulation in sea urchin eggs: a complex...and Methods. Error bars show SEM. Figure 2. Rhodamine-actin polymerizes preferentially at the tips of lamellipods in EGF-stimulated cells. MTLn3...lamellipods. B) rhodamine-actin intensity at the cell center. Data for each time point is the average and SEM of 15 different cells. Images A and B

  16. The Effect of Information Level on Human-Agent Interaction for Route Planning

    DTIC Science & Technology

    2015-12-01

    13 Fig. 4 Experiment 1 shows regression results for time spent at DP predicting posttest trust group membership for the high LOI...decision time by pretest trust group membership. Bars denote standard error (SE). DT at DP was evaluated to see if it predicted posttest trust...group. Linear regression indicated that DT at DP was not a significant predictor of posttest trust for the Low or the Medium LOI conditions; however, it

  17. Thermal Conductivities of Some Polymers and Composites

    DTIC Science & Technology

    2018-02-01

    volume fraction of glass and fabric style. The experimental results are compared to modeled results for Kt in composites. 15. SUBJECT TERMS...entities in a polymer above TG increases, so Cp will increase at TG. For Kt to remain constant, there would have to be a comparable decrease in α due to...scanning calorimetry (DSC) method, and have error bars as large as the claimed effect. Their Kt values for their carbon fiber samples are comparable to

  18. New Methods for the Computational Fabrication of Appearance

    DTIC Science & Technology

    2015-06-01

    disadvantage is that it does not model phenomena such as retro-reflection and grazing-angle effects. We find that previously proposed BRDF metrics performed well...Figure 3.15-right shows the mean BRDF in blue and the corresponding error bars. In order to interpret our data, we fit a parametric model to slices of the...and Wojciech Matusik. Image-driven navigation of analytical BRDF models. In Eurographics Symposium on Rendering, 2006. 107 [40] F. E. Nicodemus, J. C

  19. On-line vs off-line electrical conductivity characterization. Polycarbonate composites developed with multiwalled carbon nanotubes by compounding technology

    NASA Astrophysics Data System (ADS)

    Llorens-Chiralt, R.; Weiss, P.; Mikonsaari, I.

    2014-05-01

    Material characterization is one of the key steps in the development of conductive polymers. The dispersion of carbon nanotubes (CNTs) in a polymeric matrix by melt mixing influences the final composite properties, and compounding becomes a trial-and-error process that consumes large amounts of material, time, and money before competitive composites are obtained. Traditional methods of electrical conductivity characterization rely on compression or injection molding, both of which require extra equipment and moulds to produce standard test bars. This study aims to investigate the accuracy of absolute-resistance data recorded during melt compounding, using an on-line setup developed by our group, and to correlate these values with off-line characterization and processing parameters (screw/barrel configuration, throughput, screw speed, temperature profile, and CNT percentage). Compounds prepared with different percentages of multi-walled carbon nanotubes (MWCNTs) in polycarbonate were characterized both during and after extrusion. The measurements, on-line resistance and off-line resistivity, showed parallel response and good reproducibility, confirming the validity of the method. The significance of these results stems from the fact that we can measure on-line resistance and adjust compounding parameters during production to reach reference values, reducing production and testing costs and ensuring material quality. The method also removes errors that can arise in the preparation of test bars, and shows better correlation with the compounding parameters.

  20. [Constructing a database that can input record of use and product-specific information].

    PubMed

    Kawai, Satoru; Satoh, Kenichi; Yamamoto, Hideo

    2012-01-01

    In Japan, many patients were infected with hepatitis C virus through the administration of specific fibrinogen injections. However, it has been difficult to identify patients infected in this way because of the lack of medical records. Maintaining detailed records is still not common practice in many medical facilities, because manual record keeping is extremely time consuming and subject to human error. For these reasons, the regulator required medical device manufacturers and pharmaceutical companies to attach a bar code called "GS1-128", effective March 28, 2008. Building on this requirement, we constructed a new database whose records are entered by bar-code scanning to ensure data integrity. On examining the efficacy of this new data collection process in terms of both time efficiency and data accuracy, GS1-128 proved to significantly reduce the time required and the number of record-keeping mistakes. Patients become easily identifiable by lot number and serial number when immediate care is required, and GS1-128 enhances the ability to pinpoint manufacturing errors in the event that any trouble or side effects are reported. These data can be shared with and utilized by the entire medical industry, helping to improve products and record keeping. We believe this new process is extremely important.
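GS1-128 element strings encode fields behind two-digit Application Identifiers, e.g. (01) GTIN, (10) batch/lot, (17) expiry date, and (21) serial number; variable-length fields are terminated by an FNC1 separator, transmitted as ASCII GS. A minimal parser sketch in Python handling just those four AIs (real symbols carry many more AIs, longer AI prefixes, and check-digit rules; the scanned string below is hypothetical):

```python
# Fixed-length AIs we handle; variable-length AIs end at an FNC1 (ASCII GS).
FIXED = {"01": 14, "17": 6}     # (01) GTIN-14, (17) expiry YYMMDD
VARIABLE = {"10", "21"}         # (10) batch/lot, (21) serial number
GS = "\x1d"

def parse_gs1_128(data):
    """Parse a GS1-128 element string into an {AI: value} dict (subset of AIs)."""
    out, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        i += 2
        if ai in FIXED:
            out[ai] = data[i:i + FIXED[ai]]
            i += FIXED[ai]
        elif ai in VARIABLE:
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
            out[ai] = data[i:end]
            i = end + 1
        else:
            raise ValueError(f"unsupported AI {ai!r}")
    return out

# Hypothetical scan: GTIN + expiry + lot + serial.
record = parse_gs1_128("0104912345678904" + "17250131" + "10LOT42" + GS + "21SER99")
```

The (10) lot and (21) serial fields are exactly what the database described above needs to trace an administered product back to its manufacturing batch.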

  1. The use of information technology to enhance patient safety and nursing efficiency.

    PubMed

    Lee, Tso-Ying; Sun, Gi-Tseng; Kou, Li-Tseng; Yeh, Mei-Ling

    2017-10-23

    Patient safety and nursing efficiency have long been of concern, and advancing the role of nursing informatics is seen as the best way to address them. The aim of this study was to determine whether the use of, outcomes of, and satisfaction with a nursing information system (NIS) improved patient safety and the quality of nursing care in a hospital in Taiwan. The study adopted a quasi-experimental design: nurses and patients were surveyed by questionnaire and data retrieval before and after the implementation of the NIS in terms of blood drawing, the nursing process, drug administration, bar-code scanning, shift handover, and information and communication integration. Physiologic values were easier to read and interpret; it took less time to complete electronic records (3.7 vs. 9.1 min); the number of errors in drug administration was reduced (0.08% vs. 0.39%); bar codes reduced the number of errors in blood drawing (0 vs. 10) and in the transportation of specimens (0% vs. 0.42%); satisfaction with electronic shift handover increased significantly; nursing turnover fell (14.9% vs. 16%); and patient satisfaction increased significantly (3.46 vs. 3.34). The introduction of the NIS improved patient safety and nursing efficiency and increased nurse and patient satisfaction. Medical organizations must continually improve their nursing information systems if they are to provide patients with high-quality service in a competitive environment.

  2. A Feedback Loop between Dynamin and Actin Recruitment during Clathrin-Mediated Endocytosis

    PubMed Central

    Taylor, Marcus J.; Lampe, Marko; Merrifield, Christien J.

    2012-01-01

    Clathrin-mediated endocytosis proceeds by a sequential series of reactions catalyzed by discrete sets of protein machinery. The final reaction in clathrin-mediated endocytosis is membrane scission, which is mediated by the large guanosine triphosphate hydrolase (GTPase) dynamin and which may involve the actin-dependent recruitment of N-terminal BIN/Amphiphysin/Rvs (N-BAR) domain-containing proteins. Optical microscopy has revealed a detailed picture of when and where particular protein types are recruited in the ∼20–30 s preceding scission. Nevertheless, the regulatory mechanisms and functions that underpin protein recruitment are not well understood. Here we used an optical assay to investigate the coordination and interdependencies between the recruitment of dynamin, the actin cytoskeleton, and N-BAR proteins to individual clathrin-mediated endocytic scission events. These measurements revealed that a feedback loop exists between dynamin and actin at sites of membrane scission. The kinetics of dynamin, actin, and N-BAR protein recruitment were modulated by dynamin GTPase activity. Conversely, acute ablation of actin dynamics using latrunculin-B led to a ∼50% decrease in the incidence of scission, an ∼50% decrease in the amplitude of dynamin recruitment, and abolished actin and N-BAR recruitment to scission events. Collectively these data suggest that dynamin, actin, and N-BAR proteins work cooperatively to efficiently catalyze membrane scission. Dynamin controls its own recruitment to scission events by modulating the kinetics of actin and N-BAR recruitment to sites of scission. Conversely actin serves as a dynamic scaffold that concentrates dynamin and N-BAR proteins at sites of scission. PMID:22505844

  3. Lesion correlates of impairments in actual tool use following unilateral brain damage.

    PubMed

    Salazar-López, E; Schwaiger, B J; Hermsdörfer, J

    2016-04-01

    To understand how the brain controls actions involving tools, tests have been developed employing different paradigms such as pantomime, imitation and real tool use. The relevant areas have been localized in the premotor cortex, the middle temporal gyrus and the superior and inferior parietal lobe. This study employs Voxel Lesion Symptom Mapping to relate the functional impairment in actual tool use with extent and localization of the structural damage in the left (LBD, N=31) and right (RBD, N=19) hemisphere in chronic stroke patients. A series of 12 tools was presented to participants in a carousel. In addition, a non-tool condition tested the prescribed manipulation of a bar. The execution was scored according to an apraxic error scale based on the dimensions grasp, movement, direction and space. Results in the LBD group show that the ventro-dorsal stream constitutes the core of the defective network responsible for impaired tool use; it is composed of the inferior parietal lobe, the supramarginal and angular gyrus and the dorsal premotor cortex. In addition, involvement of regions in the temporal lobe, the rolandic operculum, the ventral premotor cortex and the middle occipital gyrus provides evidence of the role of the ventral stream in this task. Brain areas related to the use of the bar largely overlapped with this network. For patients with RBD, the data were less conclusive; however, a trend toward involvement of the temporal lobe in apraxic errors was evident. Skilled bar manipulation depended on the same temporal area in these patients. Therefore, actual tool use depends on a well-described left fronto-parietal-temporal network. RBD affects actual tool use; however, the underlying neural processes may be more widely distributed and more heterogeneous. Goal-directed manipulation of non-tool objects seems to involve very similar brain areas as tool use, suggesting that both types of manipulation share identical processes and neural representations.
Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. A quality assessment of 3D video analysis for full scale rockfall experiments

    NASA Astrophysics Data System (ADS)

    Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.

    2012-04-01

    The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture the boulder velocity in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated. They differ mainly in how the scaling factors between the movie frames and reality are determined. For this purpose, scale bars and targets with known dimensions were distributed in advance along the slope. The individual scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar and then recalibrated to the plane of the passing boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar and passing boulder are surveyed. (ii) The image raster is scaled using the four targets nearest to the trajectory to be analyzed (identified using the frontal video), with the average of their scaling factors taken as the scaling factor. (iii) The image raster is scaled using the four nearest targets, with the scaling factor for a trajectory calculated by balancing the mean scaling factors of the two nearest and the two farthest targets according to their mean distance to the analyzed trajectory. (iv) As the previous method, but with the scaling factor varying along the trajectory. It was found that a direct measure of the scaling target and nearest impact zone is the most accurate. If a constant plane is assumed, lateral deviations of the boulder from the fall line are not accounted for, which adds error to the analysis. A combination of scaling methods (i) and (iv) is therefore considered to give the best results.
    For good lateral positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by comparing the vertical trajectory component over time with the theoretical polynomial trend expected under gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with error small enough to be acceptable for trajectory analysis. However, when calculating instantaneous velocities, the amplification of this error becomes unacceptable. A regression analysis of the data is helpful to optimize the trajectory and velocity estimates.
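Scaling method (iii) above amounts to a distance-weighted blend of the pixel-to-metre factors from the nearest and farthest target groups. A minimal sketch of that interpolation, with hypothetical names and units (not code from the study):

```python
def interpolate_scale(s_near, s_far, d_near, d_far, d_traj):
    """Blend the mean scale factors (e.g. metres per pixel) of the nearest
    and farthest target groups, linearly in the trajectory's distance from
    each group along the camera axis."""
    w = (d_traj - d_near) / (d_far - d_near)  # 0 at near group, 1 at far group
    return s_near + w * (s_far - s_near)
```

Method (iv) would simply re-evaluate this interpolation frame by frame as `d_traj` changes along the trajectory, instead of fixing one factor per trajectory.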

  5. SpaceInn hare-and-hounds exercise: Estimation of stellar properties using space-based asteroseismic data

    NASA Astrophysics Data System (ADS)

    Reese, D. R.; Chaplin, W. J.; Davies, G. R.; Miglio, A.; Antia, H. M.; Ball, W. H.; Basu, S.; Buldgen, G.; Christensen-Dalsgaard, J.; Coelho, H. R.; Hekker, S.; Houdek, G.; Lebreton, Y.; Mazumdar, A.; Metcalfe, T. S.; Silva Aguirre, V.; Stello, D.; Verma, K.

    2016-07-01

    Context. Detailed oscillation spectra comprising individual frequencies for numerous solar-type stars and red giants are either currently available, e.g. courtesy of the CoRoT, Kepler, and K2 missions, or will become available with the upcoming NASA TESS and ESA PLATO 2.0 missions. The data can lead to a precise characterisation of these stars thereby improving our understanding of stellar evolution, exoplanetary systems, and the history of our galaxy. Aims: Our goal is to test and compare different methods for obtaining stellar properties from oscillation frequencies and spectroscopic constraints. Specifically, we would like to evaluate the accuracy of the results and reliability of the associated error bars, and to see where there is room for improvement. Methods: In the context of the SpaceInn network, we carried out a hare-and-hounds exercise in which one group, the hares, simulated observations of oscillation spectra for a set of ten artificial solar-type stars, and a number of hounds applied various methods for characterising these stars based on the data produced by the hares. Most of the hounds fell into two main groups. The first group used forward modelling (i.e. applied various search/optimisation algorithms in a stellar parameter space) whereas the second group relied on acoustic glitch signatures. Results: Results based on the forward modelling approach were accurate to 1.5% (radius), 3.9% (mass), 23% (age), 1.5% (surface gravity), and 1.8% (mean density), as based on the root mean square difference. Individual hounds reached different degrees of accuracy, some of which were substantially better than the above average values. For the two 1 M⊙ stellar targets, the accuracy on the age is better than 10%, thereby satisfying the requirements for the PLATO 2.0 mission. High stellar masses and atomic diffusion (which in our models does not include the effects of radiative accelerations) proved to be sources of difficulty.
The average accuracies for the acoustic radii of the base of the convection zone, the He II ionisation, and the Γ1 peak located between the two He ionisation zones were 17%, 2.4%, and 1.9%, respectively. The results from the forward modelling were on average more accurate than those from the glitch fitting analysis as the latter seemed to be affected by aliasing problems for some of the targets. Conclusions: Our study indicates that forward modelling is the most accurate way of interpreting the pulsation spectra of solar-type stars. However, given its model-dependent nature, this method needs to be complemented by model-independent results from, e.g. glitch analysis. Furthermore, our results indicate that global rather than local optimisation algorithms should be used in order to obtain robust error bars.
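The quoted accuracies (e.g. 1.5% in radius, 3.9% in mass) are root-mean-square relative differences between recovered and true parameter values across the target stars. A sketch of that metric, with hypothetical numbers rather than the exercise's data:

```python
import math

def rms_relative_difference(estimates, truths):
    """Root mean square of the relative differences (estimate - truth)/truth
    over a set of targets; multiply by 100 for a percentage accuracy."""
    rel = [(e - t) / t for e, t in zip(estimates, truths)]
    return math.sqrt(sum(r * r for r in rel) / len(rel))
```

With estimates 10% above and 10% below the truth for two targets, the metric returns 0.1, i.e. a 10% RMS accuracy.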

  6. A large community outbreak of salmonellosis caused by intentional contamination of restaurant salad bars.

    PubMed

    Török, T J; Tauxe, R V; Wise, R P; Livengood, J R; Sokolow, R; Mauvais, S; Birkness, K A; Skeels, M R; Horan, J M; Foster, L R

    1997-08-06

    This large outbreak of foodborne disease highlights the challenge of investigating outbreaks caused by intentional contamination and demonstrates the vulnerability of self-service foods to intentional contamination. To investigate a large community outbreak of Salmonella Typhimurium infections. Epidemiologic investigation of patients with Salmonella gastroenteritis and possible exposures in The Dalles, Oregon. Cohort and case-control investigations were conducted among groups of restaurant patrons and employees to identify exposures associated with illness. A community in Oregon. Outbreak period was September and October 1984. A total of 751 persons with Salmonella gastroenteritis associated with eating or working at area restaurants. Most patients were identified through passive surveillance; active surveillance was conducted for selected groups. A case was defined either by clinical criteria or by a stool culture yielding S Typhimurium. The outbreak occurred in 2 waves, September 9 through 18 and September 19 through October 10. Most cases were associated with 10 restaurants, and epidemiologic studies of customers at 4 restaurants and of employees at all 10 restaurants implicated eating from salad bars as the major risk factor for infection. Eight (80%) of 10 affected restaurants compared with only 3 (11%) of the 28 other restaurants in The Dalles operated salad bars (relative risk, 7.5; 95% confidence interval, 2.4-22.7; P<.001). The implicated food items on the salad bars differed from one restaurant to another. The investigation did not identify any water supply, food item, supplier, or distributor common to all affected restaurants, nor were employees exposed to any single common source. In some instances, infected employees may have contributed to the spread of illness by inadvertently contaminating foods. However, no evidence was found linking ill employees to initiation of the outbreak. 
Errors in food rotation and inadequate refrigeration on ice-chilled salad bars may have facilitated growth of the S Typhimurium but could not have caused the outbreak. A subsequent criminal investigation revealed that members of a religious commune had deliberately contaminated the salad bars. An S Typhimurium strain found in a laboratory at the commune was indistinguishable from the outbreak strain. This outbreak of salmonellosis was caused by intentional contamination of restaurant salad bars by members of a religious commune.
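The reported relative risk and confidence interval (8 of 10 affected restaurants operating salad bars vs. 3 of 28 others) are consistent with the standard log-scale Wald interval for a ratio of proportions. The sketch below reproduces that computation; it is an illustration, not the authors' own analysis code:

```python
import math

def relative_risk_ci(a, n1, c, n2, z=1.96):
    """Ratio of proportions (a/n1)/(c/n2) with a log-scale Wald CI.
    Here a/n1 = salad bars among affected restaurants, c/n2 among the rest."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of ln(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

With a=8, n1=10, c=3, n2=28 this gives a relative risk of about 7.5 with an interval of roughly 2.5 to 22.7, close to the published 2.4-22.7 (small discrepancies can arise from rounding or continuity corrections).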

  7. Action planning and position sense in children with Developmental Coordination Disorder.

    PubMed

    Adams, Imke L J; Ferguson, Gillian D; Lust, Jessica M; Steenbergen, Bert; Smits-Engelsman, Bouwien C M

    2016-04-01

    The present study examined action planning and position sense in children with Developmental Coordination Disorder (DCD). Participants performed two action planning tasks, the sword task and the bar grasping task, and an active elbow matching task to examine position sense. Thirty children were included in the DCD group (aged 6-10years) and age-matched to 90 controls. The DCD group had a MABC-2 total score ⩽5th percentile, the control group a total score ⩾25th percentile. Results from the sword-task showed that children with DCD planned less for end-state comfort. On the bar grasping task no significant differences in planning for end-state comfort between the DCD and control group were found. There was also no significant difference in the position sense error between the groups. The present study shows that children with DCD plan less for end-state comfort, but that this result is task-dependent and becomes apparent when more precision is needed at the end of the task. In that respect, the sword-task appeared to be a more sensitive task to assess action planning abilities, than the bar grasping task. The action planning deficit in children with DCD cannot be explained by an impaired position sense during active movements. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals

    PubMed Central

    Uy, Raymonde Charles Y.; Kury, Fabricio P.; Fontelo, Paul A.

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometric and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating an optimistic growth in the adoption of these patient safety solutions. PMID:26958264

  9. Creating a Satellite-Based Record of Tropospheric Ozone

    NASA Technical Reports Server (NTRS)

    Oetjen, Hilke; Payne, Vivienne H.; Kulawik, Susan S.; Eldering, Annmarie; Worden, John; Edwards, David P.; Francis, Gene L.; Worden, Helen M.

    2013-01-01

    The TES retrieval algorithm has been applied to IASI radiances. We compare the retrieved ozone profiles with ozone sonde profiles for mid-latitudes for the year 2008. We find a positive bias in the IASI ozone profiles in the UTLS region of up to 22%. The spatial coverage of the IASI instrument allows sampling of effectively the same air mass with several IASI scenes simultaneously. Comparisons of the root-mean-square of an ensemble of IASI profiles to theoretical errors indicate that the measurement noise and the interference of temperature and water vapour on the retrieval together mostly explain the empirically derived random errors. The total degrees of freedom for signal of the retrieval for ozone are 3.1 +/- 0.2 and the tropospheric degrees of freedom are 1.0 +/- 0.2 for the described cases. IASI ozone profiles agree within the error bars with coincident ozone profiles derived from a TES stare sequence for the ozone sonde station at Bratt's Lake (50.2 deg N, 104.7 deg W).

  10. Trait anger in relation to neural and behavioral correlates of response inhibition and error-processing.

    PubMed

    Lievaart, Marien; van der Veen, Frederik M; Huijding, Jorg; Naeije, Lilian; Hovens, Johannes E; Franken, Ingmar H A

    2016-01-01

    Effortful control is considered to be an important factor in explaining individual differences in trait anger. In the current study, we sought to investigate the relation between anger-primed effortful control (i.e., inhibitory control and error-processing) and trait anger using an affective Go/NoGo task. Individuals low (LTA; n=45) and high (HTA; n=49) on trait anger were selected for this study. Behavioral performance (accuracy) and Event-Related Potentials (ERPs; i.e., N2, P3, ERN, Pe) were compared between both groups. Contrary to our predictions, we found no group differences regarding inhibitory control. That is, HTA and LTA individuals made comparable numbers of commission errors on NoGo trials and no significant differences were found on the N2 and P3 amplitudes. With respect to error-processing, we found reduced Pe amplitudes following errors in HTA individuals as compared to LTA individuals, whereas the ERN amplitudes were comparable for both groups. These results indicate that high trait anger individuals show deficits in later stages of error-processing, which may explain the continuation of impulsive behaviors in HTA individuals despite their negative consequences. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Hypocholesterolaemic effects of lupin protein and pea protein/fibre combinations in moderately hypercholesterolaemic individuals.

    PubMed

    Sirtori, Cesare R; Triolo, Michela; Bosisio, Raffaella; Bondioli, Alighiero; Calabresi, Laura; De Vergori, Viviana; Gomaraschi, Monica; Mombelli, Giuliana; Pazzucconi, Franco; Zacherl, Christian; Arnoldi, Anna

    2012-04-01

    The present study was aimed to evaluate the effect of plant proteins (lupin protein or pea protein) and their combinations with soluble fibres (oat fibre or apple pectin) on plasma total and LDL-cholesterol levels. A randomised, double-blind, parallel group design was followed: after a 4-week run-in period, participants were randomised into seven treatment groups, each consisting of twenty-five participants. Each group consumed two bars containing specific protein/fibre combinations: the reference group consumed casein+cellulose; the second and third groups consumed bars containing lupin or pea proteins+cellulose; the fourth and fifth groups consumed bars containing casein and oat fibre or apple pectin; the sixth group and seventh group received bars containing combinations of pea protein and oat fibre or apple pectin, respectively. Bars containing lupin protein+cellulose ( - 116 mg/l, - 4·2%), casein+apple pectin ( - 152 mg/l, - 5·3%), pea protein+oat fibre ( - 135 mg/l, - 4·7%) or pea protein+apple pectin ( - 168 mg/l, - 6·4%) resulted in significant reductions of total cholesterol levels (P<0·05), whereas no cholesterol changes were observed in the subjects consuming the bars containing casein+cellulose, casein+oat fibre or pea protein+cellulose. The present study shows the hypocholesterolaemic activity and potential clinical benefits of consuming lupin protein or combinations of pea protein and a soluble fibre, such as oat fibre or apple pectin.

  12. Spectral narrowing of a 980 nm tapered diode laser bar

    NASA Astrophysics Data System (ADS)

    Vijayakumar, Deepak; Jensen, Ole Bjarlin; Lucas Leclin, Gaëlle; Petersen, Paul Michael; Thestrup, Birgitte

    2011-03-01

    High power diode laser bars are interesting in many applications such as solid state laser pumping, material processing, laser trapping, laser cooling and second harmonic generation. Often, free running laser bars emit a broad spectrum of the order of several nanometres, which limits their scope in wavelength-specific applications; hence, it is vital to stabilize the emission spectrum of these devices. In our experiment, we describe the wavelength narrowing of a 12 element 980 nm tapered diode laser bar using a simple Littman configuration. The tapered laser bar, which suffered from a pronounced "smile", has been smile-corrected using individual phase masks for each emitter. The external cavity consists of the laser bar, both fast and slow axis micro collimators, the smile-correcting phase mask, a 6.5x beam expanding lens combination, a 1200 lines/mm reflecting grating with 85% efficiency in the first order, a slow axis focusing cylindrical lens of 40 mm focal length and an output coupler which is 10% reflective. In the free running mode, the laser emission spectrum was 5.5 nm wide at an operating current of 30 A, and the output power was measured to be in excess of 12 W. Under external cavity operation, the wavelength spread of the laser could be limited to 0.04 nm with an output power in excess of 8 W at an operating current of 30 A. The spectrum was found to be tuneable over a range of 16 nm.

  13. Effects of Visual Communication Tool and Separable Status Display on Team Performance and Subjective Workload in Air Battle Management

    DTIC Science & Technology

    2008-06-01

    The NASA TLX (Hart & Staveland, 1987) was used to evaluate perceived task demands. In the modified version, participants were asked to estimate the... subjective workload (i.e., NASA-TLX) was assessed for each trial. Unweighted NASA-TLX ratings were submitted to a 5 (Subscale) × 2 (Communication... Figure 3: mean unweighted NASA-TLX ratings as a function of communication modality; error bars represent one...

  14. The Effect of Information Level on Human-Agent Interaction for Route Planning

    DTIC Science & Technology

    2015-12-01

    χ2 (4, 60) = 11.41, p = 0.022, and Cramer’s V = 0.308, indicating there was no effect of experiment on posttest trust. Pretest trust was not a...decision time by pretest trust group membership. Bars denote standard error (SE). DT at DP was evaluated to see if it predicted posttest trust...0.007, Cramer’s V = 0.344, indicating there was no effect of experiment on posttest trust. Pretest trust was not a significant prediction of total DT

  15. Elimination of Sensor Artifacts from Infrared Data.

    DTIC Science & Technology

    1984-12-11

    ...channel to compensate detector responsivity nonuniformity. Before inspecting the bar target measurements, it was expected that the preceding sequence of... sample errors and by applying separate gain and offset constants to each channel for nonuniformity compensation. ... Figure 4: postamplifier output waveform for LWIR channel 3, for the data frame shown in...

  16. Correction of Thermal Gradient Errors in Stem Thermocouple Hygrometers

    PubMed Central

    Michel, Burlyn E.

    1979-01-01

    Stem thermocouple hygrometers were subjected to transient and stable thermal gradients while in contact with reference solutions of NaCl. Both dew point and psychrometric voltages were directly related to zero offset voltages, the latter reflecting the size of the thermal gradient. Although slopes were affected by absolute temperature, they were not affected by water potential. One hygrometer required a correction of 1.75 bars water potential per microvolt of zero offset, a value that was constant from 20 to 30 C. PMID:16660685

  17. Implementation of a pharmacy automation system (robotics) to ensure medication safety at Norwalk hospital.

    PubMed

    Bepko, Robert J; Moore, John R; Coleman, John R

    2009-01-01

    This article reports an intervention to improve the quality and safety of hospital patient care by introducing pharmacy robotics into the medication distribution process. Medication safety is vitally important. The integration of pharmacy robotics with computerized practitioner order entry and bedside medication bar coding produces a significant reduction in medication errors. The creation of a safe medication process, from initial ordering to bedside administration, provides enormous benefits to patients, to health care providers, and to the organization as well.

  18. Watts Bar Nuclear Plant Title V Applicability

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  19. Volume Phase Masks in Photo-Thermo-Refractive Glass

    DTIC Science & Technology

    2014-10-06

    development when forming the nanocrystals. Fig. 1.1 shows the refractive index change curves for some common glass melts when exposed to a beam at 325 nm... integral curve to the curve for the ideal phase mask. If there is a deviation in the experimental curve from the ideal curve, whether the overlap... redevelopments of the sample. Note that the third point on the spherical curve and the third and fourth points on the coma y curve have larger error bars than

  20. Coupling constant for N*(1535)Nρ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie Jujun; Graduate University of Chinese Academy of Sciences, Beijing 100049; Wilkin, Colin

    2008-05-15

    The value of the N*(1535)Nρ coupling constant g_{N*Nρ} derived from the N*(1535)→Nρ→Nππ decay is compared with that deduced from the radiative decay N*(1535)→Nγ using the vector-meson-dominance model. On the basis of an effective Lagrangian approach, we show that the values of g_{N*Nρ} extracted from the available experimental data on the two decays are consistent, though the error bars are rather large.

  1. Irregular analytical errors in diagnostic testing - a novel concept.

    PubMed

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses for internal and external quality control, interpreted by statistical methods, are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, no individual sample is under individual quality control; the quality control measurements act only at the batch level. Many effects and interferences associated with an individual diagnostic sample can compromise any analyte, and a quality-control-sample-based approach to quality assurance is clearly not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term, the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (a deviation from the result of a reference measurement procedure) so large that it cannot be explained by the measurement uncertainty of the routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be termed irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by a processing error in the analytical process specific to that single sample.
Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
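The definition above suggests a simple screening rule: flag a result whose deviation from the reference measurement procedure exceeds what the routine assay's process uncertainty and known method bias can jointly explain. A minimal sketch, with a hypothetical coverage factor k that the article does not prescribe:

```python
def is_irregular_error(result, reference, u_process, bias, k=3.0):
    """Flag an individual result as an irregular analytical error when its
    deviation from the reference procedure result exceeds k times the
    process measurement uncertainty plus the known method bias.
    The coverage factor k is an illustrative assumption, not from the paper."""
    allowed = k * u_process + abs(bias)
    return abs(result - reference) > allowed
```

For instance, with a process uncertainty of 2 units and a method bias of 1 unit, a deviation of 10 units would be flagged while a deviation of 3 units would not.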

  2. Intensified depolymerization of aqueous polyacrylamide solution using combined processes based on hydrodynamic cavitation, ozone, ultraviolet light and hydrogen peroxide.

    PubMed

    Prajapat, Amrutlal L; Gogate, Parag R

    2016-07-01

    The present work deals with intensification of the depolymerization of polyacrylamide (PAM) solution using hydrodynamic cavitation (HC) reactors combined with hydrogen peroxide (H2O2), ozone (O3) and ultraviolet (UV) irradiation. The effects of inlet pressure in the hydrodynamic cavitation reactor and of power dissipation in the case of UV irradiation on the extent of viscosity reduction have been investigated. The combined approaches HC+UV, HC+O3, HC+H2O2, UV+H2O2 and UV+O3 were subsequently investigated and found to be more efficient than the individual approaches. For the approach based on HC+UV+H2O2, the extent of viscosity reduction under the optimized conditions of HC (3 bar inlet pressure)+UV (8 W power)+H2O2 (0.2% loading) was 97.27% in 180 min, whereas the individual operations of HC (3 bar inlet pressure) and UV (8 W power) resulted in about 35.38% and 40.83% intrinsic viscosity reduction in 180 min, respectively. In the case of the HC (3 bar inlet pressure)+UV (8 W power)+ozone (400 mg/h flow rate) approach, the extent of viscosity reduction was 89.06%, whereas ozone alone (400 mg/h flow rate), ozone (400 mg/h flow rate)+HC (3 bar inlet pressure) and ozone (400 mg/h flow rate)+UV (8 W power) resulted in lower extents of viscosity reduction of 50.34%, 60.65% and 75.31%, respectively. The chemical structure of the PAM treated by all approaches was also characterized using FTIR (Fourier transform infrared) spectroscopy, and it was established that no significant changes in chemical structure occurred during treatment. Overall, it can be said that the combination of HC+UV+H2O2 is an efficient approach for the depolymerization of PAM solution. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Neural evidence for enhanced error detection in major depressive disorder.

    PubMed

    Chiu, Pearl H; Deldin, Patricia J

    2007-04-01

    Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.

  4. Technology utilization to prevent medication errors.

    PubMed

    Forni, Allison; Chu, Hanh T; Fanikos, John

    2010-01-01

    Medication errors have been increasingly recognized as a major cause of iatrogenic illness, and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of illness, the need for high-risk medications with a narrow therapeutic index, and the frequent use of intravenous infusions. Health information technology has been identified as a method to reduce medication errors as well as improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process by improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions, and providing a means for standardization of practice. Electronic surveillance, reminders, and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target prevention of errors in medication dispensing and administration, where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and the achievement of a safe medication use process.

  5. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied with regard to misalignment of the interrogation beam and the frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with the numerical results shows good agreement within the error bars.
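    The DFT-based beat-frequency extraction described above can be sketched on a synthetic signal. This is only a minimal illustration, not the authors' processing chain; the sampling rate, beat frequency, and damping constant below are arbitrary assumed values:

```python
import numpy as np

# Synthetic LITA-like signal: damped oscillation at a known beat frequency.
fs = 500e6            # sampling rate, Hz (assumed)
f_beat = 22e6         # true beat frequency, Hz (arbitrary test value)
tau = 0.4e-6          # damping time constant, s (assumed)
t = np.arange(0, 2e-6, 1/fs)
signal = np.exp(-t/tau) * np.sin(2*np.pi*f_beat*t)

# DFT with zero-padding to refine the frequency grid; the peak of the
# magnitude spectrum estimates the beat frequency.
n_fft = 1 << 18
spectrum = np.abs(np.fft.rfft(signal, n=n_fft))
freqs = np.fft.rfftfreq(n_fft, d=1/fs)
f_est = freqs[np.argmax(spectrum)]

rel_err = abs(f_est - f_beat) / f_beat
print(f"estimated beat frequency: {f_est/1e6:.2f} MHz, relative error {rel_err:.4%}")
```

    As the abstract notes, the number of recorded oscillations and the damping rate bound how sharply the spectral peak localizes the beat frequency.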

  6. Improved simulation of aerosol, cloud, and density measurements by shuttle lidar

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Morley, B. M.; Livingston, J. M.; Grams, G. W.; Patterson, E. W.

    1981-01-01

    Data retrievals are simulated for a Nd:YAG lidar suitable for early flight on the space shuttle. Maximum assumed vertical and horizontal resolutions are 0.1 and 100 km, respectively, in the boundary layer, increasing to 2 and 2000 km in the mesosphere. Aerosol and cloud retrievals are simulated using the 1.06 and 0.53 micron wavelengths independently. Error sources include signal measurement, conventional density information, atmospheric transmission, and lidar calibration. By day, tenuous clouds and Saharan and boundary layer aerosols are retrieved at both wavelengths. By night, these constituents are retrieved, plus upper tropospheric, stratospheric, and mesospheric aerosols and noctilucent clouds. Density, temperature, and improved aerosol and cloud retrievals are simulated by combining signals at 0.35, 1.06, and 0.53 microns. Particulate contamination limits the technique to the cloud-free upper troposphere and above. Error bars automatically show the effect of this contamination, as well as errors in absolute density normalization, reference temperature or pressure, and the sources listed above. For nonvolcanic conditions, relative density profiles have rms errors of 0.54 to 2% in the upper troposphere and stratosphere. Temperature profiles have rms errors of 1.2 to 2.5 K and can define the tropopause to 0.5 km and higher wave structures to 1 or 2 km.

  7. Looking for trouble? Diagnostics expanding disease and producing patients.

    PubMed

    Hofmann, Bjørn

    2018-05-23

    Novel tests give great opportunities for earlier and more precise diagnostics. At the same time, new tests expand disease, produce patients, and cause unnecessary harm in overdiagnosis and overtreatment. How can we evaluate diagnostics to obtain the benefits and avoid harm? One way is to pay close attention to the diagnostic process and its core concepts. Doing so reveals 3 errors that expand disease and increase overdiagnosis. The first error is to decouple diagnostics from harm, eg, by diagnosing insignificant conditions. The second error is to bypass proper validation of the relationship between test indicator and disease, eg, by introducing biomarkers for Alzheimer's disease before the tests are properly validated. The third error is to couple the name of disease to insignificant or indecisive indicators, eg, by lending the cancer name to preconditions, such as ductal carcinoma in situ. We need to avoid these errors to promote beneficial testing, bar harmful diagnostics, and evade unwarranted expansion of disease. Accordingly, we must stop identifying and testing for conditions that are only remotely associated with harm. We need more stringent verification of tests, and we must avoid naming indicators and indicative conditions after diseases. If not, we will end like ancient tragic heroes, succumbing because of our very best abilities. © 2018 John Wiley & Sons, Ltd.

  8. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partly yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of the ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
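    The attenuation caused by classical measurement error, and its method-of-moments correction, can be illustrated in the simplest setting of a single linear regression (the paper's mixed-model case with autocorrelation and Berkson components is considerably more involved). The variances below are assumed toy values, and the reliability ratio is taken as known rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
beta = 2.0                       # true effect (assumed)
sigma_x, sigma_u = 1.0, 0.8      # sd of true exposure and of classical error

x = rng.normal(0, sigma_x, n)          # true exposure
w = x + rng.normal(0, sigma_u, n)      # observed exposure with classical error
y = beta * x + rng.normal(0, 0.5, n)   # outcome

# The naive slope is attenuated by the reliability ratio lambda = var(x)/var(w).
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)

# Method-of-moments correction: divide the naive slope by lambda
# (lambda is assumed known here; in practice it must be estimated).
beta_corrected = beta_naive / lam
print(beta_naive, beta_corrected)
```

    With these variances, lambda ≈ 0.61, so the naive slope underestimates the true effect by almost 40% before correction.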

  9. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
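    The synthetic aperture reconstruction that sharpens the line fiducials is, at its core, delay-and-sum beamforming. The sketch below simulates a single point scatterer (a stand-in for a wire cross-section) seen by a stepped monostatic element and reconstructs it on a coarse grid; the geometry, pulse shape, and sampling are assumed toy values, not the paper's imaging setup:

```python
import numpy as np

# Minimal monostatic delay-and-sum (synthetic aperture) sketch: one element is
# stepped along x, records an echo from a point scatterer, and the image is
# formed by summing each record at the round-trip delay to each pixel.
c = 1540.0                            # speed of sound in tissue, m/s (assumed)
fs = 40e6                             # sampling rate, Hz (assumed)
scatterer = np.array([0.0, 30e-3])    # (x, z) position of the wire target

elem_x = np.linspace(-10e-3, 10e-3, 41)   # element positions along the array
t = np.arange(0, 60e-6, 1/fs)

def pulse(ts):
    """Short Gaussian-modulated pulse centred at time ts."""
    f0, bw = 5e6, 0.6e-6
    return np.exp(-((t - ts) / bw)**2) * np.cos(2*np.pi*f0*(t - ts))

# Simulated RF records: one echo per element at the round-trip time.
records = []
for ex in elem_x:
    d = np.hypot(ex - scatterer[0], scatterer[1])
    records.append(pulse(2*d/c))

# Delay-and-sum reconstruction on a coarse image grid.
xs = np.linspace(-5e-3, 5e-3, 21)
zs = np.linspace(25e-3, 35e-3, 41)
image = np.zeros((len(zs), len(xs)))
for i, z in enumerate(zs):
    for j, x in enumerate(xs):
        for ex, rec in zip(elem_x, records):
            tau_rt = 2*np.hypot(ex - x, z)/c
            image[i, j] += np.interp(tau_rt, t, rec)

zi, xi = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print("peak at x=%.1f mm, z=%.1f mm" % (xs[xi]*1e3, zs[zi]*1e3))
```

    Because the delays are recomputed per pixel, the focus is equally sharp at every depth, which is the property the calibration procedure exploits.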

  10. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck for the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS, building on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the prediction of copper concentration in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization methods as well as over our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average standard error (error bar), coefficient of determination (R²), root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
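    The effect of normalization on the relative standard deviation (RSD) can be illustrated with a toy model in which two emission lines share a common multiplicative shot-to-shot fluctuation. This sketches only the simple spectral-area normalization the paper compares against, not the PLS model itself, and all intensities and noise levels are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_shots = 500

# Toy LIBS-like data: two emission lines whose absolute intensities share a
# common shot-to-shot fluctuation factor (e.g. laser-energy/plasma variation).
common = rng.normal(1.0, 0.10, n_shots)            # multiplicative fluctuation
line_a = 100.0 * common * rng.normal(1, 0.01, n_shots)
line_b = 40.0  * common * rng.normal(1, 0.01, n_shots)
area = line_a + line_b                             # stand-in for spectral area

def rsd(v):
    """Relative standard deviation of a series of shot measurements."""
    return np.std(v) / np.mean(v)

rsd_raw = rsd(line_a)             # dominated by the common fluctuation
rsd_norm = rsd(line_a / area)     # area normalization cancels the common factor
print(f"RSD raw: {rsd_raw:.3%}, RSD normalized: {rsd_norm:.3%}")
```

    The common factor cancels exactly in the ratio, so only the independent per-line noise survives; real plasma-parameter variation is not purely multiplicative, which is why the paper models it explicitly.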

  11. The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center

    NASA Astrophysics Data System (ADS)

    Tseng, Pai-Chung; Chen, Shen-Len

    The geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Therefore, researchers pay attention to thermal error compensation technologies on CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be enhanced. In this research, neural-fuzzy theory has been used to derive a thermal prediction model. An IC-type thermometer has been used to detect the temperature variation of the heat sources. The thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory based on the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system, built with Inprise C++ Builder, provides a user-friendly operation interface. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is improved from ±10 µm to ±3 µm.
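    The multi-variable linear regression baseline mentioned above can be sketched as an ordinary least-squares fit of measured thermal drift against several temperature sensors. The sensor count, coefficients, and noise level below are hypothetical; the neural-fuzzy model in the paper replaces this linear map with a fuzzy-rule-based nonlinear one:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200                                   # number of warm-up measurements

# Synthetic temperature rises at three sensor locations (e.g. spindle, ball
# screw, bed); locations and values are hypothetical.
T = rng.uniform(0, 15, size=(n, 3))       # deg C above ambient
true_coef = np.array([3.2, 1.1, -0.6])    # um of drift per deg C (assumed)
drift = T @ true_coef + rng.normal(0, 0.5, n)   # probed thermal drift, um

# Multi-variable linear regression: drift ~ T @ c + c0, solved by least squares.
A = np.column_stack([T, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, drift, rcond=None)

predicted = A @ coef
residual = drift - predicted              # error remaining after compensation
print("max residual after compensation: %.2f um" % np.max(np.abs(residual)))
```

    The residual here is bounded by the sensor noise; in practice nonlinear thermal behavior leaves structured residuals, which is the gap the neural-fuzzy model aims to close.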

  12. Cyclic behavior, development, and characteristics of a ductile hybrid fiber-reinforced polymer (DHFRP) for reinforced concrete members

    NASA Astrophysics Data System (ADS)

    Hampton, Francis Patrick

    Reinforced concrete (R/C) structures, especially pavements and bridge decks, which constitute vital elements of the infrastructure of all industrialized societies, are deteriorating prematurely. Structural repair and upgrading of these structural elements have become a more economical option for constructed facilities, especially in the United States and Canada. One method of retrofitting concrete structures is the use of advanced materials. Fiber reinforced polymer (FRP) composite materials typically take the form of fabric sheets or reinforcing bars. While the strength and stiffness of FRP are high, composites are inherently brittle, with limited or no ductility. Conventional FRP systems cannot currently meet ductility demand and, therefore, may fail in a catastrophic mode. The primary goal of this research was to develop an optimized prototype 10-mm diameter DHFRP bar. The behavior of the bar under full load reversals to failure was investigated. This bar first needed to be designed and manufactured in the Fibrous Materials Research at Drexel University. Material properties were determined through testing to characterize the strength properties of the DHFRP. Similitude was used to demonstrate the scaling of properties from the original model bars. The four most important properties of the DHFRP bars are sufficient strength and stiffness, significant ductility so that plasticity can develop in the R/C section, and sufficient bond strength for the R/C section to develop its full strength. Once these properties were determined, the behavior of reinforced concrete members was investigated. This included the testing of prototype-size beams under monotonic loading and of model and prototype beam-columns under reverse cyclic loading. These tests confirmed the large ductility exhibited by the DHFRP. The energy absorption capacity of the bar was also demonstrated by the hysteretic behavior of the beam-columns. Displacement ductility factors in the range of 3–6 were achieved for all concrete elements tested. To study the long-term behavior of DHFRP, the creep-rupture strength of 5-mm bars was tested. This was conducted first on individual bar specimens and is important in the life-cycle design and performance of DHFRP reinforced concrete.

  13. Automatic Ammunition Identification Technology Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, B.

    1993-01-01

    The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics & Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes, since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.

  14. Automatic Ammunition Identification Technology Project. Ammunition Logistics Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, B.

    1993-03-01

    The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics & Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes, since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.

  15. Universal behavior of the γ⁎γ→(π0,η,η′) transition form factors

    PubMed Central

    Melikhov, Dmitri; Stech, Berthold

    2012-01-01

    The photon transition form factors of π, η and η′ are discussed in view of recent measurements. It is shown that the exact axial anomaly sum rule allows a precise comparison of all three form factors at high Q², independent of the different structures and distribution amplitudes of the participating pseudoscalar mesons. We conclude: (i) The πγ form factor reported by Belle is in excellent agreement with the nonstrange I=0 component of the η and η′ form factors obtained from the BaBar measurements. (ii) Within errors, the πγ form factor from Belle is compatible with the asymptotic pQCD behavior, similar to the η and η′ form factors from BaBar. Still, the best fits to the data sets of the πγ, ηγ, and η′γ form factors favor a universal small logarithmic rise, Q²F_{Pγ}(Q²) ∼ log(Q²). PMID:23226917

  16. Interpretation of fast-ion signals during beam modulation experiments

    DOE PAGES

    Heidbrink, W. W.; Collins, C. S.; Stagner, L.; ...

    2016-07-22

    Fast-ion signals produced by a modulated neutral beam are used to infer fast-ion transport. The measured quantity is the divergence of the perturbed fast-ion flux from the phase-space volume measured by the diagnostic, ∇·Γ̄. Since velocity-space transport often contributes to this divergence, the phase-space sensitivity of the diagnostic (or "weight function") plays a crucial role in the interpretation of the signal. The source and sink make major contributions to the signal, but their effects are accurately modeled by calculations that employ an exponential decay term for the sink. Recommendations for optimal design of a fast-ion transport experiment are given, illustrated by results from DIII-D measurements of fast-ion transport by Alfvén eigenmodes. Finally, the signal-to-noise ratio of the diagnostic, systematic uncertainties in the modeling of the source and sink, and the non-linearity of the perturbation all contribute to the error in ∇·Γ̄.

  17. New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data

    NASA Astrophysics Data System (ADS)

    Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.

    2009-11-01

    A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as the fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^TT_{l=2} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.

  18. Reconstruction of primordial tensor power spectra from B-mode polarization of the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Takashi; Komatsu, Eiichiro; Hazumi, Masashi; Sasaki, Misao

    2018-06-01

    Given observations of the B-mode polarization power spectrum of the cosmic microwave background (CMB), we can reconstruct power spectra of primordial tensor modes from the early Universe without assuming their functional form such as a power-law spectrum. The shape of the reconstructed spectra can then be used to probe the origin of tensor modes in a model-independent manner. We use the Fisher matrix to calculate the covariance matrix of tensor power spectra reconstructed in bins. We find that the power spectra are best reconstructed at wave numbers in the vicinity of k ≈ 6×10⁻⁴ and 5×10⁻³ Mpc⁻¹, which correspond to the "reionization bump" at ℓ ≲ 6 and "recombination bump" at ℓ ≈ 80 of the CMB B-mode power spectrum, respectively. The error bar between these two wave numbers is larger because of the lack of signal between the reionization and recombination bumps. The error bars increase sharply toward smaller (larger) wave numbers because of the cosmic variance (CMB lensing and instrumental noise). To demonstrate the utility of the reconstructed power spectra, we investigate whether we can distinguish between various sources of tensor modes, including those from the vacuum metric fluctuation and SU(2) gauge fields during single-field slow-roll inflation, open inflation, and massive gravity inflation. The results depend on the model parameters, but we find that future CMB experiments are sensitive to differences between these models. We make our calculation tool available online.
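    A binned Fisher-matrix forecast of the kind described above can be sketched schematically: for a Gaussian likelihood in which the observable responds linearly to binned amplitudes, the parameter covariance is the inverse Fisher matrix. The binning, noise model, and response matrix below are toy assumptions, not the paper's CMB computation:

```python
import numpy as np

# Toy Fisher forecast: the observable C_l responds linearly to binned
# amplitudes p_b, C_l = sum_b J[l, b] * p_b, with Gaussian noise sigma_l.
ells = np.arange(2, 200)
nbins = 4
edges = np.linspace(ells[0], ells[-1] + 1, nbins + 1)

# Response matrix J: top-hat bins in l (hypothetical binning).
J = np.zeros((ells.size, nbins))
for b in range(nbins):
    J[(ells >= edges[b]) & (ells < edges[b + 1]), b] = 1.0

# Per-multipole noise ~ cosmic variance, larger at low l (fewer modes).
sigma = 1.0 / np.sqrt(2 * ells + 1)

# Fisher matrix F = J^T C^-1 J for diagonal noise covariance C;
# the parameter covariance is F^-1 and error bars are its diagonal.
F = J.T @ (J / sigma[:, None]**2)
cov = np.linalg.inv(F)
errors = np.sqrt(np.diag(cov))
print(errors)
```

    With this noise model the lowest-l bin carries the fewest modes and hence the largest error bar, mirroring the cosmic-variance blow-up toward small wave numbers described in the abstract.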

  19. Project ARM: alcohol risk management to prevent sales to underage and intoxicated patrons.

    PubMed

    Toomey, T L; Wagenaar, A C; Gehan, J P; Kilian, G; Murray, D M; Perry, C L

    2001-04-01

    Clear policies and expectations are key to increasing responsible service of alcohol in licensed establishments. Few training programs focus exclusively on owners and managers of alcohol establishments to reduce the risk of alcohol service. Project ARM: Alcohol Risk Management is a one-on-one consultation program for owners and managers. Participants received information on risk level, policies to prevent illegal sales, legal issues, and staff communication. This nonrandomized demonstration project was implemented in five diverse bars. Two waves of underage and pseudo-intoxicated purchase attempts were conducted pre- and postintervention in the five intervention bars and nine matched control bars. Underage sales decreased by 11.5%, and sales to pseudo-intoxicated buyers decreased by 46%. Results were in the hypothesized direction but not statistically significant. A one-on-one, outlet-specific training program for owners and managers is a promising way to reduce illegal alcohol sales, particularly to obviously intoxicated individuals.

  20. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    NASA Astrophysics Data System (ADS)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories, absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements is established routine at magnetic observatories; the traditional scheme uses a fixed number of eight telescope orientations (Jankowski et al., 1996).

    We present a numerical method allowing for the evaluation of an arbitrary number of telescope orientations (a minimum of five, as there are five independent parameters). Our method provides D, I and Z base values and their calculated error bars.

    A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method.

    Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids having to calculate the magnetic meridian prior to the inclination measurements.

    Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular.

    The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).
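    The core idea, estimating a small set of parameters from an arbitrary number of orientations by least squares and reading error bars off the residual covariance, can be sketched generically. Since the paper's geometry is specific to the DI theodolite, the sketch below uses a generic overdetermined linear model with five unknowns (matching the five independent parameters mentioned above) as a stand-in; the design matrix and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy overdetermined calibration: each telescope orientation yields one
# reading that is linear in 5 unknown parameters. With more than the minimum
# of five orientations, the redundancy yields both a fit and error bars.
n_orient = 12                                     # readings taken (> 5 minimum)
true_p = np.array([0.3, -1.2, 0.8, 2.0, -0.5])    # hypothetical parameters

A = rng.normal(size=(n_orient, 5))                # stand-in design matrix
y = A @ true_p + rng.normal(0, 0.01, n_orient)    # readings with sensor noise

# Least-squares solution and formal error bars from the residual variance.
p_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
dof = n_orient - 5
s2 = np.sum((y - A @ p_hat)**2) / dof             # estimated noise variance
cov = s2 * np.linalg.inv(A.T @ A)
err = np.sqrt(np.diag(cov))
print(p_hat, err)
```

    Large standardized residuals in such a fit flag individual erroneous readings, which can then be discarded without invalidating the remaining data set, as the abstract notes.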

  1. Modified method for estimating petroleum source-rock potential using wireline logs, with application to the Kingak Shale, Alaska North Slope

    USGS Publications Warehouse

    Rouse, William A.; Houseknecht, David W.

    2016-02-11

    In 2012, the U.S. Geological Survey completed an assessment of undiscovered, technically recoverable oil and gas resources in three source rocks of the Alaska North Slope, including the lower part of the Jurassic to Lower Cretaceous Kingak Shale. In order to identify organic shale potential in the absence of a robust geochemical dataset from the lower Kingak Shale, we introduce two quantitative parameters, $\Delta DT_{\bar{x}}$ and $\Delta DT_{z}$, estimated from wireline logs from exploration wells and based in part on the commonly used delta-log-resistivity ($\Delta \log R$) technique. Calculation of $\Delta DT_{\bar{x}}$ and $\Delta DT_{z}$ is intended to produce objective parameters that may be proportional to the quality and volume, respectively, of potential source rocks penetrated by a well and can be used as mapping parameters to convey the spatial distribution of source-rock potential. Both the $\Delta DT_{\bar{x}}$ and $\Delta DT_{z}$ mapping parameters show increased source-rock potential from north to south across the North Slope, with the largest values at the toe of clinoforms in the lower Kingak Shale. Because thermal maturity is not considered in the calculation of $\Delta DT_{\bar{x}}$ or $\Delta DT_{z}$, total organic carbon values for individual wells cannot be calculated on the basis of $\Delta DT_{\bar{x}}$ or $\Delta DT_{z}$ alone. Therefore, the $\Delta DT_{\bar{x}}$ and $\Delta DT_{z}$ mapping parameters should be viewed as first-step reconnaissance tools for identifying source-rock potential.
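    The delta-log-resistivity technique that these parameters build on rests on a simple curve-separation calculation, sketched below. The log readings and baselines are hypothetical values, and the 0.02 scaling factor is the conventional one from Passey et al. (1990), i.e. 50 µs/ft of sonic travel time per resistivity decade:

```python
import numpy as np

# Sketch of the Passey "delta-log-R" overlay that the mapping parameters
# above build on in part. All values and baselines are hypothetical.
def delta_log_r(resistivity, dt, r_baseline, dt_baseline):
    """Curve separation between a resistivity log (ohm-m) and a scaled sonic
    log (dt, us/ft), relative to an organic-lean baseline interval."""
    return np.log10(resistivity / r_baseline) + 0.02 * (dt - dt_baseline)

# In the organic-lean baseline interval the separation is zero by
# construction; organic-rich intervals show elevated resistivity and sonic.
r_base, dt_base = 2.0, 90.0
lean = delta_log_r(2.0, 90.0, r_base, dt_base)     # baseline interval
rich = delta_log_r(20.0, 120.0, r_base, dt_base)   # positive separation
print(lean, rich)
```

    In the full Passey workflow the separation is then combined with thermal maturity (LOM) to estimate total organic carbon, which is exactly the step the abstract notes cannot be taken from the wireline parameters alone.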

  2. Other Tobacco Product Use Among Sexual Minority Young Adult Bar Patrons.

    PubMed

    Fallin-Bennett, Amanda; Lisha, Nadra E; Ling, Pamela M

    2017-09-01

    Lesbian, gay, and bisexual (LGB) individuals smoke at rates 1.5-2 times higher than the general population, but less is known about LGB consumption of other tobacco products (OTPs) and about gender differences. We examined OTP use among young adult LGB bar patrons and the relationships of past quit attempts, intention to quit, and binge drinking with OTP use. A cross-sectional survey of young adults (aged 18-26) in bars/nightclubs in seven U.S. cities between 2012 and 2014 (N=8,010; 1,101 LGB participants) was analyzed in 2016. Logistic regressions examined current use of five OTPs (cigarillos, electronic cigarettes, hookah, chewing tobacco, and snus) by sexual minority status, adjusting for demographics and comparing LB women and GB men with their heterosexual counterparts. LGB bar/nightclub patrons used all OTPs more than their heterosexual counterparts. LB women were more likely than heterosexual women to use cigarillos, electronic cigarettes, hookah, chew, and snus. GB men were more likely than heterosexual men to use cigarillos, electronic cigarettes, hookah, chew, and snus. A past-year quit attempt was associated with increased odds of electronic cigarette use in men and women, and with increased odds of dual use (cigarettes and OTPs) among men. Intention to quit was negatively associated with dual use among women. Binge drinking was associated with increased use of all OTPs across genders. LGB bar-going young adults are at higher risk for OTP use than their heterosexual counterparts. Bar-based interventions are needed to address all forms of tobacco use in this high-risk group. Copyright © 2017. Published by Elsevier Inc.

  3. Usability of a barcode scanning system as a means of data entry on a PDA for self-report health outcome questionnaires: a pilot study in individuals over 60 years of age

    PubMed Central

    Boissy, Patrick; Jacobs, Karen; Roy, Serge H

    2006-01-01

    Background Throughout the medical and paramedical professions, self-report health status questionnaires are used to gather patient-reported outcome measures. The objective of this pilot study was to evaluate in individuals over 60 years of age the usability of a PDA-based barcode scanning system with a text-to-speech synthesizer to collect data electronically from self-report health outcome questionnaires. Methods Usability of the system was tested on a sample of 24 community-living older adults (7 men, 17 women) ranging in age from 63 to 93 years. After receiving a brief demonstration on the use of the barcode scanner, participants were randomly assigned to complete two sets of 16 questions using the bar code wand scanner for one set and a pen for the other. Usability was assessed using directed interviews with a usability questionnaire and performance-based metrics (task times, errors, sources of errors). Results Overall, participants found barcode scanning easy to learn, easy to use, and pleasant. Participants were marginally faster in completing the 16 survey questions when using pen entry (20/24 participants). The mean response time with the barcode scanner was 31 seconds longer than traditional pen entry for a subset of 16 questions (p = 0.001). The responsiveness of the scanning system, expressed as first scan success rate, was less than perfect, with approximately one-third of first scans requiring a rescan to successfully capture the data entry. The responsiveness of the system can be explained by a combination of factors such as the location of the scanning errors, the type of barcode used as an answer field in the paper version, and the optical characteristics of the barcode scanner. Conclusion The results presented in this study offer insights regarding the feasibility, usability and effectiveness of using a barcode scanner with older adults as an electronic data entry method on a PDA. 
While participants in this study found their experience with the barcode scanning system enjoyable and learned to become proficient in its use, the responsiveness of the system constitutes a barrier to wide-scale use of such a system. Optimizing the graphical presentation of the information on paper should significantly increase the system's responsiveness. PMID:17184533

  4. Prescription Errors in Older Individuals with an Intellectual Disability: Prevalence and Risk Factors in the Healthy Ageing and Intellectual Disability Study

    ERIC Educational Resources Information Center

    Zaal, Rianne J.; van der Kaaij, Annemieke D. M.; Evenhuis, Heleen M.; van den Bemt, Patricia M. L. A.

    2013-01-01

    Prescribing pharmacotherapy for older individuals with an intellectual disability (ID) is a complex process, possibly leading to an increased risk of prescription errors. The objectives of this study were (1) to determine the prevalence of older individuals with an intellectual disability with at least one prescription error and (2) to identify…

  5. Smoking restrictions in bars and bartender smoking in the US, 1992-2007.

    PubMed

    Bitler, Marianne P; Carpenter, Christopher; Zavodny, Madeline

    2011-05-01

    The present work is an analysis of whether adoption of state clean indoor air laws (SCIALs) covering bars reduces the proportion of bartenders who smoke primarily by reducing smoking among people already employed as bartenders when restrictions are adopted or by changing the composition of the bartender workforce with respect to smoking behaviours. Logistic regressions were estimated for a variety of smoking outcomes, controlling for individual demographic characteristics, state economic characteristics, and state, year, and month fixed effects, using data on 1380 bartenders from the 1992-2007 Tobacco Use Supplement to the Current Population Survey combined with data on SCIALs from ImpacTeen. State restrictions on smoking in bars are negatively associated with whether a bartender smokes, with a 1-point increase in restrictiveness (on a scale of 0-3) associated with a 5.3% reduction in the odds of smoking. Bar SCIALs are positively associated with the likelihood a bartender reports never having smoked cigarettes but not with the likelihood a bartender reports having been a former smoker. State clean indoor air laws covering bars appear to reduce smoking among bartenders primarily by changing the composition of the bartender workforce with respect to smoking rather than by reducing smoking among people already employed as bartenders when restrictions are adopted. Such laws may nonetheless be an important public health tool for reducing secondhand smoke.
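    The reported effect size is how a logistic-regression coefficient is conventionally read: a one-unit increase multiplies the odds by exp(beta). The coefficient below is a hypothetical value chosen only to reproduce the quoted ~5.3% reduction, not one taken from the paper:

```python
import math

def percent_change_in_odds(beta):
    """Convert a logistic-regression coefficient to the percent change in
    the odds of the outcome for a one-unit increase in the covariate."""
    return (math.exp(beta) - 1.0) * 100.0

# A hypothetical coefficient of about -0.0545 per restrictiveness point
# (scale 0-3) reproduces the reported ~5.3% reduction in odds of smoking.
beta = -0.0545
print(round(percent_change_in_odds(beta), 1))
```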

  6. Modified SPC for short run test and measurement process in multi-stations

    NASA Astrophysics Data System (ADS)

    Koh, C. K.; Chin, J. F.; Kamaruddin, S.

    2018-03-01

    Due to short production runs and measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts ($\bar{Z}$ chart and W chart) for short runs in T&M processes in multi-stations. The proposed model standardizes the observed value with the measurement target (T) and rationed measurement uncertainty (U). An S-factor ($S_f$) is introduced to the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.
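    The standardization idea can be sketched in a generic form. The abstract does not give the paper's exact formulas, so the following assumes the common short-run standardization Z = (x − T)/U and treats the S-factor simply as a multiplier that narrows the control limits:

```python
def z_statistics(observations, target, uncertainty):
    """Standardize measurements against the target T and measurement
    uncertainty U (one common short-run Z-chart form; the paper's exact
    standardization is not given in the abstract)."""
    return [(x - target) / uncertainty for x in observations]

def out_of_control(z_values, s_factor=1.0, k=3.0):
    """Flag points beyond +/- k * s_factor; an S-factor below 1 narrows
    the limits, making small shifts easier to detect."""
    limit = k * s_factor
    return [abs(z) > limit for z in z_values]

zs = z_statistics([10.2, 9.9, 10.8], target=10.0, uncertainty=0.2)
print(out_of_control(zs, s_factor=0.5))
```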

  7. New and revised 14C dates for Hawaiian surface lava flows: Paleomagnetic and geomagnetic implications

    USGS Publications Warehouse

    Pressline, N.; Trusdell, F.A.; Gubbins, David

    2009-01-01

    Radiocarbon dates have been obtained for 30 charcoal samples corresponding to 27 surface lava flows from the Mauna Loa and Kilauea volcanoes on the Island of Hawaii. The submitted charcoal was a mixture of fresh and archived material. Preparation and analysis was undertaken at the NERC Radiocarbon Laboratory in Glasgow, Scotland, and the associated SUERC Accelerator Mass Spectrometry facility. The resulting dates range from 390 years B.P. to 12,910 years B.P. with corresponding error bars an order of magnitude smaller than previously obtained using the gas-counting method. The new and revised 14C data set can aid hazard and risk assessment on the island. The data presented here also have implications for geomagnetic modelling, which at present is limited by large dating errors. Copyright 2009 by the American Geophysical Union.

  8. An accurate ab initio quartic force field for ammonia

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.; Lee, Timothy J.; Taylor, Peter R.

    1992-01-01

    The quartic force field of ammonia is computed using basis sets of spdf/spd and spdfg/spdf quality and an augmented coupled cluster method. After correcting for Fermi resonance, the computed fundamentals and nu 4 overtones agree on average to better than 3 cm^-1 with the experimental ones except for nu 2. The discrepancy for nu 2 is principally due to higher-order anharmonicity effects. The computed omega 1, omega 3, and omega 4 confirm the recent experimental determination by Lehmann and Coy (1988) but are associated with smaller error bars. The discrepancy between the computed and experimental omega 2 is far outside the expected error range, which is also attributed to higher-order anharmonicity effects not accounted for in the experimental determination. Spectroscopic constants are predicted for a number of symmetric and asymmetric top isotopomers of NH3.

  9. Is a shift from research on individual medical error to research on health information technology underway? A 40-year analysis of publication trends in medical journals.

    PubMed

    Erlewein, Daniel; Bruni, Tommaso; Gadebusch Bondio, Mariacarla

    2018-06-07

    In 1983, McIntyre and Popper underscored the need for more openness in dealing with errors in medicine. Since then, much has been written on individual medical errors. Furthermore, at the beginning of the 21st century, researchers and medical practitioners increasingly approached individual medical errors through health information technology. Hence, the question arises whether the attention of biomedical researchers shifted from individual medical errors to health information technology. We ran a study to determine publication trends concerning individual medical errors and health information technology in medical journals over the last 40 years. We used the Medical Subject Headings (MeSH) taxonomy in the database MEDLINE. Each year, we analyzed the percentage of relevant publications to the total number of publications in MEDLINE. The trends identified were tested for statistical significance. Our analysis showed that the percentage of publications dealing with individual medical errors increased from 1976 until the beginning of the 21st century but began to drop in 2003. Both the upward and the downward trends were statistically significant (P < 0.001). A breakdown by country revealed that it was the weight of the US and British publications that determined the overall downward trend after 2003. On the other hand, the percentage of publications dealing with health information technology doubled between 2003 and 2015. The upward trend was statistically significant (P < 0.001). The identified trends suggest that the attention of biomedical researchers partially shifted from individual medical errors to health information technology in the USA and the UK. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
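    The direction-of-trend step can be sketched as an ordinary least-squares slope fitted to yearly publication shares. The abstract does not state which significance test the authors used, and the yearly shares below are synthetic, shaped only to mimic the described rise-then-fall pattern:

```python
def ols_slope(years, pct):
    """Least-squares slope of publication share (percent of MEDLINE) vs.
    year; the sign of the slope gives the direction of the trend."""
    n = len(years)
    mx = sum(years) / n
    my = sum(pct) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, pct))
    sxx = sum((x - mx) ** 2 for x in years)
    return sxy / sxx

# Hypothetical shares rising before 2003 and falling after, as described
up = ols_slope(range(1976, 2004), [0.1 + 0.01 * i for i in range(28)])
down = ols_slope(range(2003, 2016), [0.38 - 0.005 * i for i in range(13)])
print(up > 0, down < 0)
```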

  10. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
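    The free distance that underlies the effective free distance vector can be computed by a shortest-path search over the encoder trellis: the minimum output Hamming weight over paths that leave the all-zero state and first return to it. This sketch uses the textbook rate-1/2, memory-2 code with generators (7,5) octal, not any encoder from the paper:

```python
import heapq

def free_distance(g_list, m):
    """Free distance of a binary convolutional code via Dijkstra over the
    trellis. g_list holds generator polynomials as ints (highest bit acts
    on the current input); m is the encoder memory (register length)."""
    def step(state, u):
        reg = (u << m) | state            # [input bit, memory bits]
        out_w = sum(bin(reg & g).count("1") % 2 for g in g_list)
        return (reg >> 1), out_w          # shifted register, branch weight

    # Start with the branch that diverges from the zero state (input 1).
    start, w0 = step(0, 1)
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w                      # first return to the zero state
        if w > dist.get(s, float("inf")):
            continue
        for u in (0, 1):
            ns, bw = step(s, u)
            if w + bw < dist.get(ns, float("inf")):
                dist[ns] = w + bw
                heapq.heappush(heap, (w + bw, ns))
    return None

# Classic rate-1/2, memory-2 code with generators (7,5) in octal
print(free_distance([0b111, 0b101], m=2))  # expect 5
```

    Effective free distances per input position of a rate-k/n encoder are defined analogously, restricting the search to paths whose corresponding input bit is nonzero.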

  11. Refractive Errors Affect the Vividness of Visual Mental Images

    PubMed Central

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

    The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition where the refracting system of the eye fails to focus objects sharply on the retina. As a consequence refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception. PMID:23755186

  12. Refractive errors affect the vividness of visual mental images.

    PubMed

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

    The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition where the refracting system of the eye fails to focus objects sharply on the retina. As a consequence refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception.

  13. Stellar mass distribution of S4G disk galaxies and signatures of bar-induced secular evolution

    NASA Astrophysics Data System (ADS)

    Díaz-García, S.; Salo, H.; Laurikainen, E.

    2016-12-01

    Context. Models of galaxy formation in a cosmological framework need to be tested against observational constraints, such as the average stellar density profiles (and their dispersion) as a function of fundamental galaxy properties (e.g. the total stellar mass). Simulation models predict that the torques produced by stellar bars efficiently redistribute the stellar and gaseous material inside the disk, pushing it outwards or inwards depending on whether it is beyond or inside the bar corotation resonance radius. Bars themselves are expected to evolve, getting longer and narrower as they trap particles from the disk and slow down their rotation speed. Aims: We use 3.6 μm photometry from the Spitzer Survey of Stellar Structure in Galaxies (S4G) to trace the stellar distribution in nearby disk galaxies (z ≈ 0) with total stellar masses 10^8.5 ≲ M∗/M⊙ ≲ 10^11 and mid-IR Hubble types −3 ≤ T ≤ 10. We characterize the stellar density profiles (Σ∗), the stellar contribution to the rotation curves (V3.6 μm), and the m = 2 Fourier amplitudes (A2) as a function of M∗ and T. We also describe the typical shapes and strengths of stellar bars in the S4G sample and link their properties to the total stellar mass and morphology of their host galaxy. Methods: For 1154 S4G galaxies with disk inclinations lower than 65°, we perform a Fourier decomposition and rescale their images to a common frame determined by the size in physical units, by their disk scalelength, and for 748 barred galaxies by both the length and orientation of their bars. We stack the resized density profiles and images to obtain statistically representative average stellar disks and bars in bins of M∗ and T. Based on the radial force profiles of individual galaxies we calculate the mean stellar contribution to the circular velocity. We also calculate average A2 profiles, where the radius is normalized to R25.5. 
    Furthermore, we infer the gravitational potentials from the synthetic bars to obtain the tangential-to-radial force ratio (QT) and A2 profiles in the different bins. We also apply ellipse fitting to quantitatively characterize the shape of the bar stacks. Results: For M∗ ≥ 10^9 M⊙, we find a significant difference in the stellar density profiles of barred and non-barred systems: (I) disks in barred galaxies show larger scalelengths (hR) and fainter extrapolated central surface brightnesses (Σ°); (II) the mean surface brightness profiles (Σ∗) of barred and non-barred galaxies intersect each other slightly beyond the mean bar length, most likely at the bar corotation; and (III) the central mass concentration of barred galaxies is higher (by almost a factor 2 when T ≤ 5) than in their non-barred counterparts. The averaged Σ∗ profiles follow an exponential slope down to at least 10 M⊙ pc^-2, which is the typical depth beyond which the sample coverage in the radial direction starts to drop. Central mass concentrations in massive systems (≥10^10 M⊙) are substantially larger than in fainter galaxies, and their prominence scales with T. This segregation also manifests in the inner slope of the mean stellar component of the circular velocity: lenticular (S0) galaxies present the most sharply rising V3.6 μm. Based on the analysis of bar stacks, we show that early- and intermediate-type spirals (0 ≤ T < 5) have intrinsically narrower bars than later types and S0s, whose bars are oval-shaped. We show a clear agreement between galaxy family and quantitative estimates of bar strength. In early- and intermediate-type spirals, A2 is larger within and beyond the typical bar region among barred galaxies than in the non-barred subsample. Strongly barred systems also tend to have larger A2 amplitudes at all radii than their weakly barred counterparts. 
Conclusions: Using near-IR wavelengths (S4G 3.6 μm), we provide observational constraints that galaxy formation models can be checked against. In particular, we calculate the mean stellar density profiles, and the disk(+bulge) component of the rotation curve (and their dispersion) in bins of M∗ and T. We find evidence for bar-induced secular evolution of disk galaxies in terms of disk spreading and enhanced central mass concentration. We also obtain average bars (2D), and we show that bars hosted by early-type galaxies are more centrally concentrated and have larger density amplitudes than their late-type counterparts. The FITS files of the synthetic images and the tabulated radial profiles of the mean (and dispersion of) stellar mass density, 3.6 μm surface brightness, Fourier amplitudes, gravitational force, and the stellar contribution to the circular velocity are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/596/A84
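    The m = 2 Fourier amplitude used here as a bar-strength proxy can be illustrated on a single radial annulus. Normalization conventions vary, and this is a generic form rather than the S4G pipeline: the amplitude is the modulus of the flux-weighted m = 2 phase sum, normalized by the m = 0 (total) flux.

```python
import cmath
import math

def fourier_amplitude(phis, fluxes, m=2):
    """Normalized m-th azimuthal Fourier amplitude for one annulus:
    A_m = |sum_j f_j exp(i m phi_j)| / sum_j f_j (generic convention)."""
    num = abs(sum(f * cmath.exp(1j * m * phi) for f, phi in zip(fluxes, phis)))
    return num / sum(fluxes)

# A pure cos(2 phi) bar-like modulation on a uniform ring gives A2 = 0.5
phis = [2 * math.pi * k / 360 for k in range(360)]
fluxes = [1.0 + math.cos(2 * p) for p in phis]
print(round(fourier_amplitude(phis, fluxes), 3))
```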

  14. Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge

    PubMed Central

    Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.

    2016-01-01

    Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett’s acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that either the simulations used for TI or the simulations used for BAR, or both, are not fully converged and the two sets of simulations may have sampled different phase-space regions. The penalty scores of force field parameters of the 10 guest molecules provided by CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, which yielded a root-mean-square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
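    The BAR step named in the abstract can be sketched for equal numbers of forward and reverse work samples. This is a minimal textbook form solved by bisection on the self-consistency equation, not the authors' serial-insertion pipeline:

```python
import math

def bar_delta_f(w_forward, w_reverse, beta=1.0, tol=1e-9):
    """Bennett acceptance ratio free-energy difference for equal sample
    counts: solve sum_F f(beta*(W_F - dF)) = sum_R f(beta*(W_R + dF)),
    where f is the Fermi function, by bisection (g is monotone in dF)."""
    def fermi(x):
        return 1.0 / (1.0 + math.exp(x))

    def g(df):
        fwd = sum(fermi(beta * (w - df)) for w in w_forward)
        rev = sum(fermi(beta * (w + df)) for w in w_reverse)
        return fwd - rev

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With one forward work value a and one reverse value b, the equation
# reduces to dF = (a - b) / 2: here (3.0 - 1.0) / 2 = 1.0
print(round(bar_delta_f([3.0], [1.0]), 6))
```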

  15. Evaluation of force-torque displays for use with space station telerobotic activities

    NASA Technical Reports Server (NTRS)

    Hendrich, Robert C.; Bierschwale, John M.; Manahan, Meera K.; Stuart, Mark A.; Legendre, A. Jay

    1992-01-01

    Recent experiments which addressed Space Station remote manipulation tasks found that tactile force feedback (reflecting forces and torques encountered at the end-effector through the manipulator hand controller) does not improve performance significantly. Subjective response from astronaut and non-astronaut test subjects indicated that force information, provided visually, could be useful. No research exists which specifically investigates methods of presenting force-torque information visually. This experiment was designed to evaluate seven different visual force-torque displays which were found in an informal telephone survey. The displays were prototyped in the HyperCard programming environment. In a within-subjects experiment, 14 subjects nullified forces and torques presented statically, using response buttons located at the bottom of the screen. Dependent measures included questionnaire data, errors, and response time. Subjective data generally demonstrate that subjects rated variations of pseudo-perspective displays consistently better than bar graph and digital displays. Subjects commented that the bar graph and digital displays could be used, but were not compatible with using hand controllers. Quantitative data show similar trends to the subjective data, except that the bar graph and digital displays both provided good performance, perhaps due to the mapping of response buttons to display elements. Results indicate that for this set of displays, the pseudo-perspective displays generally represent a more intuitive format for presenting force-torque information.

  16. Measurement of elliptic flow of light nuclei at $\sqrt{s_{NN}}$ = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV at the BNL Relativistic Heavy Ion Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamczyk, L.; Adkins, J. K.; Agakishiev, G.

    Here we present measurements of second-order azimuthal anisotropy ($v_2$) at midrapidity ($|y| < 1.0$) for light nuclei $d$, $t$, $^3$He ($\sqrt{s_{NN}}$ = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV) and antinuclei $\bar{d}$ ($\sqrt{s_{NN}}$ = 200, 62.4, 39, 27, and 19.6 GeV) and $^3\overline{\mathrm{He}}$ ($\sqrt{s_{NN}}$ = 200 GeV) in the STAR (Solenoidal Tracker at RHIC) experiment. The $v_2$ for these light nuclei produced in heavy-ion collisions is compared with those for $p$ and $\bar{p}$. We observe mass ordering in nuclei $v_2(p_T)$ at low transverse momenta ($p_T < 2.0$ GeV/c). We also find a centrality dependence of $v_2$ for $d$ and $\bar{d}$. The magnitudes of $v_2$ for $t$ and $^3$He agree within statistical errors. Light-nuclei $v_2$ are compared with predictions from a blast-wave model. Atomic mass number ($A$) scaling of light-nuclei $v_2(p_T)$ seems to hold for $p_T/A < 1.5$ GeV/c. Results on light-nuclei $v_2$ from a transport-plus-coalescence model are consistent with the experimental measurements.
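    The measured quantity $v_2$ is the second Fourier coefficient of the azimuthal particle yield relative to the reaction plane. As a textbook illustration only (the STAR analysis additionally reconstructs the event plane and corrects for its resolution, which is not shown here):

```python
import math

def v2_from_yield(phi, dn_dphi):
    """v2 = <cos 2*phi> of the azimuthal yield, with the reaction plane
    taken at phi = 0 (textbook definition, no resolution correction)."""
    num = sum(y * math.cos(2.0 * p) for p, y in zip(phi, dn_dphi))
    return num / sum(dn_dphi)

# A yield dN/dphi ~ 1 + 2 * 0.1 * cos(2 phi) on a uniform grid gives ~0.1
phi = [2 * math.pi * k / 360 for k in range(360)]
yields = [1.0 + 0.2 * math.cos(2 * p) for p in phi]
print(round(v2_from_yield(phi, yields), 6))
```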

  17. Measurement of elliptic flow of light nuclei at $\sqrt{s_{NN}}$ = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV at the BNL Relativistic Heavy Ion Collider

    DOE PAGES

    Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; ...

    2016-09-23

    Here we present measurements of second-order azimuthal anisotropy ($v_2$) at midrapidity ($|y| < 1.0$) for light nuclei $d$, $t$, $^3$He ($\sqrt{s_{NN}}$ = 200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV) and antinuclei $\bar{d}$ ($\sqrt{s_{NN}}$ = 200, 62.4, 39, 27, and 19.6 GeV) and $^3\overline{\mathrm{He}}$ ($\sqrt{s_{NN}}$ = 200 GeV) in the STAR (Solenoidal Tracker at RHIC) experiment. The $v_2$ for these light nuclei produced in heavy-ion collisions is compared with those for $p$ and $\bar{p}$. We observe mass ordering in nuclei $v_2(p_T)$ at low transverse momenta ($p_T < 2.0$ GeV/c). We also find a centrality dependence of $v_2$ for $d$ and $\bar{d}$. The magnitudes of $v_2$ for $t$ and $^3$He agree within statistical errors. Light-nuclei $v_2$ are compared with predictions from a blast-wave model. Atomic mass number ($A$) scaling of light-nuclei $v_2(p_T)$ seems to hold for $p_T/A < 1.5$ GeV/c. Results on light-nuclei $v_2$ from a transport-plus-coalescence model are consistent with the experimental measurements.

  18. Ghost-Free APT Analysis of Perturbative QCD Observables

    NASA Astrophysics Data System (ADS)

    Shirkov, Dmitry V.

    We review the essence and applications of the recently devised ghost-free Analytic Perturbation Theory (APT). First, we discuss the main intrinsic problem of perturbative QCD, ghost singularities, and summarize its resolution within the APT. Using examples at diverse energy and momentum-transfer values, we show the improved convergence of the APT-modified QCD expansion. In the APT analysis the three-loop contribution ($\sim \alpha_s^3$) is numerically inessential, which raises hope for a practical resolution of the well-known problem of the unsatisfactory convergence of QFT perturbation series due to their asymptotic nature. Our next result is that the usual perturbative analysis of time-like events is not adequate at $s \leq 2\ \mathrm{GeV}^2$; in particular, this applies to tau decay. Then, for the "high" ($f = 5$) region, we show that the common NLO/NLLA perturbative approximation widely used there (at $10\ \mathrm{GeV} \lesssim \sqrt{s} \lesssim 170\ \mathrm{GeV}$) yields a systematic negative theoretical error at the level of a couple of per cent in the $\bar{\alpha}_s^2$ values. This leads to the conclusion that the $\bar{\alpha}_s(M_Z^2)$ value averaged over the $f = 5$ data, $\langle \bar{\alpha}_s(M_Z^2) \rangle_{f=5} \simeq 0.124$, differs appreciably from the currently popular "world average" (= 0.118).
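    For reference, the one-loop spacelike form of the ghost-free coupling that APT is built on can be written as follows (quoted from the general Shirkov–Solovtsov APT literature, not from this review's text):

```latex
\alpha_{\mathrm{an}}^{(1)}(Q^2)
  \;=\;
  \frac{1}{\beta_0}
  \left[
    \frac{1}{\ln\!\left(Q^2/\Lambda^2\right)}
    \;+\;
    \frac{\Lambda^2}{\Lambda^2 - Q^2}
  \right],
```

    where the second term cancels the Landau ghost pole at $Q^2 = \Lambda^2$ while leaving the ultraviolet behaviour of the standard running coupling unchanged.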

  19. The Swiss cheese model of adverse event occurrence--Closing the holes.

    PubMed

    Stein, James E; Heiss, Kurt

    2015-12-01

    Traditional surgical attitude regarding error and complications has focused on individual failings. Human factors research has brought new and significant insights into the occurrence of error in healthcare, helping us identify systemic problems that injure patients while enhancing individual accountability and teamwork. This article introduces human factors science and its applicability to teamwork, surgical culture, medical error, and individual accountability. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. An anthropomorphic phantom for quantitative evaluation of breast MRI.

    PubMed

    Freed, Melanie; de Zwart, Jacco A; Loud, Jennifer T; El Khouli, Riham H; Myers, Kyle J; Greene, Mark H; Duyn, Jeff H; Badano, Aldo

    2011-02-01

    In this study, the authors aim to develop a physical, tissue-mimicking phantom for quantitative evaluation of breast MRI protocols. The objective of this phantom is to address the need for improved standardization in breast MRI and provide a platform for evaluating the influence of image protocol parameters on lesion detection and discrimination. Quantitative comparisons between patient and phantom image properties are presented. The phantom is constructed using a mixture of lard and egg whites, resulting in a random structure with separate adipose- and glandular-mimicking components. T1 and T2 relaxation times of the lard and egg components of the phantom were estimated at 1.5 T from inversion recovery and spin-echo scans, respectively, using maximum-likelihood methods. The image structure was examined quantitatively by calculating and comparing spatial covariance matrices of phantom and patient images. A static, enhancing lesion was introduced by creating a hollow mold with stereolithography and filling it with a gadolinium-doped water solution. Measured phantom relaxation values fall within 2 standard errors of human values from the literature and are reasonably stable over 9 months of testing. Comparison of the covariance matrices of phantom and patient data demonstrates that the phantom and patient data have similar image structure. Their covariance matrices are the same to within error bars in the anterior-posterior direction and to within about two error bars in the right-left direction. The signal from the phantom's adipose-mimicking material can be suppressed using active fat-suppression protocols. A static, enhancing lesion can also be included with the ability to change morphology and contrast agent concentration. The authors have constructed a phantom and demonstrated its ability to mimic human breast images in terms of key physical properties that are relevant to breast MRI. 
    This phantom thus provides a platform for the optimization and standardization of breast MRI protocols for lesion detection and characterization.
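    One generic way to compute a spatial covariance matrix that summarizes image structure (the paper's exact estimator is not specified in the abstract) is to treat small pixel patches as observation vectors. This sketch assumes NumPy is available and uses made-up data:

```python
import numpy as np

def patch_covariance(image, patch=4):
    """Covariance matrix of all non-overlapping patch x patch blocks,
    each flattened to a vector; off-diagonal entries capture spatial
    correlation between pixel positions within a patch."""
    h, w = image.shape
    rows = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            rows.append(image[i:i + patch, j:j + patch].ravel())
    x = np.array(rows, dtype=float)
    return np.cov(x, rowvar=False)   # (patch*patch) x (patch*patch)

cov = patch_covariance(np.arange(64.0).reshape(8, 8), patch=4)
print(cov.shape)
```

    Comparing phantom and patient data then reduces to comparing the two matrices entry-wise, with uncertainties estimated from the number of patches.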

  1. Integrating technology to improve medication administration.

    PubMed

    Prusch, Amanda E; Suess, Tina M; Paoletti, Richard D; Olin, Stephen T; Watts, Starann D

    2011-05-01

    The development, implementation, and evaluation of an i.v. interoperability program to advance medication safety at the bedside are described. I.V. interoperability integrates intelligent infusion devices (IIDs), the bar-code-assisted medication administration system, and the electronic medication administration record system into a bar-code-driven workflow that populates provider-ordered, pharmacist-validated infusion parameters on IIDs. The purpose of this project was to improve medication safety through the integration of these technologies and decrease the potential for error during i.v. medication administration. Four key phases were essential to developing and implementing i.v. interoperability: (a) preparation, (b) i.v. interoperability pilot, (c) preliminary validation, and (d) expansion. The establishment of pharmacy involvement in i.v. interoperability resulted in two additional safety checks: pharmacist infusion rate oversight and nurse independent validation of the autoprogrammed rate. After instituting i.v. interoperability, monthly compliance to the telemetry drug library increased to a mean ± S.D. of 72.1% ± 2.1% from 56.5% ± 1.5%, and the medical-surgical nursing unit's drug library monthly compliance rate increased to 58.6% ± 2.9% from 34.1% ± 2.6% (p < 0.001 for both comparisons). The number of manual pump edits decreased with both telemetry and medical-surgical drug libraries, demonstrating a reduction from 56.9 ± 12.8 to 14.2 ± 3.9 and from 61.2 ± 15.4 to 14.7 ± 3.8, respectively (p < 0.001 for both comparisons). Through the integration and incorporation of pharmacist oversight for rate changes, the telemetry and medical-surgical patient care areas demonstrated a 32% reduction in reported monthly errors involving i.v. administration of heparin. By integrating two stand-alone technologies, i.v. interoperability was implemented to improve medication administration. 
Medication errors were reduced, nursing workflow was simplified, and pharmacists became involved in checking infusion rates of i.v. medications.

  2. Does the Newtonian Gravity "Constant" G Vary?

    NASA Astrophysics Data System (ADS)

    Noerdlinger, Peter D.

    2015-08-01

    A series of measurements of Newton's gravity constant, G, dating back as far as 1893, yielded widely varying values, the variation greatly exceeding the stated error estimates (Gillies, 1997; Quinn, 2000; Mohr et al., 2008). The value of G is usually said to be unrelated to other physics, but we point out that the 8B solar neutrino rate ought to be very sensitive to it. Improved pulsar timing could also help settle whether G really varies. We claim that the variation in measured values over time (1893-2014 C.E.) is a more serious problem than the failure of the error bars to overlap; it appears that challenging or adjusting the error bars hardly masks the underlying disagreement in central values. We have assessed whether variations in the gravitational potential due to (for example) local dark matter (DM) could explain the variations. We find that the required potential fluctuations could transiently accelerate the Solar System and nearby stars to speeds in excess of the Galactic escape speed. Previous theories for the variation in G generally deal with supposed secular variation on a cosmological timescale, or very rapid oscillations whose envelope changes on that scale (Steinhardt and Will 1995). Therefore, these analyses fail to support variations on the timescale of years or spatial scales of order parsecs, which would be required by the data for G. We note that true variations in G would be associated with variations in clock rates (Derevianko and Pospelov 2014; Loeb and Maoz 2015), which could mask changes in orbital dynamics. Geringer-Sameth et al (2014) studied γ-ray emission from the nearby Reticulum dwarf galaxy, which is expected to be free of "ordinary" (stellar, black hole) γ-ray sources, and found evidence for DM decay. Bernabei et al (2003) also found evidence for DM penetrating deep underground at Gran Sasso. If, indeed, variations in G can be tied to variations in gravitational potential, we have a new tool to assess the DM density.

  3. Galaxy bias from the Dark Energy Survey Science Verification data: Combining galaxy density maps and weak lensing maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C.; Pujol, A.; Gaztañaga, E.

    We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.
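The "consistent at the 2σ level" statement above is the standard significance-of-difference check for two independent measurements with 1σ error bars. A minimal sketch; the first value is the quoted z = 0.2–0.4 bias, while the comparison value is hypothetical and not taken from the paper:

```python
import math

def n_sigma_difference(x1, s1, x2, s2):
    """How many combined standard deviations separate two independent
    measurements with 1-sigma uncertainties s1 and s2."""
    return abs(x1 - x2) / math.sqrt(s1 ** 2 + s2 ** 2)

# Bias in the z = 0.2-0.4 bin quoted above (1.12 +/- 0.19), compared
# with a hypothetical clustering-based value of 1.35 +/- 0.15.
sep = n_sigma_difference(1.12, 0.19, 1.35, 0.15)
print(round(sep, 2))  # separation in units of the combined sigma
```

Two values are conventionally called consistent at the 1σ (or 2σ) level when this separation is below 1 (or 2).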

  4. Galaxy bias from the Dark Energy Survey Science Verification data: Combining galaxy density maps and weak lensing maps

    DOE PAGES

    Chang, C.; Pujol, A.; Gaztañaga, E.; ...

    2016-04-15

    We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.

  5. Drinking Context and Drinking Problems Among Black, White, and Hispanic Men and Women in the 1984, 1995, and 2005 U.S. National Alcohol Surveys*

    PubMed Central

    Nyaronga, Dan; Greenfield, Thomas K.; McDaniel, Patricia A.

    2009-01-01

    Objective: The purpose of this study was to investigate the preferred drinking contexts of different gender and ethnic groups (white, black, and Hispanic men and women) by examining where these groups do most of their drinking and to what extent drinking context preferences are associated with certain drinking-related consequences. Method: The study used data from the 1984, 1995, and 2005 U.S. National Alcohol Surveys. Among current drinkers, cluster analyses of volume drunk in six contexts (restaurants, bars, others' parties, quiet evenings at home, having friends drop over at home, and hanging out in public places) were used to classify individuals by their drinking context preferences in each gender-by-ethnicity subgroup. Results: We identified three highly similar drinking context-preference clusters within each of the six subgroups: (1) bar-plus group (did most drinking in bars, plus much in other venues), (2) home group (did most drinking at home, and a fair amount elsewhere), and (3) light group (drank almost nothing during quiet evenings at home and less in other settings than the other two clusters). For a number of ethnic-by-gender groups, context preference group assignment predicted drinking-related problems over and above general drinking patterns. For example, for all groups, the bar-plus preference group relative to the light group showed higher risk of arguments, fighting, and drunk driving, after taking into account the volume consumed, frequency of heavy drinking, age, and year of survey. Conclusions: Examining individuals' preferred drinking contexts may provide important information to augment overall drinking patterns in risk and prevention studies. PMID:19118387
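The cluster analysis described above assigns each drinker to a context-preference group from the volumes consumed in each setting. A minimal k-means sketch of that idea, with hypothetical volumes reduced to two contexts (bar, home) and k = 3 for brevity; this is not the study's actual clustering method or data:

```python
def assign(points, centroids):
    """Label each point with the index of its nearest centroid."""
    def d2(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    return [min(range(len(centroids)), key=lambda j: d2(p, centroids[j]))
            for p in points]

def update(points, labels, k):
    """Recompute each centroid as the mean of its assigned points.
    (No empty-cluster handling; fine for this toy example.)"""
    cents = []
    for j in range(k):
        members = [p for p, lab in zip(points, labels) if lab == j]
        cents.append(tuple(sum(x) / len(members) for x in zip(*members)))
    return cents

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        centroids = update(points, assign(points, centroids), len(centroids))
    return assign(points, centroids), centroids

# Six hypothetical drinkers: (volume drunk in bars, volume drunk at home).
drinkers = [(9, 3), (8, 4), (2, 9), (3, 8), (1, 1), (0, 1)]
labels, centroids = kmeans(drinkers, [(9, 3), (2, 9), (1, 1)])
print(labels)  # group indices: bar-plus, bar-plus, home, home, light, light
```

With these toy inputs, the three recovered groups mirror the bar-plus, home, and light clusters described in the abstract.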

  6. A novel in-frame deletion affecting the BAR domain of OPHN1 in a family with intellectual disability and hippocampal alterations

    PubMed Central

    Santos-Rebouças, Cíntia Barros; Belet, Stefanie; Guedes de Almeida, Luciana; Ribeiro, Márcia Gonçalves; Medina-Acosta, Enrique; Bahia, Paulo Roberto Valle; Alves da Silva, Antônio Francisco; Lima dos Santos, Flávia; Borges de Lacerda, Glenda Corrêa; Pimentel, Márcia Mattos Gonçalves; Froyen, Guy

    2014-01-01

    Oligophrenin-1 (OPHN1) is one of at least seven genes located on chromosome X that take part in Rho GTPase-dependent signaling pathways involved in X-linked intellectual disability (XLID). Mutations in OPHN1 were primarily described as an exclusive cause of non-syndromic XLID, but the re-evaluation of the affected individuals using brain imaging displayed fronto-temporal atrophy and cerebellar hypoplasia as neuroanatomical marks. In this study, we describe clinical, genetic and neuroimaging data of a three generation Brazilian XLID family co-segregating a novel intragenic deletion in OPHN1. This deletion results in an in-frame loss of exon 7 at transcription level (c.781_891del; r.487_597del), which is predicted to abolish 37 amino acids from the highly conserved N-terminal BAR domain of OPHN1. cDNA expression analysis demonstrated that the mutant OPHN1 transcript is stable and no abnormal splicing was observed. Features shared by the affected males of this family include neonatal hypotonia, strabismus, prominent root of the nose, deep set eyes, hyperactivity and instability/intolerance to frustration. Cranial MRI scans showed large lateral ventricles, vermis hypoplasia and cystic dilatation of the cisterna magna in all affected males. Interestingly, hippocampal alterations that have not been reported in patients with loss-of-function OPHN1 mutations were found in three affected individuals, suggesting an important function for the BAR domain in the hippocampus. This is the first description of an in-frame deletion within the BAR domain of OPHN1 and could provide new insights into the role of this domain in relation to brain and cognitive development or function. PMID:24105372

  7. Regulation of the Two Delta Crystallin Genes during Lens Development in the Chicken Embryo

    DTIC Science & Technology

    1991-08-22

    Stabilization of tubulin mRNA by inhibition of protein synthesis in sea urchin embryos. Mol. Cell. Biol. 8, 3518-3525. Goto, K., Okada, T.S...counts from twenty lens epithelia. Error bars are ± SEM. Symbols: control lens tissue (square), 0.5 ng/ml actinomycin D (inverted triangle), 30 ng...[35S]-methionine for 5 hr in the absence or presence of actinomycin D (0.5 or 30 µg/ml). Values are the means ± SEM for ten groups of three lens

  8. A Search for Periodicity in the X-Ray Spectrum of Black Hole Candidate A0620-00

    DTIC Science & Technology

    1991-06-01

    They are observed as radio pulsars and as the X-ray emitting components of binary X-ray sources. The limits of stability of neutron stars are not...4 Lo ). The three candidates are CYG X-1, LMC X-3, and A0620. In this section all data such as mass functions, luminosities, distances, periods, etc...1.4. Finally, we discard data for which a/ lo > 1. Such a point is of little statistical significance since its error bars are so large. Figure 2.2d

  9. The nuclear electric quadrupole moment of copper.

    PubMed

    Santiago, Régis Tadeu; Teodoro, Tiago Quevedo; Haiduke, Roberto Luiz Andrade

    2014-06-21

    The nuclear electric quadrupole moment (NQM) of the (63)Cu nucleus was determined by an indirect approach combining accurate experimental nuclear quadrupole coupling constants (NQCCs) with relativistic Dirac-Coulomb coupled cluster calculations of the electric field gradient (EFG). The data obtained at the highest level of calculation, DC-CCSD-T, from 14 linear molecules containing the copper atom indicate an NQM of -198(10) mbarn. This result deviates slightly from the previously accepted standard value given by the muonic method, -220(15) mbarn, although the error bars overlap.
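The indirect approach pairs an experimental NQCC with a computed EFG; with the NQCC in MHz and the EFG in atomic units, the quadrupole moment follows from the standard conversion factor 234.9647 MHz/(b·a.u.). A sketch with hypothetical input values, not the paper's molecular data:

```python
def nqm_mbarn(nqcc_mhz, efg_au):
    """Nuclear quadrupole moment (millibarn) from an experimental
    nuclear quadrupole coupling constant (MHz) and a computed electric
    field gradient (atomic units), via the standard relation
    NQCC[MHz] = 234.9647 * Q[barn] * q[a.u.]."""
    return 1000.0 * nqcc_mhz / (234.9647 * efg_au)

# Hypothetical NQCC and EFG chosen to land near the paper's result.
print(round(nqm_mbarn(-46.52, 1.0)))  # -> -198
```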

  10. Watts Bar Nuclear Plant Title V Applicability

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  11. The relationship between mean anomaly block sizes and spherical harmonic representations. [of earth gravity

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1977-01-01

    The frequently used rule specifying the relationship between a mean gravity anomaly in a block whose side length is theta degrees and a spherical harmonic representation of these data to degree l-bar is examined in light of the smoothing parameter used by Pellinen (1966). It is found that if the smoothing parameter is not considered, mean anomalies computed from potential coefficients can be in error by about 30% of the rms anomaly value. It is suggested that the above-mentioned rule be considered only a crude approximation.

  12. Data free inference with processed data products

    DOE PAGES

    Chowdhary, K.; Najm, H. N.

    2014-07-12

    Here, we consider the problem of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data themselves are unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate data consistent with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
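The pipeline described above (generate data sets consistent with the reported summary statistics, infer from each, then pool the posteriors) can be sketched for the simplest case of a location parameter with a Gaussian likelihood and a flat prior. This is a crude stand-in for the paper's maximum-entropy/ABC machinery, with all numbers hypothetical:

```python
import random
import statistics

random.seed(0)  # reproducible sketch

def consistent_dataset(mean, sd, n):
    """Draw a random dataset, then rescale it so its sample mean and
    standard deviation match the reported summary statistics exactly.
    A crude stand-in for maximum-entropy / ABC data generation."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [mean + sd * (x - m) / s for x in xs]

def pooled_posterior_mean(mean, sd, n, n_datasets=50):
    """With a flat prior and Gaussian likelihood, each consistent data
    set yields a posterior on the location parameter centred on its
    sample mean; pooling the posteriors averages those centres."""
    centres = [statistics.mean(consistent_dataset(mean, sd, n))
               for _ in range(n_datasets)]
    return statistics.mean(centres)

# Hypothetical reported statistics: mean 2.0, standard deviation 0.3,
# from a data set of 10 points that we never see.
print(pooled_posterior_mean(2.0, 0.3, 10))
```

Because every synthetic data set is rescaled to match the reported mean exactly, the pooled posterior is centred on that mean; the spread of the pooled density (not shown) carries the extra uncertainty from not having the raw data.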

  13. Aquarius Radiometer Performance: Early On-Orbit Calibration and Results

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey R.; LeVine, David M.; Yueh, Simon H.; Wentz, Frank; Ruf, Christopher

    2012-01-01

    The Aquarius/SAC-D observatory was launched into a 657-km altitude, 6-PM ascending node, sun-synchronous polar orbit from Vandenberg, California, USA on June 10, 2011. The Aquarius instrument was commissioned two months after launch and began operating in mission mode on August 25. The Aquarius radiometer meets all engineering requirements, exhibited initial calibration biases within expected error bars, and continues to operate well. A review of the instrument design, a discussion of early on-orbit performance and calibration assessment, and an investigation of an ongoing calibration drift are summarized in this abstract.

  14. Micro Computer Feedback Report for the Strategic Leader Development Inventory; Source Code

    DTIC Science & Technology

    1994-03-01

    SEL5 ;exit if error CALL SELECT_SCREEN ;display select screen JC SEL4 ;no files in directory ;------- display the files MOV BX, [BarPos] ;starting...SEL2 ;if not goto next test jmp SEL4 ;Exit SEL2: CMP AL,0Dh ;is it a pick? JZ SEL3 ;if YES exit loop ;------- see if an active control key was...file CALL READ_CONFIG ;read file into memory JC SEL5 ;exit to main menu CALL OPEN_DATA_FILE ;is data available? SEL4: CALL RELEASE_MDR ;release memory

  15. A Reassessment of the Precision of Carbonate Clumped Isotope Measurements: Implications for Calibrations and Paleoclimate Reconstructions

    NASA Astrophysics Data System (ADS)

    Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.

    2017-12-01

    Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. First, using published estimates of external reproducibility, we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
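The recommendation above, replacing the standard error of a few replicates with a margin of error at a stated confidence level, follows the usual Student-t construction. A sketch assuming a hypothetical external reproducibility of 0.030 per mil for Δ47; the t multipliers are the standard two-sided 95% values:

```python
import math

# Two-sided 95% Student-t multipliers for selected degrees of freedom
# (n replicates -> n - 1 degrees of freedom).
T95 = {2: 4.303, 9: 2.262, 19: 2.093}

def margin_of_error_95(sd_external, n_replicates):
    """95% CL margin of error on the mean of n replicate analyses,
    given a long-term (external) standard deviation of one analysis."""
    t = T95[n_replicates - 1]  # only n in {3, 10, 20} tabulated here
    return t * sd_external / math.sqrt(n_replicates)

# Hypothetical external reproducibility of 0.030 per mil for D47.
for n in (3, 10, 20):
    print(n, round(margin_of_error_95(0.030, n), 4))
```

Going from 3 to 20 replicates shrinks the 95% CL margin of error by roughly a factor of five here, which is the systematic-reduction-with-replication point the authors make.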

  16. Underlying Cause(s) of Letter Perseveration Errors

    PubMed Central

    Fischer-Baum, Simon; Rapp, Brenda

    2011-01-01

    Perseverations, the inappropriate intrusion of elements from a previous response into a current response, are commonly observed in individuals with acquired deficits. This study specifically investigates the contribution of failure-to-activate and failure-to-inhibit deficit(s) in the generation of letter perseveration errors in acquired dysgraphia. We provide evidence from the performance of 12 dysgraphic individuals indicating that a failure to activate graphemes for a target word gives rise to letter perseveration errors. In addition, we provide evidence that, in some individuals, a failure-to-inhibit deficit may also contribute to the production of perseveration errors. PMID:22178232

  17. Impact of Frequent Interruption on Nurses' Patient-Controlled Analgesia Programming Performance.

    PubMed

    Campoe, Kristi R; Giuliano, Karen K

    2017-12-01

    The purpose was to add to the body of knowledge regarding the impact of interruption on acute care nurses' cognitive workload, total task completion times, nurse frustration, and medication administration error while programming a patient-controlled analgesia (PCA) pump. Data support that the severity of medication administration error increases with the number of interruptions, which is especially critical during the administration of high-risk medications. Bar code technology, interruption-free zones, and medication safety vests have been shown to decrease administration-related errors. However, there are few published data regarding the impact of the number of interruptions on nurses' clinical performance during PCA programming. Nine acute care nurses completed three PCA pump programming tasks in a simulation laboratory. Programming tasks were completed under three conditions in which the number of interruptions varied between two, four, and six. Outcome measures included cognitive workload (six NASA Task Load Index [NASA-TLX] subscales), total task completion time (seconds), nurse frustration (NASA-TLX Subscale 6), and PCA medication administration error (incorrect final programming). Increases in the number of interruptions were associated with significant increases in total task completion time (p = .003). We also found increases in nurses' cognitive workload, nurse frustration, and PCA pump programming errors, but these increases were not statistically significant. Complex technology use permeates the acute care nursing practice environment. These results add new knowledge on nurses' clinical performance during PCA pump programming and high-risk medication administration.

  18. [Measures to prevent patient identification errors in blood collection/physiological function testing utilizing a laboratory information system].

    PubMed

    Shimazu, Chisato; Hoshino, Satoshi; Furukawa, Taiji

    2013-08-01

    We constructed an integrated personal identification workflow chart using both bar code reading and an all-in-one laboratory information system. The information system handles not only test data but also the information needed for patient guidance in the laboratory department. The reception terminals at the entrance, displays for patient guidance, and patient identification tools at blood-sampling booths are all controlled by the information system. The number of patient identification errors was greatly reduced by the system. However, identification errors have not been eliminated in the ultrasound department. After re-evaluating the patient identification process in this department, we recognized that the major reason for the errors was an excessively complex identification workflow. Ordinarily, an ultrasound test requires patient identification three times, because three different systems are used during the test process: the ultrasound modality system, the laboratory information system, and a system for producing reports. We are trying to connect the three systems to create a one-time identification workflow, but this is not a simple task and has not yet been completed. Utilization of the laboratory information system is effective, but not yet sufficient on its own for patient identification. The most fundamental procedure for patient identification is still to ask a person's name. Everyday checks in the ordinary workflow and everyone's participation in safety-management activities are important for the prevention of patient identification errors.

  19. Effects of line fiducial parameters and beamforming on ultrasound calibration

    PubMed Central

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Peters, Terry M.; Chen, Elvis C. S.

    2017-01-01

    Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures. PMID:28331886

  20. Effects of line fiducial parameters and beamforming on ultrasound calibration.

    PubMed

    Ameri, Golafsoun; Baxter, John S H; McLeod, A Jonathan; Peters, Terry M; Chen, Elvis C S

    2017-01-01

    Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures.

  1. Geometric errors in 3D optical metrology systems

    NASA Astrophysics Data System (ADS)

    Harding, Kevin; Nafis, Chris

    2008-08-01

    The field of 3D optical metrology has seen significant growth in the commercial market in recent years. The methods of using structured light to obtain 3D range data are well documented in the literature and continue to be an area of development in universities. However, the step between getting 3D data and getting geometrically correct 3D data that can be used for metrology is not nearly as well developed. Mechanical metrology systems such as CMMs have long-established standard means of verifying the geometric accuracies of their systems; both local and volumetric measurements are characterized on such systems using tooling balls, grid plates, and ball bars. This paper explores the tools needed to characterize and calibrate an optical metrology system, discusses the nature of the geometric errors often found in such systems, and suggests what may be a viable standard method of characterizing 3D optical systems. Finally, we present a tradeoff analysis of ways to correct geometric errors in an optical system, considering what can be gained by hardware methods versus software corrections.

  2. Optics measurement algorithms and error analysis for the proton energy frontier

    NASA Astrophysics Data System (ADS)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and to determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. It combines more beam position monitor measurements for deriving the optical parameters and significantly improves their accuracy and precision. Measurements from the 2012 run have been reanalyzed; due to the improved algorithms, the derived optical parameters are significantly more precise, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.

  3. Medication Errors in Pediatric Anesthesia: A Report From the Wake Up Safe Quality Improvement Initiative.

    PubMed

    Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S

    2017-09-01

    Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, the third highest category of events behind cardiac- and respiratory-related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions, as opposed to one-time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient, and more than half of these events caused patient harm. Fifteen events (5%) required a life-sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable.
Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.

  4. A Handheld Point-of-Care Genomic Diagnostic System

    PubMed Central

    Myers, Frank B.; Henrikson, Richard H.; Bone, Jennifer; Lee, Luke P.

    2013-01-01

    The rapid detection and identification of infectious disease pathogens is a critical need for healthcare in both developed and developing countries. As we gain more insight into the genomic basis of pathogen infectivity and drug resistance, point-of-care nucleic acid testing will likely become an important tool for global health. In this paper, we present an inexpensive, handheld, battery-powered instrument designed to enable pathogen genotyping in the developing world. Our Microfluidic Biomolecular Amplification Reader (µBAR) represents the convergence of molecular biology, microfluidics, optics, and electronics technology. The µBAR is capable of carrying out isothermal nucleic acid amplification assays with real-time fluorescence readout at a fraction of the cost of conventional benchtop thermocyclers. Additionally, the µBAR features cell phone data connectivity and GPS sample geotagging which can enable epidemiological surveying and remote healthcare delivery. The µBAR controls assay temperature through an integrated resistive heater and monitors real-time fluorescence signals from 60 individual reaction chambers using LEDs and phototransistors. Assays are carried out on PDMS disposable microfluidic cartridges which require no external power for sample loading. We characterize the fluorescence detection limits, heater uniformity, and battery life of the instrument. As a proof-of-principle, we demonstrate the detection of the HIV-1 integrase gene with the µBAR using the Loop-Mediated Isothermal Amplification (LAMP) assay. Although we focus on the detection of purified DNA here, LAMP has previously been demonstrated with a range of clinical samples, and our eventual goal is to develop a microfluidic device which includes on-chip sample preparation from raw samples. 
The µBAR is based entirely around open source hardware and software, and in the accompanying online supplement we present a full set of schematics, bill of materials, PCB layouts, CAD drawings, and source code for the µBAR instrument with the goal of spurring further innovation toward low-cost genetic diagnostics. PMID:23936402

  5. Routine cognitive errors: a trait-like predictor of individual differences in anxiety and distress.

    PubMed

    Fetterman, Adam K; Robinson, Michael D

    2011-02-01

Five studies (N=361) sought to model a class of errors, namely those in routine tasks, that several literatures have suggested may predispose individuals to higher levels of emotional distress. Individual differences in error frequency were assessed in choice reaction-time tasks of a routine cognitive type. In Study 1, it was found that tendencies toward error in such tasks exhibit trait-like stability over time. In Study 3, it was found that tendencies toward error exhibit trait-like consistency across different tasks. Higher error frequency, in turn, predicted higher levels of negative affect, general distress symptoms, displayed levels of negative emotion during an interview, and momentary experiences of negative emotion in daily life (Studies 2-5). In all cases, such predictive relations remained significant with individual differences in neuroticism controlled. The results thus converge on the idea that error frequency in simple cognitive tasks is a significant and consequential predictor of emotional distress in everyday life. The results are novel, but are discussed within the context of the wider literatures that informed them. © 2010 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business.

  6. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  7. Prevalence and predictors of smoking in “smoke-free” bars. Findings from the International Tobacco Control (ITC) Europe Surveys

    PubMed Central

    Nagelhout, Gera E.; Mons, Ute; Allwright, Shane; Guignard, Romain; Beck, Francois; Fong, Geoffrey T.; de Vries, Hein; Willemsen, Marc C.

    2015-01-01

    National level smoke-free legislation is implemented to protect the public from exposure to second-hand tobacco smoke (SHS). The first aim of this study was to investigate how successful the smoke-free hospitality industry legislation in Ireland (March 2004), France (January 2008), the Netherlands (July 2008), and Germany (between August 2007 and July 2008) was in reducing smoking in bars. The second aim was to assess individual smokers’ predictors of smoking in bars post-ban. The third aim was to examine country differences in predictors and the fourth aim to examine differences between educational levels (as an indicator of socioeconomic status). This study used nationally representative samples of 3,147 adult smokers from the International Tobacco Control (ITC) Europe Surveys who were surveyed pre- and post-ban. The results reveal that while the partial smoke-free legislation in the Netherlands and Germany was effective in reducing smoking in bars (from 88% to 34% and from 87% to 44% respectively), the effectiveness was much lower than the comprehensive legislation in Ireland and France which almost completely eliminated smoking in bars (from 97% to 3% and from 84% to 3% respectively). Smokers who were more supportive of the ban, were more aware of the harm of SHS, and who had negative opinions of smoking were less likely to smoke in bars post-ban. Support for the ban was a stronger predictor in Germany. SHS harm awareness was a stronger predictor among less educated smokers in the Netherlands and Germany. The results indicate the need for strong comprehensive smoke-free legislation without exceptions. This should be accompanied by educational campaigns in which the public health rationale for the legislation is clearly explained. PMID:21497973

  8. The impact of Michigan's Dr Ron Davis smoke-free air law on levels of cotinine, tobacco-specific lung carcinogen and severity of self-reported respiratory symptoms among non-smoking bar employees.

    PubMed

    Wilson, Teri; Shamo, Farid; Boynton, Katherine; Kiley, Janet

    2012-11-01

To determine the impact on bar employees' health and exposure to secondhand smoke (SHS) before and after the implementation of Michigan's Dr Ron Davis smoke-free air law that went into effect on 1 May 2010, prohibiting smoking in places of work, including bars. This study used a pre/postintervention experimental design. The setting was bars in 12 Michigan counties. Subjects were bar employees, recruited through flyers and individual discussions with local health department staff. Participants completed a screening questionnaire to determine eligibility. A total of 40 eligible employees completed a demographic survey, provided urine samples for analysis of cotinine and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) and completed questionnaires on respiratory and general health status 6 weeks before and 6-10 weeks after the law went into effect. The main outcome measures were urine samples for total cotinine and total NNAL and data from a self-administered respiratory and general health status questionnaire collected during the pre-law and post-law study periods. There was a significant decrease in the mean cotinine levels from 35.9 ng/ml to a non-quantifiable value (p<0.001), and there was a significant reduction in the mean NNAL level from 0.086 pmol/ml to 0.034 pmol/ml (p<0.001) 2 months after implementation of the law. There was also a significant improvement in all six self-reported respiratory symptoms (p<0.001) and general health status (p<0.001). The reduction in the SHS biomarkers cotinine and NNAL and the reported improvement in respiratory health demonstrate that the Michigan smoke-free workplace law is protecting bar employees' health.

  9. Seagrass habitat complexity and macroinvertebrate abundance in Lakshadweep coral reef lagoons, Arabian Sea

    NASA Astrophysics Data System (ADS)

    Ansari, Z. A.; Rivonker, C. U.; Ramani, P.; Parulekar, A. H.

    1991-09-01

Macrofauna of the seagrass communities in the five Lakshadweep atolls were studied and compared. The associated epifaunal and infaunal taxa, comprising nine major taxonomic groups, showed significant differences in the total number of individuals (1041-8411 m-2) among sites and habitats. The density of macrofauna was directly related to mean macrophytic biomass (405-895 g wet wt. m-2). The fauna was dominated by epifaunal polychaetes, amphipods and isopods in the vegetated areas. When compared with the density of nearby unvegetated areas (mean = 815 m-2), seagrass meadows harbour a denser and richer macroinvertebrate assemblage (mean = 4023 m-2).

  10. Avoiding Substantive Errors in Individualized Education Program Development

    ERIC Educational Resources Information Center

    Yell, Mitchell L.; Katsiyannis, Antonis; Ennis, Robin Parks; Losinski, Mickey; Christle, Christine A.

    2016-01-01

    The purpose of this article is to discuss major substantive errors that school personnel may make when developing students' Individualized Education Programs (IEPs). School IEP team members need to understand the importance of the procedural and substantive requirements of the IEP, have an awareness of the five serious substantive errors that IEP…

  11. Individual Differences in Mathematical Competence Modulate Brain Responses to Arithmetic Errors: An fMRI Study

    ERIC Educational Resources Information Center

    Ansari, Daniel; Grabner, Roland H.; Koschutnig, Karl; Reishofer, Gernot; Ebner, Franz

    2011-01-01

    Data from both neuropsychological and neuroimaging studies have implicated the left inferior parietal cortex in calculation. Comparatively less attention has been paid to the neural responses associated with the commission of calculation errors and how the processing of arithmetic errors is modulated by individual differences in mathematical…

  12. Investigation of experimental pole-figure errors by simulation of individual spectra

    NASA Astrophysics Data System (ADS)

    Lychagina, T. A.; Nikolaev, D. I.

    2007-09-01

The errors in measuring the crystallographic texture described by pole figures are studied. A set of diffraction spectra for a sample of the MA2-1 alloy (Mg + 4.5% Al + 1% Zn) was measured, the individual spectra were simulated, the pole figures were obtained from these simulations, and their errors were determined. The conclusion drawn in our previous studies, that the effect of errors in the diffraction peak half-width on the pole figure errors can be determined, is confirmed.

  13. Current measurement by Faraday effect on GEPOPU

    NASA Astrophysics Data System (ADS)

    N, Correa; H, Chuaqui; E, Wyndham; F, Veloso; J, Valenzuela; M, Favre; H, Bhuyan

    2014-05-01

The design and calibration of an optical current sensor using BK7 glass is presented. The current sensor is based on polarization rotation by the Faraday effect. GEPOPU is a pulsed power generator (double transit time 120 ns, 1.5 Ohm impedance, coaxial geometry) in which Z-pinch experiments are performed. The measurements were performed at the Optics and Plasma Physics Laboratory of Pontificia Universidad Catolica de Chile. The Verdet constant for two different optical materials was obtained using a He-Ne laser. The values obtained are within the experimental error bars of measurements published in the literature (less than 15% difference). Two different sensor geometries were tried, and we present preliminary results for one of them. The values obtained for the current agree within the measurement error with those obtained by means of a Spice simulation of the generator. The signal traces obtained are completely noise free.
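The rotation-to-current conversion behind such a sensor can be sketched from the Faraday relation θ = V·B·L combined with Ampère's law for the field around the conductor. The Verdet constant, path length, and radius below are illustrative assumptions (the abstract does not give the sensor's parameters), and the geometry is idealized to a uniform field along the optical path:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

# Illustrative values only -- not taken from the paper.
V_BK7 = 4.0   # approximate Verdet constant of BK7 near 633 nm, rad/(T*m)
L = 0.05      # optical path length in the glass, m
r = 0.02      # distance of the glass from the current axis, m

def current_from_rotation(theta_rad):
    """Invert theta = V * B * L, with B = mu0 * I / (2*pi*r)
    for the field of a long straight conductor."""
    B = theta_rad / (V_BK7 * L)          # field seen by the glass, T
    return 2 * math.pi * r * B / MU0     # inferred current, A
```

The response is linear in the rotation angle, which is why a single calibration constant suffices once the Verdet constant is known.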

  14. Confidence limits for data mining models of options prices

    NASA Astrophysics Data System (ADS)

    Healy, J. V.; Dixon, M.; Read, B. J.; Cai, F. F.

    2004-12-01

Non-parametric methods such as artificial neural nets can successfully model prices of financial options, outperforming the Black-Scholes analytic model (Eur. Phys. J. B 27 (2002) 219). However, the accuracy of such approaches is usually expressed only by a global fitting/error measure. This paper describes a robust method for determining prediction intervals for models derived by non-linear regression. We have demonstrated it by application to a standard synthetic example (29th Annual Conference of the IEEE Industrial Electronics Society, Special Session on Intelligent Systems, pp. 1926-1931). The method is used here to obtain prediction intervals for option prices using market data for LIFFE “ESX” FTSE 100 index options ( http://www.liffe.com/liffedata/contracts/month_onmonth.xls). We avoid special neural net architectures and use standard regression procedures to determine local error bars. The method is appropriate for target data with non-constant variance (or volatility).
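The idea of local error bars for heteroscedastic targets can be illustrated with a minimal sketch: fit a standard regression, then estimate the residual spread in a sliding window to form pointwise prediction intervals. This windowed-residual scheme and all numbers are assumptions for illustration, not the authors' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic target with non-constant variance: noise grows with x,
# loosely analogous to option prices whose volatility varies.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05 + 0.3 * x)

# Standard regression procedure (no special architecture): polynomial fit.
coef = np.polyfit(x, y, 5)
y_hat = np.polyval(coef, x)

# Local error bars: residual standard deviation in a sliding window,
# turned into approximate 95% prediction intervals point by point.
resid = y - y_hat
half = 20
sigma_local = np.array([
    resid[max(0, i - half):i + half].std(ddof=1) for i in range(len(x))
])
lower = y_hat - 1.96 * sigma_local
upper = y_hat + 1.96 * sigma_local

coverage = np.mean((y >= lower) & (y <= upper))
```

Unlike a single global error measure, `sigma_local` widens where the data are noisier, so the intervals track the changing volatility.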

  15. Comparison between thermochemical and phase stability data for the quartz-coesite-stishovite transformations

    NASA Technical Reports Server (NTRS)

    Weaver, J. S.; Chipman, D. W.; Takahashi, T.

    1979-01-01

    Phase stability and elasticity data have been used to calculate the Gibbs free energy, enthalpy, and entropy changes at 298 K and 1 bar associated with the quartz-coesite and coesite-stishovite transformations in the system SiO2. For the quartz-coesite transformation, these changes disagree by a factor of two or three with those obtained by calorimetric techniques. The phase boundary for this transformation appears to be well determined by experiment; the discrepancy, therefore, suggests that the calorimetric data for coesite are in error. Although the calorimetric and phase stability data for the coesite-stishovite transformation yield the same transition pressure at 298 K, the phase-boundary slopes disagree by a factor of two. At present, it is not possible to determine which of the data are in error. Thus serious inconsistencies exist in the thermodynamic data for the polymorphic transformations of silica.

  16. Information technology-based approaches to reducing repeat drug exposure in patients with known drug allergies.

    PubMed

    Cresswell, Kathrin M; Sheikh, Aziz

    2008-05-01

    There is increasing interest internationally in ways of reducing the high disease burden resulting from errors in medicine management. Repeat exposure to drugs to which patients have a known allergy has been a repeatedly identified error, often with disastrous consequences. Drug allergies are immunologically mediated reactions that are characterized by specificity and recurrence on reexposure. These repeat reactions should therefore be preventable. We argue that there is insufficient attention being paid to studying and implementing system-based approaches to reducing the risk of such accidental reexposure. Drawing on recent and ongoing research, we discuss a number of information technology-based interventions that can be used to reduce the risk of recurrent exposure. Proven to be effective in this respect are interventions that provide real-time clinical decision support; also promising are interventions aiming to enhance patient recognition, such as bar coding, radiofrequency identification, and biometric technologies.

  17. Molecular analysis, cytogenetics and fertility of introgression lines from transgenic wheat to Aegilops cylindrica host.

    PubMed

    Schoenenberger, Nicola; Guadagnuolo, Roberto; Savova-Bianchi, Dessislava; Küpfer, Philippe; Felber, François

    2006-12-01

    Natural hybridization and backcrossing between Aegilops cylindrica and Triticum aestivum can lead to introgression of wheat DNA into the wild species. Hybrids between Ae. cylindrica and wheat lines bearing herbicide resistance (bar), reporter (gus), fungal disease resistance (kp4), and increased insect tolerance (gna) transgenes were produced by pollination of emasculated Ae. cylindrica plants. F1 hybrids were backcrossed to Ae. cylindrica under open-pollination conditions, and first backcrosses were selfed using pollen bags. Female fertility of F1 ranged from 0.03 to 0.6%. Eighteen percent of the sown BC1s germinated and flowered. Chromosome numbers ranged from 30 to 84 and several of the plants bore wheat-specific sequence-characterized amplified regions (SCARs) and the bar gene. Self fertility in two BC1 plants was 0.16 and 5.21%, and the others were completely self-sterile. Among 19 BC1S1 individuals one plant was transgenic, had 43 chromosomes, contained the bar gene, and survived glufosinate treatments. The other BC1S1 plants had between 28 and 31 chromosomes, and several of them carried SCARs specific to wheat A and D genomes. Fertility of these plants was higher under open-pollination conditions than by selfing and did not necessarily correlate with even or euploid chromosome number. Some individuals having supernumerary wheat chromosomes recovered full fertility.

  18. Maximizing return on socioeconomic investment in phase II proof-of-concept trials.

    PubMed

    Chen, Cong; Beckman, Robert A

    2014-04-01

    Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
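The benefit-cost ratio defined in the abstract can be illustrated numerically. The prior probability of drug activity, patient budget, and sample sizes below are hypothetical, chosen only to show why many small POC trials with a high GNG bar can outperform fewer large ones:

```python
# All numbers are hypothetical illustrations, not the paper's values.
def benefit_cost_ratio(p_active, power, alpha, n_ph2, n_ph3, budget):
    """Risk-adjusted count of truly active drugs advanced to phase III,
    divided by risk-adjusted total phase II + phase III sample size."""
    n_trials = budget // n_ph2                    # POC trials the budget allows
    true_go = n_trials * p_active * power         # active drugs correctly advanced
    false_go = n_trials * (1 - p_active) * alpha  # inactive drugs wrongly advanced
    total_n = n_trials * n_ph2 + (true_go + false_go) * n_ph3
    return true_go / total_n

# Many small POC trials with a strict GNG bar (alpha = 5%, power = 80%)...
small = benefit_cost_ratio(p_active=0.2, power=0.8, alpha=0.05,
                           n_ph2=50, n_ph3=600, budget=5000)
# ...versus fewer large trials with a lax bar (alpha = 20%).
large = benefit_cost_ratio(p_active=0.2, power=0.9, alpha=0.20,
                           n_ph2=200, n_ph3=600, budget=5000)
```

Under these assumed inputs the small-trial, high-bar strategy yields the higher ratio, consistent with the abstract's conclusion that more small POC trials with strict Go criteria maximize return under a fixed socioeconomic budget.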

  19. Phenotype-limited distributions: short-billed birds move away during times that prey bury deeply

    PubMed Central

    Duijns, Sjoerd; van Gils, Jan A.; Smart, Jennifer; Piersma, Theunis

    2015-01-01

    In our seasonal world, animals face a variety of environmental conditions in the course of the year. To cope with such seasonality, animals may be phenotypically flexible, but some phenotypic traits are fixed. If fixed phenotypic traits are functionally linked to resource use, then animals should redistribute in response to seasonally changing resources, leading to a ‘phenotype-limited’ distribution. Here, we examine this possibility for a shorebird, the bar-tailed godwit (Limosa lapponica; a long-billed and sexually dimorphic shorebird), that has to reach buried prey with a probing bill of fixed length. The main prey of female bar-tailed godwits is buried deeper in winter than in summer. Using sightings of individually marked females, we found that in winter only longer-billed individuals remained in the Dutch Wadden Sea, while the shorter-billed individuals moved away to an estuary with a more benign climate such as the Wash. Although longer-billed individuals have the widest range of options in winter and could therefore be selected for, counterselection may occur during the breeding season on the tundra, where surface-living prey may be captured more easily with shorter bills. Phenotype-limited distributions could be a widespread phenomenon and, when associated with assortative migration and mating, it may act as a precursor of phenotypic evolution. PMID:26543585

  20. Phenotype-limited distributions: short-billed birds move away during times that prey bury deeply.

    PubMed

    Duijns, Sjoerd; van Gils, Jan A; Smart, Jennifer; Piersma, Theunis

    2015-06-01

    In our seasonal world, animals face a variety of environmental conditions in the course of the year. To cope with such seasonality, animals may be phenotypically flexible, but some phenotypic traits are fixed. If fixed phenotypic traits are functionally linked to resource use, then animals should redistribute in response to seasonally changing resources, leading to a 'phenotype-limited' distribution. Here, we examine this possibility for a shorebird, the bar-tailed godwit (Limosa lapponica; a long-billed and sexually dimorphic shorebird), that has to reach buried prey with a probing bill of fixed length. The main prey of female bar-tailed godwits is buried deeper in winter than in summer. Using sightings of individually marked females, we found that in winter only longer-billed individuals remained in the Dutch Wadden Sea, while the shorter-billed individuals moved away to an estuary with a more benign climate such as the Wash. Although longer-billed individuals have the widest range of options in winter and could therefore be selected for, counterselection may occur during the breeding season on the tundra, where surface-living prey may be captured more easily with shorter bills. Phenotype-limited distributions could be a widespread phenomenon and, when associated with assortative migration and mating, it may act as a precursor of phenotypic evolution.

  1. Predictability of the individual clinical outcome of extracorporeal shock wave therapy for cellulite

    PubMed Central

    Schlaudraff, Kai-Uwe; Kiessling, Maren C; Császár, Nikolaus BM; Schmitz, Christoph

    2014-01-01

Background: Extracorporeal shock wave therapy has been successfully introduced for the treatment of cellulite in recent years. However, it is still unknown whether the individual clinical outcome of cellulite treatment with extracorporeal shock wave therapy can be predicted by the patient’s individual cellulite grade at baseline, individual patient age, body mass index (BMI), weight, and/or height. Methods: Fourteen Caucasian females with cellulite were enrolled in a prospective, single-center, randomized, open-label Phase II study. The mean (± standard error of the mean) cellulite grade at baseline was 2.5±0.09 and mean BMI was 22.8±1.17. All patients were treated with radial extracorporeal shock waves using the Swiss DolorClast® device (Electro Medical Systems, S.A., Nyon, Switzerland). Patients were treated unilaterally with 2 weekly treatments for 4 weeks on a randomly selected side (left or right), totaling eight treatments on the selected side. Treatment was performed at 3.5–4.0 bar, with 15,000 impulses per session applied at 15 Hz. Impulses were homogeneously distributed over the posterior thigh and buttock area (resulting in 7,500 impulses per area). Treatment success was evaluated after the last treatment and 4 weeks later by clinical examination, photographic documentation, contact thermography, and patient satisfaction questionnaires. Results: The mean cellulite grade improved from 2.5±0.09 at baseline to 1.57±0.18 after the last treatment (ie, mean δ-1 was 0.93 cellulite grades) and 1.68±0.16 at follow-up (ie, mean δ-2 was 0.82 cellulite grades). Compared with baseline, no patient’s condition worsened, the treatment was well tolerated, and no unwanted side effects were observed. No statistically significant (ie, P<0.05) correlation was found between individual values for δ-1 and δ-2 and cellulite grade at baseline, BMI, weight, height, or age. Conclusion: Radial shock wave therapy is a safe and effective treatment option for cellulite. 
The individual clinical outcome cannot be predicted by the patient’s individual cellulite grade at baseline, BMI, weight, height, or age. PMID:24920933

  2. Predictability of the individual clinical outcome of extracorporeal shock wave therapy for cellulite.

    PubMed

    Schlaudraff, Kai-Uwe; Kiessling, Maren C; Császár, Nikolaus Bm; Schmitz, Christoph

    2014-01-01

    Extracorporeal shock wave therapy has been successfully introduced for the treatment of cellulite in recent years. However, it is still unknown whether the individual clinical outcome of cellulite treatment with extracorporeal shock wave therapy can be predicted by the patient's individual cellulite grade at baseline, individual patient age, body mass index (BMI), weight, and/or height. Fourteen Caucasian females with cellulite were enrolled in a prospective, single-center, randomized, open-label Phase II study. The mean (± standard error of the mean) cellulite grade at baseline was 2.5±0.09 and mean BMI was 22.8±1.17. All patients were treated with radial extracorporeal shock waves using the Swiss DolorClast(®) device (Electro Medical Systems, S.A., Nyon, Switzerland). Patients were treated unilaterally with 2 weekly treatments for 4 weeks on a randomly selected side (left or right), totaling eight treatments on the selected side. Treatment was performed at 3.5-4.0 bar, with 15,000 impulses per session applied at 15 Hz. Impulses were homogeneously distributed over the posterior thigh and buttock area (resulting in 7,500 impulses per area). Treatment success was evaluated after the last treatment and 4 weeks later by clinical examination, photographic documentation, contact thermography, and patient satisfaction questionnaires. The mean cellulite grade improved from 2.5±0.09 at baseline to 1.57±0.18 after the last treatment (ie, mean δ-1 was 0.93 cellulite grades) and 1.68±0.16 at follow-up (ie, mean δ-2 was 0.82 cellulite grades). Compared with baseline, no patient's condition worsened, the treatment was well tolerated, and no unwanted side effects were observed. No statistically significant (ie, P<0.05) correlation was found between individual values for δ-1 and δ-2 and cellulite grade at baseline, BMI, weight, height, or age. Radial shock wave therapy is a safe and effective treatment option for cellulite. 
The individual clinical outcome cannot be predicted by the patient's individual cellulite grade at baseline, BMI, weight, height, or age.

  3. Emotional intelligence and its correlation to performance as a resident: a preliminary study.

    PubMed

    Talarico, Joseph F; Metro, David G; Patel, Rita M; Carney, Patricia; Wetmore, Amy L

    2008-03-01

To test the hypothesis that emotional intelligence, as measured by the Bar-On Emotional Quotient Inventory (EQ-I) 125 (Multi Health Systems, Toronto, Ontario, Canada) personal inventory, would correlate with resident performance. Prospective survey. University-affiliated, multi-institutional anesthesiology residency program. Current clinical anesthesiology residents, years one to three (PGY 2-4), enrolled in the University of Pittsburgh Anesthesiology Residency Program. Participants confidentially completed the Bar-On EQ-I 125 survey. Results of the individual EQ-I 125 and daily evaluations by the faculty of the residency program were compiled and analyzed. There was no positive correlation between any facet of emotional intelligence and resident performance. There was a statistically significant negative correlation (-0.40; P < 0.05) between assertiveness and the "American Board of Anesthesiology essential attributes" component of the resident evaluation. Emotional intelligence, as measured by the Bar-On EQ-I personal inventory, does not strongly correlate with resident performance as defined at the University of Pittsburgh.

  4. Tracking the autumn migration of the bar-headed goose (Anser indicus) with satellite telemetry and relationship to environmental conditions

    USGS Publications Warehouse

    Zhang, Yaonan; Hao, Meiyu; Takekawa, John Y.; Lei, Fumin; Yan, Baoping; Prosser, Diann J.; Douglas, David C.; Xing, Zhi; Newman, Scott H.

    2011-01-01

The autumn migration routes of bar-headed geese captured before the 2008 breeding season at Qinghai Lake, China, were documented using satellite tracking data. To assess how the migration strategies of bar-headed geese are influenced by environmental conditions, the relationship between migratory routes, temperatures, and vegetation coverage at stopover sites, estimated with the Normalized Difference Vegetation Index (NDVI), was analyzed. Our results showed that there were four typical migration routes in autumn, with variation among individuals in start and end times and in total migration and stopover duration. The observed variation may be related to habitat type and other environmental conditions along the routes. On average, these birds traveled about 1300 to 1500 km, refueled at three to six stopover sites, and migrated for 73 to 83 days. The majority of the habitat types at stopover sites were lake, marsh, and shoal wetlands, with use of some mountainous regions and farmland areas.

  5. HIV type 1 subtypes among bar and hotel workers in Moshi, Tanzania.

    PubMed

    Kiwelu, Ireen E; Renjifo, Boris; Chaplin, Beth; Sam, Noel; Nkya, Watoky M M M; Shao, John; Kapiga, Saidi; Essex, Max

    2003-01-01

    The HIV-1 prevalence among bar and hotel workers in Tanzania suggests they are a high-risk group for HIV-1 infection. We determined the HIV-1 subtype of 3'-p24/5'-p7 gag and C2-C5 env sequences from 40 individuals representing this population in Moshi. Genetic patterns composed of A(gag)-A(env), C(gag)-C(env), and D(gag)-D(env) were found in 19 (48.0%), 8 (20.0%), and 3 (8.0%) samples, respectively. The remaining 10 samples (25%) had different subtypes in gag and env, indicative of intersubtype recombinants. Among these recombinants, two contained sequences from HIV-1 subsubtype A2, a new genetic variant in Tanzania. Five bar and hotel workers may have been infected with viruses from a common source, based on phylogenetic analysis. The information obtained by surveillance of HIV-1 subtypes in a high-risk population should be useful in the design and evaluation of behavioral, therapeutic, and vaccine trial interventions aimed at reducing HIV-1 transmission.

  6. Mitogen-induced responses in lymphocytes from platypus, the Tasmanian devil and the eastern barred bandicoot.

    PubMed

    Stewart, N J; Bettiol, S S; Kreiss, A; Fox, N; Woods, G M

    2008-10-01

As the platypus (Ornithorhynchus anatinus), the Tasmanian devil (Sarcophilus harrisii) and the eastern barred bandicoot (Perameles gunnii) are currently at risk of serious population decline or extinction from fatal diseases in Tasmania, the goal of the present study was to describe the normal immune response of these species to challenge using the lymphocyte proliferation assay, to give a solid basis for further studies. For this preliminary study, we performed lymphocyte proliferation assays on peripheral blood mononuclear cells (PBMC) from the three species. We used the common mitogens phytohaemagglutinin (PHA), concanavalin A (ConA), lipopolysaccharide (LPS) and pokeweed mitogen (PWM). All three species recorded the highest stimulation index (SI) with the T-cell mitogens PHA and ConA. Tasmanian devils and bandicoots had greater responses than platypuses, although variability between individual animals was high. For the first time, we report the normal cellular response of the platypus, the Tasmanian devil and the eastern barred bandicoot to a range of commonly used mitogens.

  7. New technologies for information retrieval to achieve situational awareness and higher patient safety in the surgical operating room: the MRI institutional approach and review of the literature.

    PubMed

    Kranzfelder, Michael; Schneider, Armin; Gillen, Sonja; Feussner, Hubertus

    2011-03-01

    Technology in the operating room (OR) progresses constantly, but advanced techniques for error prevention are lacking. The vision has been to create intelligent OR systems ("autopilot") that not only collect intraoperative data but also interpret whether the course of the operation is normal or deviating from the schedule ("situation awareness"), recommend the adequate next steps of the intervention, and identify imminent risky situations. Recently introduced technologies in health care for real-time data acquisition (bar code, radiofrequency identification [RFID], voice and emotion recognition) may have the potential to meet these demands. This report aims to identify, based on the authors' institutional experience and a review of the literature (MEDLINE search 2000-2010), which technologies are currently most promising for providing the required data, and to describe their fields of application and potential limitations. Retrieval of information on the functional state of the peripheral devices in the OR is technically feasible by continuous sensor-based data acquisition and online analysis. Using bar code technologies, automatic instrument identification seems conceivable, with information given about the actual part of the procedure and indication of any change in the routine workflow. The dynamics of human activities also comprise key information. A promising technology for continuous personnel tracking is data acquisition with RFID. Emotional data capture and analysis in the OR are difficult: although technically feasible, nonverbal emotion recognition is hard to assess. In contrast, emotion recognition by speech seems to be a promising technology for further workflow prediction. The presented technologies are a first step toward achieving increased situational awareness in the OR.
However, workflow definition in surgery is feasible only if the procedure is standardized, the peculiarities of the individual patient are taken into account, the level of the surgeon's expertise is regarded, and a comprehensive data capture can be obtained.

  8. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies, a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method combines a classical measurement error model for the exposures with the conditional logistic regression likelihood as the disease model and a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the measurement error variability of the perfluorinated acids. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this application under different assumed measurement error variabilities was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, comparing inferences corrected for measurement error with those that ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. 
In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
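    In generic notation (a sketch, not taken from the paper), the three model components described above can be written as a classical measurement error model for the observed exposure X*, the conditional logistic likelihood over matched sets i (case listed first, j = 1), and a random-effect exposure model:

```latex
% classical measurement error: observed = true + independent noise
X^{*}_{ij} = X_{ij} + U_{ij}, \qquad U_{ij} \sim N(0, \sigma^{2}_{u})

% conditional logistic regression likelihood (case is j = 1 in each matched set i)
L(\beta) = \prod_{i} \frac{\exp(X_{i1}^{\top}\beta)}{\sum_{j} \exp(X_{ij}^{\top}\beta)}

% random-effect exposure model: shared set-level effect plus individual deviation
X_{ij} = b_{i} + \varepsilon_{ij}, \qquad
b_{i} \sim N(\mu, \sigma^{2}_{b}), \quad
\varepsilon_{ij} \sim N(0, \sigma^{2}_{\varepsilon})
```

    The random-effect exposure model is what lets the conditional likelihood remain valid once the true X is latent.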

  9. Performance of GPS-devices for environmental exposure assessment.

    PubMed

    Beekhuizen, Johan; Kromhout, Hans; Huss, Anke; Vermeulen, Roel

    2013-01-01

    Integration of individual time-location patterns with spatially resolved exposure maps enables a more accurate estimation of personal exposures to environmental pollutants than using estimates at fixed locations. Current global positioning system (GPS) devices can be used to track an individual's location. However, information on GPS-performance in environmental exposure assessment is largely missing. We therefore performed two studies. First, a commute study, in which the commutes of 12 individuals were tracked twice, testing GPS-performance for five transport modes and two wearing modes. Second, an urban-tracking study, in which one individual was tracked repeatedly through different areas, focusing on the effect of building obstruction on GPS-performance. The median error from the true path for walking was 3.7 m, biking 2.9 m, train 4.8 m, bus 4.9 m, and car 3.3 m. Errors were larger in a high-rise commercial area (median error=7.1 m) compared with a low-rise residential area (median error=2.2 m). Thus, GPS-performance largely depends on the transport mode and the urban environment. Although ~85% of all errors were <10 m, almost 1% of the errors were >50 m. Modern GPS-devices are useful tools for environmental exposure assessment, but large GPS-errors might affect estimates of exposures with high spatial variability.
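    As an illustration of how such positional errors can be scored (a sketch, not the authors' code), GPS fixes can be compared against matched true positions with the haversine great-circle distance:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    R = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def median_error_m(fixes, truth):
    """Median distance (m) between recorded GPS fixes and paired true positions."""
    errs = sorted(haversine_m(f[0], f[1], t[0], t[1]) for f, t in zip(fixes, truth))
    n = len(errs)
    return errs[n // 2] if n % 2 else 0.5 * (errs[n // 2 - 1] + errs[n // 2])
```

    Here `fixes` and `truth` are paired (latitude, longitude) tuples; matching each fix to the nearest point on the true path is the harder problem in practice.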

  10. Data visualization, bar naked: A free tool for creating interactive graphics.

    PubMed

    Weissgerber, Tracey L; Savic, Marko; Winham, Stacey J; Stanisavljevic, Dejana; Garovic, Vesna D; Milic, Natasa M

    2017-12-15

    Although bar graphs are designed for categorical data, they are routinely used to present continuous data in studies that have small sample sizes. This presentation is problematic, as many data distributions can lead to the same bar graph, and the actual data may suggest different conclusions from the summary statistics. To address this problem, many journals have implemented new policies that require authors to show the data distribution. This paper introduces a free, web-based tool for creating an interactive alternative to the bar graph (http://statistika.mfub.bg.ac.rs/interactive-dotplot/). This tool allows authors with no programming expertise to create customized interactive graphics, including univariate scatterplots, box plots, and violin plots, for comparing values of a continuous variable across different study groups. Individual data points may be overlaid on the graphs. Additional features facilitate visualization of subgroups or clusters of non-independent data. A second tool enables authors to create interactive graphics from data obtained with repeated independent experiments (http://statistika.mfub.bg.ac.rs/interactive-repeated-experiments-dotplot/). These tools are designed to encourage exploration and critical evaluation of the data behind the summary statistics and may be valuable for promoting transparency, reproducibility, and open science in basic biomedical research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
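    The claim that many data distributions can lead to the same bar graph can be made concrete with a short sketch (sample values hypothetical): two samples with clearly different shapes that nevertheless collapse to an identical mean ± SEM bar.

```python
import math
import statistics

def bar_summary(sample):
    """Return (mean, SEM) -- everything a bar graph with error bars displays."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean, sem

# Two hypothetical samples (n = 5) with different shapes:
spread = [1.0, 2.0, 3.0, 4.0, 5.0]             # evenly spread values
d = math.sqrt(2.5)
clustered = [3 - d, 3 - d, 3.0, 3 + d, 3 + d]  # two tight clusters

# Both yield the same bar (mean 3.0, SEM ~0.71), so the bar graph
# cannot distinguish them.
```

    A univariate scatterplot of the raw points, as produced by the tool described above, makes the difference between the two samples visible immediately.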

  11. High power diode lasers for solid-state laser pumps

    NASA Technical Reports Server (NTRS)

    Linden, Kurt J.; Mcdonnell, Patrick N.

    1994-01-01

    The development and commercial application of high power diode laser arrays for use as solid-state laser pumps is described. Such solid-state laser pumps are significantly more efficient and reliable than conventional flash-lamps. This paper describes the design and fabrication of diode lasers emitting in the 780 - 900 nm spectral region, and discusses their performance and reliability. Typical measured performance parameters include electrical-to-optical power conversion efficiencies of 50 percent, narrow-band spectral emission of 2 to 3 nm FWHM, pulsed output power levels of 50 watts/bar with reliability values of over 2 billion shots to date (tests to be terminated after 10 billion shots), and reliable operation to pulse lengths of 1 ms. Pulse lengths up to 5 ms have been demonstrated at derated power levels, and CW performance at various power levels has been evaluated in a 'bar-in-groove' laser package. These high-power 1-cm stacked-bar arrays are now being manufactured for OEM use. Individual diode laser bars, ready for package-mounting by OEM customers, are being sold as commodity items. Commercial and medical applications of these laser arrays include solid-state laser pumping for metal-working, cutting, industrial measurement and control, ranging, wind-shear/atmospheric turbulence detection, X-ray generation, materials surface cleaning, microsurgery, ophthalmology, dermatology, and dental procedures.

  12. Visual presentations of efficacy data in direct-to-consumer prescription drug print and television advertisements: A randomized study.

    PubMed

    Sullivan, Helen W; O'Donoghue, Amie C; Aikin, Kathryn J; Chowdhury, Dhuly; Moultrie, Rebecca R; Rupert, Douglas J

    2016-05-01

    To determine whether visual aids help people recall quantitative efficacy information in direct-to-consumer (DTC) prescription drug advertisements, and if so, which types of visual aids are most helpful. Individuals diagnosed with high cholesterol (n=2504) were randomized to view a fictional DTC print or television advertisement with no visual aid or one of four visual aids (pie chart, bar chart, table, or pictograph) depicting drug efficacy. We measured drug efficacy and risk recall, drug perceptions and attitudes, and behavioral intentions. For print advertisements, a bar chart or table, compared with no visual aid, elicited more accurate drug efficacy recall. The bar chart was better at this than the pictograph and the table was better than the pie chart. For television advertisements, any visual aid, compared with no visual aid, elicited more accurate drug efficacy recall. The bar chart was better at this than the pictograph or the table. Visual aids depicting quantitative efficacy information in DTC print and television advertisements increased drug efficacy recall, which may help people make informed decisions about prescription drugs. Adding visual aids to DTC advertising may increase the public's knowledge of how well prescription drugs work. Published by Elsevier Ireland Ltd.

  13. Relationships among peak power output, peak bar velocity, and mechanomyographic amplitude during the free-weight bench press exercise.

    PubMed

    Stock, Matt S; Beck, Travis W; Defreitas, Jason M; Dillon, Michael A

    2010-10-01

    The purpose of this study was to examine the relationships among mechanomyographic (MMG) amplitude, power output, and bar velocity during the free-weight bench press exercise. Twenty-one resistance-trained men [one-repetition maximum (1-RM) bench press = 125.4 ± 18.4 kg] performed bench press muscle actions as explosively as possible from 10% to 90% of the 1-RM while peak power output and peak bar velocity were assessed with a TENDO Weightlifting Analyzer. During each muscle action, surface MMG signals were detected from the right and left pectoralis major and triceps brachii, and the concentric portion of the range of motion was selected for analysis. Results indicated that power output increased from 10% to 50% 1-RM, followed by decreases from 50% to 90% 1-RM, but MMG amplitude for each of the muscles increased from 10% to 80% 1-RM. The results of this study indicate that during the free-weight bench press exercise, MMG amplitude was not related to power output, but was inversely related to bar velocity and directly related to the external load being lifted. In future research, coaches and sport scientists may be able to estimate force/torque production from individual muscles during multi-joint, dynamic constant external resistance muscle actions.

  14. Presence of undeclared peanut protein in chocolate bars imported from Europe.

    PubMed

    Vadas, Peter; Perelman, Boris

    2003-10-01

    Peanut allergens are both stable and potent and are capable of inducing anaphylactic reactions at low concentrations. Consequently, the consumption of peanuts remains the most common cause of food-induced anaphylactic death. Since accidental exposure to peanuts is a common cause of potentially fatal anaphylaxis in peanut-allergic individuals, we tested for the presence of peanut protein in chocolate bars produced in Europe and North America that did not list peanuts as an ingredient. Ninety-two chocolate bars, of which 32 were manufactured in North America and 60 were imported from Europe, were tested by the Veratox assay. None of the 32 North American chocolate products, including 19 with precautionary labeling, contained detectable peanut protein. In contrast, 30.8% of products from western Europe without precautionary labeling contained detectable levels of peanut protein. Sixty-two percent of products from eastern Europe without precautionary labeling contained detectable peanut protein at levels of up to 245 ppm. The absence of precautionary labeling and the absence of the declaration of "peanut" as an ingredient in chocolate bars made in eastern and central Europe were not found to guarantee that these products were actually free of contaminating peanut protein. In contrast, North American manufacturers have attained a consistent level of safety and reliability for peanut-allergic consumers.

  15. Use of Highly Fortified Products among US Adults

    PubMed Central

    Costello, Rebecca B; Dwyer, Johanna T; Bailey, Regan L; Saldanha, Leila; French, Steven

    2015-01-01

    It is complicated to ascertain the composition and prevalence of the use of highly fortified food and supplement products (HFPs) because HFP foods and HFP supplements have different labeling requirements. However, HFPs (energy bars, energy drinks, sports drinks, protein bars, energy shots, and fortified foods/beverages) are popular in the United States. A web-based survey balanced to reflect US census data was used to describe their use in a sample of 2,355 US adults >18 yr in 2011 and trends in their use from 2005. In 2011, 33% of adults reported using HFPs; relative to non-users, users were more likely to be male, African American, Hispanic, more highly educated (e.g., some college or more), and <45 yr. Multiple product use was common. Of users, 46% consumed sports drinks, 37% fortified foods/beverages, 32% protein bars, 27% energy drinks, 24% energy bars, and 12% energy shots. For these HFPs as a group, prevalence of use was 36% in 2005 (n=2,039), 35% in 2009 (n=2,010), and 30% in 2011 (n=2,355). Use was significantly lower in 2011 than in 2005, especially among females, non-Hispanics, and those with a high school education or less (P≤0.05). HFPs, particularly energy and sports drinks, continue to be widely used by many U.S. adults. PMID:26823624

  16. Individual differences in political ideology are effects of adaptive error management.

    PubMed

    Petersen, Michael Bang; Aarøe, Lene

    2014-06-01

    We apply error management theory to the analysis of individual differences in the negativity bias and political ideology. Using principles from evolutionary psychology, we propose a coherent theoretical framework for understanding (1) why individuals differ in their political ideology and (2) the conditions under which these individual differences influence and fail to influence the political choices people make.

  17. The CKM Matrix and The Unitarity Triangle: Another Look

    NASA Astrophysics Data System (ADS)

    Buras, Andrzej J.; Parodi, Fabrizio; Stocchi, Achille

    2003-01-01

    The unitarity triangle can be determined by means of two measurements of its sides or angles. Assuming the same relative errors on the angles (α, β, γ) and the sides (R_b, R_t), we find that the pairs (γ, β) and (γ, R_b) are most efficient in determining (ρ̄, η̄), which describe the apex of the unitarity triangle. They are followed by (α, β), (α, R_b), (R_t, β), (R_t, R_b) and (R_b, β). As the set |Vus|, |Vcb|, R_t and β appears to be the best candidate for the fundamental set of flavour-violating parameters in the coming years, we show various constraints on the CKM matrix in the (R_t, β) plane. Using the best available input, we determine the universal unitarity triangle for models with minimal flavour violation (MFV) and compare it with the one in the Standard Model. We present allowed ranges for sin 2β, sin 2α, γ, R_b, R_t and ΔM_s within the Standard Model and MFV models. We also update the allowed range for the function F_tt that parametrizes various MFV models.
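    The sides and angles above are tied to the apex by standard unitarity-triangle geometry (well-known relations, not specific to this paper), which is why any two measurements fix the triangle:

```latex
R_b = \sqrt{\bar\varrho^{2} + \bar\eta^{2}}, \qquad
R_t = \sqrt{(1-\bar\varrho)^{2} + \bar\eta^{2}}, \qquad
\tan\gamma = \frac{\bar\eta}{\bar\varrho}, \qquad
\tan\beta = \frac{\bar\eta}{1-\bar\varrho}

% e.g. measuring the pair (\gamma, \beta) determines the apex:
\bar\varrho = \frac{\tan\beta}{\tan\beta + \tan\gamma}, \qquad
\bar\eta = \frac{\tan\beta\,\tan\gamma}{\tan\beta + \tan\gamma}
```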

  18. The Astro-H Soft X-Ray Mirror

    NASA Technical Reports Server (NTRS)

    Robinson, David; Okajima, Takashi; Serlemitsos, Peter; Soong, Yang

    2012-01-01

    Astro-H is led by the Japan Aerospace Exploration Agency (JAXA) in collaboration with many other institutions, including the NASA Goddard Space Flight Center. Goddard's contributions include two soft X-ray telescopes (SXTs). The telescopes have an effective area of 562 square cm at 1 keV and 425 square cm at 6 keV, with an image quality requirement of 1.7 arc-minutes half power diameter (HPD). The engineering model has demonstrated a 1.1 arc-minute HPD. The design of the SXT is based on the successful Suzaku mission mirrors, with some enhancements to improve the image quality. Two major enhancements are bonding the X-ray mirror foils to alignment bars instead of allowing the mirrors to float, and fabricating alignment bars with grooves accurate to within 5 microns. An engineering model SXT was recently built and subjected to several tests, including vibration, thermal, and X-ray performance in a beamline. Several lessons were learned during this testing that will be incorporated in the flight design. Test results and optical performance are discussed, along with a description of the design of the SXT.

  19. The Galileo probe Doppler wind experiment: Measurement of the deep zonal winds on Jupiter

    NASA Astrophysics Data System (ADS)

    Atkinson, David H.; Pollack, James B.; Seiff, Alvin

    1998-09-01

    During its descent into the upper atmosphere of Jupiter, the Galileo probe transmitted data to the orbiter for 57.5 min. Accurate measurements of the probe radio frequency, driven by an ultrastable oscillator, allowed an accurate time history of the probe motions to be reconstructed. Removal from the probe radio frequency profile of known Doppler contributions, including the orbiter trajectory, the probe descent velocity, and the rotation of Jupiter, left a measurable frequency residual due to Jupiter's zonal winds and to microdynamical motion of the probe from spin, swing under the parachute, atmospheric turbulence, and aerodynamic buffeting. Under the assumption that zonal horizontal winds dominate, the frequency residuals were inverted to yield the first in situ measurements of the vertical profile of Jupiter's deep zonal winds. A number of error sources capable of corrupting the frequency measurements or the interpretation of the frequency residuals were considered using reasonable assumptions and calibrations from prelaunch and in-flight testing. It is found that beneath the cloud tops (about 700 mbar) the winds are prograde and rise rapidly to 170 m/s at 4 bars. Beyond 4 bars to the depth at which the link with the probe was lost, nearly 21 bars, the winds remain constant and strong. Corrections for the high temperatures encountered by the probe have recently been completed and provide no evidence of diminishing or strengthening of the zonal wind profile in the deeper regions explored by the Galileo probe.
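    The inversion rests on the first-order Doppler relation Δf/f = v/c. A minimal sketch (the 1.4 GHz carrier below is a placeholder for illustration, not the actual probe link frequency):

```python
C = 299_792_458.0  # speed of light, m/s

def los_velocity(freq_residual_hz, carrier_hz):
    """Line-of-sight velocity implied by a first-order Doppler residual:
    delta_f / f = v / c  ->  v = c * delta_f / f.
    A positive residual means the transmitter is closing on the receiver."""
    return C * freq_residual_hz / carrier_hz

# With a hypothetical 1.4 GHz carrier, a residual of roughly 794 Hz
# corresponds to about 170 m/s, the peak zonal wind quoted above.
```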

  20. Headspace sorptive extraction-gas chromatography-mass spectrometry method to measure volatile emissions from human airway cell cultures.

    PubMed

    Yamaguchi, Mei S; McCartney, Mitchell M; Linderholm, Angela L; Ebeler, Susan E; Schivo, Michael; Davis, Cristina E

    2018-05-12

    The human respiratory tract releases volatile metabolites into exhaled breath that can be utilized for noninvasive health diagnostics. To understand the origin of this metabolic process, our group has previously analyzed the headspace above human epithelial cell cultures using solid phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS). In the present work, we improve our model by employing sorbent-covered magnetic stir bars for headspace sorptive extraction (HSSE). Sorbent-coated stir bar analyte recovery increased by 52 times and captured 97 more compounds than SPME. Our data show that HSSE is preferred over liquid extraction via stir bar sorptive extraction (SBSE), which failed to distinguish volatiles unique to the cell samples compared against media controls. Two different cellular media were also compared, and we found that Opti-MEM® is preferred for volatile analysis. We optimized HSSE analytical parameters such as extraction time (24 h), desorption temperature (300 °C) and desorption time (7 min). Finally, we developed an internal standard for cell culture VOC studies by introducing 842 ng of deuterated decane per 5 mL of cell medium to account for error from extraction, desorption, chromatography and detection. This improved model will serve as a platform for future metabolic cell culture studies to examine changes in epithelial VOCs caused by perturbations such as viral or bacterial infections, opening opportunities for improved, noninvasive pulmonary diagnostics. Copyright © 2018 Elsevier B.V. All rights reserved.
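    Quantification against a spiked internal standard of the kind described above is conventionally done by peak-area ratio. A minimal single-point sketch (peak areas and a unit relative response factor are assumptions for illustration, not values from the study):

```python
def istd_quantify(analyte_area, istd_area, istd_amount_ng, rrf=1.0):
    """Single-point internal-standard quantification:
    amount = (analyte peak area / IS peak area) * IS amount / RRF,
    where RRF is the analyte's relative response factor
    (assumed 1.0 here for illustration)."""
    return (analyte_area / istd_area) * istd_amount_ng / rrf

# With an 842 ng internal-standard spike and hypothetical peak areas,
# an analyte peak half the IS area corresponds to ~421 ng.
```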

  1. Medication errors in the emergency department: a systems approach to minimizing risk.

    PubMed

    Peth, Howard A

    2003-02-01

    Adverse drug events caused by medication errors represent a common cause of patient injury in the practice of medicine. Many medication errors are preventable and hence particularly tragic when they occur, often with serious consequences. The enormous increase in the number of available drugs on the market makes it all but impossible for physicians, nurses, and pharmacists to possess the knowledge base necessary for fail-safe medication practice. Indeed, the greatest single systemic factor associated with medication errors is a deficiency in the knowledge requisite to the safe use of drugs. It is vital that physicians, nurses, and pharmacists have at their immediate disposal up-to-date drug references. Patients presenting for care in EDs are usually unfamiliar to their EPs and nurses, and the unique patient factors affecting medication response and toxicity are obscured. An appropriate history, physical examination, and diagnostic workup will assist EPs, nurses, and pharmacists in selecting the safest and most optimum therapeutic regimen for each patient. EDs deliver care "24/7" and are open when valuable information resources, such as hospital pharmacists and previously treating physicians, may not be available for consultation. A systems approach to the complex problem of medication errors will help emergency clinicians eliminate preventable adverse drug events and achieve a goal of a zero-defects system, in which medication errors are a thing of the past. New developments in information technology and the advent of electronic medical records with computerized physician order entry, ward-based clinical pharmacists, and standardized bar codes promise substantial reductions in the incidence of medication errors and adverse drug events. ED patients expect and deserve nothing less than the safest possible emergency medicine service.

  2. Supermassive Black Holes and Their Host Spheroids. I. Disassembling Galaxies

    NASA Astrophysics Data System (ADS)

    Savorgnan, G. A. D.; Graham, A. W.

    2016-01-01

    Several recent studies have performed galaxy decompositions to investigate correlations between the black hole mass and various properties of the host spheroid, but they have not converged on the same conclusions. This is because their models for the same galaxy were often significantly different and not consistent with each other in terms of fitted components. Using 3.6 μm Spitzer imagery, which is a superb tracer of the stellar mass (superior to the K band), we have performed state-of-the-art multicomponent decompositions for 66 galaxies with directly measured black hole masses. Our sample is the largest to date and, unlike previous studies, contains a large number (17) of spiral galaxies with low black hole masses. We paid careful attention to the image mosaicking, sky subtraction, and masking of contaminating sources. After a scrupulous inspection of the galaxy photometry (through isophotal analysis and unsharp masking) and—for the first time—2D kinematics, we were able to account for spheroids; large-scale, intermediate-scale, and nuclear disks; bars; rings; spiral arms; halos; extended or unresolved nuclear sources; and partially depleted cores. For each individual galaxy, we compared our best-fit model with previous studies, explained the discrepancies, and identified the optimal decomposition. Moreover, we have independently performed one-dimensional (1D) and two-dimensional (2D) decompositions and concluded that, at least when modeling large, nearby galaxies, 1D techniques have more advantages than 2D techniques. Finally, we developed a prescription to estimate the uncertainties on the 1D best-fit parameters for the 66 spheroids that takes into account systematic errors, unlike popular 2D codes that only consider statistical errors.

  3. SUPERMASSIVE BLACK HOLES AND THEIR HOST SPHEROIDS. I. DISASSEMBLING GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savorgnan, G. A. D.; Graham, A. W., E-mail: gsavorgn@astro.swin.edu.au

    Several recent studies have performed galaxy decompositions to investigate correlations between the black hole mass and various properties of the host spheroid, but they have not converged on the same conclusions. This is because their models for the same galaxy were often significantly different and not consistent with each other in terms of fitted components. Using 3.6 μm Spitzer imagery, which is a superb tracer of the stellar mass (superior to the K band), we have performed state-of-the-art multicomponent decompositions for 66 galaxies with directly measured black hole masses. Our sample is the largest to date and, unlike previous studies, contains a large number (17) of spiral galaxies with low black hole masses. We paid careful attention to the image mosaicking, sky subtraction, and masking of contaminating sources. After a scrupulous inspection of the galaxy photometry (through isophotal analysis and unsharp masking) and—for the first time—2D kinematics, we were able to account for spheroids; large-scale, intermediate-scale, and nuclear disks; bars; rings; spiral arms; halos; extended or unresolved nuclear sources; and partially depleted cores. For each individual galaxy, we compared our best-fit model with previous studies, explained the discrepancies, and identified the optimal decomposition. Moreover, we have independently performed one-dimensional (1D) and two-dimensional (2D) decompositions and concluded that, at least when modeling large, nearby galaxies, 1D techniques have more advantages than 2D techniques. Finally, we developed a prescription to estimate the uncertainties on the 1D best-fit parameters for the 66 spheroids that takes into account systematic errors, unlike popular 2D codes that only consider statistical errors.

  4. SEDS: The Spitzer Extended Deep Survey. Survey Design, Photometry, and Deep IRAC Source Counts

    NASA Technical Reports Server (NTRS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Arendt, A.; Barmby, P.; Barro, G.; Bell, E. F.; Bouwens, R.; Cattaneo, A.; et al.

    2013-01-01

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg² to a depth of 26 AB mag (3σ) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 μm. Because of its uniform depth of coverage in so many widely separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 ± 1.0 and 4.4 ± 0.8 nW m⁻² sr⁻¹ at 3.6 and 4.5 μm to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  5. Hubble Space Telescope Proper Motion (HSTPROMO) Catalogs of Galactic Globular Clusters. IV. Kinematic Profiles and Average Masses of Blue Straggler Stars

    NASA Astrophysics Data System (ADS)

    Baldwin, A. T.; Watkins, L. L.; van der Marel, R. P.; Bianchini, P.; Bellini, A.; Anderson, J.

    2016-08-01

    We make use of the Hubble Space Telescope proper-motion catalogs derived by Bellini et al. to produce the first radial velocity dispersion profiles σ(R) for blue straggler stars (BSSs) in Galactic globular clusters (GCs), as well as the first dynamical estimates for the average mass of the entire BSS population. We show that BSSs typically have lower velocity dispersions than stars with mass equal to the main-sequence turnoff mass, as one would expect for a more massive population of stars. Since GCs are expected to experience some degree of energy equipartition, we use the relation σ ∝ M^(−η), where η is related to the degree of energy equipartition, along with our velocity dispersion profiles to estimate BSS masses. We estimate η as a function of cluster relaxation from recent Monte Carlo cluster simulations by Bianchini et al. and then derive an average mass ratio M_BSS/M_MSTO = 1.50 ± 0.14 and an average mass M_BSS = 1.22 ± 0.12 M_⊙ from 598 BSSs across 19 GCs. The final error bars include any systematic errors that are random between different clusters, but not any potential biases inherent to our methodology. Our results are in good agreement with the average mass of M_BSS = 1.22 ± 0.06 M_⊙ for the 35 BSSs in Galactic GCs in the literature with properties that have allowed individual mass determination. Based on proprietary and archival observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
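    The mass estimate follows by inverting the equipartition scaling quoted above; with the velocity dispersions of both populations measured:

```latex
\frac{\sigma_{\mathrm{BSS}}}{\sigma_{\mathrm{MSTO}}}
  = \left(\frac{M_{\mathrm{BSS}}}{M_{\mathrm{MSTO}}}\right)^{-\eta}
\quad\Longrightarrow\quad
\frac{M_{\mathrm{BSS}}}{M_{\mathrm{MSTO}}}
  = \left(\frac{\sigma_{\mathrm{BSS}}}{\sigma_{\mathrm{MSTO}}}\right)^{-1/\eta}
```

    So a lower BSS dispersion (σ_BSS < σ_MSTO) directly implies a mass ratio above unity.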

  6. The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.

    PubMed

    Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark

    2018-07-01

    The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge®), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist-detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the numbers of wasted and missing IV doses by 14,176 and 2268 doses, respectively (p < 0.001). The overall cost savings of using the system was $144,019 over 3 months. The total number of errors detected was 1160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety. The implementation of the IVWMS increased patient safety by enforcing standard operating procedures and bar code verifications. Published by Elsevier B.V.

  7. 42 CFR 435.406 - Citizenship and alienage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ISLANDS, AND AMERICAN SAMOA General Eligibility Requirements § 435.406 Citizenship and alienage. (a) The... national of the United States; and (ii) The individual has provided satisfactory documentary evidence of... qualified aliens subject to the 5-year bar) who have provided satisfactory documentary evidence of Qualified...

  8. 42 CFR 435.406 - Citizenship and alienage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ISLANDS, AND AMERICAN SAMOA General Eligibility Requirements § 435.406 Citizenship and alienage. (a) The... national of the United States; and (ii) The individual has provided satisfactory documentary evidence of... qualified aliens subject to the 5-year bar) who have provided satisfactory documentary evidence of Qualified...

  9. 42 CFR 435.406 - Citizenship and alienage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ISLANDS, AND AMERICAN SAMOA General Eligibility Requirements § 435.406 Citizenship and alienage. (a) The... national of the United States; and (ii) The individual has provided satisfactory documentary evidence of... qualified aliens subject to the 5-year bar) who have provided satisfactory documentary evidence of Qualified...

  10. 42 CFR 435.406 - Citizenship and alienage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ISLANDS, AND AMERICAN SAMOA General Eligibility Requirements § 435.406 Citizenship and alienage. (a) The... national of the United States; and (ii) The individual has provided satisfactory documentary evidence of... qualified aliens subject to the 5-year bar) who have provided satisfactory documentary evidence of Qualified...

  11. Brain Mechanisms Underlying Individual Differences in Reaction to Stress: An Animal Model

    DTIC Science & Technology

    1988-10-29

    Schooler, et al., 1976; Gershon & Buchsbaum, 1977; Buchsbaum, et al., 1977), personality scales of extraversion-introversion (Haier, 1984) and sensation...exploratory and learned to bar press more quickly and efficiently. Reducers with a lower inhibitory threshold learned the differential reinforcement of

  12. 3DXRD at the Advanced Photon Source: Orientation Mapping and Deformation Studies

    DTIC Science & Technology

    2010-09-01

    statistics in the same sample (Hefferan et al. (2010)). This low orientation uncertainty or error bar might be surprising at first since we do measurements...may be a combination of noise and real gradients. Some of the intra-granular disorder in (b) should be interpreted as statistical and only...cooling (AC), but are not present after ice water quenching (IWQ). The presence of SRO domains is known to lead to planar slip bands during tensile

  13. Numerical prediction of a draft tube flow taking into account uncertain inlet conditions

    NASA Astrophysics Data System (ADS)

    Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy

    2012-11-01

    The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling, in order to account in the numerical prediction for the physical uncertainties in the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also the variance of these quantities, from which error bars on the computed profiles can be deduced, making the comparison between experiment and computation more meaningful.
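    A non-intrusive UQ loop of this kind treats the flow solver as a black box: sample the uncertain inlet condition, run the deterministic model once per sample, and summarise the ensemble into a mean profile with error bars. A toy sketch of that loop; the analytic `velocity_profile` is a stand-in for the RANS solve, and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_profile(r, swirl):
    """Toy stand-in for a RANS draft-tube solve: axial velocity versus
    radius for a given inlet swirl number (the real solver is far costlier)."""
    return (1.0 - r**2) * (1.0 + 0.3 * swirl * r)

r = np.linspace(0.0, 1.0, 11)

# Non-intrusive UQ: sample the uncertain inlet swirl, run the deterministic
# model once per sample, then summarise the ensemble.
swirl_samples = rng.normal(loc=0.5, scale=0.05, size=200)
ensemble = np.array([velocity_profile(r, s) for s in swirl_samples])

mean_profile = ensemble.mean(axis=0)
error_bars = 2.0 * ensemble.std(axis=0)   # ~95% band on the computed profile
```

    The mean profile is what a single deterministic CFD run would normally be compared against; the ensemble standard deviation supplies the error bars described in the abstract.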

  14. Abundances of Jupiter's Trace Hydrocarbons from Voyager and Cassini. Data Tables: Cassini CIRS Observations Planetary and Space Science, Forthcoming 2010

    NASA Technical Reports Server (NTRS)

    Nixon, C. A.; Achterberg, R. K.; Romani, P. N.; Allen, M.; Zhang, X.; Teanby, N. A.; Irwin, P. G. J.; Flasar, F. M.

    2010-01-01

    The following six tables give the retrieved temperatures and volume mixing ratios of C2H2 and C2H6 and the formal errors on these results from the retrieval, as described in the manuscript. These are in the form of two-dimensional tables, specified on a latitudinal and vertical grid. The first column is the pressure in bar, and the second column gives the altitude in kilometers calculated from hydrostatic equilibrium, and applies to the equatorial profile only. The top row of the table specifies the planetographic latitude.

  15. Abundances of Jupiter's Trace Hydrocarbons from Voyager and Cassini. Data Tables: Voyager IRIS Observations Planetary and Space Science, Forthcoming 2010

    NASA Technical Reports Server (NTRS)

    Nixon, C. A.; Achterberg, R. K.; Romani, P. N.; Allen, M.; Zhang, X.; Irwin, P. G. J.; Flasar, F. M.

    2010-01-01

    The following six tables give the retrieved temperatures and volume mixing ratios of C2H2 and C2H6 and the formal errors on these results from the retrieval, as described in the manuscript. These are in the form of two-dimensional tables, specified on a latitudinal and vertical grid. The first column is the pressure in bar, and the second column gives the altitude in kilometers calculated from hydrostatic equilibrium, and applies to the equatorial profile only. The top row of the table specifies the planetographic latitude.

  16. Multiconfiguration calculations of electronic isotope shift factors in Al i

    NASA Astrophysics Data System (ADS)

    Filippin, Livio; Beerwerth, Randolf; Ekman, Jörgen; Fritzsche, Stephan; Godefroid, Michel; Jönsson, Per

    2016-12-01

    The present work reports results from systematic multiconfiguration Dirac-Hartree-Fock calculations of electronic isotope shift factors for a set of transitions between low-lying levels of neutral aluminium. These electronic quantities together with observed isotope shifts between different pairs of isotopes provide the changes in mean-square charge radii of the atomic nuclei. Two computational approaches are adopted for the estimation of the mass- and field-shift factors. Within these approaches, different models for electron correlation are explored in a systematic way to determine a reliable computational strategy and to estimate theoretical error bars of the isotope shift factors.

  17. Critical temperature of the Ising ferromagnet on the fcc, hcp, and dhcp lattices

    NASA Astrophysics Data System (ADS)

    Yu, Unjong

    2015-02-01

    By an extensive Monte Carlo calculation together with finite-size scaling and the multiple-histogram method, the critical coupling constants (K_c = J/k_B T_c) of the Ising ferromagnet on the fcc, hcp, and double hcp (dhcp) lattices were obtained with unprecedented precision: K_c^fcc = 0.1020707(2), K_c^hcp = 0.1020702(1), and K_c^dhcp = 0.1020706(2). The critical temperature T_c of the hcp lattice is found to be higher than those of the fcc and dhcp lattices. The dhcp lattice seems to have a higher T_c than the fcc lattice, but the difference is within error bars.
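    Since K_c = J/(k_B T_c), each critical coupling converts directly to a critical temperature, and the quoted last-digit uncertainties propagate through the reciprocal. A quick sketch of the hcp-versus-fcc comparison in units of J/k_B (a simple significance check, not the paper's analysis):

```python
import math

def tc_from_kc(kc, kc_err):
    """Critical temperature in units of J/k_B from Kc = J/(k_B * Tc),
    with the error bar propagated through Tc = 1/Kc."""
    tc = 1.0 / kc
    return tc, tc * (kc_err / kc)

# Critical couplings and last-digit uncertainties from the abstract,
# e.g. 0.1020707(2) means 0.1020707 +/- 0.0000002.
tc_fcc, err_fcc = tc_from_kc(0.1020707, 2e-7)
tc_hcp, err_hcp = tc_from_kc(0.1020702, 1e-7)

# Is the hcp-fcc difference significant? Compare it to the
# quadrature-summed error bars.
diff = tc_hcp - tc_fcc
significant = abs(diff) > math.sqrt(err_fcc**2 + err_hcp**2)
print(significant)
```

    The same check applied to the dhcp-fcc pair leaves the difference inside the combined error bar, consistent with the abstract's caveat.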

  18. Using Perturbative Least Action to Reconstruct Redshift-Space Distortions

    NASA Astrophysics Data System (ADS)

    Goldberg, David M.

    2001-05-01

    In this paper, we present a redshift-space reconstruction scheme that is analogous to and extends the perturbative least action (PLA) method described by Goldberg & Spergel. We first show that this scheme is effective in reconstructing even nonlinear observations. We then suggest that by varying the cosmology to minimize the quadrupole moment of a reconstructed density field, it may be possible to lower the error bars on the redshift distortion parameter, β, as well as to break the degeneracy between the linear bias parameter, b, and ΩM. Finally, we discuss how PLA might be applied to realistic redshift surveys.

  19. Joint estimation over multiple individuals improves behavioural state inference from animal movement data.

    PubMed

    Jonsen, Ian

    2016-02-08

    State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.

  20. Errors in veterinary practice: preliminary lessons for building better veterinary teams.

    PubMed

    Kinnison, T; Guile, D; May, S A

    2015-11-14

    Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost item errors, and most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying cause of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including: lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeon work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings considered. British Veterinary Association.

  1. Punishment sensitivity modulates the processing of negative feedback but not error-induced learning.

    PubMed

    Unger, Kerstin; Heintz, Sonja; Kray, Jutta

    2012-01-01

    Accumulating evidence suggests that individual differences in punishment and reward sensitivity are associated with functional alterations in neural systems underlying error and feedback processing. In particular, individuals highly sensitive to punishment have been found to be characterized by larger mediofrontal error signals as reflected in the error negativity/error-related negativity (Ne/ERN) and the feedback-related negativity (FRN). By contrast, reward sensitivity has been shown to relate to the error positivity (Pe). Given that Ne/ERN, FRN, and Pe have been functionally linked to flexible behavioral adaptation, the aim of the present research was to examine how these electrophysiological reflections of error and feedback processing vary as a function of punishment and reward sensitivity during reinforcement learning. We applied a probabilistic learning task that involved three different conditions of feedback validity (100%, 80%, and 50%). In contrast to prior studies using response competition tasks, we did not find reliable correlations between punishment sensitivity and the Ne/ERN. Instead, higher punishment sensitivity predicted larger FRN amplitudes, irrespective of feedback validity. Moreover, higher reward sensitivity was associated with a larger Pe. However, only reward sensitivity was related to better overall learning performance and higher post-error accuracy, whereas highly punishment sensitive participants showed impaired learning performance, suggesting that larger negative feedback-related error signals were not beneficial for learning or even reflected maladaptive information processing in these individuals. Thus, although our findings indicate that individual differences in reward and punishment sensitivity are related to electrophysiological correlates of error and feedback processing, we found less evidence for influences of these personality characteristics on the relation between performance monitoring and feedback-based learning.

  2. Accounting for apparent deviations between calorimetric and van't Hoff enthalpies.

    PubMed

    Kantonen, Samuel A; Henriksen, Niel M; Gilson, Michael K

    2018-03-01

    In theory, binding enthalpies directly obtained from calorimetry (such as ITC) and the temperature dependence of the binding free energy (van't Hoff method) should agree. However, previous studies have often found them to be discrepant. Experimental binding enthalpies (both calorimetric and van't Hoff) are obtained for two host-guest pairs using ITC, and the discrepancy between the two enthalpies is examined. Modeling of artificial ITC data is also used to examine how different sources of error propagate to both types of binding enthalpies. For the host-guest pairs examined here, good agreement, to within about 0.4 kcal/mol, is obtained between the two enthalpies. Additionally, using artificial data, we find that different sources of error propagate to either enthalpy uniquely, with concentration error and heat error propagating primarily to calorimetric and van't Hoff enthalpies, respectively. With modern calorimeters, good agreement between van't Hoff and calorimetric enthalpies should be achievable, barring issues due to non-ideality or unanticipated measurement pathologies. Indeed, disagreement between the two can serve as a flag for error-prone datasets. A review of the underlying theory supports the expectation that these two quantities should be in agreement. We address and arguably resolve long-standing questions regarding the relationship between calorimetric and van't Hoff enthalpies. In addition, we show that comparison of these two quantities can be used as an internal consistency check of a calorimetry study. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Error affect inoculation for a complex decision-making task.

    PubMed

    Tabernero, Carmen; Wood, Robert E

    2009-05-01

    Individuals bring knowledge, implicit theories, and goal orientations to group meetings. Group decisions arise out of the exchange of these orientations. This research explores how a trainee's exploratory and deliberate process (an incremental theory and learning goal orientation) impacts the effectiveness of individual and group decision-making processes. The effectiveness of this training program is compared with another program that included error affect inoculation (EAI). Subjects were 40 Spanish policemen in a training course. They were distributed across two training conditions for an individual and group decision-making task. In one condition, individuals received the Self-Guided Exploration plus Deliberation Process instructions, which emphasised exploring the options and testing hypotheses. In the other condition, individuals also received instructions based on EAI, which emphasised positive affective reactions to errors and mistakes when making decisions. Results show that the quality of decisions increases when the groups share their reasoning. The EAI intervention promotes information sharing, flexible initial viewpoints, and improved quality of group decisions. Implications and future directions are discussed.

  4. Design and preliminary evaluation of the FINGER rehabilitation robot: controlling challenge and quantifying finger individuation during musical computer game play.

    PubMed

    Taheri, Hossein; Rowe, Justin B; Gardner, David; Chan, Vicki; Gray, Kyle; Bower, Curtis; Reinkensmeyer, David J; Wolbrecht, Eric T

    2014-02-04

    This paper describes the design and preliminary testing of FINGER (Finger Individuating Grasp Exercise Robot), a device for assisting in finger rehabilitation after neurologic injury. We developed FINGER to assist stroke patients in moving their fingers individually in a naturalistic curling motion while playing a game similar to Guitar Hero. The goal was to make FINGER capable of assisting with motions where precise timing is important. FINGER consists of a pair of stacked single degree-of-freedom 8-bar mechanisms, one for the index and one for the middle finger. Each 8-bar mechanism was designed to control the angle and position of the proximal phalanx and the position of the middle phalanx. Target positions for the mechanism optimization were determined from trajectory data collected from 7 healthy subjects using color-based motion capture. The resulting robotic device was built to accommodate multiple finger sizes and finger-to-finger widths. For initial evaluation, we asked individuals with a stroke (n = 16) and without impairment (n = 4) to play a game similar to Guitar Hero while connected to FINGER. Precision design, low friction bearings, and separate high speed linear actuators allowed FINGER to individually actuate the fingers with a high bandwidth of control (-3 dB at approximately 8 Hz). During the tests, we were able to modulate the subject's success rate at the game by automatically adjusting the controller gains of FINGER. We also used FINGER to measure subjects' effort and finger individuation while playing the game. Test results demonstrate the ability of FINGER to motivate subjects with an engaging game environment that challenges individuated control of the fingers, automatically control assistance levels, and quantify finger individuation after stroke.

  5. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
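    A common way to adjust pooled allele-frequency estimates for the differential allelic amplification highlighted above is to rescale one allele's signal by a correction factor, typically estimated from heterozygous individuals. A minimal sketch of that adjustment; the peak heights and the 1.2x amplification bias below are invented for illustration, and the paper's own estimators are more elaborate:

```python
def pooled_allele_freq(height_a, height_b, k=1.0):
    """Estimate allele A's frequency from pooled peak heights/intensities,
    correcting for differential allelic amplification with factor k
    (k is usually estimated from heterozygous individuals, where the
    true ratio is known to be 1:1)."""
    return height_a / (height_a + k * height_b)

# Uncorrected vs corrected estimates for illustrative peak heights,
# assuming allele B amplifies 1.2x more efficiently than allele A,
# so its observed height is deflated by k = 1/1.2.
raw = pooled_allele_freq(60.0, 48.0)
adjusted = pooled_allele_freq(60.0, 48.0, k=1.0 / 1.2)
```

    The correction shifts the estimate from 0.56 to 0.60, illustrating why the abstract identifies differential amplification as the dominant error component in absolute frequency estimation.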

  6. Systematic review and meta-analysis of the economic impact of smoking bans in restaurants and bars.

    PubMed

    Cornelsen, Laura; McGowan, Yvonne; Currie-Murphy, Laura M; Normand, Charles

    2014-05-01

    To review systematically the literature on the economic impact of smoking bans in bars and restaurants and provide an estimate of the impact size using meta-analysis. Studies were identified by systematic database searches and screening references of reviews and relevant studies. Google and web-pages of tobacco control agencies were also searched. The review identified 56 studies using absolute sales, sales ratio or employment data and employing regression methods to evaluate the impact of smoking bans in the United States, Australia or in countries in South America or Europe. The meta-analysis included 39 comparable studies, with 129 cases identified based on the outcome measure, scope of the ban, type of establishment and geographical location. Methodological quality was assessed based on four pre-determined criteria. Study and case selection and data extraction were conducted independently by two researchers. Random-effects meta-analysis of all cases showed no associations between smoking bans and changes in absolute sales or employment. An increase in the share of bar and restaurant sector sales in total retail sales was associated with smoking bans [0.23 percentage-points; 95% confidence interval (CI) 0.08-0.375]. When cases were separated by business type (bars or restaurants or wider hospitality including bars and restaurants), some differential impacts emerged. Meta-analysis of the economic impact of smoking bans in hospitality sector showed overall no substantial economic gains or losses. Differential impacts were observed across individual business types and outcome variable, but at aggregate level these appear to balance out. © 2014 Society for the Study of Addiction.
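    A random-effects meta-analysis of the kind reported pools per-study effect sizes while allowing for between-study heterogeneity. A compact DerSimonian-Laird sketch; the three study effects and variances below are hypothetical, not data from the review:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis: pooled effect and
    95% CI from per-study effect sizes and within-study variances."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study changes in sales share (percentage points).
est, ci = random_effects_pool([0.30, 0.15, 0.25], [0.01, 0.02, 0.015])
```

    When the confidence interval straddles zero the pooled result reads as "no substantial gain or loss", the pattern the review reports for absolute sales and employment.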

  7. Does Alcohol Contribute to College Men's Sexual Assault Perpetration? Between- and Within-Person Effects Over Five Semesters.

    PubMed

    Testa, Maria; Cleveland, Michael J

    2017-01-01

    The current longitudinal study was designed to consider the time-varying effects of men's heavy episodic drinking (HED) and drinking setting attendance on college sexual assault perpetration. Freshman men (N = 992) were recruited in their first semester and completed online measures at the end of their first five semesters. Using multilevel models, we examined whether men with higher frequency HED (or party or bar attendance) were more likely to perpetrate sexual assault (between-person, Level 2 effect) and whether sexual assault perpetration was more likely in semesters in which HED (or party or bar attendance) was higher than each individual's average (within-person, Level 1 effect). The between-person effect of HED on sexual assault was not significant after accounting for the between-person effects of antisocial behavior, impersonal sex orientation, and low self-control. The within-person effect of HED on sexual assault perpetration was not significant. However, models substituting frequency of party attendance or bar attendance revealed both between- and within-person effects. The odds of sexual assault were increased for men with higher bar and party attendance than the sample as a whole, and in semesters in which party or bar attendance was higher than their own average. Supplemental analyses suggested that these drinking setting effects were explained by hookups, with sexual assault perpetration more likely in semesters in which the number of hookups exceeded one's own average. Findings point toward the importance of drinking contexts, rather than drinking per se, as predictors of college men's sexual assault perpetration.

  8. Focus on Fruits: 10 Tips to Eat More Fruits

    MedlinePlus

    ... lunch, pack a tangerine, banana, or grapes to eat or choose fruits from a salad bar. Individual containers of fruits like peaches or applesauce are easy to carry and convenient for lunch. 7 Enjoy fruit at dinner, too At dinner, add crushed pineapple to coleslaw ...

  9. Individual Differences in Social Anxiety Affect the Salience of Errors in Social Contexts

    PubMed Central

    Barker, Tyson V.; Troller-Renfree, Sonya; Pine, Daniel S.; Fox, Nathan A.

    2015-01-01

    The error-related negativity (ERN) is an event-related potential that occurs approximately 50 ms after an erroneous response. The magnitude of the ERN is influenced by contextual factors, such as when errors are made during social evaluation. The ERN is also influenced by individual differences in anxiety, and it is elevated amongst anxious individuals. However, little research has examined how individual differences in anxiety interact with contextual factors to impact the ERN. Social anxiety involves fear and apprehension of social evaluation. The current study explored how individual differences in social anxiety interact with social contexts to modulate the ERN. The ERN was measured in 43 young adults characterized as either high or low in social anxiety while they completed a flanker task in two contexts: alone and during social evaluation. Results revealed a significant interaction between social anxiety and context, such that the ERN was enhanced in a social relative to a non-social context only among high socially anxious individuals. Furthermore, the degree of such enhancement significantly correlated with individual differences in social anxiety. These findings demonstrate that social anxiety is characterized by enhanced neural activity to errors in social evaluative contexts. PMID:25967929

  10. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Treesearch

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
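    The impact of diameter and height measurement errors on model-based volume estimates can be illustrated by Monte Carlo propagation through a simple volume model. A sketch under assumed error magnitudes; the form-factor model and all numbers are stand-ins, as the Austrian inventory uses far more elaborate species-specific models:

```python
import numpy as np

rng = np.random.default_rng(42)

def stem_volume(dbh, height, form_factor=0.5):
    """Toy form-factor volume model: a cylinder of diameter dbh (m) and the
    given height (m), scaled by a form factor. Illustration only."""
    return form_factor * np.pi / 4.0 * dbh**2 * height

# Propagate measurement errors by Monte Carlo: perturb diameter and height
# with plausible (assumed) measurement standard deviations.
n = 10_000
dbh = rng.normal(0.30, 0.005, n)     # 30 cm diameter, 5 mm error
height = rng.normal(25.0, 0.5, n)    # 25 m height, 0.5 m error
volumes = stem_volume(dbh, height)

nominal = stem_volume(0.30, 25.0)
rel_uncertainty = volumes.std() / nominal   # relative volume uncertainty
```

    Because volume scales with the square of diameter, the diameter error dominates the resulting volume uncertainty, which is exactly the kind of impact the study quantifies.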

  11. Analysis of preservatives with different polarities in beverage samples by dual-phase dual stir bar sorptive extraction combined with high-performance liquid chromatography.

    PubMed

    Xu, Jin; Chen, Beibei; He, Man; Hu, Bin

    2013-02-22

    A new concept of "dual-phase dual stir bar sorptive extraction (SBSE)" was proposed to simultaneously extract six preservatives with different polarities (logKo/w values of 1.27-3.41), namely, benzoic acid (BA), sorbic acid (SA), methyl p-hydroxybenzoate (MP), ethyl p-hydroxybenzoate (EP), propyl p-hydroxybenzoate (PP), and butyl p-hydroxybenzoate (BP). The dual-phase dual SBSE apparatus consisted of two differently coated stir bars: a 3-aminopropyltriethoxysilane (APTES)-hydroxy-terminated silicone oil (OH-TSO)-coated stir bar prepared by the sol-gel technique, and a C(18) silica (C(18))-polydimethylsiloxane (PDMS)-coated stir bar prepared by adhesion. In dual-phase dual SBSE, the two stir bars with different coatings were placed in the same sample solution for the simultaneous extraction of the target analytes with different polarities, and the bars were then desorbed in the same desorption solvent. The extraction performance of the dual-phase dual SBSE for the six preservatives was evaluated by comparison with conventional SBSE (an individual stir bar) with different coatings, including commercial PDMS, homemade PDMS, C(18)-APTES-OH-TSO, APTES-OH-TSO, and C(18)-PDMS. The experimental results showed that the dual-phase dual SBSE had the highest extraction efficiency for the six target preservatives. On this basis, a novel method combining the dual-phase dual SBSE, consisting of the APTES-OH-TSO-coated and C(18)-PDMS-coated stir bars, with high-performance liquid chromatography-ultraviolet detection (HPLC-UV) was developed for the simultaneous analysis of the six target preservatives in beverages. Under optimal conditions, the limits of detection (LODs) for the six target preservatives ranged from 0.6 to 2.7 μgL(-1) with relative standard deviations (RSDs) of 4.6-9.2% (C(BA,SA)=5 μgL(-1), C(MP)=20 μgL(-1), C(EP,PP,BP)=10 μgL(-1), n=7). The enrichment factors (EFs) were approximately 16-42-fold (theoretical EF was 50-fold).
The proposed method was validated by the analysis of six target preservatives in three kinds of beverage samples, and the recoveries for the spiked samples were in the range of 76.6-118.6% for cola, 74.6-17.5% for orange juice, and 83.0-119.1% for herbal tea, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    PubMed

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors, but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors was made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common in less experienced nurses.
The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.
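    The two statistics named in this abstract can be sketched on a hypothetical 2 × 2 table (the real 585-case counts are not reproduced in the abstract); both fit in a few lines of plain Python:

```python
from math import comb

# Hypothetical 2x2 table [[a, b], [c, d]]: error type present/absent vs.
# contributory factor present/absent. The counts below are made up.

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value: sum the hypergeometric probabilities
    of all tables with the same margins that are no more likely than the
    observed table."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    prob = lambda x: comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

def cohens_kappa(a, b, c, d):
    """Cohen's kappa: chance-corrected agreement for the same 2x2 table."""
    n = a + b + c + d
    p_obs = (a + d) / n                                   # observed agreement
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

p = fisher_exact_two_sided(30, 10, 10, 50)   # hypothetical counts
kappa = cohens_kappa(30, 10, 10, 50)
```

    Fisher's exact test asks whether rows and columns are associated; kappa measures the chance-corrected agreement between the two classifications, so its sign gives the direction of the association.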

  13. Validation of simplified centre of mass models during gait in individuals with chronic stroke.

    PubMed

    Huntley, Andrew H; Schinkel-Ivy, Alison; Aqui, Anthony; Mansfield, Avril

    2017-10-01

    Using a multiple-segment (full-body) kinematic model in clinical gait assessment is often impractical because of obstacles such as time and cost constraints. While simplified gait models have been explored in healthy individuals, no such work to date has been conducted in a stroke population. The aim of this study was to quantify the errors of simplified kinematic models for chronic stroke gait assessment. Sixteen individuals with chronic stroke (>6 months), outfitted with full-body kinematic markers, performed a series of gait trials. Three centre of mass models were computed: (i) a 13-segment whole-body model, (ii) a 3-segment head-trunk-pelvis model, and (iii) a 1-segment pelvis model. Root mean squared error differences were compared between models, along with correlations to measures of stroke severity. Error differences revealed that, while both simplified models were similar in the mediolateral direction, the head-trunk-pelvis model had less error in the anteroposterior direction and the pelvis model had less error in the vertical direction. There was some evidence that mediolateral error in the head-trunk-pelvis model is influenced by stroke severity, as a few significant correlations were observed between the head-trunk-pelvis model and measures of stroke severity. These findings demonstrate the utility and robustness of the pelvis model for clinical gait assessment in individuals with chronic stroke. Low error in the mediolateral and vertical directions is especially important when considering potential stability analyses during gait for this population, as lateral stability has been previously linked to fall risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
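    The study's error metric is a root-mean-squared difference between each simplified centre-of-mass trajectory and the full-body reference. A minimal sketch on synthetic signals (the numbers below are placeholders, not the study's data):

```python
import numpy as np

# Synthetic stand-ins: a full-body reference CoM trace and a simplified
# (pelvis-only) estimate that deviates from it by small random error.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)                        # 10 s sampled at 100 Hz
com_full = 0.02 * np.sin(2.0 * np.pi * t)               # reference ML sway (m)
com_pelvis = com_full + rng.normal(0.0, 0.002, t.size)  # simplified model

# Root-mean-squared error between model and reference
rmse = np.sqrt(np.mean((com_pelvis - com_full) ** 2))
```

    Computing this per anatomical direction (mediolateral, anteroposterior, vertical) gives exactly the per-direction comparison reported above.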

  14. SU-E-J-192: Comparative Effect of Different Respiratory Motion Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakajima, Y; Kadoya, N; Ito, K

    Purpose: Irregular breathing can influence the outcome of four-dimensional computed tomography imaging by causing artifacts. Audio-visual biofeedback systems with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), a simpler visual coaching technique without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing the two respiratory management systems. Methods: We collected data from eleven healthy volunteers. Bar and wave models were used as audio-visual biofeedback systems. Abches consisted of a respiratory indicator marking the end of each expiration and inspiration motion. Respiratory variations were quantified as the root mean squared error (RMSE) of the displacement and period of breathing cycles. Results: All coaching techniques improved respiratory variation compared to free breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86, and 0.98 ± 0.47 mm for free breathing, Abches, bar model, and wave model, respectively. Free breathing and wave model differed significantly (p < 0.05). Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18, and 0.17 ± 0.05 s for free breathing, Abches, bar model, and wave model, respectively. Free breathing and all coaching techniques differed significantly (p < 0.05). For variation in both displacement and period, the wave model was superior to free breathing, the bar model, and Abches. The average reductions in displacement and period RMSE with the wave model were 27% and 47%, respectively. Conclusion: We evaluated the efficacy of audio-visual biofeedback in reducing respiratory irregularity compared with Abches. Our results showed that audio-visual biofeedback combined with a wave model can potentially provide clinical benefits in respiratory management, although all techniques reduced respiratory irregularities.

  15. The linkage between fluvial meander-belt morphodynamics and the depositional record improves paleoenvironmental interpretations, Western Interior Basin, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Durkin, P.; Hubbard, S. M.

    2016-12-01

    Enhanced stratigraphic interpretations are possible when linkages between morphodynamic processes and the depositional record are resolved. Recent studies of modern and ancient meander-belt deposits have emphasized morphodynamic processes that are commonly understated in the analysis of stratigraphic products, such as intra-point-bar erosion and rotation, counter-point-bar (concave bank-bench) development, and meander-bend abandonment. On a larger scale, longitudinal changes in meander-belt morphology and processes, such as changes in meander-bend migration rate, channel-belt width/depth ratio, and sinuosity, have been observed as rivers flow through the tidal backwater zone. However, few studies have attempted to recognize the impact of the backwater zone in the stratigraphic record. We consider ancient meander-belt deposits of the Cretaceous McMurray Formation and document linkages between morphodynamic processes and their stratigraphic products to resolve more detailed paleoenvironmental interpretations. The ancient meander belt was characterized by paleochannels that were 600 m wide and up to 50 m deep, resolved in a particularly high-quality subsurface dataset consisting of 600 km² of 3-D seismic data and over 1000 wellbores. A 3-D geocellular model and reconstructed paleochannel migration patterns reveal the evolutionary history of seventeen individual meander-belt elements, including point bars, counter point bars, and their associated abandoned channel fills. At the meander-bend scale, intra-point-bar erosion surfaces bound accretion packages characterized by unique accretion directions, internal stratigraphic architecture, and lithologic properties. Erosion surfaces and punctuated bar rotation are linked to upstream changes in channel planform geometry (meander cut-offs). We provide evidence for downstream translation and development of counter-point bars that formed in response to valley-edge and intra-meander-belt confinement. 
At the meander-belt scale, analysis of changes in morphology over time reveals a decrease in channel-belt width/thickness ratio and sinuosity, which we attribute to the landward migration of the paleo-backwater limit due to the oncoming and overlying transgression of the Cretaceous Boreal Sea into the Western Interior Basin.

  16. Descriptive and hedonic analyses of low-Phe food formulations containing corn (Zea mays) seedling roots: toward development of a dietary supplement for individuals with phenylketonuria.

    PubMed

    Cliff, Margaret A; Law, Jessica R; Lücker, Joost; Scaman, Christine H; Kermode, Allison R

    2016-01-15

    Seedling roots of anthocyanin-rich corn (Zea mays) cultivars contain high levels of phenylalanine ammonia lyase (PAL) activity. The development of a natural dietary supplement containing corn roots could provide the means to improve the restrictive diet of phenylketonuria (PKU) patients by increasing their tolerance to dietary phenylalanine (Phe). This research was therefore undertaken to explore the sensory characteristics of roots of four corn cultivars and to develop and evaluate food products (cereal bar, beverage, jam-like spread) to which roots had been added. Sensory profiles of corn roots were investigated using ten trained judges. Roots of Japanese Striped corn seedlings were more bitter, pungent and astringent than those of white and yellow cultivars, while roots from the Blue Jade cultivar had a more pronounced earthy/mushroom aroma. Consumer research using 24 untrained panelists provided hedonic (degree-of-liking) assessments for products with and without roots (controls). The root-containing products had lower mean scores than the controls; however, unlike the other root-containing products, the cereal bar scored above 5 on the nine-point scale for all hedonic assessments. By evaluating low-Phe food products containing corn roots, this research ascertained that the root-containing low-Phe cereal bar was an acceptable 'natural' dietary supplement for PKU-affected individuals. © 2015 Her Majesty the Queen in Right of Canada. Journal of the Science of Food and Agriculture © 2015 Society of Chemical Industry.

  17. El Camino Hospital: using health information technology to promote patient safety.

    PubMed

    Bukunt, Susan; Hunter, Christine; Perkins, Sharon; Russell, Diana; Domanico, Lee

    2005-10-01

    El Camino Hospital is a leader in the use of health information technology to promote patient safety, including bar coding, computerized order entry, electronic medical records, and wireless communications. Each year, El Camino Hospital's board of directors sets performance expectations for the chief executive officer, which are tied to achievement of local, regional, and national safety and quality standards, including the six Institute of Medicine quality dimensions. He then determines a set of explicit quality goals and measurable actions, which serve as guidelines for the overall hospital. The goals and progress reports are widely shared with employees, medical staff, patients and families, and the public. For safety, for example, the medication error reduction team tracks and reviews medication error rates. The hospital has virtually eliminated transcription errors through its 100% use of computerized physician order entry. Clinical pathways and standard order sets have reduced practice variation, providing a safer environment. Many projects focused on timeliness, such as emergency department wait time, lab turnaround time, and time to initial antibiotic for pneumonia. Results have been mixed, with projects most successful when a link was established with patient outcomes, such as in reducing time to percutaneous transluminal coronary angioplasty for patients with acute myocardial infarction.

  18. The owl: spotted, listed, barred, or gone?

    Treesearch

    Sally Duncan

    1998-01-01

    The information we bring to the table is usually complex. The April issue of Science Findings illustrates this complexity. Scientific inquiry about an individual species and its habitat requires modeling, assumptions, and time. Uncertainty remains after studies are done. Once policy is made, implementation continues to build new understanding, which may present...

  19. Enhanced Access to Early Visual Processing of Perceptual Simultaneity in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Falter, Christine M.; Braeutigam, Sven; Nathan, Roger; Carrington, Sarah; Bailey, Anthony J.

    2013-01-01

    We compared judgements of the simultaneity or asynchrony of visual stimuli in individuals with autism spectrum disorders (ASD) and typically-developing controls using Magnetoencephalography (MEG). Two vertical bars were presented simultaneously or non-simultaneously with two different stimulus onset delays. Participants with ASD distinguished…

  20. Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions

    NASA Astrophysics Data System (ADS)

    Rasshofer, R. H.; Gresser, K.

    2005-05-01

    Automotive radar and lidar sensors represent key components for next generation driver assistance functions (Jones, 2001). Today, their use is limited to comfort applications in premium segment vehicles, although an evolution towards more safety-oriented functions is taking place. Radar sensors available on the market today suffer from low angular resolution and poor target detection at medium ranges (30 to 60 m) over azimuth angles larger than ±30°. In contrast, lidar sensors show large sensitivity to environmental influences (e.g. snow, fog, dirt). Both sensor technologies today have a rather high cost level, forbidding their widespread usage in mass markets. A common approach to overcoming individual sensor drawbacks is the employment of data fusion techniques (Bar-Shalom, 2001). Raw data fusion requires a common, standardized data interface to easily integrate a variety of asynchronous sensor data into a fusion network. Moreover, next generation sensors should be able to dynamically adapt to new situations and should have the ability to work in cooperative sensor environments. As vehicular function development today is being shifted more and more towards virtual prototyping, mathematical sensor models should be available. These models should take into account the sensor's functional principle as well as all typical measurement errors generated by the sensor.

  1. Characterization of galactic bars from 3.6 μm S4G imaging

    NASA Astrophysics Data System (ADS)

    Díaz-García, S.; Salo, H.; Laurikainen, E.; Herrera-Endoqui, M.

    2016-03-01

    Context. Stellar bars play an essential role in the secular evolution of disk galaxies because they are responsible for the redistribution of matter and angular momentum. Dynamical models predict that bars become stronger and longer in time, while their rotation speed slows down. Aims: We use the Spitzer Survey of Stellar Structure in Galaxies (S4G) 3.6 μm imaging to study the properties (length and strength) and fraction of bars at z = 0 over a wide range of galaxy masses (M∗ ≈ 10^8-10^11 M⊙) and Hubble types (-3 ≤ T ≤ 10). Methods: We calculated gravitational forces from the 3.6 μm images for galaxies with a disk inclination lower than 65°. We used the maximum of the tangential-to-radial force ratio in the bar region (Qb) as a measure of the bar-induced perturbation strength for a sample of ~600 barred galaxies. We also used the maximum of the normalized m = 2 Fourier density amplitude (A2max) to characterize the bar. Bar sizes were estimated (i) visually; (ii) from ellipse fitting; (iii) from the radii of the strongest torque; and (iv) from the radii of the largest m = 2 Fourier amplitude in the bar region. By combining our force calculations with the H I kinematics from the literature, we estimated the ratio of the halo-to-stellar mass (Mh/M∗) within the optical disk, and by further using the universal rotation curve models, we obtained a first-order model of the rotation curve decomposition of 1128 disk galaxies. Results: We probe possible sources of uncertainty in our Qb measurements: the assumed scale height and its radial variation, the influence of the spiral arm torques, the effect of non-stellar emission in the bar region, and the dilution of the bar forces by the dark matter halo (our models imply that only ~10% of the disks in our sample are maximal). 
We find that for early- and intermediate-type disks (-3 ≤ T < 5), the relatively modest influence of the dark matter halo leads to a systematic reduction of the mean Qb by about 10-15%, which is of the same order as the uncertainty associated with estimating the vertical scale height. The halo correction on Qb becomes important for later types, implying a reduction of ~20-25% for T = 7-10. Whether the halo correction is included or not, the mean Qb shows an increasing trend with T. However, the mean A2max decreases for lower mass late-type systems. These opposing trends are most likely related to the reduced force dilution by bulges when moving towards later type galaxies. Nevertheless, when treated separately, both the early- and late-type disk galaxies show a strong positive correlation between Qb and A2max. For spirals the mean ɛ ≈ 0.5 is nearly independent of T, but it drops among S0s (≈0.2). The Qb and ɛ show a relatively tight dependence, with only a slight difference between early and late disks. For spirals, all our bar strength indicators correlate with the bar length (scaled to isophotal size). Late-type bars are longer than previously found in the literature. The bar fraction shows a double-humped distribution in the Hubble sequence (~75% for Sab galaxies), with a local minimum at T = 4 (~40%), and it drops for M∗ ≲ 10^9.5-10 M⊙. If we use bar identification methods based on Fourier decomposition or ellipse fitting instead of the morphological classification, the bar fraction decreases by ~30-50% for late-type systems with T ≥ 5 and correlates with Mh/M∗. Our Mh/M∗ ratios agree well with studies based on weak lensing analysis, abundance matching, and halo occupation distribution methods, under the assumption that the halo inside the optical disk contributes roughly a constant fraction of the total halo mass (~4%). 
Conclusions: We find possible evidence for the growth of bars within a Hubble time; as (1) bars in early-type galaxies show larger density amplitudes and disk-relative sizes than their intermediate-type counterparts; and (2) long bars are typically strong. We also observe two clearly distinct types of bars, between early- and intermediate-type galaxies (T< 5) on one side, and the late-type systems on the other, based on the differences in the bar properties. Most likely this distinction is connected to the higher halo-to-stellar ratio that we observe in later types, which affects the disk stability properties. Full Tables A.1-A.3, the tabulated radial force profiles, and the rotation curve decomposition model of each individual galaxy are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A160
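    The normalized m = 2 Fourier density amplitude used in this record can be sketched for a single radial annulus. The azimuthal profile below is synthetic (an axisymmetric term plus a bar-like m = 2 perturbation), and the normalization |c₂|/c₀ is one common convention:

```python
import numpy as np

# Azimuthal surface-density profile in one annulus: axisymmetric disc plus a
# bar-like m = 2 perturbation of relative amplitude 0.4 (synthetic numbers).
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
sigma = 1.0 + 0.4 * np.cos(2.0 * theta)

# Normalized m = 2 Fourier amplitude: |c_2| / c_0, where c_m is the m-th
# azimuthal Fourier coefficient of the density in this annulus.
a2 = np.abs(np.sum(sigma * np.exp(-2j * theta))) / np.sum(sigma)
# Under this normalization a cosine perturbation of amplitude 0.4 gives
# a2 = 0.2, i.e. half the cosine amplitude.
```

    A2max is then the maximum of this quantity over the annuli covering the bar region.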

  2. Contribution to volatile organic compound exposures from time spent in stores and restaurants and bars.

    PubMed

    Loh, Miranda M; Houseman, E Andres; Levy, Jonathan I; Spengler, John D; Bennett, Deborah H

    2009-11-01

    Many people spend time in stores and restaurants, yet there has been little investigation of the influence of these microenvironments on personal exposure. Relative to the outdoors, transportation, and the home, these microenvironments have high concentrations of several volatile organic compounds (VOCs). We developed a stochastic model to examine the effect of VOC concentrations in these microenvironments on total personal exposure for (1) non-smoking adults working in offices who spend time in stores and restaurants or bars and (2) non-smoking adults who work in these establishments. We also compared the effect of working in a smoking versus non-smoking restaurant or bar. Input concentrations for each microenvironment were developed from the literature, whereas time-activity inputs were taken from the National Human Activity Patterns Survey. Time-averaged exposures were simulated for 5000 individuals over a weeklong period for each analysis. Mean contributions to personal exposure from non-working time spent in stores and restaurants or bars range from <5% to 20%, depending on the VOC and time-activity patterns. At the 95th percentile of the distribution of the proportion of personal exposure attributable to time spent in stores and restaurants or bars, these microenvironments can be responsible for over half of a person's total exposure to certain VOCs. People working in restaurants or bars where smoking is allowed had the highest fraction of exposure attributable to their workplace. At the median, people who worked in stores or restaurants tended to have 20-60% of their total exposures from time spent at work. These results indicate that stores and restaurants can be large contributors to personal exposure to VOCs, both for workers in those establishments and for a subset of people who visit these places, and that incorporating these non-residential microenvironments can improve models of personal exposure distributions.
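    The stochastic model described here can be sketched as a Monte Carlo over people: draw microenvironment concentrations and a weekly time budget for each simulated person, then form the time-weighted exposure. All distributions and parameters below are illustrative placeholders, not the study's literature-derived inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people = 5000   # matches the simulated population size in the abstract

# Placeholder lognormal VOC concentrations (ug/m^3) per microenvironment
conc = {
    "home": rng.lognormal(mean=1.0, sigma=0.5, size=n_people),
    "store": rng.lognormal(mean=2.0, sigma=0.7, size=n_people),
    "bar": rng.lognormal(mean=2.5, sigma=0.7, size=n_people),
    "out": rng.lognormal(mean=0.5, sigma=0.5, size=n_people),
}
# Fraction of the week spent in each microenvironment (rows sum to 1);
# the Dirichlet weights below are made-up time-activity assumptions.
hours = rng.dirichlet([60.0, 2.0, 3.0, 5.0], size=n_people)

# Time-weighted average exposure, and the share attributable to bars
exposure = sum(hours[:, i] * conc[k] for i, k in enumerate(conc))
frac_from_bar = hours[:, 2] * conc["bar"] / exposure
```

    Percentiles of `frac_from_bar` across the simulated population give exactly the kind of attributable-fraction statistics quoted in the abstract.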

  3. Testing the limits of Paleozoic chronostratigraphic correlation via high-resolution (δ13Ccarb) biochemostratigraphy across the Llandovery–Wenlock (Silurian) boundary: Is a unified Phanerozoic time scale achievable?

    USGS Publications Warehouse

    Cramer, Bradley D.; Loydell, David K.; Samtleben, Christian; Munnecke, Axel; Kaljo, Dimitri; Mannik, Peep; Martma, Tonu; Jeppsson, Lennart; Kleffner, Mark A.; Barrick, James E.; Johnson, Craig A.; Emsbo, Poul; Joachimski, Michael M.; Bickert, Torsten; Saltzman, Matthew R.

    2010-01-01

    The resolution and fidelity of global chronostratigraphic correlation are direct functions of the time period under consideration. By virtue of deep-ocean cores and astrochronology, the Cenozoic and Mesozoic time scales carry error bars of a few thousand years (k.y.) to a few hundred k.y. In contrast, most of the Paleozoic time scale carries error bars of plus or minus a few million years (m.y.), and chronostratigraphic control better than ±1 m.y. is considered "high resolution." The general lack of Paleozoic abyssal sediments and paucity of orbitally tuned Paleozoic data series, combined with the relative incompleteness of the Paleozoic stratigraphic record, have proven historically to be such an obstacle to intercontinental chronostratigraphic correlation that resolving the Paleozoic time scale to the level achieved during the Mesozoic and Cenozoic was viewed as impractical, impossible, or both. Here, we utilize integrated graptolite, conodont, and carbonate carbon isotope (δ13Ccarb) data from three paleocontinents (Baltica, Avalonia, and Laurentia) to demonstrate chronostratigraphic control for upper Llandovery through middle Wenlock (Telychian-Sheinwoodian, ~436-426 Ma) strata with a resolution of a few hundred k.y. The interval surrounding the base of the Wenlock Series can now be correlated globally with precision approaching 100 k.y., but some intervals (e.g., uppermost Telychian and upper Sheinwoodian) are either yet to be studied in sufficient detail or do not show sufficient biologic speciation and/or extinction or carbon isotopic features to delineate such small time slices. Although producing such resolution during the Paleozoic presents an array of challenges unique to the era, we have begun to demonstrate that erecting a Paleozoic time scale comparable to that of younger eras is achievable. © 2010 Geological Society of America.

  4. Astrostatistics in X-ray Astronomy: Systematics and Calibration

    NASA Astrophysics Data System (ADS)

    Siemiginowska, Aneta; Kashyap, Vinay; CHASC

    2014-01-01

    Astrostatistics has been emerging as a new field in X-ray and gamma-ray astronomy, driven by the analysis challenges arising from data collected by high performance missions since the beginning of this century. The development and implementation of new analysis methods and techniques requires a close collaboration between astronomers and statisticians, and requires support from a reliable and continuous funding source. The NASA AISR program was one such source, and played a crucial part in our work. Our group (CHASC; http://heawww.harvard.edu/AstroStat/), composed of a mixture of high energy astrophysicists and statisticians, was formed ~15 years ago to address specific issues related to Chandra X-ray Observatory data (Siemiginowska et al. 1997) and was initially fully supported by Chandra. We have developed several statistical methods that have laid the foundation for extensive application of Bayesian methodologies to Poisson data in high-energy astrophysics. I will describe one such project, on dealing with systematic uncertainties (Lee et al. 2011, ApJ), and present the implementation of the method in Sherpa, the CIAO modeling and fitting application. This algorithm propagates systematic uncertainties in instrumental responses (e.g., ARFs) through the Sherpa spectral modeling chain to obtain realistic error bars on model parameters when the data quality is high. Recent developments include the ability to narrow the space of allowed calibration products and obtain better parameter estimates as well as tighter error bars. Acknowledgements: This research is funded in part by NASA contract NAS8-03060. References: Lee, H., Kashyap, V.L., van Dyk, D.A., et al. 2011, ApJ, 731, 126; Siemiginowska, A., Elvis, M., Connors, A., et al. 1997, Statistical Challenges in Modern Astronomy II, 241
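    The idea of propagating calibration uncertainty can be illustrated with a toy version: refit a source normalization under many plausible effective-area realizations and take the spread as the systematic error bar. Everything below is schematic; the actual method (Lee et al. 2011) samples principal components of real ARF ensembles rather than a single Gaussian jitter:

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 100.0        # schematic source intensity (counts per unit area-time)
nominal_area = 1.0       # nominal effective area (relative units, made up)
counts = rng.poisson(true_rate * nominal_area)   # one observed spectrum bin

# Refit the normalization under 1000 plausible calibration realizations
norms = []
for _ in range(1000):
    area = nominal_area * (1 + rng.normal(0, 0.05))  # assumed 5% ARF jitter
    norms.append(counts / area)                      # best-fit normalization

sys_err = np.std(norms)            # systematic contribution to the error bar
stat_err = np.sqrt(counts)         # Poisson statistical error
total_err = np.hypot(sys_err, stat_err)   # combined in quadrature
```

    With high-quality data the statistical term shrinks and the calibration term dominates, which is exactly the regime the abstract describes.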

  5. Quantitative comparison of electron temperature fluctuations to nonlinear gyrokinetic simulations in C-Mod Ohmic L-mode discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sung, C., E-mail: csung@physics.ucla.edu; White, A. E.; Greenwald, M.

    2016-04-15

    Long wavelength turbulent electron temperature fluctuations (k_y ρ_s < 0.3) are measured in the outer core region (r/a > 0.8) of Ohmic L-mode plasmas at Alcator C-Mod [E. S. Marmar et al., Nucl. Fusion 49, 104014 (2009)] with a correlation electron cyclotron emission diagnostic. The relative amplitude and frequency spectrum of the fluctuations are compared quantitatively with nonlinear gyrokinetic simulations using the GYRO code [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] in two different confinement regimes: the linear Ohmic confinement (LOC) regime and the saturated Ohmic confinement (SOC) regime. When comparing experiment with nonlinear simulations, it is found that local, electrostatic ion-scale simulations (k_y ρ_s ≲ 1.7) performed at r/a ∼ 0.85 reproduce the experimental ion heat flux levels, electron temperature fluctuation levels, and frequency spectra within experimental error bars. In contrast, the electron heat flux is robustly under-predicted and cannot be recovered by using scans of the simulation inputs within error bars or by using global simulations. If both the ion heat flux and the measured temperature fluctuations are attributed predominantly to long-wavelength turbulence, then the under-prediction of electron heat flux strongly suggests that electron-scale turbulence is important for transport in C-Mod Ohmic L-mode discharges. In addition, no evidence is found from linear or nonlinear simulations for a clear transition from trapped electron mode to ion temperature gradient turbulence across the LOC/SOC transition, and there is no evidence in these Ohmic L-mode plasmas of the "Transport Shortfall" [C. Holland et al., Phys. Plasmas 16, 052301 (2009)].
  6. Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horoshko, D B

    2007-12-31

    The quantum key distribution protocol BB84, combined with the repetition protocol for error correction, is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth seminar in memory of D.N. Klyshko)

  7. The statistical properties and possible causes of polar motion prediction errors

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria

    2015-08-01

    The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts, by examining the time series of differences between the predictions and the future IERS pole coordinates data. The mean absolute errors, standard deviations, skewness, and kurtosis of these differences were computed, together with their error bars, as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of the prediction differences becomes more platykurtic than leptokurtic. Non-zero skewness values result from the oscillating character of these differences at particular prediction lengths, which may be due to irregular changes of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase, computed by combining the Fourier transform band-pass filter with the Hilbert transform from pole coordinates data as well as from pole coordinates model data obtained from fluid excitations, are in good agreement.
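    The moment statistics used here are straightforward to compute per prediction length. A sketch with synthetic error series (not EOPCPPP data), using the convention that excess kurtosis is 0 for a Gaussian, positive for leptokurtic, and negative for platykurtic distributions:

```python
import numpy as np

def skewness(x):
    """Sample skewness (third standardized moment)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return np.mean((x - m) ** 3) / np.std(x) ** 3

def excess_kurtosis(x):
    """Fourth standardized moment minus 3: 0 for a Gaussian."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return np.mean((x - m) ** 4) / np.std(x) ** 4 - 3.0

# Synthetic prediction-difference series for one prediction length
rng = np.random.default_rng(7)
gaussian_errors = rng.normal(0, 1, 100_000)
heavy_tailed = rng.laplace(0, 1, 100_000)   # leptokurtic: excess kurtosis ~ 3
```

    Applying these two functions to the difference series at each prediction length, and watching the excess kurtosis fall toward and below zero, reproduces the platykurtic trend described in the abstract.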

  8. Improvement of the Error-detection Mechanism in Adults with Dyslexia Following Reading Acceleration Training.

    PubMed

    Horowitz-Kraus, Tzipi

    2016-05-01

    The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Prospects of discovering stable double-heavy tetraquarks at a Tera-Z factory

    NASA Astrophysics Data System (ADS)

    Ali, Ahmed; Parkhomenko, Alexander Ya.; Qin, Qin; Wang, Wei

    2018-07-01

    Motivated by a number of theoretical considerations predicting the deeply bound double-heavy tetraquarks $T_{bb}^{[\bar u\bar d]}$, $T_{bb}^{[\bar u\bar s]}$ and $T_{bb}^{[\bar d\bar s]}$, we explore the potential of their discovery at Tera-Z factories. Using the process $Z \to b\bar b b\bar b$, we calculate, employing the Monte Carlo generators MadGraph5_aMC@NLO and Pythia6, the phase space configuration in which the $bb$ pair is likely to fragment as a diquark. In a jet-cone, defined by an invariant mass interval $m_{bb} < M_{T_{bb}^{[\bar q\bar q']}} + \Delta M$, the sought-after tetraquarks $T_{bb}^{[\bar q\bar q']}$, as well as the double-bottom baryons $\Xi_{bb}^{0,-}$ and $\Omega_{bb}^{-}$, can be produced. Using the heavy quark-diquark symmetry, we estimate $\mathcal{B}(Z \to T_{bb}^{[\bar u\bar d]} + \bar b\bar b) = (1.2^{+1.0}_{-0.3}) \times 10^{-6}$, and about half of this for the $T_{bb}^{[\bar u\bar s]}$ and $T_{bb}^{[\bar d\bar s]}$. We also present an estimate of their lifetimes using the heavy quark expansion, yielding $\tau(T_{bb}^{[\bar q\bar q']}) \simeq 800$ fs. Measuring the tetraquark masses would require decays such as $T_{bb}^{[\bar u\bar d]-} \to B^- D^- \pi^+$, $T_{bb}^{[\bar u\bar d]-} \to J/\psi \bar K^0 B^-$, $T_{bb}^{[\bar u\bar d]-} \to J/\psi K^- \bar B^0$, $T_{bb}^{[\bar u\bar s]-} \to \Xi_{bc}^{0} \Sigma^{-}$, and $T_{bb}^{[\bar d\bar s]0} \to \Xi_{bc}^{0} \bar\Sigma^{0}$, with subsequent decay chains in exclusive non-leptonic final states. We estimate a couple of the decay widths and find that the product branching ratios do not exceed $10^{-5}$. Hence, a good fraction of these modes will be required for a discovery of $T_{bb}^{[\bar q\bar q']}$ at a Tera-Z factory.

  10. Refractive error characteristics of early and advanced presbyopic individuals.

    DOT National Transportation Integrated Search

    1977-07-01

    The frequency and distribution of ocular refractive errors among middle-aged and older people were obtained from a nonclinical population holding a variety of blue-collar, clerical, and technical jobs. The 422 individuals ranged in age from 35 to 69 ...

  11. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    NASA Astrophysics Data System (ADS)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error, based on replicate plots, is commonly reported, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate how the importance of individual variation declines as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best-fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean, however, will lead to exaggerated confidence in estimates of forest biomass and carbon and nutrient contents.
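The scaling argument above can be sketched numerically. The standard deviation and standard error below are hypothetical values chosen for illustration, not the study's data:

```python
import math

# Hypothetical values for illustration (not the study's data): the
# residual SD of one tree's predicted foliar mass, and the SE of the
# regression's mean prediction.
sd_individual = 2.0  # kg per tree
se_mean = 0.5        # kg

def plot_uncertainty(n_trees, sd_ind, se_mu):
    """SE of a plot total: individual prediction errors are independent
    and add in quadrature, while the error in the regression mean is
    shared by every tree and scales linearly with n."""
    individual_part = math.sqrt(n_trees) * sd_ind
    mean_part = n_trees * se_mu
    return math.sqrt(individual_part ** 2 + mean_part ** 2)

for n in (1, 5, 30, 100):
    total = plot_uncertainty(n, sd_individual, se_mean)
    share = (math.sqrt(n) * sd_individual) ** 2 / total ** 2
    print(f"n={n:4d}  total SE={total:6.2f} kg  individual share={share:.0%}")
```

With these (made-up) inputs, the individual-variation share of the total variance falls below half by about 30 trees, mirroring the abstract's conclusion that the uncertainty in the mean dominates for larger plots.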

  12. Quantagenetics® analysis of laser-induced breakdown spectroscopic data: Rapid and accurate authentication of materials

    NASA Astrophysics Data System (ADS)

    McManus, Catherine E.; Dowe, James; McMillan, Nancy J.

    2018-07-01

    Many industrial and commercial problems involve authenticating the manufacturer or geographic source of a material, controlling material quality, and determining whether specific treatments have been properly applied or whether a material is authentic or fraudulent. Often, multiple analytical techniques and tests are used, resulting in expensive and time-consuming testing procedures. Laser-Induced Breakdown Spectroscopy (LIBS) is a rapid laser-ablation spectroscopic analytical method. Each LIBS spectrum contains information about the concentration of every element, some isotopic ratios, and the molecular structure of the material, making it a unique and comprehensive signature of the material. Quantagenetics® is a multivariate statistical method, based on Bayesian statistics, that uses the Euclidean distance between LIBS spectra to classify materials (US Patents 9,063,085 and 8,699,022). The fundamental idea behind Quantagenetics® is that LIBS spectra contain sufficient information to determine the origin and history of materials. This study presents two case studies that illustrate the method. LIBS spectra from 510 Colombian emeralds from 18 mines were classified by mine. Overall, 99.4% of the spectra were correctly classified; the success rate for individual mines ranged from 98.2% to 100%. Some of the mines are separated by as little as 200 m, indicating that the method uses slight but consistent differences in composition to identify the mine of origin accurately. The second study used bars of 17-4 stainless steel from three manufacturers. Each of the three bars was cut into 90 coupons; 30 coupons from each bar received no further treatment, another 30 from each bar received one tempering and hardening treatment, and the final 30 from each bar received a different heat treatment. 
Using LIBS spectra taken from the coupons, the Quantagenetics® method classified the 270 coupons both by manufacturer (composition) and heat treatment (structure) with an overall success rate of 95.3%. Individual success rates range from 92.4% to 97.6%. These case studies were successful despite having no preconceived knowledge of the materials; artificial intelligence allows the materials to classify themselves without human intervention or bias. Multivariate analysis of LIBS spectra using the Quantagenetics® method has promise to improve quality control and authentication of a wide variety of materials in industrial enterprises.
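The core idea named in the abstract, classifying a spectrum by its Euclidean distance to reference spectra, can be sketched as follows. This is a simplified stand-in for the patented procedure (whose details go beyond the abstract), using made-up three-channel "spectra" rather than real LIBS data:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length spectra."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(spectrum, reference_sets):
    """Assign a spectrum to the class whose reference spectra lie
    nearest on average; a simplified stand-in for the patented
    procedure, whose details go beyond the abstract."""
    def mean_dist(label):
        refs = reference_sets[label]
        return sum(euclidean(spectrum, r) for r in refs) / len(refs)
    return min(reference_sets, key=mean_dist)

# Toy three-channel "spectra" for two hypothetical emerald mines:
refs = {
    "mine_A": [[1.00, 0.20, 0.10], [1.10, 0.25, 0.12]],
    "mine_B": [[0.30, 0.90, 0.40], [0.28, 0.95, 0.42]],
}
print(classify([1.05, 0.22, 0.11], refs))  # → mine_A
```

Real LIBS spectra have thousands of channels, but the distance-based assignment works the same way.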

  13. Anticipating cognitive effort: roles of perceived error-likelihood and time demands.

    PubMed

    Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F

    2017-11-13

    Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate a small amount of hard work (i.e., low time requirement but high error-likelihood) or a large amount of easy work (i.e., high time requirement but low error-likelihood) as being more effortful. In between-subjects designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall pattern of judgments similar to that of Experiments 1 through 3; however, both judgments of error-likelihood and judgments of time demand similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with consideration of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.

  14. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added 5% distal Gaussian noise to each beamlet in order to introduce the discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. When s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup-error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
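The fitting step the abstract describes, optimizing a shift s, rotation θ, and calibration offset u by minimizing a Euclidean distance between range sets, can be sketched as a toy brute-force search. The range map, search grids, and beamlet layout below are invented for illustration and are not the authors' optimizer:

```python
import math

def transformed_range(range_map, x, y, s, theta, u):
    """Range predicted at beamlet (x, y) after shifting by s, rotating
    by theta, and offsetting the calibration by u."""
    c, t = math.cos(theta), math.sin(theta)
    xr = c * x - t * y + s[0]
    yr = t * x + c * y + s[1]
    return range_map(xr, yr) + u

def fit(default_ranges, beamlets, range_map):
    """Brute-force search over (s, theta, u) minimizing the Euclidean
    distance to the default ranges -- a toy stand-in for the paper's
    optimizer, with hypothetical search grids."""
    best, best_params = float("inf"), None
    for sx in (-2, -1, 0, 1, 2):
        for sy in (-2, -1, 0, 1, 2):
            for theta in (-0.02, 0.0, 0.02):
                for u in (-1.0, 0.0, 1.0):
                    d = math.sqrt(sum(
                        (transformed_range(range_map, x, y, (sx, sy), theta, u) - r) ** 2
                        for (x, y), r in zip(beamlets, default_ranges)))
                    if d < best:
                        best, best_params = d, ((sx, sy), theta, u)
    return best_params

# Toy range map (nonlinear in x and y so the best fit is unique).
rm = lambda x, y: 100.0 + 2.0 * x + 0.5 * y + 0.3 * x * y
beamlets = [(i, j) for i in range(4) for j in range(4)]
defaults = [rm(x + 1, y) + 1.0 for x, y in beamlets]  # true s=(1,0), u=1.0
print(fit(defaults, beamlets, rm))  # → ((1, 0), 0.0, 1.0)
```

A gradient-based optimizer would replace the grid search in practice; the sketch only illustrates the objective being minimized.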

  15. [Current Situation Survey of the Measures to Prevent Medication Errors in the Operating Room: Report of the Japan Society of Anesthesiologists Safety Commission Working Group for Consideration of Recommendations for Color Coding of Prepared Syringe Labels for Prevention of Medication Errors].

    PubMed

    Shida, Kyoko; Suzuki, Toshiyasu; Sugahara, Kazuhiro; Sobue, Kazuya

    2016-05-01

    Medication errors are among the more frequent adverse events that occur in hospitals, and effective preventive measures are needed. According to the Japan Society of Anesthesiologists study "Drug incident investigation 2005-2007", "error of a syringe at the selection stage" was the most frequent type (44.2%). The status of current measures and best practices implemented in Japanese hospitals was the focus of a subsequent investigation. Representative specialists in anesthesiology-certified hospitals across the country were surveyed via an anonymous Web-based questionnaire open for 46 days. With respect to preventive measures implemented to mitigate the risk of medication errors in perioperative settings, responses included: incident and accident reporting (215 facilities, 70.3%), use of pre-filled syringes (180 facilities, 58.8%), special arrangement of dangerous drugs (154 facilities, 50.3%), use of products with mechanisms preventing improper connection (123 facilities, 40.2%), double-checking (116 facilities, 37.9%), use of color-barreled syringes (115 facilities, 37.6%), use of color labels or color tape (89 facilities, 29.1%), presentation of medications by placing the ampoule or syringe on a tray color-coded by drug class (54 facilities, 17.6%), discontinuance of handwritten labels (23 facilities, 7.5%), use of a bar-code drug verification system (20 facilities, 6.5%), no implemented measures (11 facilities, 3.6%), other measures not mentioned (10 facilities, 3.3%), and use of carts that count the agents by drug type and automatically record the selection and number picked (6 facilities, 2.0%). Methods of drug-name identification on syringes included: a perforated label torn from the ampoule or vial (245 facilities, 28.1%), handwriting directly on the syringe (208 facilities, 23.8%), use of the label that comes with the product (187 facilities, 21.4%), handwriting on plain tape (87 facilities, 10.0%), printed labels (62 facilities, 7.1%), printed color labels (44 facilities, 5.0%), handwriting on color tape (27 facilities, 3.1%), machinery that prints the drug name by scanning the bar code on the ampoule (10 facilities, 1.1%), others (3 facilities, 0.3%), and no description on the prepared drug (0 facilities, 0%). Awareness of international standard color codes, such as those of the International Organization for Standardization (ISO), was only 18.6%. This survey of anesthesiology-certified hospitals recognized by the Japan Society of Anesthesiologists indicated that various measures to prevent medication errors during perioperative procedures are in use. However, many facilities still use handwritten labels, a common cause of errors. The need for improved drug-name recognition on syringes was confirmed.

  16. The face you recognize may not be the one you saw: memory conjunction errors in individuals with or without learning disability.

    PubMed

    Danielsson, Henrik; Rönnberg, Jerker; Leven, Anna; Andersson, Jan; Andersson, Karin; Lyxell, Björn

    2006-06-01

    Memory conjunction errors, that is, when a combination of two previously presented stimuli is erroneously recognized as having been seen before, were investigated in a face recognition task with drawings and photographs in 23 individuals with learning disability and 18 chronologically age-matched controls without learning disability. Compared to the controls, individuals with learning disability committed significantly more conjunction errors and feature errors (one old and one new component), but had lower correct recognition, when the results were adjusted for different guessing levels. A dual-processing approach gained more support than a binding approach; however, neither approach could explain all of the results. The results of the learning disability group were only partly related to non-verbal intelligence.

  17. Complete Mitochondrial Genomes of New Zealand’s First Dogs

    PubMed Central

    Greig, Karen; Boocock, James; Prost, Stefan; Horsburgh, K. Ann; Jacomb, Chris; Walter, Richard; Matisoo-Smith, Elizabeth

    2015-01-01

    Dogs accompanied people in their migrations across the Pacific Ocean and ultimately reached New Zealand, which is the southern-most point of their oceanic distribution, around the beginning of the fourteenth century AD. Previous ancient DNA analyses of mitochondrial control region sequences indicated the New Zealand dog population included two lineages. We sequenced complete mitochondrial genomes of fourteen dogs from the colonisation era archaeological site of Wairau Bar and found five closely-related haplotypes. The limited number of mitochondrial lineages present at Wairau Bar suggests that the founding population may have comprised only a few dogs; or that the arriving dogs were closely related. For populations such as that at Wairau Bar, which stemmed from relatively recent migration events, control region sequences have insufficient power to address questions about population structure and founding events. Sequencing mitogenomes provided the opportunity to observe sufficient diversity to discriminate between individuals that would otherwise be assigned the same haplotype and to clarify their relationships with each other. Our results also support the proposition that at least one dispersal of dogs into the Pacific was via a south-western route through Indonesia. PMID:26444283

  18. Supercritical Fluid Extraction of Eucalyptus globulus Bark—A Promising Approach for Triterpenoid Production

    PubMed Central

    Domingues, Rui M. A.; Oliveira, Eduardo L. G.; Freire, Carmen S. R.; Couto, Ricardo M.; Simões, Pedro C.; Neto, Carlos P.; Silvestre, Armando J. D.; Silva, Carlos M.

    2012-01-01

    Eucalyptus bark contains significant amounts of triterpenoids with demonstrated bioactivity, namely triterpenic acids and their acetyl derivatives (ursolic, betulinic, oleanolic, betulonic, 3-acetylursolic, and 3-acetyloleanolic acids). In this work, the supercritical fluid extraction (SFE) of Eucalyptus globulus deciduous bark was carried out with pure and modified carbon dioxide to recover this fraction, and the results were compared with those obtained by Soxhlet extraction with dichloromethane. The effects of pressure (100-200 bar), co-solvent (ethanol) content (0, 5 and 8 wt%), and multistep operation were studied in order to evaluate the applicability of SFE for their selective and efficient production. The individual extraction curves of the main families of compounds were measured, and the extracts were analyzed by GC-MS. The results highlighted the influence of pressure and the important role played by the co-solvent. Ethanol can be used to advantage, since its effect is more important than increasing the pressure by several tens of bar. At 160 bar and 40 °C, the introduction of 8 wt% ethanol improves the yield of triterpenoids more than threefold. PMID:22837719

  19. Functional Analysis With a Barcoder Yeast Gene Overexpression System

    PubMed Central

    Douglas, Alison C.; Smith, Andrew M.; Sharifpoor, Sara; Yan, Zhun; Durbic, Tanja; Heisler, Lawrence E.; Lee, Anna Y.; Ryan, Owen; Göttert, Hendrikje; Surendra, Anu; van Dyk, Dewald; Giaever, Guri; Boone, Charles; Nislow, Corey; Andrews, Brenda J.

    2012-01-01

    Systematic analysis of gene overexpression phenotypes provides insight into gene function, enzyme targets, and biological pathways. Here, we describe a novel functional genomics platform that enables a highly parallel and systematic assessment of overexpression phenotypes in pooled cultures. First, we constructed a genome-level collection of ~5100 yeast barcoder strains, each of which carries a unique barcode, enabling pooled fitness assays with a barcode microarray or sequencing readout. Second, we constructed a yeast open reading frame (ORF) galactose-induced overexpression array by generating a genome-wide set of yeast transformants, each of which carries an individual plasmid-borne and sequence-verified ORF derived from the Saccharomyces cerevisiae full-length EXpression-ready (FLEX) collection. We combined these collections genetically using synthetic genetic array methodology, generating ~5100 strains, each of which is barcoded and overexpresses a specific ORF, a set we termed "barFLEX." Additional rounds of synthetic genetic array methodology allow the barFLEX collection to be moved into different genetic backgrounds. As a proof of principle, we describe the properties of the barFLEX overexpression collection and its application in synthetic dosage lethality studies under different environmental conditions. PMID:23050238

  20. Using lean "automation with a human touch" to improve medication safety: a step closer to the "perfect dose".

    PubMed

    Ching, Joan M; Williams, Barbara L; Idemoto, Lori M; Blackmore, C Craig

    2014-08-01

    Virginia Mason Medical Center (Seattle) employed the Lean concept of Jidoka (automation with a human touch) to plan for and deploy bar code medication administration (BCMA) to hospitalized patients. Integrating BCMA technology into the nursing work flow with minimal disruption was accomplished using three steps of Jidoka: (1) assigning work to humans and machines on the basis of their differing abilities, (2) adapting machines to the human work flow, and (3) monitoring the human-machine interaction. Effectiveness of BCMA in both reinforcing safe administration practices and reducing medication errors was measured using the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study methodology. Trained nurses observed a total of 16,149 medication doses for 3,617 patients over a three-year period. Following BCMA implementation, the number of safe-practice violations decreased from 54.8 violations/100 doses (January 2010-September 2011) to 29.0 violations/100 doses (October 2011-December 2012), an absolute risk reduction of 25.8 violations/100 doses (95% confidence interval [CI]: 23.7, 27.9, p < .001). The number of medication errors decreased from 5.9 errors/100 doses at baseline to 3.0 errors/100 doses after BCMA implementation (absolute risk reduction: 2.9 errors/100 doses [95% CI: 2.2, 3.6, p < .001]). The number of unsafe administration practices (estimate, -5.481; standard error 1.133; p < .001; 95% CI: -7.702, -3.260) also decreased. As more hospitals respond to health information technology meaningful-use incentives, thoughtful, methodical, and well-managed approaches to technology deployment are crucial. This work illustrates how Jidoka offers opportunities for a smooth transition to new technology.
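The absolute risk reduction and confidence interval reported above follow standard two-proportion arithmetic, sketched below. The per-period dose counts are hypothetical, since the abstract gives only the 16,149 total:

```python
import math

def arr_ci(p1, n1, p2, n2, z=1.96):
    """Absolute risk reduction p1 - p2 with a normal-approximation
    95% confidence interval (Wald interval for a difference of
    proportions)."""
    arr = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return arr, (arr - z * se, arr + z * se)

# 5.9 vs 3.0 errors per 100 doses; assume ~8000 observed doses per
# period (a hypothetical split of the 16,149 total observations).
arr, (lo, hi) = arr_ci(0.059, 8000, 0.030, 8000)
print(f"ARR = {100 * arr:.1f} per 100 doses (95% CI {100 * lo:.1f}, {100 * hi:.1f})")
```

With this assumed split the interval comes out close to the study's reported (2.2, 3.6), illustrating the calculation rather than reproducing it exactly.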

  1. Estimation of wave phase speed and nearshore bathymetry from video imagery

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.

    2000-01-01

    A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from linear wave theory's dispersion relationship. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H < 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
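The depth-inversion step, recovering h from the linear dispersion relation ω² = gk·tanh(kh) once the wavenumber k has been estimated from the video phase gradient, can be sketched as follows. The 10 s period and 4 m depth are invented example values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert the linear dispersion relation w^2 = g k tanh(k h) for
    water depth h, given angular frequency w and the cross-shore
    wavenumber k estimated from the video phase gradient."""
    ratio = omega ** 2 / (G * k)
    if not 0.0 < ratio < 1.0:
        raise ValueError("wave is effectively in deep water; depth unresolved")
    return math.atanh(ratio) / k

# Example: a 10 s swell in 4 m of water. First fabricate a consistent
# "measured" wavenumber by solving the dispersion relation forward
# with bisection, then invert it.
T, h_true = 10.0, 4.0
omega = 2 * math.pi / T
lo, hi = 1e-6, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if G * mid * math.tanh(mid * h_true) < omega ** 2:
        lo = mid
    else:
        hi = mid
k = 0.5 * (lo + hi)
print(round(depth_from_dispersion(omega, k), 2))  # → 4.0
```

The deep-water guard matters in practice: when kh is large, tanh(kh) saturates at 1 and the phase speed carries no depth information.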

  2. The austral peregrine falcon: Color variation, productivity, and pesticides

    USGS Publications Warehouse

    Ellis, D.H.

    1985-01-01

    The austral peregrine falcon (Falco peregrinus cassini) was studied in the Andean foothills and across the Patagonian steppe from November to December 1981. The birds under study (18 pairs) were reproducing at or near normal (pre-DDT) levels for other races. Pesticide residues, while elevated, were well below the values associated with reproductive failure in other populations. With one exception, eggshells were not abnormally thin. The peregrine falcon in Patagonia exhibits extreme color variation. Pallid birds are nearly pure white below (light cream as juveniles), whereas normally pigmented birds are black-crowned and conspicuously barred with black ventrally. Rare individuals of the Normal Phase display black heads, broad black ventral barring, and warm reddish-brown ventral background coloration.

  3. Explaining pragmatic performance in traumatic brain injury: a process perspective on communicative errors.

    PubMed

    Bosco, Francesca M; Angeleri, Romina; Sacco, Katiuscia; Bara, Bruno G

    2015-01-01

    The purpose of this study was to investigate the pragmatic abilities of individuals with traumatic brain injury (TBI). Several studies in the literature have previously reported communicative deficits in individuals with TBI; however, such research has focused principally on communicative deficits in general, without analyzing the errors committed in understanding and expressing communicative acts. Within the theoretical framework of Cognitive Pragmatics theory and the Cooperative principle, we focused on intermediate communicative errors that occur in both the comprehension and the production of various pragmatic phenomena, expressed through both linguistic and extralinguistic communicative modalities. A group of 30 individuals with TBI and a matched control group took part in the experiment. They were presented with a series of videotaped vignettes depicting everyday communicative exchanges and were tested on the comprehension and production of various kinds of communicative acts (standard communicative acts, deceit and irony). The participants' answers were evaluated as correct or incorrect; incorrect answers were then further evaluated for the presence of different intermediate errors. Individuals with TBI performed worse than control participants on all the tasks investigated when considering correct versus incorrect answers. Furthermore, a series of logistic regression analyses showed that group membership (TBI versus controls) significantly predicted the occurrence of intermediate errors. This result holds in both the comprehension and production tasks, and in both linguistic and extralinguistic modalities. Participants with TBI tend to have difficulty in managing different types of communicative acts, and they make more intermediate errors than the control participants. Intermediate errors concern the comprehension and production of the expression act, the comprehension of the actor's meaning, and respect of the Cooperative principle. © 2014 Royal College of Speech and Language Therapists.

  4. Testing and Improving the Luminosity Relations for Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Collazzi, Andrew

    2011-08-01

    Gamma-ray bursts (GRBs) have several luminosity relations in which a measurable property of a burst's light curve or spectrum is correlated with the burst luminosity. These luminosity relations are calibrated on the fraction of bursts with spectroscopic redshifts and hence known luminosities. GRBs have thus become known as a type of 'standard candle', where standard candle is meant in the usual sense that their luminosities can be derived from measurable properties of the bursts. GRBs can therefore be used for the same cosmology applications as Type Ia supernovae, including the construction of the Hubble diagram and measurement of the massive-star formation rate. The greatest disadvantage of using GRBs as standard candles is that their accuracy is lower than desired. With the recent advent of GRBs as a new standard candle, every effort must be made to test and improve the distance measures. Here, several methods are employed to do just that. First, generalized forms of two tests are performed on all of the luminosity relations. All the luminosity relations pass the second of these tests, and all but two pass the first. Even with this failure, the redundancy in using multiple luminosity relations allows all of them to retain value. Next, the 'Firmani relation' is shown to have poorer accuracy than first advertised. In addition, it is shown to be exactly derivable from two other luminosity relations. For these reasons, the Firmani relation is useless for cosmology. The Amati relation is then revisited and shown to be an artifact of a combination of selection effects; therefore, it too is not suitable for cosmology. Fourth, the systematic errors involved in measuring a popular luminosity indicator (E_peak) are quantified, and an irreducible systematic error of 28% is found. After that, a preliminary investigation into the usefulness of breaking GRBs into individual pulses is conducted; the results from an 'ideal' data set do not permit confident conclusions due to large error bars. Finally, the work concludes with a discussion of its impact and the future of GRB luminosity relations.

  5. The effect of Gestalt laws of perceptual organization on the comprehension of three-variable bar and line graphs.

    PubMed

    Ali, Nadia; Peebles, David

    2013-02-01

    We report three experiments investigating the ability of undergraduate college students to comprehend 2 × 2 "interaction" graphs from two-way factorial research designs. Factorial research designs are an invaluable research tool widely used in all branches of the natural and social sciences, and the teaching of such designs lies at the core of many college curricula. Such data can be represented in bar or line graph form. Previous studies have shown, however, that people interpret these two graphical forms differently. In Experiment 1, participants were required to interpret interaction data in either bar or line graphs while thinking aloud. Verbal protocol analysis revealed that line graph users were significantly more likely to misinterpret the data or fail to interpret the graph altogether. The patterns of errors line graph users made were interpreted as arising from the operation of Gestalt principles of perceptual organization, and this interpretation was used to develop two modified versions of the line graph, which were then tested in two further experiments. One of the modifications resulted in a significant improvement in performance. Results of the three experiments support the proposed explanation and demonstrate the effects (both positive and negative) of Gestalt principles of perceptual organization on graph comprehension. We propose that our new design provides a more balanced representation of the data than the standard line graph for nonexpert users to comprehend the full range of relationships in two-way factorial research designs and may therefore be considered a more appropriate representation for use in educational and other nonexpert contexts.

  6. [Pressure control in medical gas distribution systems].

    PubMed

    Bourgain, J L; Benayoun, L; Baguenard, P; Haré, G; Puizillout, J M; Billard, V

    1997-01-01

    To assess whether the pressure gauges at the downstream side of pressure regulators are accurate enough to ensure that pressure in the O2 pipeline is always higher than in the Air pipeline, and that pressure in the latter is higher than in the N2O pipeline. A pressure difference of at least 0.4 bar between two medical gas supply systems is recommended to avoid reflow of either N2O or Air into the O2 pipeline through a faulty mixer or proportioning device. Prospective technical comparative study. Readings of 32 Bourdon gauges were compared with data obtained with a calibrated reference transducer; two sets of measurements were performed at a one-month interval. Pressure differences between the Bourdon gauges and the reference transducer averaged 8% (0.28 bar), against a theoretical maximal error of less than 2.5%. During the first set of measurements, Air pressure was higher than O2 pressure in one place, and N2O pressure was higher than Air pressure in another. After an increase in the O2 pipeline pressure and careful setting of the pressure regulators, this problem was not observed during the second set of measurements. The actual accuracy of the Bourdon gauges was insufficient to ensure that O2 pressure was always above Air pressure. Regular checks of these pressure gauges are therefore essential. Replacement of faulty Bourdon gauges by more accurate transducers should be considered; as an alternative, increasing the pressure difference between the O2 and Air pipelines to at least 0.6 bar is recommended.
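The safety logic above, a required 0.4 bar margin that gauge errors of roughly 0.28 bar can silently consume, can be sketched as a worst-case check. The readings are invented for illustration:

```python
def margins_ok(p_o2, p_air, p_n2o, gauge_error=0.28, required=0.4):
    """Check the O2 > Air > N2O ordering with the recommended 0.4 bar
    margin, allowing for the ~0.28 bar average gauge error observed in
    the study (worst case: one reading high, the other low)."""
    worst_o2_air = (p_o2 - gauge_error) - (p_air + gauge_error)
    worst_air_n2o = (p_air - gauge_error) - (p_n2o + gauge_error)
    return worst_o2_air >= required and worst_air_n2o >= required

# A nominal 0.5 bar margin looks fine on the dials, but gauge error
# can eat it entirely:
print(margins_ok(4.5, 4.0, 3.5))  # → False
print(margins_ok(5.5, 4.0, 2.5))  # → True
```

This is why the authors recommend either more accurate transducers or widening the O2-Air difference to at least 0.6 bar.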

  7. Use of heavier drinking contexts among heterosexuals, homosexuals and bisexuals: results from a National Household Probability Survey.

    PubMed

    Trocki, Karen F; Drabble, Laurie; Midanik, Lorraine

    2005-01-01

    Extensive use of specific social contexts (bars and parties, for instance) by homosexuals and bisexuals is thought to be a factor in the higher rates of drinking among these groups. However, much of the empirical evidence behind these assumptions has been based on studies with methodological or sampling shortcomings. This article examines the epidemiological patterns of alcohol contexts in relation to sexual identity, using a large, national, probability population survey. We used the 2000 National Alcohol Survey for these analyses. The prevalence of spending leisure time in each of two social contexts (bars and parties) that are associated with heavier drinking is examined by sexual orientation (heterosexual, homosexual, bisexual and self-identified heterosexuals with same sex partners). In addition, we compare levels of drinking within these contexts by sexual orientation within these groups. Exclusively heterosexual women spent less time in these two contexts relative to all other groups of women. Gay men spent considerably more time in bars compared with the other groups of men. Heterosexual women who reported same sex partners drink more at bars, and bisexual women drink more alcohol at both bars and parties than exclusively heterosexual women. For men, there were no significant differences for average consumption in any of these contexts. Entry of background and demographic variables into logistic regression analyses did little to modify these associations. There is empirical evidence that some groups of homosexual and bisexual women and men spend more time than heterosexual individuals in heavier drinking contexts. The frequency of being in these two social contexts does not appear to be associated with heavier drinking within these contexts for men, but it may be related to heavier drinking in those places among some groups of women.

  8. A novel fiber composite ingredient incorporated into a beverage and bar blunts postprandial serum glucose and insulin responses: a randomized controlled trial.

    PubMed

    O'Connor, Lauren E; Campbell, Wayne W

    2016-03-01

    Previous research supports that consumption of resistant starch and guar gum independently influences insulin-mediated glucose responses to meals. This research assessed a novel co-processed fiber composite (FC) ingredient comprising whole-grain high-amylose maize flour and viscous guar gum on glucose and insulin responses to co-consumed and subsequent meals in humans. It was hypothesized that a smoothie-type beverage or a cold-pressed snack bar containing the FC would blunt and sustain serum glucose and insulin postprandial responses compared with maltodextrin (MD). The beverage and bar were assessed in 2 separate studies using identical protocols. Young, nondiabetic, nonobese adults participated in 2 testing days (randomized crossover design) separated by at least 1 week for both food forms. On each testing day, the FC or MD product was consumed with a low-fiber standardized breakfast followed by a low-fiber standardized lunch (with no FC or MD) 4 hours later. Blood samples were collected at baseline and incrementally throughout the 8-hour testing day. One-tailed paired t tests were performed to compare treatment areas under the curve, and a doubly repeated-measures analysis of variance was performed to compare treatment responses at individual time points (P< .05, Bonferroni corrected). The FC blunted the postprandial glucose and insulin responses compared with MD, including a robust glucose and insulin response reduction after breakfast and a continued modest glycemic second-meal reduction after lunch in both the beverage and the bar. These findings support the use of this novel whole-grain FC ingredient in a beverage or bar for insulin-mediated glucose control in young healthy adults. Copyright © 2016 Elsevier Inc. All rights reserved.
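    The treatment comparison described above rests on incremental areas under the postprandial curves, computed by the trapezoidal rule over the blood-sampling times and then compared between treatments with a one-tailed paired t test. A minimal sketch of the iAUC calculation with made-up glucose values (illustrative only; not the study's data):

```python
def incremental_auc(times, values):
    """Incremental area under the curve by the trapezoidal rule,
    counting only the excursion above the baseline (time-zero) value."""
    base = values[0]
    excess = [max(v - base, 0.0) for v in values]
    return sum((excess[i] + excess[i + 1]) / 2.0 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# Hypothetical postprandial serum glucose (mg/dL) at minutes 0-240
t_min = [0, 30, 60, 120, 240]
glucose_md = [90, 150, 130, 105, 92]   # maltodextrin control (made up)
glucose_fc = [90, 125, 115, 100, 91]   # fiber composite (made up)

iauc_md = incremental_auc(t_min, glucose_md)
iauc_fc = incremental_auc(t_min, glucose_fc)
print(iauc_md, iauc_fc)  # the blunted FC curve yields the smaller iAUC
```

    Per-subject iAUC pairs like these would then feed the one-tailed paired t test mentioned in the abstract.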

  9. Controls on sinuosity in the sparsely vegetated Fossálar River, southern Iceland

    NASA Astrophysics Data System (ADS)

    Ielpi, Alessandro

    2017-06-01

    Vegetation exerts strong controls on fluvial sinuosity, providing bank stability and buffering surface runoff. These controls are manifest in densely vegetated landscapes, whereas sparsely vegetated fluvial systems have been so far overlooked. This study integrates remote sensing and gauging records of the meandering to wandering Fossálar River, a relatively steep-sloped (< 2.5%) Icelandic river featuring well-developed point bars (79%-85% of total active bar surface) despite the lack of thick, arborescent vegetation. Over four decades, fluctuations in the sinuosity index (1.15-1.43) and vegetation cover (63%-83%) are not significantly correlated (r = 0.28, p > 0.05), suggesting that relationships between the two are mediated by intervening variables and uncertain lag times. By comparison, discharge regime and fluvial planform show direct correlation over monthly to yearly time scales, with stable discharge stages accompanying the accretion of meander bends and peak floods related to destructive point-bar reworking. Rapid planform change is aided by the unconsolidated nature of unrooted alluvial banks, with recorded rates of lateral channel-belt migration averaging 18 m/yr. Valley confinement and channel mobility also control the geometry and evolution of individual point bars, with the highest degree of spatial geomorphic variability recorded in low-gradient stretches where lateral migration is unimpeded. Point bars in the Fossálar River display morphometric values comparable to those of other sparsely vegetated rivers, suggesting shared scalar properties. This conjecture prompts the need for more sophisticated integrations between remote sensing and gauging records on modern rivers lacking widespread plant life. 
While a large volume of experimental and field-based work maintains that thick vegetation has a critical role in limiting braiding, thus favouring sinuosity, this study demonstrates the stronger controls of discharge regime and alluvial morphology on sparsely vegetated sinuous rivers.

  10. Structural analysis of lunar subsurface with Chang'E-3 lunar penetrating radar

    NASA Astrophysics Data System (ADS)

    Lai, Jialong; Xu, Yi; Zhang, Xiaoping; Tang, Zesheng

    2016-01-01

    The geological structure of the lunar subsurface provides valuable information on lunar evolution. Recently, Chang'E-3 used its lunar penetrating radar (LPR), mounted on the lunar rover Yutu, to probe the geological structure of northern Mare Imbrium (44.1260°N, 19.5014°W) for the first time. As an in situ detector, the Chang'E-3 LPR offers higher horizontal and vertical resolution and less clutter than spaceborne and Earth-based radars. In this work, we analyze the 500 MHz LPR data to obtain the shallow subsurface structure of the Chang'E-3 landing area in Mare Imbrium. Filtering and amplitude-recovery algorithms are applied to suppress environmental and system noise and to compensate for amplitude losses during signal propagation. In the processed radar image we observe numerous diffraction hyperbolae, which may be caused by discrete reflectors beneath the lunar surface. A hyperbola-fitting method is used to invert for the average dielectric constant down to a given depth (ε̄). Overall, the estimated ε̄ increases with depth and falls into three categories, with category averages of 2.47, 3.40, and 6.16, respectively. Because of the large gaps between the ε̄ values of neighboring categories, we speculate that the shallow subsurface of the LPR exploration region has a three-layered structure. One possible geological picture of this structure is as follows. The top layer is a weathered ejecta layer with an average thickness of 0.95 ± 0.02 m. The second layer is the ejecta blanket of a nearby impact crater, with an average thickness of about 2.30 ± 0.07 m, in good agreement with the two primary models of ejecta-blanket thickness as a function of distance from the crater center. The third layer is regarded as a mixture of stones and soil.
The echoes below the third layer are in the same magnitude as the noises, which may indicate that the fourth layer, if it exists, is uniform (no clear reflector) and its thickness is beyond the detection limit of LPR. Hence, we infer the fourth layer is a basalt layer.
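    The hyperbola-fitting step can be sketched as follows: a point reflector at depth d produces a two-way travel time t(x) = (2/v)·sqrt(d² + x²) at horizontal offset x, where v = c/sqrt(ε̄), so t² is linear in x² and a straight-line fit recovers both the depth and ε̄. A minimal illustration on synthetic travel-time picks (an assumed toy example, not the authors' processing code):

```python
import numpy as np

C = 0.299792458  # speed of light in vacuum, m/ns

def fit_hyperbola(x, t):
    """Linearized diffraction-hyperbola fit.

    For a point reflector at depth d (m) and wave speed v (m/ns),
    t(x)**2 = 4*d**2/v**2 + (4/v**2)*x**2, i.e. t**2 is linear in x**2.
    Returns (depth, velocity, eps_bar).
    """
    slope, intercept = np.polyfit(x**2, t**2, 1)
    v = 2.0 / np.sqrt(slope)          # m/ns
    d = np.sqrt(intercept / slope)    # m
    eps = (C / v) ** 2                # average relative permittivity
    return d, v, eps

# Synthetic travel-time picks for eps_bar = 3.40, reflector at 2.3 m
eps_true, d_true = 3.40, 2.3
v_true = C / np.sqrt(eps_true)
x = np.linspace(-1.0, 1.0, 21)
t = 2.0 * np.sqrt(d_true**2 + x**2) / v_true

d, v, eps = fit_hyperbola(x, t)
print(f"depth = {d:.2f} m, eps_bar = {eps:.2f}")
```

    Fitting many such hyperbolae at increasing depths is what yields the depth profile of ε̄ discussed above.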

  11. Global Precipitation Measurement Mission Launch and Commissioning

    NASA Technical Reports Server (NTRS)

    Davis, Nikesha; DeWeese, Keith; Vess, Melissa; O'Donnell, James R., Jr.; Welter, Gary

    2015-01-01

    During launch and early operation of the Global Precipitation Measurement (GPM) Mission, the Guidance, Navigation, and Control (GN&C) analysis team encountered four main on-orbit anomalies: (1) unexpected shock from Solar Array deployment, (2) momentum buildup from Magnetic Torquer Bar (MTB) phasing errors, (3) a transition into Safehold due to an albedo-induced Coarse Sun Sensor (CSS) anomaly, and (4) a flight software error that could cause a Safehold transition due to a Star Tracker occultation. This paper will discuss ways GN&C engineers identified the anomalies and tracked down the root causes. Flight data and GN&C on-board models will be shown to illustrate how each of these anomalies was investigated and mitigated before causing any harm to the spacecraft. On May 29, 2014, GPM was handed over to the Mission Flight Operations Team after a successful commissioning period. Currently, GPM is operating nominally on orbit, collecting meaningful scientific data that will significantly improve our understanding of the Earth's climate and water cycle.

  12. A High Temperature Capacitive Pressure Sensor Based on Alumina Ceramic for in Situ Measurement at 600 °C

    PubMed Central

    Tan, Qiulin; Li, Chen; Xiong, Jijun; Jia, Pinggang; Zhang, Wendong; Liu, Jun; Xue, Chenyang; Hong, Yingping; Ren, Zhong; Luo, Tao

    2014-01-01

    In response to the growing demand for in situ measurement of pressure in high-temperature environments, a high-temperature capacitive pressure sensor is presented in this paper. A high-temperature ceramic material, alumina, is used for fabrication of the sensor; the prototype consists of an inductance, a variable capacitance, and a sealed cavity integrated in the alumina ceramic substrate using thick-film integration technology. The experimental results show that the proposed sensor is stable at 850 °C for more than 20 min. Characterization in high-temperature, pressurized environments successfully demonstrated sensing capabilities for pressures from 1 to 5 bar at up to 600 °C, limited by the sensor test setup. At 600 °C, the sensor achieves a linear characteristic response, and the repeatability error, hysteresis error, and zero-point drift of the sensor are 8.3%, 5.05%, and 1%, respectively. PMID:24487624

  13. Development and validity of an instrumented handbike: initial results of propulsion kinetics.

    PubMed

    van Drongelen, Stefan; van den Berg, Jos; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V

    2011-11-01

    To develop an instrumented handbike system to measure the forces applied to the handgrip during handbiking. A 6-degrees-of-freedom force sensor was built into the handgrip of an attach-unit handbike, together with two optical encoders to measure the orientation of the handgrip and crank in space. Linearity, precision, and percent error were determined for static and dynamic tests. High linearity was demonstrated for both the static and the dynamic condition (r=1.01). Precision was high under the static condition (standard deviation of 0.2 N); however, precision decreased with higher loads during the dynamic condition. Percent error values were between 0.3% and 5.1%. This is the first instrumented handbike system that can register 3-dimensional forces. It can be concluded that the instrumented handbike system allows for an accurate force analysis based on forces registered at the handlebars. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. A novel design of membrane mirror with small deformation and imaging performance analysis in infrared system

    NASA Astrophysics Data System (ADS)

    Zhang, Shuqing; Wang, Yongquan; Zhi, Xiyang

    2017-05-01

    A method of diminishing the shape error of a membrane mirror is proposed in this paper. By adopting a pre-shaped membrane, the internal inflation pressure is considerably decreased, and small deformation of the membrane mirror with greatly reduced shape error is achieved. First, a finite element model of the pre-shaped membrane is built on the basis of its mechanical properties. Accurate shape data under different pressures are then acquired by iteratively calculating the node displacements of the model. These shape data are used to construct deformed reflecting surfaces for simulation analysis in ZEMAX. Finally, ground-based imaging experiments with 4-bar targets and a natural scene are conducted. The results indicate that the MTF of the infrared system can reach 0.3 at a high spatial resolution of 10 lp/mm, and texture details of the natural scene are well presented. The method can provide a theoretical basis and technical support for applications of lightweight optical components with ultra-large apertures.

  15. Global Precipitation Measurement Mission Launch and Commissioning

    NASA Technical Reports Server (NTRS)

    Davis, Nikesha; Deweese, Keith; Vess, Missie; Welter, Gary; O'Donnell, James R., Jr.

    2015-01-01

    During launch and early operation of the Global Precipitation Measurement (GPM) Mission, the Guidance, Navigation and Control (GNC) analysis team encountered four main on-orbit anomalies: (1) unexpected shock from Solar Array deployment, (2) momentum buildup from Magnetic Torquer Bar (MTB) phasing errors, (3) a transition into Safehold due to an albedo-induced Coarse Sun Sensor (CSS) anomaly, and (4) a flight software error that could cause a Safehold transition due to a Star Tracker occultation. This paper will discuss ways GNC engineers identified the anomalies and tracked down the root causes. Flight data and GNC on-board models will be shown to illustrate how each of these anomalies was investigated and mitigated before causing any harm to the spacecraft. On May 29, 2014, GPM was handed over to the Mission Flight Operations Team after a successful commissioning period. Currently, GPM is operating nominally on orbit, collecting meaningful scientific data that will significantly improve our understanding of the Earth's climate and water cycle.

  16. Allocentrically implied target locations are updated in an eye-centred reference frame.

    PubMed

    Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P

    2012-04-18

    When reaching to remembered target locations following an intervening eye movement, a systematic pattern of error is found, indicating eye-centred updating of visuospatial memory. Here we investigated whether implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location and reached to the remembered "target" location. Irrespective of the type of stimulus, reaching errors to these implicit targets are gaze-dependent and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  17. Cognitive Abilities, Monitoring Confidence, and Control Thresholds Explain Individual Differences in Heuristics and Biases

    PubMed Central

    Jackson, Simon A.; Kleitman, Sabina; Howie, Pauline; Stankov, Lazar

    2016-01-01

    In this paper, we investigate whether individual differences in performance on heuristic and biases tasks can be explained by cognitive abilities, monitoring confidence, and control thresholds. Current theories explain individual differences in these tasks by the ability to detect errors and override automatic but biased judgments, and deliberative cognitive abilities that help to construct the correct response. Here we retain cognitive abilities but disentangle error detection, proposing that lower monitoring confidence and higher control thresholds promote error checking. Participants (N = 250) completed tasks assessing their fluid reasoning abilities, stable monitoring confidence levels, and the control threshold they impose on their decisions. They also completed seven typical heuristic and biases tasks such as the cognitive reflection test and Resistance to Framing. Using structural equation modeling, we found that individuals with higher reasoning abilities, lower monitoring confidence, and higher control threshold performed significantly and, at times, substantially better on the heuristic and biases tasks. Individuals with higher control thresholds also showed lower preferences for risky alternatives in a gambling task. Furthermore, residual correlations among the heuristic and biases tasks were reduced to null, indicating that cognitive abilities, monitoring confidence, and control thresholds accounted for their shared variance. Implications include the proposal that the capacity to detect errors does not differ between individuals. Rather, individuals might adopt varied strategies that promote error checking to different degrees, regardless of whether they have made a mistake or not. The results support growing evidence that decision-making involves cognitive abilities that construct actions and monitoring and control processes that manage their initiation. PMID:27790170

  18. Cognitive Abilities, Monitoring Confidence, and Control Thresholds Explain Individual Differences in Heuristics and Biases.

    PubMed

    Jackson, Simon A; Kleitman, Sabina; Howie, Pauline; Stankov, Lazar

    2016-01-01

    In this paper, we investigate whether individual differences in performance on heuristic and biases tasks can be explained by cognitive abilities, monitoring confidence, and control thresholds. Current theories explain individual differences in these tasks by the ability to detect errors and override automatic but biased judgments, and deliberative cognitive abilities that help to construct the correct response. Here we retain cognitive abilities but disentangle error detection, proposing that lower monitoring confidence and higher control thresholds promote error checking. Participants (N = 250) completed tasks assessing their fluid reasoning abilities, stable monitoring confidence levels, and the control threshold they impose on their decisions. They also completed seven typical heuristic and biases tasks such as the cognitive reflection test and Resistance to Framing. Using structural equation modeling, we found that individuals with higher reasoning abilities, lower monitoring confidence, and higher control threshold performed significantly and, at times, substantially better on the heuristic and biases tasks. Individuals with higher control thresholds also showed lower preferences for risky alternatives in a gambling task. Furthermore, residual correlations among the heuristic and biases tasks were reduced to null, indicating that cognitive abilities, monitoring confidence, and control thresholds accounted for their shared variance. Implications include the proposal that the capacity to detect errors does not differ between individuals. Rather, individuals might adopt varied strategies that promote error checking to different degrees, regardless of whether they have made a mistake or not. The results support growing evidence that decision-making involves cognitive abilities that construct actions and monitoring and control processes that manage their initiation.

  19. Moderation of the Relationship Between Reward Expectancy and Prediction Error-Related Ventral Striatal Reactivity by Anhedonia in Unmedicated Major Depressive Disorder: Findings From the EMBARC Study

    PubMed Central

    Greenberg, Tsafrir; Chase, Henry W.; Almeida, Jorge R.; Stiffler, Richelle; Zevallos, Carlos R.; Aslam, Haris A.; Deckersbach, Thilo; Weyandt, Sarah; Cooper, Crystal; Toups, Marisa; Carmody, Thomas; Kurian, Benji; Peltier, Scott; Adams, Phillip; McInnis, Melvin G.; Oquendo, Maria A.; McGrath, Patrick J.; Fava, Maurizio; Weissman, Myrna; Parsey, Ramin; Trivedi, Madhukar H.; Phillips, Mary L.

    2016-01-01

    Objective: Anhedonia, disrupted reward processing, is a core symptom of major depressive disorder. Recent findings demonstrate altered reward-related ventral striatal reactivity in depressed individuals, but the extent to which this is specific to anhedonia remains poorly understood. The authors examined the effect of anhedonia on reward expectancy (expected outcome value) and prediction error- (discrepancy between expected and actual outcome) related ventral striatal reactivity, as well as the relationship between these measures. Method: A total of 148 unmedicated individuals with major depressive disorder and 31 healthy comparison individuals recruited for the multisite EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care) study underwent functional MRI during a well-validated reward task. Region of interest and whole-brain data were examined in the first- (N=78) and second- (N=70) recruited cohorts, as well as the total sample, of depressed individuals, and in healthy individuals. Results: Healthy, but not depressed, individuals showed a significant inverse relationship between reward expectancy and prediction error-related right ventral striatal reactivity. Across all participants, and in depressed individuals only, greater anhedonia severity was associated with a reduced reward expectancy-prediction error inverse relationship, even after controlling for other symptoms. Conclusions: The normal reward expectancy and prediction error-related ventral striatal reactivity inverse relationship concords with conditioning models, predicting a shift in ventral striatal responding from reward outcomes to reward cues. This study shows, for the first time, an absence of this relationship in two cohorts of unmedicated depressed individuals and a moderation of this relationship by anhedonia, suggesting reduced reward-contingency learning with greater anhedonia.
These findings help elucidate neural mechanisms of anhedonia, as a step toward identifying potential biosignatures of treatment response. PMID:26183698

  20. Moderation of the Relationship Between Reward Expectancy and Prediction Error-Related Ventral Striatal Reactivity by Anhedonia in Unmedicated Major Depressive Disorder: Findings From the EMBARC Study.

    PubMed

    Greenberg, Tsafrir; Chase, Henry W; Almeida, Jorge R; Stiffler, Richelle; Zevallos, Carlos R; Aslam, Haris A; Deckersbach, Thilo; Weyandt, Sarah; Cooper, Crystal; Toups, Marisa; Carmody, Thomas; Kurian, Benji; Peltier, Scott; Adams, Phillip; McInnis, Melvin G; Oquendo, Maria A; McGrath, Patrick J; Fava, Maurizio; Weissman, Myrna; Parsey, Ramin; Trivedi, Madhukar H; Phillips, Mary L

    2015-09-01

    Anhedonia, disrupted reward processing, is a core symptom of major depressive disorder. Recent findings demonstrate altered reward-related ventral striatal reactivity in depressed individuals, but the extent to which this is specific to anhedonia remains poorly understood. The authors examined the effect of anhedonia on reward expectancy (expected outcome value) and prediction error- (discrepancy between expected and actual outcome) related ventral striatal reactivity, as well as the relationship between these measures. A total of 148 unmedicated individuals with major depressive disorder and 31 healthy comparison individuals recruited for the multisite EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care) study underwent functional MRI during a well-validated reward task. Region of interest and whole-brain data were examined in the first- (N=78) and second- (N=70) recruited cohorts, as well as the total sample, of depressed individuals, and in healthy individuals. Healthy, but not depressed, individuals showed a significant inverse relationship between reward expectancy and prediction error-related right ventral striatal reactivity. Across all participants, and in depressed individuals only, greater anhedonia severity was associated with a reduced reward expectancy-prediction error inverse relationship, even after controlling for other symptoms. The normal reward expectancy and prediction error-related ventral striatal reactivity inverse relationship concords with conditioning models, predicting a shift in ventral striatal responding from reward outcomes to reward cues. This study shows, for the first time, an absence of this relationship in two cohorts of unmedicated depressed individuals and a moderation of this relationship by anhedonia, suggesting reduced reward-contingency learning with greater anhedonia. 
These findings help elucidate neural mechanisms of anhedonia, as a step toward identifying potential biosignatures of treatment response.

  1. Impact of Alcohol Use Disorder Comorbidity on Defensive Reactivity to Errors in Veterans with Post-traumatic Stress Disorder

    PubMed Central

    Gorka, Stephanie M.; MacNamara, Annmarie; Aase, Darrin M.; Proescher, Eric; Greenstein, Justin E.; Walters, Robert; Passi, Holly; Babione, Joseph M.; Levy, David M.; Kennedy, Amy E.; DiGangi, Julia A.; Rabinak, Christine A.; Schroth, Christopher; Afshar, Kaveh; Fitzgerald, Jacklynn; Hajcak, Greg; Phan, K. Luan

    2017-01-01

    Converging lines of evidence suggest that individuals with comorbid post-traumatic stress disorder (PTSD) and alcohol use disorder (AUD) may be characterized by heightened defensive reactivity, which serves to maintain drinking behaviors and anxiety/hyperarousal symptoms. Notably, however, very few studies have directly tested whether individuals with PTSD and AUD exhibit greater defensive reactivity than individuals with PTSD without AUD. The aim of the current study was therefore to test this emerging hypothesis by examining individual differences in the error-related negativity (ERN), an event-related potential component that is larger among anxious individuals and is thought to reflect defensive reactivity to errors. Participants were sixty-six military veterans who completed a well-validated flanker task known to robustly elicit the ERN. Veterans comprised three groups: controls (i.e., no PTSD or AUD), PTSD-AUD (i.e., current PTSD but no AUD), and PTSD+AUD (i.e., current comorbid PTSD and AUD). Results indicated that, in general, individuals with PTSD and controls did not differ in ERN amplitude. However, among individuals with PTSD, those with comorbid AUD had significantly larger ERNs than those without AUD. These findings suggest that PTSD+AUD is a neurobiologically unique subtype of PTSD and that comorbid AUD may enhance defensive reactivity to errors in individuals with PTSD. PMID:27786513

  2. Beyond alpha: an empirical examination of the effects of different sources of measurement error on reliability estimates for measures of individual differences constructs.

    PubMed

    Schmidt, Frank L; Le, Huy; Ilies, Remus

    2003-06-01

    On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.

  3. magicaxis: Pretty scientific plotting with minor-tick and log minor-tick support

    NASA Astrophysics Data System (ADS)

    Robotham, Aaron S. G.

    2016-04-01

    The R suite magicaxis produces useful, publication-quality scientific plots, with functions for base plotting and particular emphasis on attractive axis labelling in circumstances common to scientific plotting. It also includes functions for generating images and contours that reflect the 2D quantile levels of the data, designed particularly for output of MCMC posteriors, where visualizing the location of the 68% and 95% 2D quantiles for covariant parameters is a necessary part of post-MCMC analysis. In addition, it can generate low and high error bars and allows clipping of values, rejection of bad values, and log stretching.
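    The low and high error bars and quantile levels that magicaxis visualizes are derived from sample quantiles of the posterior draws; the 1D case can be sketched in a few lines (a Python analogue for illustration only, not the R package's implementation):

```python
import numpy as np

def quantile_error_bars(samples, level=0.68):
    """Central credible interval from posterior samples.

    Returns (median, err_low, err_high): asymmetric low/high error
    bars about the median for the requested credible level.
    """
    lo_q, hi_q = 50.0 * (1.0 - level), 50.0 * (1.0 + level)
    lo, med, hi = np.percentile(samples, [lo_q, 50.0, hi_q])
    return med, med - lo, hi - med

# Toy "posterior": 100,000 draws from N(mean=2.0, sd=0.5)
rng = np.random.default_rng(0)
draws = rng.normal(loc=2.0, scale=0.5, size=100_000)

med, e_lo, e_hi = quantile_error_bars(draws)
print(f"{med:.2f} -{e_lo:.2f} +{e_hi:.2f}")
```

    For a skewed posterior the two error bars differ, which is exactly why separate low and high bars are worth supporting.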

  4. Microwave properties of ice from The Great Lakes

    NASA Technical Reports Server (NTRS)

    Vickers, R. S.

    1975-01-01

    The increasing use of radar systems as remote sensors of ice thickness has revealed a lack of basic data on the microwave properties of fresh-water ice. A program, in which the complex dielectric constant was measured for a series of ice samples taken from the Great Lakes, is described. The measurements were taken at temperatures of -5, -10, and -15 °C. It is noted that the ice has considerable internal layered structure, and the effects of the layering are examined. Values of 3.0 to 3.2 are reported for the real part of the dielectric constant, with an error bar of ±0.01.

  5. The temperature of the cosmic microwave background radiation at 3.8 GHz - Results of a measurement from the South Pole site

    NASA Technical Reports Server (NTRS)

    De Amici, Giovanni; Limon, Michele; Smoot, George F.; Bersanelli, Marco; Kogut, Al; Levin, Steve

    1991-01-01

    As part of an international collaboration to measure the low-frequency spectrum of the cosmic microwave background (CMB) radiation, its temperature was measured at a frequency of 3.8 GHz during the austral spring of 1989, obtaining a brightness temperature, T(CMB), of 2.64 ± 0.07 K (68 percent confidence level). The new result is in agreement with previous measurements at the same frequency obtained in 1986-88 from a very different site and has comparable error bars. Combining measurements from all years, T(CMB) = 2.64 ± 0.06 K is obtained.
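    Combining measurements that carry individual error bars, as in the last sentence above, is conventionally done with an inverse-variance weighted mean, whose combined uncertainty satisfies 1/σ² = Σ 1/σᵢ². A small sketch (the second measurement's numbers are hypothetical; only the 1989 value is from the abstract):

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of independent measurements.

    Each value is weighted by 1/sigma**2; the combined error bar is
    1/sqrt(sum of the weights).
    """
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# The 1989 measurement plus a hypothetical earlier, less precise one
t_cmb, sigma = weighted_mean([2.64, 2.64], [0.07, 0.13])
print(f"T_CMB = {t_cmb:.2f} +/- {sigma:.2f} K")
```

    The combined error bar is always smaller than the best individual one, which is how multi-year campaigns tighten the result.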

  6. Learning Disabilities/Attention Deficit Hyperactivity Disorder and Test Accommodations in Professional Licensing under the Americans with Disabilities Act.

    ERIC Educational Resources Information Center

    Latham, Patricia H.; Latham, Peter S.

    1998-01-01

    Reviews court decisions regarding the documentation of disabilities and accommodations for individuals with learning disabilities and/or attention-deficit disorders taking licensing examinations from the National Board of Medical Examiners and the State Bar Examiners. Professional schools and licensing authorities are urged to work toward…

  7. TrafficGen Architecture Document

    DTIC Science & Technology

    2016-01-01

    sequence diagram ... Fig. 5 TrafficGen traffic flows viewed in SDT3D ... Scripts contain commands to have the network node listen on specific ports and flows describing the start time, stop time, and specific traffic ... arranged vertically and time presented horizontally. Individual traffic flows are represented by horizontal bars indicating the start time, stop time

  8. The Effect of Emergent Features on Judgments of Quantity in Configural and Separable Displays

    ERIC Educational Resources Information Center

    Peebles, David

    2008-01-01

    Two experiments investigated effects of emergent features on perceptual judgments of comparative magnitude in three diagrammatic representations: kiviat charts, bar graphs, and line graphs. Experiment 1 required participants to compare individual values; whereas in Experiment 2 participants had to integrate several values to produce a global…

  9. A video multitracking system for quantification of individual behavior in a large fish shoal: advantages and limits.

    PubMed

    Delcourt, Johann; Becco, Christophe; Vandewalle, Nicolas; Poncin, Pascal

    2009-02-01

    The capability of a new multitracking system to track a large number of unmarked fish (up to 100) is evaluated. This system extrapolates a trajectory from each individual and analyzes recorded sequences that are several minutes long. This system is very efficient in statistical individual tracking, where the individual's identity is important for a short period of time in comparison with the duration of the track. Individual identification is typically greater than 99%. Identification is largely efficient (more than 99%) when the fish images do not cross the image of a neighbor fish. When the images of two fish merge (occlusion), we consider that the spot on the screen has a double identity. Consequently, there are no identification errors during occlusions, even though the measurement of the positions of each individual is imprecise. When the images of these two merged fish separate (separation), individual identification errors are more frequent, but their effect is very low in statistical individual tracking. On the other hand, in complete individual tracking, where individual fish identity is important for the entire trajectory, each identification error invalidates the results. In such cases, the experimenter must observe whether the program assigns the correct identification, and, when an error is made, must edit the results. This work is not too costly in time because it is limited to the separation events, accounting for fewer than 0.1% of individual identifications. Consequently, in both statistical and rigorous individual tracking, this system allows the experimenter to gain time by measuring the individual position automatically. It can also analyze the structural and dynamic properties of an animal group with a very large sample, with precision and sampling that are impossible to obtain with manual measures.

  10. Comparison of exercises inducing maximum voluntary isometric contraction for the latissimus dorsi using surface electromyography.

    PubMed

    Park, Se-yeon; Yoo, Won-gyu

    2013-10-01

The aim of this study was to compare muscular activation during five different normalization techniques that induced maximal isometric contraction of the latissimus dorsi. Sixteen healthy men participated in the study. Each participant performed three repetitions each of five types of isometric exertion: (1) conventional shoulder extension in the prone position, (2) caudal shoulder depression in the prone position, (3) body lifting with shoulder depression in the seated position, (4) trunk bending to the right in the lateral decubitus position, and (5) downward bar pulling in the seated position. In most participants, maximal activation of the latissimus dorsi was observed during conventional shoulder extension in the prone position; the percentage of maximal voluntary contraction was significantly greater for this exercise than for all other normalization techniques except downward bar pulling in the seated position. Although differences in electrode placement among electromyographic studies represent a limitation, it is recommended that normalization of the latissimus dorsi combine shoulder extension in the prone position and downward bar pulling in the seated position to minimize error in assessing maximal muscular activation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials

    PubMed Central

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels

    2013-01-01

This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error-monitoring processes. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula for errors on a numerical task as compared to errors on a non-numerical task only in the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212

  12. Sexual network analysis of a gonorrhoea outbreak

    PubMed Central

    De, P; Singh, A; Wong, T; Yacoub, W; Jolly, A

    2004-01-01

Objectives: Sexual partnerships can be viewed as networks in order to study disease transmission. We examined the transmission of Neisseria gonorrhoeae in a localised outbreak in Alberta, Canada, using measures of network centrality to determine the association between risk of infection of network members and their position within the sexual network. We also compared risk in smaller disconnected components with a large network centred on a social venue. Methods: During the investigation of the outbreak, epidemiological data were collected on gonorrhoea cases and their sexual contacts from STI surveillance records. In addition to traditional contact tracing information, subjects were interviewed about social venues they attended in the past year where casual sexual partnering may have occurred. Sexual networks were constructed by linking together named partners. Univariate comparisons of individual network member characteristics and algebraic measures of network centrality were completed. Results: The sexual networks consisted of 182 individuals, of whom 107 were index cases with laboratory confirmed gonorrhoea and 75 partners of index cases. People who had significantly higher information centrality within each of their local networks were found to have patronised a popular motel bar in the main town in the region (p = 0.05). When the social interaction through the bar was considered, a large network of 89 individuals was constructed that joined all eight of the largest local networks. Moreover, several networks from different communities were linked by individuals who served as bridge populations as a result of their sexual partnering. Conclusion: Asking clients about particular social venues emphasised the importance of location in disease transmission. Network measures of centrality, particularly information centrality, allowed the identification of key individuals through whom infection could be channelled into local networks. Such individuals would be ideal targets for increased interventions. PMID:15295126
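
    The study above ranks network members by information centrality, a current-flow measure. As a minimal, purely illustrative sketch of the general idea of centrality on a partnership network, the following computes normalized degree centrality (the simplest centrality measure, not the information centrality the authors used) on a small toy network; the edge list and node IDs are hypothetical.

    ```python
    from collections import defaultdict

    # Hypothetical, anonymized partnership edges in an undirected network.
    edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")]

    adjacency = defaultdict(set)
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)

    n = len(adjacency)
    # Normalized degree centrality: the fraction of the other members
    # each person is directly linked to.
    degree_centrality = {node: len(nbrs) / (n - 1)
                         for node, nbrs in adjacency.items()}
    ```

    In this toy network, "a" links to three of the four other members (centrality 0.75), making it the kind of highly connected node that contact-tracing interventions would prioritize.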

  13. Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment

    ERIC Educational Resources Information Center

    Buchwald, Adam; Miozzo, Michele

    2012-01-01

    Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…

  14. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  15. Impact of the Lok-bar for High-precision Radiotherapy with Tomotherapy.

    PubMed

    Hirata, Makoto; Monzen, Hajime; Tamura, Mikoto; Kubo, Kazuki; Matsumoto, Kenji; Hanaoka, Kohei; Okumura, Masahiko; Nishimura, Yasumasa

    2018-05-01

Patient immobilization systems are used to establish a reproducible patient position relative to the couch. In this study, the impact of conventional lok-bars for CT simulation (CIVCO-bar) and treatment (iBEAM-bar) was compared with that of a novel lok-bar (mHM-bar) in tomotherapy. Verification was obtained as follows: i. artifacts in CT images; ii. dose attenuation rate of the lok-bar, compared to without a lok-bar; and iii. dose differences between the calculated and measured absorbed doses. With the CIVCO-bar there were obvious metal artifacts, while there were nearly none with the mHM-bar. The mean dose attenuation rates with the mHM-bar and iBEAM-bar were 1.31% and 2.28%, respectively, and the mean dose differences were 1.55% and 1.66%, respectively. Thus, the mHM-bar reduced artifacts in the CT image and improved the dose attenuation rate. The lok-bar needs to be inserted as a structure set in treatment planning with tomotherapy. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  16. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    PubMed

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between the two scales observed at the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving, the correlation in the left IFG/aI possibly reflects inter-individually different involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task set, expressed by individually different, negatively valenced signals conveyed by the ACC upon the occurrence of an error. The present results illustrate that underlying personality traits should be taken into account when predicting individual responses to errors, and also lend external validity to the personality trait approach by suggesting that personality constructs reflect more than mere descriptive taxonomies.

  17. Positive Beliefs about Errors as an Important Element of Adaptive Individual Dealing with Errors during Academic Learning

    ERIC Educational Resources Information Center

    Tulis, Maria; Steuer, Gabriele; Dresel, Markus

    2018-01-01

    Research on learning from errors gives reason to assume that errors provide a high potential to facilitate deep learning if students are willing and able to take these learning opportunities. The first aim of this study was to analyse whether beliefs about errors as learning opportunities can be theoretically and empirically distinguished from…

  18. Hα3: an Hα imaging survey of HI selected galaxies from ALFALFA. VI. The role of bars in quenching star formation from z = 3 to the present epoch

    NASA Astrophysics Data System (ADS)

    Gavazzi, G.; Consolandi, G.; Dotti, M.; Fanali, R.; Fossati, M.; Fumagalli, M.; Viscardi, E.; Savorgnan, G.; Boselli, A.; Gutiérrez, L.; Hernández Toledo, H.; Giovanelli, R.; Haynes, M. P.

    2015-08-01

A growing body of evidence indicates that the star formation rate per unit stellar mass (sSFR) decreases with increasing mass in normal main-sequence star-forming galaxies. Many processes have been advocated as being responsible for this trend (also known as mass quenching), e.g., feedback from active galactic nuclei (AGNs), and the formation of classical bulges. In order to improve our insight into the mechanisms regulating the star formation in normal star-forming galaxies across cosmic epochs, we determine a refined star formation versus stellar mass relation in the local Universe. To this end we use the Hα narrow-band imaging follow-up survey (Hα3) of field galaxies selected from the HI Arecibo Legacy Fast ALFA Survey (ALFALFA) in the Coma and Local superclusters. By complementing this local determination with high-redshift measurements from the literature, we reconstruct the star formation history of main-sequence galaxies as a function of stellar mass from the present epoch up to z = 3. In agreement with previous studies, our analysis shows that quenching mechanisms occur above a threshold stellar mass Mknee that evolves with redshift as ∝ (1 + z)2. Moreover, visual morphological classification of individual objects in our local sample reveals a sharp increase in the fraction of visually classified strong bars with mass, hinting that strong bars may contribute to the observed downturn in the sSFR above Mknee. We test this hypothesis using a simple but physically motivated numerical model for bar formation, finding that strong bars can rapidly quench star formation in the central few kpc of field galaxies. We conclude that strong bars contribute significantly to the red colors observed in the inner parts of massive galaxies, although additional mechanisms are likely required to quench the star formation in the outer regions of massive spiral galaxies. Intriguingly, when we extrapolate our model to higher redshifts, we successfully recover the observed redshift evolution for Mknee. Our study highlights how the formation of strong bars in massive galaxies is an important mechanism in regulating the redshift evolution of the sSFR for field main-sequence galaxies. Based on observations taken at the observatory of San Pedro Martir (Baja California, Mexico), belonging to the Mexican Observatorio Astronómico Nacional.
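
    The abstract states that the quenching mass threshold scales with redshift as Mknee ∝ (1 + z)², i.e., the threshold at z = 1 is four times its local value. A trivial sketch of that scaling (the local normalization m_knee_0 is a placeholder, since the abstract does not quote it):

    ```python
    def m_knee(z, m_knee_0):
        """Quenching mass threshold evolving as (1 + z)**2.

        m_knee_0 is the (assumed, not quoted) local value at z = 0.
        """
        return m_knee_0 * (1.0 + z) ** 2
    ```

    For example, m_knee(1, m0) / m_knee(0, m0) = 4, and m_knee(3, m0) / m_knee(0, m0) = 16.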

  19. Dopamine neurons share common response function for reward prediction error

    PubMed Central

    Eshel, Neir; Tian, Ju; Bukwich, Michael; Uchida, Naoshige

    2016-01-01

    Dopamine neurons are thought to signal reward prediction error, or the difference between actual and predicted reward. How dopamine neurons jointly encode this information, however, remains unclear. One possibility is that different neurons specialize in different aspects of prediction error; another is that each neuron calculates prediction error in the same way. We recorded from optogenetically-identified dopamine neurons in the lateral ventral tegmental area (VTA) while mice performed classical conditioning tasks. Our tasks allowed us to determine the full prediction error functions of dopamine neurons and compare them to each other. We found striking homogeneity among individual dopamine neurons: their responses to both unexpected and expected rewards followed the same function, just scaled up or down. As a result, we could describe both individual and population responses using just two parameters. Such uniformity ensures robust information coding, allowing each dopamine neuron to contribute fully to the prediction error signal. PMID:26854803
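
    The finding above, that each neuron's response follows a common function "just scaled up or down", can be sketched as a per-neuron gain fitted against a shared template. The following is an illustrative toy example (the response matrix is fabricated for demonstration, not data from the study), using the population mean as the common function and least squares for the gains.

    ```python
    import numpy as np

    # Hypothetical responses of 3 neurons to 5 reward conditions
    # (rows = neurons, columns = conditions); rows are exact multiples
    # of one another to mimic the reported homogeneity.
    responses = np.array([
        [1.0, 2.0, 3.0, 4.0, 5.0],
        [2.0, 4.0, 6.0, 8.0, 10.0],
        [0.5, 1.0, 1.5, 2.0, 2.5],
    ])

    common = responses.mean(axis=0)          # shared response function
    # Per-neuron gain: least-squares projection onto the common function.
    gains = responses @ common / (common @ common)
    # Residual after removing the scaled common function; near zero here,
    # meaning a single scale factor per neuron captures each response.
    residual = responses - np.outer(gains, common)
    ```

    When the residual is negligible, as in this toy example, the whole population is described by the common function plus one gain per neuron, which is the kind of compact description the abstract reports.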

  20. Writing abilities in intellectual disabilities: a comparison between Down and Williams syndrome.

    PubMed

    Varuzza, Cristiana; De Rose, Paola; Vicari, Stefano; Menghini, Deny

    2015-02-01

Writing is a complex task that requires the integration of multiple cognitive, linguistic, and motor abilities. Until now, only a few studies have investigated writing abilities in individuals with Intellectual Disability (ID). The aim of the present exploratory study was to provide knowledge on the organization of writing in two populations with ID, Down syndrome (DS) and Williams syndrome (WS), trying to disentangle the different components of the process. A battery tapping diverse writing demands, from low-level transcription skills to high-level writing skills, was administered to 13 individuals with WS, 12 individuals with DS and 11 mental-age-matched typically developing (TD) children. Results showed that the two groups with genetic syndromes did not differ from TD children in writing a list of objects placed in a bedroom, in the number of errors in text composition, in a text copying task, or in the kinds of errors made. However, in a word dictation task, individuals with DS made more errors than individuals with WS and TD children. In a pseudoword dictation task, both individuals with DS and WS made more errors than TD children. Our results showed good abilities in individuals with ID in different aspects of writing, involving not only low-level transcription skills but also high-level composition skills. Contrary to the pessimistic view that considers individuals with ID vulnerable to failure, our results indicate that the presence of ID does not prevent the achievement of writing skills. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
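
    The four norm statistics named above (standard deviation, percentile ranks, stanine boundaries and Z-scores) are simple to compute from raw test scores. The sketch below is purely illustrative and is not the paper's R function check.norms (which lives in the mokken package); the stanine cut points are the usual normal-curve boundaries at z = ±0.25, ±0.75, ±1.25, ±1.75.

    ```python
    import bisect
    import statistics

    def z_scores(scores):
        """Standardize raw test scores using the population SD."""
        mean = statistics.fmean(scores)
        sd = statistics.pstdev(scores)
        return [(s - mean) / sd for s in scores]

    def percentile_rank(scores, x):
        """Percent of scores below x, plus half of those equal to x."""
        below = sum(s < x for s in scores)
        equal = sum(s == x for s in scores)
        return 100.0 * (below + 0.5 * equal) / len(scores)

    # Conventional stanine boundaries in z-score units.
    STANINE_CUTS = [-1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75]

    def stanine(z):
        """Map a z-score to its stanine (1-9) via the cut points."""
        return bisect.bisect(STANINE_CUTS, z) + 1
    ```

    The paper's contribution is not these point estimates but their standard errors under a multinomial model of the test scores, which these helpers do not provide.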

  2. Purification, crystallization and preliminary X-ray analysis of the inverse F-BAR domain of the human srGAP2 protein.

    PubMed

    Wang, Hongpeng; Zhang, Yan; Zhang, Zhenyi; Jin, Wei Lin; Wu, Geng

    2014-01-01

Bin-Amphiphysin-Rvs (BAR) domain proteins play essential roles in diverse cellular processes by inducing membrane invaginations or membrane protrusions. Among the BAR superfamily, the `classical' BAR and Fes/CIP4 homology BAR (F-BAR) subfamilies of proteins usually promote membrane invaginations, whereas the inverse BAR (I-BAR) subfamily generally induces membrane protrusions. Despite possessing an N-terminal F-BAR domain, the srGAP2 protein regulates neurite outgrowth and neuronal migration by causing membrane protrusions reminiscent of the activity of I-BAR domain proteins. In this study, the inverse F-BAR (IF-BAR) domain of human srGAP2 was overexpressed, purified and crystallized. The crystals of the srGAP2 IF-BAR domain protein diffracted to 3.50 Å resolution and belonged to space group P2(1). These results will facilitate further structural determination of the srGAP2 IF-BAR domain and the ultimate elucidation of its peculiar behaviour of inducing membrane protrusions rather than membrane invaginations.

  3. Creation and validation of the barriers to alcohol reduction (BAR) scale using classical test theory and item response theory.

    PubMed

    Kunicki, Zachary J; Schick, Melissa R; Spillane, Nichea S; Harlow, Lisa L

    2018-06-01

Those who binge drink are at increased risk for alcohol-related consequences compared to non-binge drinkers. Research shows individuals may face barriers to reducing their drinking behavior, but few measures exist to assess these barriers. This study created and validated the Barriers to Alcohol Reduction (BAR) scale. Participants were college students (n = 230) who endorsed at least one instance of past-month binge drinking (4+ drinks for women or 5+ drinks for men). Using classical test theory, exploratory structural equation modeling found a two-factor structure of personal/psychosocial barriers and perceived program barriers. The sub-factors and the full scale had reasonable internal consistency (coefficient omega = 0.78 for personal/psychosocial barriers, 0.82 for program barriers, and 0.83 for the full measure). The BAR also showed evidence for convergent validity with the Brief Young Adult Alcohol Consequences Questionnaire (r = 0.39, p < .001) and discriminant validity with Barriers to Physical Activity (r = -0.02, p = .81). Item Response Theory (IRT) analysis showed that the two factors separately met the unidimensionality assumption, and provided further evidence for the severity of the items on the two factors. Results suggest that the BAR measure is reliable and valid for use in an undergraduate student population of binge drinkers. Future studies may want to re-examine this measure in a more diverse sample.
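
    The coefficient omega values reported above are composite-reliability estimates from a factor model. As a hedged sketch of how McDonald's omega is computed from standardized loadings of a single-factor model (the loadings below are invented for illustration, not the BAR scale's actual loadings):

    ```python
    def mcdonalds_omega(loadings):
        """McDonald's omega for one factor with standardized loadings.

        Assumes unique variances theta_i = 1 - lambda_i**2, i.e. fully
        standardized items with no correlated errors.
        """
        s = sum(loadings)
        theta = sum(1 - l ** 2 for l in loadings)
        return s ** 2 / (s ** 2 + theta)
    ```

    For example, six items all loading 0.7 give omega ≈ 0.85, in the same range as the subscale values the study reports.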

  4. Apparent Motives for Aggression in the Social Context of the Bar

    PubMed Central

    Graham, Kathryn; Bernards, Sharon; Osgood, D. Wayne; Parks, Michael; Abbey, Antonia; Felson, Richard B.; Saltz, Robert F.; Wells, Samantha

    2013-01-01

    Objective Little systematic research has focused on motivations for aggression and most of the existing research is qualitative and atheoretical. This study increases existing knowledge by using the theory of coercive actions to quantify the apparent motives of individuals involved in barroom aggression. Objectives were to examine: gender differences in the use of compliance, grievance, social identity, and excitement motives; how motives change during an aggressive encounter; and the relationship of motives to aggression severity. Method We analyzed 844 narrative descriptions of aggressive incidents observed in large late-night drinking venues as part of the Safer Bars evaluation. Trained coders rated each type of motive for the 1,507 bar patrons who engaged in aggressive acts. Results Women were more likely to be motivated by compliance and grievance, many in relation to unwanted sexual overtures from men; whereas men were more likely to be motivated by social identity concerns and excitement. Aggressive acts that escalated tended to be motivated by identity or grievance, with identity motivation especially associated with more severe aggression. Conclusions A key factor in preventing serious aggression is to develop approaches that focus on addressing identity concerns in the escalation of aggression and defusing incidents involving grievance and identity motives before they escalate. In bars, this might include training staff to recognize and defuse identity motives and eliminating grievance-provoking situations such as crowd bottlenecks and poorly managed queues. Preventive interventions generally need to more directly address the role of identity motives, especially among men. PMID:24224117

  5. Triple bar, high efficiency mechanical sealer

    DOEpatents

    Pak, Donald J.; Hawkins, Samantha A.; Young, John E.

    2013-03-19

    A clamp with a bottom clamp bar that has a planar upper surface is provided. The clamp may also include a top clamp bar connected to the bottom clamp bar, and a pressure distribution bar between the top clamp bar and the bottom clamp bar. The pressure distribution bar may have a planar lower surface in facing relation to the upper surface of the bottom clamp bar. An object is capable of being disposed in a clamping region between the upper surface and the lower surface. The width of the planar lower surface may be less than the width of the upper surface within the clamping region. Also, the pressure distribution bar may be capable of being urged away from the top clamp bar and towards the bottom clamp bar.

  6. Investigating Simulated Driving Errors in Amnestic Single- and Multiple-Domain Mild Cognitive Impairment.

    PubMed

    Hird, Megan A; Vesely, Kristin A; Fischer, Corinne E; Graham, Simon J; Naglie, Gary; Schweizer, Tom A

    2017-01-01

The areas of driving impairment characteristic of mild cognitive impairment (MCI) remain unclear. This study compared the simulated driving performance of 24 individuals with MCI, comprising amnestic single-domain (sd-MCI, n = 11) and amnestic multiple-domain (md-MCI, n = 13) subtypes, with that of 20 age-matched controls. Individuals with MCI committed over twice as many driving errors (20.0 versus 9.9), demonstrated difficulty with lane maintenance, and committed more errors during left turns with traffic compared to healthy controls. In particular, individuals with md-MCI showed greater driving difficulty relative to healthy controls than did those with sd-MCI. Differentiating between subtypes of MCI may therefore be important when evaluating driving safety.

  7. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist using electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve a higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State of the art methods are based on a complex series of error prone coordinate system transformations which makes them susceptible to error accumulation. By reducing the amount of calibration steps to a single calibration procedure we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy results while additionally reducing the overall calibration complexity.
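
    The error-accumulation argument above can be made concrete with a toy model: if each link in a chain of coordinate transformations carries a small calibration error, the errors compound, whereas a single direct calibration carries only its own error. The sketch below (planar rotations only, hypothetical angles and error magnitude) shows per-step angular errors adding up linearly across a chain.

    ```python
    import math

    def rot(theta):
        """2x2 planar rotation matrix."""
        return [[math.cos(theta), -math.sin(theta)],
                [math.sin(theta), math.cos(theta)]]

    def matmul(a, b):
        """2x2 matrix product (chaining two transformations)."""
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def net_angle(m):
        """Recover the rotation angle of a 2x2 rotation matrix."""
        return math.atan2(m[1][0], m[0][0])

    true_steps = [0.5, -0.2, 0.3]   # ideal per-step rotations (radians)
    eps = 0.01                      # small calibration error per step

    # Chain the three erroneous calibration steps.
    chained = rot(true_steps[0] + eps)
    for t in true_steps[1:]:
        chained = matmul(chained, rot(t + eps))

    true_total = sum(true_steps)
    # The chained result is off by n * eps; a single direct calibration
    # of the whole chain would be off by only eps.
    accumulated = net_angle(chained) - true_total
    ```

    With three chained steps the accumulated angular error is 3 × eps, which is the kind of build-up a single-step calibration procedure avoids.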

  8. Design and preliminary evaluation of the FINGER rehabilitation robot: controlling challenge and quantifying finger individuation during musical computer game play

    PubMed Central

    2014-01-01

Background This paper describes the design and preliminary testing of FINGER (Finger Individuating Grasp Exercise Robot), a device for assisting in finger rehabilitation after neurologic injury. We developed FINGER to assist stroke patients in moving their fingers individually in a naturalistic curling motion while playing a game similar to Guitar Hero®. The goal was to make FINGER capable of assisting with motions where precise timing is important. Methods FINGER consists of a pair of stacked single degree-of-freedom 8-bar mechanisms, one for the index and one for the middle finger. Each 8-bar mechanism was designed to control the angle and position of the proximal phalanx and the position of the middle phalanx. Target positions for the mechanism optimization were determined from trajectory data collected from 7 healthy subjects using color-based motion capture. The resulting robotic device was built to accommodate multiple finger sizes and finger-to-finger widths. For initial evaluation, we asked individuals with a stroke (n = 16) and without impairment (n = 4) to play a game similar to Guitar Hero® while connected to FINGER. Results Precision design, low friction bearings, and separate high speed linear actuators allowed FINGER to individually actuate the fingers with a high bandwidth of control (−3 dB at approximately 8 Hz). During the tests, we were able to modulate the subjects' success rate at the game by automatically adjusting the controller gains of FINGER. We also used FINGER to measure subjects' effort and finger individuation while playing the game. Conclusions Test results demonstrate the ability of FINGER to motivate subjects with an engaging game environment that challenges individuated control of the fingers, automatically control assistance levels, and quantify finger individuation after stroke. PMID:24495432

  9. Raising the Bar for Peace and Sustainability Educators: An Educational Response to the Implementation Gap

    ERIC Educational Resources Information Center

    Wenden, Anita L.

    2014-01-01

Throughout history there has been no lack of evidence attesting to the inequitable distribution of wealth, power, and resources among individuals and groups, conditions which continue to characterize this second decade of the twenty-first century despite the many UN-based initiatives to deal with their consequences. Not included among the…

  10. Saving Our Criminal Justice System: The Efficacy of a Collaborative Social Service

    ERIC Educational Resources Information Center

    Yamatani, Hide; Spjeldnes, Solveig

    2011-01-01

    On a typical day in 2008, 776,573 individuals were behind bars in nearly 3,500 U.S. jails. Yet the potential benefits of social services in achieving lower recidivism rates and successful reintegration are understudied in jail populations. This three-year study investigated the effects of collaboration-based in-jail services and postrelease…

  11. Examining University Students' Anger and Satisfaction with Life

    ERIC Educational Resources Information Center

    Çevik, Gülsen Büyüksahin

    2017-01-01

The current research aims to study university students' levels of anger and satisfaction with life, based on gender, years of attendance, accommodation, and whether they experience adjustment problems. The current research participants included a total of 484 individuals (mean age = 22.56; SD = 1.72; range = 19-37), with 269 (55.6%) males and 215…

  12. Ten Important Words Plus: A Strategy for Building Word Knowledge

    ERIC Educational Resources Information Center

    Yopp, Ruth Helen; Yopp, Hallie Kay

    2007-01-01

    In this strategy, students individually select and record 10 important words on self-adhesive notes as they read a text. Then students build a group bar graph displaying their choices, write a sentence that summarizes the content, and then respond to prompts that ask them to think about words in powerful ways. Several prompts are suggested, each…

  13. THE MASS PROFILE AND SHAPE OF BARS IN THE SPITZER SURVEY OF STELLAR STRUCTURE IN GALAXIES (S⁴G): SEARCH FOR AN AGE INDICATOR FOR BARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Taehyun; Lee, Myung Gyoon; Sheth, Kartik

    2015-01-20

We have measured the radial light profiles and global shapes of bars using two-dimensional 3.6 μm image decompositions for 144 face-on barred galaxies from the Spitzer Survey of Stellar Structure in Galaxies. The bar surface brightness profile is correlated with the stellar mass and bulge-to-total (B/T) ratio of their host galaxies. Bars in massive and bulge-dominated galaxies (B/T > 0.2) show a flat profile, while bars in less massive, disk-dominated galaxies (B/T ∼ 0) show an exponential, disk-like profile with a wider spread in the radial profile than in the bulge-dominated galaxies. The global two-dimensional shapes of bars, however, are rectangular/boxy, independent of the bulge or disk properties. We speculate that because bars are formed out of disks, bars initially have an exponential (disk-like) profile that evolves over time, trapping more disk stars to boxy bar orbits. This leads bars to become stronger and have flatter profiles. The narrow spread of bar radial profiles in more massive disks suggests that these bars formed earlier (z > 1), while the disk-like profiles and a larger spread in the radial profile in less massive systems imply a later and more gradual evolution, consistent with the cosmological evolution of bars inferred from observational studies. Therefore, we expect that the flatness of the bar profile can be used as a dynamical age indicator of the bar to measure the time elapsed since the bar formation. We argue that cosmic gas accretion is required to explain our results on bar profile and the presence of gas within the bar region.

  14. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    PubMed Central

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background: Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods: We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results: The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations: Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion: Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  15. Prevalence of uncorrected refractive errors, presbyopia and spectacle coverage in marine fishing communities in South India: Rapid Assessment of Visual Impairment (RAVI) project.

    PubMed

    Marmamula, Srinivas; Madala, Sreenivas R; Rao, Gullapalli N

    2012-03-01

    To investigate the prevalence of uncorrected refractive errors, presbyopia and spectacle coverage in subjects aged 40 years or more using a novel Rapid Assessment of Visual Impairment (RAVI) methodology. A population-based cross-sectional study was conducted using cluster random sampling to enumerate 1700 subjects from 34 clusters predominantly inhabited by marine fishing communities in the Prakasam district of Andhra Pradesh, India. Unaided, aided and pinhole visual acuity (VA) was assessed using a Snellen chart at a distance of 6 m. Near vision was assessed using an N notation chart. Uncorrected refractive error was defined as presenting VA < 6/18 and improving to ≥6/18 with pinhole. Uncorrected presbyopia was defined as binocular near vision worse than N8 in subjects with binocular distance VA ≥ 6/18. 1560 subjects (response rate - 92%) were available for examination. Of these, 54.6% were female and 10.1% were ≥70 years of age. Refractive error was present in 250 individuals. It was uncorrected in 179 (unmet need) and corrected in 71 (met need) individuals. Among 1094 individuals with no distance visual impairment, presbyopia was present in 494 individuals. It was uncorrected in 439 (unmet need) and corrected in 55 individuals (met need). Spectacle coverage was 28.4% for refractive errors and 11.1% for presbyopia. There is a high unmet need for uncorrected refractive errors and presbyopia among marine fishing communities in the Prakasam district of South India. The data from this study can now be used as a baseline prior to the commencement of eye care services in this region. Ophthalmic & Physiological Optics © 2012 The College of Optometrists.
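    The coverage figures in the abstract above follow the standard definition used in rapid-assessment surveys: spectacle coverage is the met need as a percentage of total need (met + unmet). A minimal sketch of that arithmetic, using the counts reported in the abstract (the function name is illustrative, not from the study):

    ```python
    def spectacle_coverage(met: int, unmet: int) -> float:
        """Percentage of people needing correction who actually have it:
        coverage = met / (met + unmet) * 100."""
        return 100.0 * met / (met + unmet)

    # Counts reported in the abstract
    refractive = spectacle_coverage(met=71, unmet=179)   # refractive errors
    presbyopia = spectacle_coverage(met=55, unmet=439)   # presbyopia
    print(f"refractive: {refractive:.1f}%, presbyopia: {presbyopia:.1f}%")
    ```

    Plugging in 71 of 250 and 55 of 494 reproduces the 28.4% and 11.1% coverage figures quoted in the abstract.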

  16. Prevalence of the refractive errors by age and gender: the Mashhad eye study of Iran.

    PubMed

    Ostadimoghaddam, Hadi; Fotouhi, Akbar; Hashemi, Hassan; Yekta, Abbasali; Heravian, Javad; Rezvan, Farhad; Ghadimi, Hamidreza; Rezvan, Bijan; Khabazkhoob, Mehdi

    2011-11-01

    Refractive errors are a common eye problem. Considering the low number of population-based studies in Iran in this regard, we decided to determine the prevalence rates of myopia and hyperopia in a population in Mashhad, Iran. Cross-sectional population-based study. Random cluster sampling. Of 4453 selected individuals from the urban population of Mashhad, 70.4% participated. Refractive error was determined using manifest (age > 15 years) and cycloplegic refraction (age ≤ 15 years). Myopia was defined as a spherical equivalent of -0.5 diopter or worse. A spherical equivalent of +0.5 diopter or worse for non-cycloplegic refraction and a spherical equivalent of +2 diopter or worse for cycloplegic refraction were used to define hyperopia. Prevalence of refractive errors. The prevalence of myopia and hyperopia in individuals ≤ 15 years old was 3.64% (95% CI: 2.19-5.09) and 27.4% (95% CI: 23.72-31.09), respectively. The same measurements for subjects > 15 years of age were 22.36% (95% CI: 20.06-24.66) and 34.21% (95% CI: 31.57-36.85), respectively. Myopia was found to increase with age in individuals ≤ 15 years and decrease with age in individuals > 15 years of age. The rate of hyperopia showed a significant increase with age in individuals > 15 years. The prevalence of astigmatism was 25.64% (95% CI: 23.76-27.51). In children and the elderly, hyperopia is the most prevalent refractive error. After hyperopia, astigmatism is also of importance in older ages. Age is the most important demographic factor associated with different types of refractive errors. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
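    The confidence intervals quoted in the abstract above are of the standard form for a prevalence estimate. A minimal sketch of the simple (Wald) 95% CI, p ± 1.96·√(p(1−p)/n); note that this is illustrative only, since the study used cluster sampling, whose design effect widens the true intervals beyond what this naive formula gives (the participant count of ~3135 is inferred from "70.4% of 4453 selected", not stated directly):

    ```python
    import math

    def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
        """Naive (Wald) 95% confidence interval for a prevalence p
        estimated from n subjects, ignoring any cluster design effect."""
        half = z * math.sqrt(p * (1 - p) / n)
        return (p - half, p + half)

    # Illustrative: astigmatism prevalence 25.64% in roughly 3135 participants
    lo, hi = wald_ci(0.2564, 3135)
    print(f"{lo:.4f}-{hi:.4f}")
    ```

    The naive interval comes out narrower than the published one (23.76-27.51), which is consistent with the cluster-sampling design inflating the variance.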

  17. Individual Differences in Working Memory Capacity Predict Action Monitoring and the Error-Related Negativity

    ERIC Educational Resources Information Center

    Miller, A. Eve; Watson, Jason M.; Strayer, David L.

    2012-01-01

    Neuroscience suggests that the anterior cingulate cortex (ACC) is responsible for conflict monitoring and the detection of errors in cognitive tasks, thereby contributing to the implementation of attentional control. Though individual differences in frontally mediated goal maintenance have clearly been shown to influence outward behavior in…

  18. Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…

  19. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    PubMed

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven desirable to users, partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days, and the results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.

  20. Everyday action in schizophrenia: performance patterns and underlying cognitive mechanisms.

    PubMed

    Kessler, Rachel K; Giovannetti, Tania; MacMullen, Laura R

    2007-07-01

    Everyday action is impaired among individuals with schizophrenia, yet few studies have characterized the nature of this deficit using performance-based measures. This study examined the performance of 20 individuals with schizophrenia or schizoaffective disorder on the Naturalistic Action Test (M. F. Schwartz, L. J. Buxbaum, M. Ferraro, T. Veramonti, & M. Segal, 2003). Performance was coded to examine overall impairment, task accomplishment, and error patterns and was compared with that of healthy controls (n = 28) and individuals with mild dementia (n = 23). Additionally, 2 competing accounts of everyday action deficits, the resource theory and an executive account, were evaluated. When compared with controls, the participants with schizophrenia demonstrated impaired performance. Relative to dementia patients, participants with schizophrenia obtained higher accomplishment scores but committed comparable rates of errors. Moreover, distributions of error types for the 2 groups differed, with the participants with schizophrenia demonstrating greater proportions of errors associated with executive dysfunction. This is the 1st study to show different Naturalistic Action Test performance patterns between 2 neurologically impaired populations. The distinct performance pattern demonstrated by individuals with schizophrenia reflects specific deficits in executive function.
