Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate), a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors.
Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) © 2010 Massachusetts Medical Society
Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-12-01
We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
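The Mahalanobis-distance approach to the domain of applicability mentioned in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the descriptor matrices below are synthetic, and the distance is computed to the training-set centroid under the training covariance.

```python
import numpy as np

def mahalanobis_doa(X_train, X_query):
    """Distance of query compounds from the training-set centroid, scaled
    by the training covariance; larger values suggest a query compound
    lies outside the model's domain of applicability (DOA)."""
    mu = X_train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))
    diff = X_query - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared distances
    return np.sqrt(d2)

# toy descriptor matrices: rows are compounds, columns are descriptors
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
X_inside = np.zeros((1, 3))                 # near the training centroid
X_outside = np.array([[10.0, 10.0, 10.0]])  # far outside the training data
```

A compound like `X_outside` would get a large distance, flagging its solubility prediction (and error bar) as less trustworthy.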
Influence of survey strategy and interpolation model on DEM quality
NASA Astrophysics Data System (ADS)
Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.
2009-11-01
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. DEMs were then produced using five different common interpolation algorithms. Each resultant DEM was differenced against a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a specific survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
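The error-versus-roughness relationship reported in this record can be illustrated with a toy example: difference a smoothed DEM against a reference surface containing a slope break, and compare errors where the local moving-window standard deviation is high versus low. The surfaces, grid, and window size below are synthetic stand-ins, not the River Nent data or its 0.2-m circular window.

```python
import numpy as np

def moving_window_std(z, half=1):
    """Standard deviation of elevations in a square moving window
    (a grid analogue of the study's moving-window roughness measure)."""
    out = np.zeros_like(z, dtype=float)
    n, m = z.shape
    for i in range(n):
        for j in range(m):
            w = z[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1]
            out[i, j] = w.std()
    return out

x = np.linspace(0.0, 1.0, 50)
# reference surface with a sharp slope break (a "bank edge")
ref = np.tile(np.where(x < 0.5, 0.0, 1.0), (50, 1))
# a smooth interpolated DEM cannot follow the break exactly
sig = 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.02))
dem = np.tile(sig, (50, 1))

err = np.abs(dem - ref)         # vertical DEM error
rough = moving_window_std(ref)  # local topographic variation
```

Masking `err` by high versus low `rough` shows the error concentrating at the slope break, mirroring the bank-edge result above.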
Time trend of injection drug errors before and after implementation of bar-code verification system.
Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki
2015-01-01
Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors of injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p < 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.
Six1-Eya-Dach Network in Breast Cancer
2009-05-01
Ctrl: scramble controls. Responsiveness was tested using luciferase activity of the 3TP reporter construct and normalized to renilla luciferase activity. Data points show the mean of two individual clones from two experiments and error bars represent
The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.
Hutton, Kevin; Ding, Qian; Wellman, Gregory
2017-02-24
Bar-coding technology adoption has risen drastically in U.S. health systems in the past decade. However, few studies have addressed the impact of bar-coding technology with strong prospective methodologies, and the research that has been conducted spans both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors, and which types of medication errors may be prevented, in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and the findings were summarized for each retained study. A total of 2603 articles were initially identified, and 10 studies that used a prospective before-and-after design were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, and the remaining one was conducted in the United Kingdom. One article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly targeted wrong-dose, wrong-drug, wrong-patient, unauthorized-drug, and wrong-route errors.
Output Error Analysis of Planar 2-DOF Five-bar Mechanism
NASA Astrophysics Data System (ADS)
Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang
2018-03-01
To address the mechanism error caused by joint clearance in the kinematic pairs of a planar 2-DOF five-bar mechanism, the method of treating the joint clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of joint clearance on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, providing a new way to analyze planar parallel mechanism errors caused by joint clearance.
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. A widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and maximum likelihood fitting technique is introduced and studied. Such an approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of parametric fitting and exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
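A minimal sketch of maximum-likelihood fitting of 1-on-1 damage-probability data, in the spirit of the approach this abstract describes. The logistic curve and grid search below stand in for the paper's varying-kernel parametric regression, and all data are simulated; the fluence values and shot counts are invented.

```python
import numpy as np

def neg_log_likelihood(params, F, k, n):
    """Bernoulli negative log-likelihood of k damage events out of n
    shots at each fluence F, under a logistic damage-probability curve
    with threshold Fth and width w."""
    Fth, w = params
    p = 1.0 / (1.0 + np.exp(-(F - Fth) / w))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

# synthetic 1-on-1 test: 20 sites per fluence, true threshold 10 J/cm^2
rng = np.random.default_rng(1)
F = np.linspace(5.0, 15.0, 11)
p_true = 1.0 / (1.0 + np.exp(-(F - 10.0) / 0.5))
n = np.full_like(F, 20.0)
k = rng.binomial(20, p_true).astype(float)

# crude maximum-likelihood fit by grid search over (threshold, width)
grid = [(fth, w) for fth in np.linspace(6.0, 14.0, 81)
        for w in (0.25, 0.5, 1.0, 2.0)]
Fth_hat, w_hat = min(grid, key=lambda g: neg_log_likelihood(g, F, k, n))
```

Unlike a linear fit through damage frequencies, the likelihood respects the binomial scatter of the counts, which is the core advantage the abstract attributes to the parametric approach.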
Error analysis of mechanical system and wavelength calibration of monochromator
NASA Astrophysics Data System (ADS)
Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong
2018-02-01
This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
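The sine-drive relation underlying this calibration can be sketched numerically. In a sine-bar scanning mechanism the screw displacement x sets sin(theta) = x/L for sine-bar length L, so the diffracted wavelength scales as K*x/L, with K fixed by the grating equation. The constant K and the lengths below are assumed illustrative values, not the paper's instrument parameters.

```python
import math

def wavelength_nm(x_mm, L_mm, K_nm=1200.0):
    """Illustrative sine-drive relation: screw displacement x sets
    sin(theta) = x / L, and the wavelength is proportional to
    sin(theta) via a constant K from the grating equation
    (K is an assumed value, not taken from the paper)."""
    return K_nm * (x_mm / L_mm)

# a small error dL in the sine-bar length rescales every wavelength
L, dL = 100.0, -0.05   # mm; assumed values
x = 40.0               # screw position, mm
lam = wavelength_nm(x, L)
lam_err = wavelength_nm(x, L + dL) - lam
```

Because the relative wavelength error equals -dL/(L + dL) at every screw position, adjusting the sine-bar length removes this error class in one step, which is the idea behind the length-adjustment calibration described above.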
Science 101: When Drawing Graphs from Collected Data, Why Don't You Just "Connect the Dots?"
ERIC Educational Resources Information Center
Robertson, William C.
2007-01-01
Using "error bars" on graphs is a good way to help students see that, within the inherent uncertainty of the measurements due to the instruments used for measurement, the data points do, in fact, lie along the line that represents the linear relationship. In this article, the author explains why connecting the dots on graphs of collected data is…
The Impact of Bar Code Medication Administration Technology on Reported Medication Errors
ERIC Educational Resources Information Center
Holecek, Andrea
2011-01-01
The use of bar-code medication administration technology is on the rise in acute care facilities in the United States. The technology is purported to decrease medication errors that occur at the point of administration. How significantly this technology affects actual rate and severity of error is unknown. This descriptive, longitudinal research…
Bandwagon effects and error bars in particle physics
NASA Astrophysics Data System (ADS)
Jeng, Monwhea
2007-02-01
We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
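The contrast this abstract draws, between normally and exponentially distributed deviations measured in units of the reported error bars, can be illustrated by simulation. The "pulls" below are synthetic draws, not the historical particle data.

```python
import numpy as np

rng = np.random.default_rng(2)
# pulls: (measured - reference) / reported error bar
normal_pulls = rng.standard_normal(100_000)
# exponentially distributed deviations, given a random sign,
# have far heavier tails than a Gaussian with the same scale
exp_pulls = rng.exponential(scale=1.0, size=100_000) * rng.choice([-1, 1], 100_000)

frac_normal = np.mean(np.abs(normal_pulls) > 4)  # tiny for a Gaussian
frac_exp = np.mean(np.abs(exp_pulls) > 4)        # roughly exp(-4)
```

Under the exponential law the paper reports, deviations of many standard deviations remain plausible, so error bars read as Gaussian one-sigma intervals understate the chance of large disagreements between experiments.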
Numerical modeling of the divided bar measurements
NASA Astrophysics Data System (ADS)
LEE, Y.; Keehm, Y.
2011-12-01
The divided-bar technique has been used to measure thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors, which are not systematically quantified yet. We used an FEM and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected against lateral heat loss on the sides of rock samples, and the thermal resistance at the contacts between the rock sample and the bar. We first investigated how the amount of these corrections would change by the thickness and thermal conductivity of rock samples through numerical modeling. When we fixed the sample thickness as 10 mm and varied thermal conductivity, errors in the measured thermal conductivity ranges from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. While we fixed thermal conductivity as 1.38 W/m/K and varied the sample thickness, we found that the error ranges from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. Thermal conductivity of two thin standard disks (2 mm in thickness) located at the top and the bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error would reach up to -2.29% for the same sample when thermal conductivity of two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements strongly depends on thermal conductivity and the thickness of thermal compound that is applied to reduce thermal resistance at contacts between the rock sample and the bar. 
When the thickness of the thermal compound (0.29 W/m/K) is 0.03 mm, we found that the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. Then, we fixed the thickness (0.03 mm) and varied thermal conductivity of the thermal compound. We found that the relative error with a 1.0 W/m/K compound is 1.28%, and the relative error with a 0.29 W/m/K compound is 4.06%. When we repeated this test with a different thickness of the thermal compound (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and that with a 0.29 W/m/K compound is 12.2%. In addition, the cell technique by Sass et al. (1971), which is widely used to measure thermal conductivity of rock fragments, was evaluated using the FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to test numerically the accuracy of the cell technique. The result shows a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and the cell technique for thermal conductivity measurements of rocks and fragments. We found that FEM modeling can accurately mimic these measurement techniques and can help us estimate measurement errors quantitatively.
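The contact-resistance effect quantified in this record can be approximated with a one-dimensional series-resistance model. This is a deliberate simplification of the FEM analysis (it ignores lateral heat loss), so its numbers will not match the reported values exactly; the sample and compound values below are illustrative.

```python
def apparent_conductivity(k_sample, t_sample, k_compound, t_compound):
    """Series thermal-resistance model of a divided-bar stack: two
    compound layers (top and bottom contacts) in series with the
    sample. Attributing the total resistance to the sample thickness
    alone biases the measured conductivity low."""
    R_total = t_sample / k_sample + 2.0 * t_compound / k_compound
    return t_sample / R_total

# 10 mm sample of 3.0 W/m/K, 0.29 W/m/K compound, 0.1 mm per contact
k_app = apparent_conductivity(3.0, 0.010, 0.29, 0.0001)
rel_err = (k_app - 3.0) / 3.0      # negative: conductivity underestimated
# thinner compound layers shrink the bias, matching the trend above
thin = apparent_conductivity(3.0, 0.010, 0.29, 0.00003)
```

Even this crude model reproduces the reported trend: the bias grows with compound thickness and with the conductivity contrast between sample and compound.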
Spirality: A Novel Way to Measure Spiral Arm Pitch Angle
NASA Astrophysics Data System (ADS)
Shields, Douglas W.; Boe, Benjamin; Henderson, Casey L.; Hartley, Matthew; Davis, Benjamin L.; Pour Imani, Hamed; Kennefick, Daniel; Kennefick, Julia D.
2015-01-01
We present the MATLAB code Spirality, a novel method for measuring spiral arm pitch angles by fitting galaxy images to spiral templates of known pitch. For a given pitch angle template, the mean pixel value is found along each of typically 1000 spiral axes. The fitting function, which shows a local maximum at the best-fit pitch angle, is the variance of these means. Error bars are found by varying the inner radius of the measurement annulus and finding the standard deviation of the best-fit pitches. Computation time is typically on the order of 2 minutes per galaxy, assuming at least 8 GB of working memory. We tested the code using 128 synthetic spiral images of known pitch. These spirals varied in the number of spiral arms, pitch angle, degree of logarithmicity, radius, SNR, inclination angle, bar length, and bulge radius. A correct result is defined as a result that matches the true pitch within the error bars, with error bars no greater than ±7°. For the non-logarithmic spiral sample, the correct answer is similarly defined, with the mean pitch as a function of radius in place of the true pitch. For all synthetic spirals, correct results were obtained so long as SNR > 0.25, the bar length was no more than 60% of the spiral's diameter (when the bar was included in the measurement), the input center of the spiral was no more than 6% of the spiral radius away from the true center, and the inclination angle was no more than 30°. The synthetic spirals were not deprojected prior to measurement. The code produced the correct result for all barred spirals when the measurement annulus was placed outside the bar. Additionally, we compared the code's results against 2DFFT results for 203 visually selected spiral galaxies in GOODS North and South. Among the entire sample, Spirality's error bars overlapped 2DFFT's error bars 64% of the time. Source code is available by email request from the primary author.
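The fitting function described above (the variance, across spiral axes, of the mean pixel value along each axis) can be sketched for a synthetic two-armed logarithmic spiral. This is an illustration of the method on an analytic intensity field, not the Spirality MATLAB code; the radii, axis count, and pitch grid are invented.

```python
import numpy as np

def template_variance(pitch_deg, intensity,
                      n_axes=100, r=np.linspace(1.0, 20.0, 200)):
    """Variance, across spiral axes, of the mean intensity along each
    axis of a logarithmic-spiral template with the given pitch angle.
    Peaks when the template pitch matches the image's arms."""
    tanp = np.tan(np.radians(pitch_deg))
    means = []
    for psi in np.linspace(0.0, 2.0 * np.pi, n_axes, endpoint=False):
        theta = psi + np.log(r) / tanp        # one spiral axis
        means.append(intensity(r, theta).mean())
    return np.var(means)

TRUE_PITCH = 20.0  # degrees

def intensity(r, theta):
    # synthetic two-armed logarithmic spiral of known pitch
    return np.cos(2.0 * (theta - np.log(r) / np.tan(np.radians(TRUE_PITCH))))

pitches = np.arange(5.0, 41.0, 1.0)
best = max(pitches, key=lambda p: template_variance(p, intensity))
```

When the template pitch matches, the intensity is constant along each axis, so the per-axis means spread widely and the variance peaks; mismatched templates average the arms out and the variance drops.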
Metal Ion Sensor with Catalytic DNA in a Nanofluidic Intelligent Processor
2011-12-01
attributed to decreased diffusion and a less active DNAzyme complex because of pore constraints. Uncleavable Alexa546 intensity is shown in gray, cleavable fluorescein in green, and the ratio of Fl/Alexa in red. Error bars represent one standard deviation of four independent... higher concentrations inhibiting cleaved fragment release.
Bioavailability and Methylation Potential of Mercury Sulfides in Sediments
2014-08-01
such as size separation (i.e., filtration with a particular pore size or molecular weight cutoff) or metal-ligand complexation from experimentally ... and 6 nM HgS microparticles. The error bars represent ±1 s.d. for duplicate samples. Results of Hg fractionation by filtration and (ultra... results from filtration (Figures S2). These differences in the data indicated that the nHgS dissolution rate could be overestimated by the filtration data
Hayden, Randall T; Patterson, Donna J; Jay, Dennis W; Cross, Carl; Dotson, Pamela; Possel, Robert E; Srivastava, Deo Kumar; Mirro, Joseph; Shenep, Jerry L
2008-02-01
To assess the ability of a bar code-based electronic positive patient and specimen identification (EPPID) system to reduce identification errors in a pediatric hospital's clinical laboratory. An EPPID system was implemented at a pediatric oncology hospital to reduce errors in patient and laboratory specimen identification. The EPPID system included bar-code identifiers and handheld personal digital assistants supporting real-time order verification. System efficacy was measured in 3 consecutive 12-month time frames, corresponding to periods before, during, and immediately after full EPPID implementation. A significant reduction in the median percentage of mislabeled specimens was observed in the 3-year study period. A decline from 0.03% to 0.005% (P < .001) was observed in the 12 months after full system implementation. On the basis of the pre-intervention detected error rate, it was estimated that EPPID prevented at least 62 mislabeling events during its first year of operation. EPPID decreased the rate of misidentification of clinical laboratory samples. The diminution of errors observed in this study provides support for the development of national guidelines for the use of bar coding for laboratory specimens, paralleling recent recommendations for medication administration.
Turbulent heat flux measurements in a transitional boundary layer
NASA Technical Reports Server (NTRS)
Sohn, K. H.; Zaman, K. B. M. Q.; Reshotko, E.
1992-01-01
During an experimental investigation of the transitional boundary layer over a heated flat plate, an unexpected result was encountered for the turbulent heat flux (bar-v't'). This quantity, representing the correlation between the fluctuating normal velocity and the temperature, was measured to be negative near the wall under certain conditions. The result was unexpected as it implied a counter-gradient heat transfer by the turbulent fluctuations. Possible reasons for this anomalous result were further investigated. The possible causes considered for this negative bar-v't' were: (1) plausible measurement error and peculiarity of the flow facility, (2) large probe size effect, (3) 'streaky structure' in the near wall boundary layer, and (4) contributions from other terms usually assumed negligible in the energy equation including the Reynolds heat flux in the streamwise direction (bar-u't'). Even though the energy balance has remained inconclusive, none of the items (1) to (3) appear to be contributing directly to the anomaly.
Song, Lunar; Park, Byeonghwa; Oh, Kyeung Mi
2015-04-01
Serious medication errors continue to exist in hospitals, even though there is technology that could potentially eliminate them such as bar code medication administration. Little is known about the degree to which the culture of patient safety is associated with behavioral intention to use bar code medication administration. Based on the Technology Acceptance Model, this study evaluated the relationships among patient safety culture and perceived usefulness and perceived ease of use, and behavioral intention to use bar code medication administration technology among nurses in hospitals. Cross-sectional surveys with a convenience sample of 163 nurses using bar code medication administration were conducted. Feedback and communication about errors had a positive impact in predicting perceived usefulness (β=.26, P<.01) and perceived ease of use (β=.22, P<.05). In a multiple regression model predicting for behavioral intention, age had a negative impact (β=-.17, P<.05); however, teamwork within hospital units (β=.20, P<.05) and perceived usefulness (β=.35, P<.01) both had a positive impact on behavioral intention. The overall bar code medication administration behavioral intention model explained 24% (P<.001) of the variance. Identified factors influencing bar code medication administration behavioral intention can help inform hospitals to develop tailored interventions for RNs to reduce medication administration errors and increase patient safety by using this technology.
Hakala, John L; Hung, Joseph C; Mosman, Elton A
2012-09-01
The objective of this project was to ensure correct radiopharmaceutical administration through the use of a bar code system that links patient and drug profiles with on-site information management systems. This new combined system would minimize the amount of manual human manipulation, which has proven to be a primary source of error. The most common reason for dosing errors is improper patient identification when a dose is obtained from the nuclear pharmacy or when a dose is administered. A standardized electronic transfer of information from radiopharmaceutical preparation to injection will further reduce the risk of misadministration. Value stream maps showing the flow of the patient dose information, as well as potential points of human error, were developed. Next, a future-state map was created that included proposed corrections for the most common critical sites of error. Transitioning the current process to the future state will require solutions that address these sites. To optimize the future-state process, a bar code system that links the on-site radiology management system with the nuclear pharmacy management system was proposed. A bar-coded wristband connects the patient directly to the electronic information systems. The bar code-enhanced process linking the patient dose with the electronic information reduces the number of crucial points for human error and provides a framework to ensure that the prepared dose reaches the correct patient. Although the proposed flowchart is designed for a site with an in-house central nuclear pharmacy, much of the framework could be applied by nuclear medicine facilities using unit doses. An electronic connection between information management systems to allow the tracking of a radiopharmaceutical from preparation to administration can be a useful tool in preventing the mistakes that are an unfortunate reality for any facility.
On the Bar Pattern Speed Determination of NGC 3367
NASA Astrophysics Data System (ADS)
Gabbasov, R. F.; Repetto, P.; Rosado, M.
2009-09-01
An important dynamic parameter of barred galaxies is the bar pattern speed, Ω_P. Among the several methods used to determine Ω_P, the Tremaine-Weinberg method has the advantages of model independence and accuracy. In this work, we apply the method to a simulated bar, including gas dynamics, and study the effect of two-dimensional spectroscopy data quality on the robustness of the method. We added white noise and a Gaussian random field to the data and measured the corresponding errors in Ω_P. We found that a signal-to-noise ratio in surface density of ~5 introduces errors of ~20% for the Gaussian noise, while for the white noise the corresponding errors reach ~50%. At the same time, the velocity field is less sensitive to contamination. On the basis of the performed study, we applied the method to the spiral galaxy NGC 3367 using Hα Fabry-Pérot interferometry data. We found Ω_P = 43 ± 6 km s^-1 kpc^-1 for this galaxy.
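For reference, the Tremaine-Weinberg estimator reduces to a ratio of luminosity-weighted sums along a slit parallel to the line of nodes. The sketch below uses synthetic slit data constructed to satisfy the relation exactly; the tracer density, the inclination, and the slit geometry are invented, with the pattern speed set to the value reported above only for illustration.

```python
import numpy as np

def tremaine_weinberg(x, sigma, v_los, sin_i):
    """Pattern speed from one slit parallel to the line of nodes:
    Omega_p * sin(i) = <sigma * v_los> / <sigma * x>
    (luminosity-weighted averages along the slit)."""
    return np.sum(sigma * v_los) / np.sum(sigma * x) / sin_i

x = np.linspace(-5.0, 5.0, 201)               # kpc along the slit
sigma = 1.0 + 0.5 * np.exp(-(x - 1.0) ** 2)   # asymmetric tracer density
sin_i = np.sin(np.radians(30.0))              # assumed inclination
v_los = 43.0 * sin_i * x                      # km/s, built to satisfy TW
omega_p = tremaine_weinberg(x, sigma, v_los, sin_i)
```

Because both numerator and denominator are weighted by the same surface density, noise in the density field enters both sums, which is consistent with the abstract's finding that density contamination degrades Ω_P more than velocity contamination does.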
2008-06-01
NASA's TLX (Hart & Staveland, 1987) was used to evaluate perceived task demands. In the modified version, participants were asked to estimate the... subjective workload (i.e., NASA-TLX) was assessed for each trial. Unweighted NASA-TLX ratings were submitted to a 5 (Subscale) × 2 (Communication...) [Figure 3: mean unweighted NASA-TLX ratings as a function of communication modality; error bars represent one...]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis
2017-08-01
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without "error bars," which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into "random" and "systematic" components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.
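The random/systematic partition matters because the two components propagate differently when items are combined: independent random errors add in quadrature across items, while a shared systematic bias is fully correlated and adds linearly. A minimal stdlib-only sketch of that combination rule (the function name and the fully-correlated-systematic assumption are illustrative, not from the source):

```python
import math

def total_mass_uncertainty(random_sigmas, systematic_sigmas):
    """Combine per-item uncertainties for a summed mass estimate.

    Random components are independent across items and add in quadrature;
    systematic components are assumed fully correlated across items, so
    their magnitudes add linearly before combining with the random part.
    """
    random_part = math.sqrt(sum(s ** 2 for s in random_sigmas))
    systematic_part = sum(systematic_sigmas)
    return math.sqrt(random_part ** 2 + systematic_part ** 2)

# Three items, each with 1.0 g random and 0.5 g systematic uncertainty:
# random part = sqrt(3) ~ 1.73 g, systematic part = 1.5 g, total ~ 2.29 g
sigma_total = total_mass_uncertainty([1.0, 1.0, 1.0], [0.5, 0.5, 0.5])
```

Note how the systematic term dominates as the number of items grows, which is why the partition is reported separately.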
Olsen, Aaron M; Camp, Ariel L; Brainerd, Elizabeth L
2017-12-15
The planar, one degree of freedom (1-DoF) four-bar linkage is an important model for understanding the function, performance and evolution of numerous biomechanical systems. One such system is the opercular mechanism in fishes, which is thought to function like a four-bar linkage to depress the lower jaw. While anatomical and behavioral observations suggest some form of mechanical coupling, previous attempts to model the opercular mechanism as a planar four-bar have consistently produced poor model fits relative to observed kinematics. Using newly developed, open source mechanism fitting software, we fitted multiple three-dimensional (3D) four-bar models with varying DoF to in vivo kinematics in largemouth bass to test whether the opercular mechanism functions instead as a 3D four-bar with one or more DoF. We examined link position error, link rotation error and the ratio of output to input link rotation to identify a best-fit model at two different levels of variation: for each feeding strike and across all strikes from the same individual. A 3D, 3-DoF four-bar linkage was the best-fit model for the opercular mechanism, achieving link rotational errors of less than 5%. We also found that the opercular mechanism moves with multiple degrees of freedom at the level of each strike and across multiple strikes. These results suggest that active motor control may be needed to direct the force input to the mechanism by the axial muscles and achieve a particular mouth-opening trajectory. Our results also expand the versatility of four-bar models in simulating biomechanical systems and extend their utility beyond planar or single-DoF systems. © 2017. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Zeng, Y. Y.; Guo, J. Y.; Shang, K.; Shum, C. K.; Yu, J. H.
2015-09-01
Two methods for computing the gravitational potential difference (GPD) between the GRACE satellites using orbit data have been formulated based on the energy integral: one in the geocentric inertial frame (GIF) and another in the Earth-fixed frame (EFF). Here we present a rigorous theoretical formulation in the EFF with particular emphasis on the necessary approximations, provide a computational approach to mitigate the approximations to a negligible level, and verify our approach using simulations. We conclude that a term neglected or ignored in all former work without verification should be retained. In our simulations, 2 cycle-per-revolution (CPR) errors are present in the GPD computed using our formulation, and empirical removal of the 2 CPR and lower-frequency errors can improve the precision of Stokes coefficients (SCs) of degree 3 and above by 1-2 orders of magnitude, despite the fact that the result without removing these errors is already accurate enough. Furthermore, the relation between data errors and their influence on the GPD is analysed, and a formal examination is made of the possible precision that real GRACE data may attain. The result of removing 2 CPR errors may imply that, if not taken care of properly, the values of SCs computed by means of the energy integral method using real GRACE data may be seriously corrupted by aliasing from possibly very large 2 CPR errors, based on two facts: (1) errors of C̄_{2,0} manifest as 2 CPR errors in the GPD, and (2) errors of C̄_{2,0} in GRACE data are very large (the differences between the CSR monthly values of C̄_{2,0} independently determined using GRACE and SLR are a reasonable measure of their magnitude).
Our simulations show that, if 2 CPR errors in the GPD vary from day to day as much as those corresponding to errors of C̄_{2,0} from month to month, the aliasing errors of degree 15 and above SCs computed using a month's GPD data may reach a level comparable to the magnitude of the gravitational potential variation signal that GRACE was designed to recover. Consequently, we conclude that aliasing errors from 2 CPR errors in real GRACE data may be very large if not properly handled; we therefore propose an approach to reduce aliasing errors from 2 CPR and lower-frequency errors when computing SCs above degree 2.
Metacontrast masking and attention do not interact.
Agaoglu, Sevda; Breitmeyer, Bruno; Ogmen, Haluk
2016-07-01
Visual masking and attention have been known to control the transfer of information from sensory memory to visual short-term memory. A natural question is whether these processes operate independently or interact. Recent evidence suggests that studies that reported interactions between masking and attention suffered from ceiling and/or floor effects. The objective of the present study was to investigate whether metacontrast masking and attention interact by using an experimental design in which saturation effects are avoided. We asked observers to report the orientation of a target bar randomly selected from a display containing either two or six bars. The mask was a ring that surrounded the target bar. Attentional load was controlled by set-size and masking strength by the stimulus onset asynchrony between the target bar and the mask ring. We investigated interactions between masking and attention by analyzing two different aspects of performance: (i) the mean absolute response errors and (ii) the distribution of signed response errors. Our results show that attention affects observers' performance without interacting with masking. Statistical modeling of response errors suggests that attention and metacontrast masking exert their effects by independently modulating the probability of "guessing" behavior. Implications of our findings for models of attention are discussed.
Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F
2010-06-01
Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.
NASA Astrophysics Data System (ADS)
Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph
2018-05-01
This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
High-resolution smile measurement and control of wavelength-locked QCW and CW laser diode bars
NASA Astrophysics Data System (ADS)
Rosenkrantz, Etai; Yanson, Dan; Klumel, Genady; Blonder, Moshe; Rappaport, Noam; Peleg, Ophir
2018-02-01
High-power linewidth-narrowed applications of laser diode arrays demand high beam quality in the fast, or vertical, axis. This requires very high fast-axis collimation (FAC) quality with sub-mrad angular errors, especially where laser diode bars are wavelength-locked by a volume Bragg grating (VBG) to achieve high pumping efficiency in solid-state and fiber lasers. The micron-scale height deviation of emitters in a bar against the FAC lens causes the so-called smile effect with variable beam pointing errors and wavelength locking degradation. We report a bar smile imaging setup allowing FAC-free smile measurement in both QCW and CW modes. By Gaussian beam simulation, we establish optimum smile imaging conditions to obtain high resolution and accuracy with well-resolved emitter images. We then investigate the changes in the smile shape and magnitude under thermal stresses such as variable duty cycles in QCW mode and, ultimately, CW operation. Our smile measurement setup provides useful insights into the smile behavior and correlation between the bar collimation in QCW mode and operating conditions under CW pumping. With relaxed alignment tolerances afforded by our measurement setup, we can screen bars for smile compliance and potential VBG lockability prior to assembly, with benefits in both lower manufacturing costs and higher yield.
Some conservative estimates in quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2006-08-15
A relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
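The quoted ~11% bound can be reproduced numerically: taking the right-hand side to equal 1/2 (an illustrative assumption for the BB84 qubit ensemble, not stated in the abstract) and reading H as the binary entropy function, bisection on H(Q) = 1/2 over (0, 1/2) gives Q_c ≈ 0.110. A sketch:

```python
import math

def binary_entropy(q):
    # H(q) = -q*log2(q) - (1-q)*log2(1-q), in bits
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def critical_qber(target=0.5):
    # H is monotone increasing on (0, 1/2): bisect for H(Q) = target.
    lo, hi = 1e-12, 0.5
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_c = critical_qber()  # ~0.110, i.e. the ~11% bound quoted above
```

The same root is obtained whether one solves H(Q_c) = 1/2 directly or 1 − H(Q_c) = 1/2 for the binary symmetric channel capacity, since both reduce to H(Q_c) = 1/2.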
NASA Technical Reports Server (NTRS)
Dobson, Chris C.; Jones, Jonathan E.; Chavers, Greg
2003-01-01
A polychromatic microwave quadrature interferometer has been characterized using several laboratory plasmas. Reflections between the transmitter and the receiver have been observed, and the effects of including reflection terms in the data reduction equation have been examined. An error analysis which includes the reflections, modulation of the scene beam amplitude by the plasma, and simultaneous measurements at two frequencies has been applied to the empirical database, and the results are summarized. For reflection amplitudes around 10%, the reflection terms were found to reduce the calculated error bars for electron density measurements by about a factor of 2. The impact of amplitude modulation is also quantified. In the complete analysis, the mean error bar for high-density measurements is 7.5%, and the mean phase shift error for low-density measurements is 1.2°.
NASA Astrophysics Data System (ADS)
Bala, Rajni; Mittal, Sherry; Sharma, Rohit K.; Wangoo, Nishima
2018-05-01
In the present study, we report a highly sensitive, rapid and low cost colorimetric monitoring of malathion (an organophosphate insecticide) employing a basic hexapeptide, malathion specific aptamer (oligonucleotide) and silver nanoparticles (AgNPs) as a nanoprobe. AgNPs are made to interact with the aptamer and peptide to give different optical responses depending upon the presence or absence of malathion. The nanoparticles remain yellow in color in the absence of malathion owing to the binding of aptamer with peptide which otherwise tends to aggregate the particles because of charge based interactions. In the presence of malathion, the agglomeration of the particles occurs which turns the solution orange. Furthermore, the developed aptasensor was successfully applied to detect malathion in various water samples and apple. The detection offered high recoveries in the range of 89-120% with the relative standard deviation within 2.98-4.78%. The proposed methodology exhibited excellent selectivity and a very low limit of detection i.e. 0.5 pM was achieved. The developed facile, rapid and low cost silver nanoprobe based on aptamer and peptide proved to be potentially applicable for highly selective and sensitive colorimetric sensing of trace levels of malathion in complex environmental samples.
Shu, Deming; Kearney, Steven P.; Preissner, Curt A.
2015-02-17
A method and deformation-compensated flexural pivots structured for precision linear nanopositioning stages are provided. A deformation-compensated flexural linear guiding mechanism includes a basic parallel mechanism comprising a U-shaped member and a pair of parallel bars linked to respective pairs of I-link bars, with each of the I-bars coupled by a respective pair of flexural pivots. The basic parallel mechanism includes substantially evenly distributed flexural pivots, minimizing center-shift dynamic errors.
Use the Bar Code System to Improve Accuracy of the Patient and Sample Identification.
Chuang, Shu-Hsia; Yeh, Huy-Pzu; Chi, Kun-Hung; Ku, Hsueh-Chen
2018-01-01
Timely and correct sample collection is closely related to patient safety. The sample error rate was 11.1% from January to April 2016, owing to mislabeled patient information and wrong sample containers. We developed a barcode-based "Specimen Identification System" through process reengineering of TRM, using bar code scanners, added sample container instructions, and a mobile app. In conclusion, the bar code system improved patient safety and created a green environment.
Validation of instrumentation to monitor dynamic performance of olympic weightlifters.
Bruenger, Adam J; Smith, Sarah L; Sands, William A; Leigh, Michael R
2007-05-01
The purpose of this study was to validate the accuracy and reliability of the Weightlifting Video Overlay System (WVOS) used by coaches and sport biomechanists at the United States Olympic Training Center. Static trials with the bar set at specific positions and dynamic trials of a power snatch were performed. Static and dynamic values obtained by the WVOS were compared with values obtained by tape measure and standard video kinematic analysis. Coordinate positions (horizontal [X] and vertical [Y]) were compared on both ends (left and right) of the bar. Absolute technical errors of measurement between WVOS and kinematic values were calculated (0.97 cm [left X], 0.98 cm [right X], 0.88 cm [left Y], and 0.53 cm [right Y]) for the static data. Pearson correlations for all dynamic trials exceeded r = 0.88. The greatest discrepancies between the 2 measuring systems occurred when there was twisting of the bar during the performance. This error was probably due to the location on the bar where the coordinates were measured. The WVOS appears to provide accurate position information when compared with standard kinematics; however, care must be taken in evaluating position measurements if there is a significant amount of twisting in the movement. The WVOS appears to be reliable and valid within reasonable error limits for the determination of weightlifting movement technique.
Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.
2010-01-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
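For normally distributed data, the standard error of a sample variance s² has a closed form, SE(s²) = s²·√(2/(n − 1)), which makes the abstract's sample-size warning concrete: the relative error bar on a variance estimate is about 47% at n = 10 but about 20% at n = 50. A sketch under the normality assumption (the paper's own error-bar construction may differ in detail):

```python
import math

def se_of_sample_variance(s2, n):
    # SE(s^2) = s^2 * sqrt(2 / (n - 1)) for i.i.d. normal samples
    return s2 * math.sqrt(2.0 / (n - 1))

# Relative standard error of a variance estimate at several sample sizes
# (unit variance, so the value is the relative error directly):
rel_se = {n: se_of_sample_variance(1.0, n) for n in (10, 20, 50)}
# n=10 -> ~0.47, n=20 -> ~0.32, n=50 -> ~0.20
```

With a ~47% relative error at n = 10, two variances differing by a factor of 2 have overlapping error bars, consistent with the paper's recommendation of 50+ measurements for such comparisons.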
Predicting Error Bars for QSAR Models
NASA Astrophysics Data System (ADS)
Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-09-01
Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees, and ridge regression algorithms based on 14,556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7,013 new measurements from recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble- and distance-based techniques for the other modelling approaches.
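The per-compound error bars in such models come from the Gaussian Process predictive variance: near training data the variance is small, and far from it the variance reverts to the prior, so the model flags its own unreliable predictions. A self-contained 1-D sketch with an RBF kernel (toy data and hyperparameters; the paper's models use molecular descriptors and tuned kernels):

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting, for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, ell=1.0):
    # Squared-exponential (RBF) kernel with length scale ell.
    return math.exp(-0.5 * (a - b) ** 2 / ell ** 2)

def gp_predict(xs, ys, xstar, noise=0.01):
    # GP predictive mean and variance at xstar given training data (xs, ys).
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    kstar = [rbf(x, xstar) for x in xs]
    alpha = solve(K, ys)   # K^{-1} y
    v = solve(K, kstar)    # K^{-1} k*
    mean = sum(k * a for k, a in zip(kstar, alpha))
    var = rbf(xstar, xstar) - sum(k * u for k, u in zip(kstar, v))
    return mean, var

xs, ys = [0.0, 1.0, 2.0], [0.3, 0.8, 0.2]
m_in, v_in = gp_predict(xs, ys, 1.0)     # at a training input: small variance
m_out, v_out = gp_predict(xs, ys, 10.0)  # far away: variance reverts to prior ~1
```

The variance, not just the mean, is what turns a point prediction into an error bar for each compound.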
Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y
2012-12-01
Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist, and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports, which may represent only a small proportion of the medication errors that actually take place in a hospital; hence, interpretation of the results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related, and most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. 
Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
Three-dimensional accuracy of different correction methods for cast implant bars
Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom
2014-01-01
PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for the correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and each group of ten specimens was then corrected by gas-air torch soldering, laser welding, or an additional casting technique. Three-dimensional evaluation, including horizontal, vertical, and twisting measurements, was based on measurement and comparison of (1) gap distances of the right abutment replica-gold cylinder interface at the buccal, distal, and lingual sides, (2) changes in bar length, and (3) axis angle changes of the right gold cylinders at the post-correction measurement step for the three groups, using contact and non-contact coordinate measuring machines. One-way analysis of variance (ANOVA) and paired t-tests were performed at the 5% significance level. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant differences among groups. Changes in bar length between the pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistically significant difference among the three techniques in horizontal, vertical, and axial errors, but the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error. The laser welding technique showed a large mean and standard deviation in the vertical and twisting measurements and may be a technique-sensitive method. PMID:24605205
Error-Detecting Identification Codes for Algebra Students.
ERIC Educational Resources Information Center
Sutherland, David C.
1990-01-01
Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
A Substantive Process Analysis of Responses to Items from the Multistate Bar Examination
ERIC Educational Resources Information Center
Bonner, Sarah M.; D'Agostino, Jerome V.
2012-01-01
We investigated examinees' cognitive processes while they solved selected items from the Multistate Bar Exam (MBE), a high-stakes professional certification examination. We focused on ascertaining those mental processes most frequently used by examinees, and the most common types of errors in their thinking. We compared the relationships between…
ENDF/B-IV fission-product files: summary of major nuclide data
DOE Office of Scientific and Technical Information (OSTI.GOV)
England, T.R.; Schenter, R.E.
1975-09-01
The major fission-product parameters [σ_th, RI, τ_1/2, Ē_β, Ē_γ, Ē_α, decay and (n,γ) branching, Q, and AWR] abstracted from ENDF/B-IV files for 824 nuclides are summarized. These data are most often requested by users concerned with reactor design, reactor safety, dose, and other sundry studies. The few known file errors are corrected to date. Tabular data are listed by increasing mass number. (auth)
Reading color barcodes using visual snakes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaub, Hanspeter
2004-05-01
Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal whether the target is right-side up or upside down.
Patient safety with blood products administration using wireless and bar-code technology.
Porcella, Aleta; Walker, Kristy
2005-01-01
Supported by a grant from the Agency for Healthcare Research and Quality, a University of Iowa Hospitals and Clinics interdisciplinary research team created an online data-capture-response tool utilizing wireless mobile devices and bar code technology to track and improve the blood products administration process. The tool captures 1) sample collection, 2) sample arrival in the blood bank, 3) blood product dispensing from the blood bank, and 4) administration. At each step, the scanned patient wristband ID bar code is automatically compared to the scanned identification bar code on the requisition, sample, and/or product, and the system presents either a confirmation or an error message to the user. Following an eight-month, five-unit staged pilot, a "big bang" hospital-wide implementation occurred on February 7, 2005. Preliminary results from pilot data indicate that the new bar code process captures errors 3 to 10 times better than the old manual process.
Nuttall, Gregory A; Abenstein, John P; Stubbs, James R; Santrach, Paula; Ereth, Mark H; Johnson, Pamela M; Douglas, Emily; Oliver, William C
2013-04-01
To determine whether the use of a computerized bar code-based blood identification system resulted in a reduction in transfusion errors or near-miss transfusion episodes. Our institution instituted a computerized bar code-based blood identification system in October 2006. After institutional review board approval, we performed a retrospective study of transfusion errors from January 1, 2002, through December 31, 2005, and from January 1, 2007, through December 31, 2010. A total of 388,837 U were transfused during the 2002-2005 period. There were 6 misidentification episodes of a blood product being transfused to the wrong patient during that period (incidence of 1 in 64,806 U or 1.5 per 100,000 transfusions; 95% CI, 0.6-3.3 per 100,000 transfusions). There was 1 reported near-miss transfusion episode (incidence of 0.3 per 100,000 transfusions; 95% CI, <0.1-1.4 per 100,000 transfusions). A total of 304,136 U were transfused during the 2007-2010 period. There was 1 misidentification episode of a blood product transfused to the wrong patient during that period when the blood bag and patient's armband were scanned after starting to transfuse the unit (incidence of 1 in 304,136 U or 0.3 per 100,000 transfusions; 95% CI, <0.1-1.8 per 100,000 transfusions; P=.14). There were 34 reported near-miss transfusion errors (incidence of 11.2 per 100,000 transfusions; 95% CI, 7.7-15.6 per 100,000 transfusions; P<.001). Institution of a computerized bar code-based blood identification system was associated with a large increase in discovered near-miss events. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
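The quoted intervals are consistent with exact Poisson limits on the event counts: 6 misidentification episodes in 388,837 units gives roughly 0.6-3.3 per 100,000 transfusions. A stdlib-only sketch that inverts the Poisson CDF by bisection (the study's exact CI method is not stated, so small rounding differences versus the published figures are possible):

```python
import math

def poisson_cdf(k, m):
    # P(X <= k) for X ~ Poisson(mean m)
    return sum(math.exp(-m) * m ** i / math.factorial(i) for i in range(k + 1))

def exact_poisson_ci(k, alpha=0.05):
    """Exact two-sided CI for a Poisson count k, by inverting tail probabilities."""
    def invert(f, target):
        # f is decreasing in m; bisect for the m with f(m) = target.
        lo, hi = 0.0, 10.0 * (k + 10)
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid) > target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    lower = 0.0 if k == 0 else invert(lambda m: poisson_cdf(k - 1, m), 1 - alpha / 2)
    upper = invert(lambda m: poisson_cdf(k, m), alpha / 2)
    return lower, upper

lo, hi = exact_poisson_ci(6)                        # CI on the expected count
per_100k = [100000 * x / 388837 for x in (lo, hi)]  # ~[0.57, 3.36] per 100,000
```

The same routine applied to the single post-implementation event (k = 1 in 304,136 units) reproduces an interval of the same order as the reported <0.1-1.8 per 100,000.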
Weak charge form factor and radius of 208Pb through parity violation in electron scattering
Horowitz, C. J.; Ahmed, Z.; Jen, C. -M.; ...
2012-03-26
We use distorted wave electron scattering calculations to extract the weak charge form factor F_W(q̄), the weak charge radius R_W, and the point neutron radius R_n of 208Pb from the PREX parity-violating asymmetry measurement. The form factor is the Fourier transform of the weak charge density at the average momentum transfer q̄ = 0.475 fm⁻¹. We find F_W(q̄) = 0.204 ± 0.028(exp) ± 0.001(model). We use the Helm model to infer the weak radius from F_W(q̄), finding R_W = 5.826 ± 0.181(exp) ± 0.027(model) fm. Here the exp error includes PREX statistical and systematic errors, while the model error describes the uncertainty in R_W from uncertainties in the surface thickness σ of the weak charge density. The weak radius is larger than the charge radius, implying a 'weak charge skin' where the surface region is relatively enriched in weak charges compared to (electromagnetic) charges. From R_W we extract the point neutron radius R_n = 5.751 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm. Here there is only a very small error (strange) from possible strange quark contributions. We find R_n to be slightly smaller than R_W because of the nucleon's size. As a result, we find a neutron skin thickness of R_n - R_p = 0.302 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm, where R_p is the point proton radius.
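Where the abstract infers R_W from F_W(q̄) via the Helm model, the standard two-parameter Helm form (diffraction radius R_0, surface thickness σ) is the textbook expression below, quoted for orientation rather than taken from the paper itself:

```latex
F_{\mathrm{Helm}}(q) = \frac{3\, j_1(q R_0)}{q R_0}\, e^{-\sigma^2 q^2/2},
\qquad
\langle r^2 \rangle_{\mathrm{Helm}} = \tfrac{3}{5} R_0^2 + 3\sigma^2
```

with j_1 the first spherical Bessel function; fitting R_0 to reproduce the measured F_W(q̄) at an assumed σ then yields R_W as the root-mean-square radius.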
Kim, Myoung Soo
2012-08-01
The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate, and perception of system use. The participants were 124 patient safety chief managers working at 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting, and bar-code system), and medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation, and MANCOVA were used for data analysis. Electronic pharmacopoeias had been constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving the perception of IT-based system use would add to system construction, and a positive error management climate would be more easily promoted.
Bayesian aerosol retrieval algorithm for MODIS AOD retrieval over land
NASA Astrophysics Data System (ADS)
Lipponen, Antti; Mielonen, Tero; Pitkänen, Mikko R. A.; Levy, Robert C.; Sawyer, Virginia R.; Romakkaniemi, Sami; Kolehmainen, Ville; Arola, Antti
2018-03-01
We have developed a Bayesian aerosol retrieval (BAR) algorithm for the retrieval of aerosol optical depth (AOD) over land from the Moderate Resolution Imaging Spectroradiometer (MODIS). In the BAR algorithm, we simultaneously retrieve all dark land pixels in a granule, utilize spatial correlation models for the unknown aerosol parameters, use a statistical prior model for the surface reflectance, and take into account the uncertainties due to fixed aerosol models. The retrieved parameters are total AOD at 0.55 µm, fine-mode fraction (FMF), and surface reflectances at four different wavelengths (0.47, 0.55, 0.64, and 2.1 µm). The accuracy of the new algorithm is evaluated by comparing the AOD retrievals to Aerosol Robotic Network (AERONET) AOD. The results show that the BAR significantly improves the accuracy of AOD retrievals over the operational Dark Target (DT) algorithm. A reduction of about 29 % in the AOD root mean square error and decrease of about 80 % in the median bias of AOD were found globally when the BAR was used instead of the DT algorithm. Furthermore, the fraction of AOD retrievals inside the ±(0.05+15 %) expected error envelope increased from 55 to 76 %. In addition to retrieving the values of AOD, FMF, and surface reflectance, the BAR also gives pixel-level posterior uncertainty estimates for the retrieved parameters. The BAR algorithm always results in physical, non-negative AOD values, and the average computation time for a single granule was less than a minute on a modern personal computer.
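The ±(0.05 + 15%) expected-error envelope used above to validate MODIS AOD against AERONET is a simple per-retrieval tolerance check; a sketch with invented sample pairs (not data from the paper):

```python
def within_envelope(aod_retrieved: float, aod_aeronet: float) -> bool:
    """MODIS-style expected-error check: |error| <= 0.05 + 15% of reference AOD."""
    tolerance = 0.05 + 0.15 * aod_aeronet
    return abs(aod_retrieved - aod_aeronet) <= tolerance

# Illustrative (retrieved, AERONET) pairs; the reported metric is the
# fraction of retrievals falling inside the envelope.
pairs = [(0.12, 0.10), (0.60, 0.40), (0.33, 0.30)]
frac = sum(within_envelope(r, a) for r, a in pairs) / len(pairs)
print(round(frac, 3))  # 0.667
```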
Predicting Error Bars for QSAR Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeter, Timon; Technische Universitaet Berlin, Department of Computer Science, Franklinstrasse 28/29, 10587 Berlin; Schwaighofer, Anton
2007-09-18
Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees, and ridge regression algorithms based on 14,556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the most recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance-based techniques for the other modelling approaches.
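The Gaussian Process error bars discussed above fall out of the posterior variance. A minimal numpy sketch of GP regression with an RBF kernel and fixed hyperparameters (illustrative, not the authors' models or data):

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and per-point error bar (std) of a zero-mean GP."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mean, err = gp_predict(x, y, np.array([1.5, 5.0]))
# The error bar grows for test points far from the training data:
print(err[0] < err[1])  # True
```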
NASA Astrophysics Data System (ADS)
Vieira, Daniel; Krems, Roman
2017-04-01
Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.
Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf
2014-10-27
We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
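For orientation, the Bennett acceptance-ratio estimate named above can be written as a one-dimensional self-consistency condition and solved by bisection. This is an illustrative implementation under simplifying assumptions (equal forward/reverse sample counts, works in kT units, one common sign convention), not the authors' code:

```python
import math

def bar_delta_f(w_forward, w_reverse, lo=-50.0, hi=50.0, iters=200):
    """Solve the BAR condition Sum_F f(W_F - dF) = Sum_R f(W_R + dF),
    with f(x) = 1/(1 + e^x), for the free-energy difference dF (in kT)."""
    f = lambda x: 1.0 / (1.0 + math.exp(x))
    def imbalance(df):
        return sum(f(w - df) for w in w_forward) - sum(f(w + df) for w in w_reverse)
    # imbalance(df) increases monotonically with df, so bisection converges.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Zero-dissipation sanity check: forward work +2 kT, reverse work -2 kT
# should give dF = 2 kT exactly.
print(round(bar_delta_f([2.0] * 4, [-2.0] * 4), 6))  # 2.0
```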
Visual mining business service using pixel bar charts
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Casati, Fabio
2004-06-01
Basic bar charts have long been commonly available, but they show only highly aggregated data. Finding the valuable information hidden in the data is essential to the success of business. We describe a new visualization technique called pixel bar charts, which are derived from regular bar charts. The basic idea of a pixel bar chart is to present all data values directly instead of aggregating them into a few data values. Pixel bar charts show data distribution and exceptions in addition to aggregated data. The approach is to represent each data item (e.g. a business transaction) by a single pixel in the bar chart. An attribute of each data item is encoded in the pixel color and can be accessed and drilled down to detail information as needed. Different color mappings are used to represent multiple attributes. This technique has been prototyped in three business service applications at Hewlett-Packard Laboratories: Business Operation Analysis, Sales Analysis, and Service Level Agreement Analysis. Our applications show the wide applicability and usefulness of this new idea.
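The one-pixel-per-item idea above can be sketched as packing each category's items into a width-limited pixel grid, with the pixel value standing in for the encoded color (a toy illustration; names, layout, and the bucket encoding are invented):

```python
import numpy as np

def pixel_bar(values, width=4):
    """Pack one category's items, one pixel each, into a width-limited column."""
    height = -(-len(values) // width)      # ceiling division
    grid = np.full((height, width), -1)    # -1 marks an empty pixel
    for i, v in enumerate(values):
        grid[i // width, i % width] = v
    return grid

# One pixel per transaction; the value (0-2) stands in for a color bucket.
transactions = [0, 2, 1, 1, 2, 0, 2]
bar = pixel_bar(transactions)
print(bar.shape)  # (2, 4)
```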
Fabricating CAD/CAM Implant-Retained Mandibular Bar Overdentures: A Clinical and Technical Overview.
Goo, Chui Ling; Tan, Keson Beng Choon
2017-01-01
This report describes the clinical and technical aspects of the oral rehabilitation of an edentulous patient with a knife-edge ridge in the mandibular anterior edentulous region, using implant-retained overdentures. The application of computer-aided design and computer-aided manufacturing (CAD/CAM) in the fabrication of the overdenture framework simplifies the laboratory process for the implant prostheses. The Nobel Procera CAD/CAM System was utilised to produce a lightweight titanium overdenture bar with locator attachments. It is proposed that the digital workflow of a CAD/CAM-milled implant overdenture bar avoids numerous technical steps and the possibility of casting errors involved in the conventional casting of such bars.
Publisher Correction: Role of outer surface probes for regulating ion gating of nanochannels.
Li, Xinchun; Zhai, Tianyou; Gao, Pengcheng; Cheng, Hongli; Hou, Ruizuo; Lou, Xiaoding; Xia, Fan
2018-02-08
The original version of this Article contained an error in Fig. 3. The scale bars in Figs 3c and 3d were incorrectly labelled as 50 μA. In the correct version, the scale bars are labelled as 0.5 μA. This has now been corrected in both the PDF and HTML versions of the Article.
Evaluation of force-torque displays for use with space station telerobotic activities
NASA Technical Reports Server (NTRS)
Hendrich, Robert C.; Bierschwale, John M.; Manahan, Meera K.; Stuart, Mark A.; Legendre, A. Jay
1992-01-01
Recent experiments which addressed Space Station remote manipulation tasks found that tactile force feedback (reflecting forces and torques encountered at the end-effector through the manipulator hand controller) does not improve performance significantly. Subjective response from astronaut and non-astronaut test subjects indicated that force information, provided visually, could be useful. No research exists which specifically investigates methods of presenting force-torque information visually. This experiment was designed to evaluate seven different visual force-torque displays which were found in an informal telephone survey. The displays were prototyped in the HyperCard programming environment. In a within-subjects experiment, 14 subjects nullified forces and torques presented statically, using response buttons located at the bottom of the screen. Dependent measures included questionnaire data, errors, and response time. Subjective data generally demonstrate that subjects rated variations of pseudo-perspective displays consistently better than bar graph and digital displays. Subjects commented that the bar graph and digital displays could be used, but were not compatible with using hand controllers. Quantitative data show similar trends to the subjective data, except that the bar graph and digital displays both provided good performance, perhaps due to the mapping of response buttons to display elements. Results indicate that for this set of displays, the pseudo-perspective displays generally represent a more intuitive format for presenting force-torque information.
2007-03-01
[Figure-legend fragments only; no abstract text recovered. Legends mention photomicrographs with 50-µm scale bars; ICAM-1 expression unaffected in normal islets of RIP-Tag5 pancreas; thermal (WBH) upregulation of vascular ICAM-1 expression abolished in IL-6 KO mice.]
Xiong, Jijun; Li, Chen; Jia, Pinggang; Chen, Xiaoyong; Zhang, Wendong; Liu, Jun; Xue, Chenyang; Tan, Qiulin
2015-08-31
Pressure measurements in high-temperature applications, including compressors, turbines, and others, have become increasingly critical. This paper proposes an implantable passive LC pressure sensor based on an alumina ceramic material for in situ pressure sensing in high-temperature environments. The inductance and capacitance elements of the sensor were designed independently and separated by a thermally insulating material, which is conducive to reducing the influence of the temperature on the inductance element and improving the quality factor of the sensor. In addition, the sensor was fabricated using thick film integrated technology from high-temperature materials that ensure stable operation of the sensor in high-temperature environments. Experimental results showed that the sensor accurately monitored pressures from 0 bar to 2 bar at temperatures up to 800 °C. The sensitivity, linearity, repeatability error, and hysteretic error of the sensor were 0.225 MHz/bar, 95.3%, 5.5%, and 6.2%, respectively.
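The passive LC readout described above rests on the resonance relation f = 1/(2π√(LC)): pressure deflects the capacitive element, shifting the resonant frequency. A back-of-the-envelope sketch (component values are invented for illustration, not taken from the paper):

```python
import math

def resonant_frequency_mhz(l_henry: float, c_farad: float) -> float:
    """Resonant frequency of an LC tank, in MHz."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad)) / 1e6

f0 = resonant_frequency_mhz(2e-6, 5.0e-12)  # unpressurized (assumed L, C)
f1 = resonant_frequency_mhz(2e-6, 5.4e-12)  # capacitance rises under pressure
print(f0 > f1)  # True: frequency falls as capacitance increases
```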
Ma, Pei-Luen; Jheng, Yan-Wun; Jheng, Bi-Wei; Hou, I-Ching
2017-01-01
Bar code medication administration (BCMA) can reduce medication errors and promote patient safety. This research uses a modified information systems success model (M-ISS model) to evaluate nurses' acceptance of BCMA. The results showed moderate correlations between medication administration safety (MAS) and system quality, information quality, service quality, user satisfaction, and limited satisfaction.
Allocentrically implied target locations are updated in an eye-centred reference frame.
Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P
2012-04-18
When reaching to remembered target locations following an intervening eye movement, a systematic pattern of error is found, indicating eye-centred updating of visuospatial memory. Here we investigated whether implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location and reached to the remembered "target" location. Irrespective of the type of stimulus, reaching errors to these implicit targets are gaze-dependent and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Wang, Wen; Cao, Jian; Yang, Fang; Wang, Xuelian; Zheng, Sisi; Sharshov, Kirill; Li, Laixing
2016-04-01
Elucidating the spatial dynamics and core gut microbiome of the wild bar-headed goose is of crucial importance for probiotics development that may meet the demands of bar-headed goose artificial breeding industries and accelerate the domestication of this species. However, the core microbial communities of wild bar-headed geese remain totally unknown. Here, for the first time, we present a comprehensive survey of bar-headed goose gut microbial communities by Illumina high-throughput sequencing technology using nine individuals from three distinct wintering locations in Tibet. A total of 236,676 sequences were analyzed, and 607 OTUs were identified. We show that the gut microbial communities of bar-headed geese have representatives of 14 phyla and are dominated by Firmicutes, Proteobacteria, Actinobacteria, and Bacteroidetes. The additive abundance of these four most dominant phyla was above 96% across all samples. At the genus level, the sequences represented 150 genera. A set of 19 genera were present in all samples and were considered the core gut microbiome. The seven most abundant core genera were distributed among those four dominant phyla: four (Lactococcus, Bacillus, Solibacillus, and Streptococcus) belonged to Firmicutes, while Proteobacteria, Actinobacteria, and Bacteroidetes each contained one genus (Pseudomonas, Arthrobacter, and Bacteroides, respectively). This broad survey represents the most in-depth assessment to date of the gut microbes associated with bar-headed geese. These data create a baseline for future bar-headed goose microbiology research and make an original contribution to probiotics development for bar-headed goose artificial breeding industries. © 2015 The Authors. MicrobiologyOpen published by John Wiley & Sons Ltd.
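The "core microbiome" definition used above (genera detected in every sample) is a set intersection across samples. A minimal sketch; sample names and genus sets are invented for illustration:

```python
# Genus sets per sample (hypothetical data, not the paper's 9 individuals).
samples = {
    "tibet_site1": {"Lactococcus", "Bacillus", "Pseudomonas", "Arthrobacter"},
    "tibet_site2": {"Lactococcus", "Bacillus", "Pseudomonas", "Bacteroides"},
    "tibet_site3": {"Lactococcus", "Bacillus", "Pseudomonas", "Streptococcus"},
}

# Core genera: present in every sample.
core = set.intersection(*samples.values())
print(sorted(core))  # ['Bacillus', 'Lactococcus', 'Pseudomonas']
```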
Haptic spatial matching in near peripersonal space.
Kaas, Amanda L; Mier, Hanneke I van
2006-04-01
Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.
Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J
2012-10-01
The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the stable and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.
NASA Astrophysics Data System (ADS)
Shimizu, N.; Aihara, H.; Epifanov, D.; Adachi, I.; Al Said, S.; Asner, D. M.; Aulchenko, V.; Aushev, T.; Ayad, R.; Babu, V.; Badhrees, I.; Bakich, A. M.; Bansal, V.; Barberio, E.; Bhardwaj, V.; Bhuyan, B.; Biswal, J.; Bobrov, A.; Bozek, A.; Bračko, M.; Browder, T. E.; Červenkov, D.; Chang, M.-C.; Chang, P.; Chekelian, V.; Chen, A.; Cheon, B. G.; Chilikin, K.; Cho, K.; Choi, S.-K.; Choi, Y.; Cinabro, D.; Czank, T.; Dash, N.; Di Carlo, S.; Doležal, Z.; Dutta, D.; Eidelman, S.; Fast, J. E.; Ferber, T.; Fulsom, B. G.; Garg, R.; Gaur, V.; Gabyshev, N.; Garmash, A.; Gelb, M.; Goldenzweig, P.; Greenwald, D.; Guido, E.; Haba, J.; Hayasaka, K.; Hayashii, H.; Hedges, M. T.; Hirose, S.; Hou, W.-S.; Iijima, T.; Inami, K.; Inguglia, G.; Ishikawa, A.; Itoh, R.; Iwasaki, M.; Jaegle, I.; Jeon, H. B.; Jia, S.; Jin, Y.; Joo, K. K.; Julius, T.; Kang, K. H.; Karyan, G.; Kawasaki, T.; Kiesling, C.; Kim, D. Y.; Kim, J. B.; Kim, S. H.; Kim, Y. J.; Kinoshita, K.; Kodyž, P.; Korpar, S.; Kotchetkov, D.; Križan, P.; Kroeger, R.; Krokovny, P.; Kulasiri, R.; Kuzmin, A.; Kwon, Y.-J.; Lange, J. S.; Lee, I. S.; Li, L. K.; Li, Y.; Li Gioi, L.; Libby, J.; Liventsev, D.; Masuda, M.; Merola, M.; Miyabayashi, K.; Miyata, H.; Mohanty, G. B.; Moon, H. K.; Mori, T.; Mussa, R.; Nakano, E.; Nakao, M.; Nanut, T.; Nath, K. J.; Natkaniec, Z.; Nayak, M.; Niiyama, M.; Nisar, N. K.; Nishida, S.; Ogawa, S.; Okuno, S.; Ono, H.; Pakhlova, G.; Pal, B.; Park, C. W.; Park, H.; Paul, S.; Pedlar, T. K.; Pestotnik, R.; Piilonen, L. E.; Popov, V.; Ritter, M.; Rostomyan, A.; Sakai, Y.; Salehi, M.; Sandilya, S.; Sato, Y.; Savinov, V.; Schneider, O.; Schnell, G.; Schwanda, C.; Seino, Y.; Senyo, K.; Sevior, M. E.; Shebalin, V.; Shibata, T.-A.; Shiu, J.-G.; Shwartz, B.; Sokolov, A.; Solovieva, E.; Starič, M.; Strube, J. F.; Sumisawa, K.; Sumiyoshi, T.; Tamponi, U.; Tanida, K.; Tenchini, F.; Trabelsi, K.; Uchida, M.; Uglov, T.; Unno, Y.; Uno, S.; Usov, Y.; Van Hulse, C.; Varner, G.; Vorobyev, V.; Vossen, A.; Wang, C. H.; Wang, M.-Z.; Wang, P.; Watanabe, M.; Widmann, E.; Won, E.; Yamashita, Y.; Ye, H.; Yuan, C. Z.; Zhang, Z. P.; Zhilich, V.; Zhukova, V.; Zhulanov, V.; Zupanc, A.
2018-02-01
We present a measurement of the Michel parameters of the τ lepton, η̄ and ξκ, in the radiative leptonic decay τ⁻ → ℓ⁻ν_τν̄_ℓγ using 711 fb⁻¹ of collision data collected with the Belle detector at the KEKB e⁺e⁻ collider. The Michel parameters are measured in an unbinned maximum likelihood fit to the kinematic distribution of e⁺e⁻ → τ⁺τ⁻ → (π⁺π⁰ν̄_τ)(ℓ⁻ν_τν̄_ℓγ) (ℓ = e or μ). The measured values of the Michel parameters are η̄ = -1.3 ± 1.5 ± 0.8 and ξκ = 0.5 ± 0.4 ± 0.2, where the first error is statistical and the second is systematic. This is the first measurement of these parameters. These results are consistent with the Standard Model predictions within their uncertainties and constrain the coupling constants of the generalized weak interaction.
Ha, Seung-Ryong; Song, Seung-Il; Hong, Seong-Tae; Kim, Gy-Young
2012-01-01
Implant-supported overdenture is a reliable treatment option for patients with an edentulous mandible when they have difficulty using complete dentures. Several options have been used for implant-supported overdenture attachments. Among these, the bar attachment system has greater retention and better maintainability than others. The SFI-Bar® is prefabricated and can be adjusted at chairside. Therefore, laboratory procedures such as soldering and welding are unnecessary, which leads to fewer errors and lower costs. A 67-year-old female patient presented complaining of mobility of the lower anterior teeth beneath an old denture. She had been wearing a complete denture in the maxilla and a removable partial denture in the mandible, with severe bone loss. After extracting the teeth, two implants were placed in front of the mental foramen, and the SFI-Bar® was connected. A tube bar was seated onto two adapters through large ball joints and fixation screws, connecting each implant. The length of the tube bar was adjusted according to the inter-implant distance. Then, a female part was attached to the bar beneath the new denture. This clinical report describes a two-implant-supported overdenture using the SFI-Bar® system in a mandibular edentulous patient. PMID:23236580
McCaffery, Kirsten J; Dixon, Ann; Hayen, Andrew; Jansen, Jesse; Smith, Sian; Simpson, Judy M
2012-01-01
To test optimal graphic risk communication formats for presenting small probabilities using graphics with a denominator of 1000 to adults with lower education and literacy. A randomized experimental study, which took place in adult basic education classes in Sydney, Australia. The participants were 120 adults with lower education and literacy. An experimental computer-based manipulation compared 1) pictographs in 2 forms, shaded "blocks" and unshaded "dots"; and 2) bar charts across different orientations (horizontal/vertical) and numerator size (small <100, medium 100-499, large 500-999). Accuracy (size of error) and ease of processing (reaction time) were assessed on a gist task (estimating the larger chance of survival) and a verbatim task (estimating the size of difference). Preferences for different graph types were also assessed. Accuracy on the gist task was very high across all conditions (>95%) and not tested further. For the verbatim task, optimal graph type depended on the numerator size. For small numerators, pictographs resulted in fewer errors than bar charts (blocks: odds ratio [OR] = 0.047, 95% confidence interval [CI] = 0.023-0.098; dots: OR = 0.049, 95% CI = 0.024-0.099). For medium and large numerators, bar charts were more accurate (e.g., medium dots: OR = 4.29, 95% CI = 2.9-6.35). Pictographs were generally processed faster for small numerators (e.g., blocks: 14.9 seconds v. bars: 16.2 seconds) and bar charts for medium or large numerators (e.g., large blocks: 41.6 seconds v. 26.7 seconds). Vertical formats were processed slightly faster than horizontal graphs with no difference in accuracy. Most participants preferred bar charts (64%); however, there was no relationship with performance. For adults with low education and literacy, pictographs are likely to be the best format to use when displaying small numerators (<100/1000) and bar charts for larger numerators (>100/1000).
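The odds ratios and confidence intervals reported above come from standard 2x2-table arithmetic; a sketch using the Woolf (logit) interval with illustrative counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table [[a, b], [c, d]] (errors vs correct,
    condition 1 vs condition 2), via the Woolf log-OR standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10 errors / 90 correct with pictographs,
# 40 errors / 60 correct with bar charts.
or_, lo, hi = odds_ratio_ci(10, 90, 40, 60)
print(or_ < 1)  # True: fewer errors in the first condition
```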
Studying W′ boson contributions in B̄ → D^(*)ℓ⁻ν̄_ℓ decays
NASA Astrophysics Data System (ADS)
Wang, Yi-Long; Wei, Bin; Sheng, Jin-Huan; Wang, Ru-Min; Yang, Ya-Dong
2018-05-01
Recently, the Belle collaboration reported the first measurement of the τ lepton polarization P_τ(D*) in B̄ → D*τ⁻ν̄_τ decay and a new measurement of the ratio of branching ratios R(D*), both consistent with the Standard Model (SM) predictions. These could be used to constrain New Physics (NP) beyond the SM. In this paper, we probe B̄ → D^(*)ℓ⁻ν̄_ℓ (ℓ = e, μ, τ) decays in a model-independent way and in the specific G(221) models with lepton flavour universality. Considering the theoretical uncertainties and the experimental errors at the 95% C.L., we obtain quite strong bounds on the model-independent parameters C′_LL, C′_LR, C′_RR, C′_RL, g_V, g_A, g′_V, g′_A and on the parameters of the specific G(221) models. We find that the constrained NP couplings have no obvious effects on any of the (differential) branching ratios and their ratios; nevertheless, many NP couplings have very large effects on the lepton spin asymmetries of B̄ → D^(*)ℓ⁻ν̄_ℓ decays and the forward-backward asymmetries of B̄ → D*ℓ⁻ν̄_ℓ. So we expect precision measurements of these observables to be pursued by LHCb and Belle-II.
Sine-Bar Attachment For Machine Tools
NASA Technical Reports Server (NTRS)
Mann, Franklin D.
1988-01-01
Sine-bar attachment for collets, spindles, and chucks helps machinists set up quickly for precise angular cuts that require greater precision than provided by graduations of machine tools. Machinist uses attachment to index head or carriage of milling machine or lathe relative to table or turning axis of tool. Attachment accurate to 1 minute of arc, depending on length of sine bar and precision of gauge blocks in setup. Attachment installs quickly and easily on almost any type of lathe or mill. Requires no special clamps or fixtures, and eliminates many trial-and-error measurements. More stable than improvised setups and not jarred out of position readily.
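The setup arithmetic behind any sine bar: for a bar of length L between roller centers, a gauge-block stack of height h tilts the bar by angle θ where sin θ = h / L. A sketch with illustrative values (the attachment's actual dimensions are not given above):

```python
import math

def stack_height(angle_deg: float, bar_length: float) -> float:
    """Gauge-block stack height producing the given tilt: h = L * sin(theta)."""
    return bar_length * math.sin(math.radians(angle_deg))

# 5-inch sine bar set for a 30-degree angle:
h = stack_height(30.0, 5.0)
print(round(h, 3))  # 2.5
```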
Effects of a direct refill program for automated dispensing cabinets on medication-refill errors.
Helmons, Pieter J; Dalton, Ashley J; Daniels, Charles E
2012-10-01
The effects of a direct refill program for automated dispensing cabinets (ADCs) on medication-refill errors were studied. This study was conducted in designated acute care areas of a 386-bed academic medical center. A wholesaler-to-ADC direct refill program, consisting of prepackaged delivery of medications and bar-code-assisted ADC refilling, was implemented in the inpatient pharmacy of the medical center in September 2009. Medication-refill errors in 26 ADCs from the general medicine units, the infant special care unit, the surgical and burn intensive care units, and intermediate units were assessed before and after the implementation of this program. Medication-refill errors were defined as an ADC pocket containing the wrong drug, wrong strength, or wrong dosage form. ADC refill errors decreased by 77%, from 62 errors per 6829 refilled pockets (0.91%) to 8 errors per 3855 refilled pockets (0.21%) (p < 0.0001). The predominant error type detected before the intervention was the incorrect medication (wrong drug, wrong strength, or wrong dosage form) in the ADC pocket. Of the 54 incorrect medications found before the intervention, 38 (70%) were loaded in a multiple-drug drawer. After the implementation of the new refill process, 3 of the 5 incorrect medications were loaded in a multiple-drug drawer. There were 3 instances of expired medications before and only 1 expired medication after implementation of the program. A redesign of the ADC refill process using a wholesaler-to-ADC direct refill program that included delivery of prepackaged medication and bar-code-assisted refill significantly decreased the occurrence of ADC refill errors.
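The before/after comparison above is rate arithmetic over refilled pockets; a sketch reproducing the 0.91%, 0.21%, and 77% figures from the abstract's own counts:

```python
def rate(errors: int, pockets: int) -> float:
    """Refill errors per refilled ADC pocket."""
    return errors / pockets

before = rate(62, 6829)   # pre-implementation
after = rate(8, 3855)     # post-implementation
reduction = 1 - after / before
print(round(100 * before, 2), round(100 * after, 2), round(100 * reduction))
# 0.91 0.21 77
```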
Heavy flavor decay of Zγ at CDF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timothy M. Harrington-Taber
2013-01-01
Diboson production is an important and frequently measured parameter of the Standard Model. This analysis considers the previously neglected $$p\bar{p} \to Z\gamma \to b\bar{b}$$ channel, as measured at the Collider Detector at Fermilab. Using the entire Tevatron Run II dataset, the measured result is consistent with Standard Model predictions, but the statistical error associated with this method of measurement limits the strength of this correlation.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-04
... OFFICE OF THE UNITED STATES TRADE REPRESENTATIVE Petition Under Section 302 on Access to the... with requirements for access to the German bar aptitude examination. DATES: Effective Date: April 28..., and practices of the Government of Germany regarding requirements for access to the German bar...
5 CFR 2422.12 - Timeliness of petitions seeking an election.
Code of Federal Regulations, 2010 CFR
2010-01-01
... § 2422.12 Timeliness of petitions seeking an election. (a) Election bar. Where there is no certified...) Certification bar. Where there is a certified exclusive representative of employees, a petition seeking an... (c), (d), or (e) of this section apply. (c) Bar during 5 U.S.C. 7114(c) agency head review. A...
Park, Se-yeon; Yoo, Won-gyu
2013-10-01
The aim of this study was to compare muscular activation during five different normalization techniques that induced maximal isometric contraction of the latissimus dorsi. Sixteen healthy men participated in the study. Each participant performed three repetitions each of five types of isometric exertion: (1) conventional shoulder extension in the prone position, (2) caudal shoulder depression in the prone position, (3) body lifting with shoulder depression in the seated position, (4) trunk bending to the right in the lateral decubitus position, and (5) downward bar pulling in the seated position. In most participants, maximal activation of the latissimus dorsi was observed during conventional shoulder extension in the prone position; the percentage of maximal voluntary contraction was significantly greater for this exercise than for all other normalization techniques except downward bar pulling in the seated position. Although differences in electrode placement among various electromyographic studies represent a limitation, normalization techniques for the latissimus dorsi are recommended to minimize error in assessing maximal muscular activation of the latissimus dorsi through the combined use of shoulder extension in the prone position and downward pulling. Copyright © 2013 Elsevier Ltd. All rights reserved.
Genderedness of bar drinking culture and alcohol-related harms: A multi-country study
Roberts, Sarah C. M.; Bond, Jason; Korcha, Rachael; Greenfield, Thomas K.
2012-01-01
This study explores whether associations between consuming alcohol in bars and alcohol-related harms are consistent across countries and whether country-level characteristics modify associations. We hypothesized that genderedness of bar drinking modifies associations, such that odds of harms associated with bar drinking increase more rapidly in predominantly male bar-drinking countries. Multilevel analysis was used to analyze survey data from 21 countries representing five continents from Gender, Alcohol, and Culture: An International Study (GENACIS). Bar frequency was positively associated with harms overall. Relationships between bar frequency and harms varied across country. Genderedness modified associations between bar frequency and odds of fights, marriage/relationship harms, and work harms. Findings were significant only for men. Contrary to our hypothesis, odds of harms associated with bar drinking increased less rapidly in countries where bar drinking is predominantly male. This suggests predominantly male bar drinking cultures may be protective for males who more frequently drink in bars. PMID:23710158
Positional reference system for ultraprecision machining
Arnold, Jones B.; Burleson, Robert R.; Pardue, Robert M.
1982-01-01
A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.
Positional reference system for ultraprecision machining
Arnold, J.B.; Burleson, R.R.; Pardue, R.M.
1980-09-12
A stable positional reference system for use in improving the cutting tool-to-part contour position in numerical controlled-multiaxis metal turning machines is provided. The reference system employs a plurality of interferometers referenced to orthogonally disposed metering bars which are substantially isolated from machine strain induced position errors for monitoring the part and tool positions relative to the metering bars. A microprocessor-based control system is employed in conjunction with the plurality of position interferometers and part contour description data inputs to calculate error components for each axis of movement and output them to corresponding axis drives with appropriate scaling and error compensation. Real-time position control, operating in combination with the reference system, makes possible the positioning of the cutting points of a tool along a part locus with a substantially greater degree of accuracy than has been attained previously in the art by referencing and then monitoring only the tool motion relative to a reference position located on the machine base.
Quantitative NO-LIF imaging in high-pressure flames
NASA Astrophysics Data System (ADS)
Bessler, W. G.; Schulz, C.; Lee, T.; Shin, D.-I.; Hofmann, M.; Jeffries, J. B.; Wolfrum, J.; Hanson, R. K.
2002-07-01
Planar laser-induced fluorescence (PLIF) images of NO concentration are reported in premixed laminar flames from 1 to 60 bar, exciting the A-X(0,0) band. The influence of O2 interference and gas composition, the variation with local temperature, and the effect of laser and signal attenuation by UV light absorption are investigated. Despite choosing a NO excitation and detection scheme with minimum O2-LIF contribution, this interference produces errors of up to 25% in a slightly lean 60 bar flame. The overall dependence of the inferred NO number density on temperature in the relevant (1200-2500 K) range is low (<±15%) because different effects cancel. The attenuation of laser and signal light by the combustion products CO2 and H2O is frequently neglected, yet such absorption yields errors of up to 40% in our experiment despite the small scale (8 mm flame diameter). Understanding the dynamic range for each of these corrections provides guidance to minimize errors in single-shot imaging experiments at high pressure.
The cost of implementing inpatient bar code medication administration.
Sakowski, Julie Ann; Ketchel, Alan
2013-02-01
To calculate the costs associated with implementing and operating an inpatient bar-code medication administration (BCMA) system in the community hospital setting and to estimate the cost per harmful error prevented. This is a retrospective, observational study. Costs were calculated from the hospital perspective, and a cost-consequence analysis was performed to estimate the cost per preventable adverse drug event averted. Costs were collected from financial records and key informant interviews at 4 not-for-profit community hospitals. Costs included direct expenditures on capital, infrastructure, additional personnel, and the opportunity costs of time for existing personnel working on the project. The number of adverse drug events prevented using BCMA was estimated by multiplying the number of doses administered using BCMA by the rate of harmful errors prevented by interventions in response to system warnings. Our previous work found that BCMA identified and intercepted medication errors in 1.1% of doses administered, 9% of which potentially could have resulted in lasting harm. The cost of implementing and operating BCMA, including electronic pharmacy management and drug repackaging, over 5 years is $40,000 (range: $35,600 to $54,600) per BCMA-enabled bed and $2000 (range: $1800 to $2600) per harmful error prevented. BCMA can be an effective and potentially cost-saving tool for preventing the harm and costs associated with medication errors.
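The cost-consequence arithmetic described above can be sketched as follows. The harmful-error interception rate comes from the reported figures (errors intercepted in 1.1% of doses, 9% of which were potentially harmful); the dose count and total cost below are hypothetical placeholders, not figures from the study:

```python
# Sketch of the cost-consequence calculation: harmful errors prevented per dose,
# then cost per harmful error prevented.
harmful_error_rate = 0.011 * 0.09       # harmful errors intercepted per dose (from abstract)

doses_administered = 1_000_000          # hypothetical dose count
total_cost = 2_000_000.0                # hypothetical 5-year BCMA cost, USD

errors_prevented = doses_administered * harmful_error_rate
cost_per_error_prevented = total_cost / errors_prevented
print(f"{errors_prevented:.0f} harmful errors prevented, "
      f"${cost_per_error_prevented:,.0f} per error")
```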
Star formation suppression and bar ages in nearby barred galaxies
NASA Astrophysics Data System (ADS)
James, P. A.; Percival, S. M.
2018-03-01
We present new spectroscopic data for 21 barred spiral galaxies, which we use to explore the effect of bars on disc star formation, and to place constraints on the characteristic lifetimes of bar episodes. The analysis centres on regions of heavily suppressed star formation activity, which we term `star formation deserts'. Long-slit optical spectroscopy is used to determine H β absorption strengths in these desert regions, and comparisons with theoretical stellar population models are used to determine the time since the last significant star formation activity, and hence the ages of the bars. We find typical ages of ˜1 Gyr, but with a broad range, much larger than would be expected from measurement errors alone, extending from ˜0.25 to >4 Gyr. Low-level residual star formation, or mixing of stars from outside the `desert' regions, could result in a doubling of these age estimates. The relatively young ages of the underlying populations coupled with the strong limits on the current star formation rule out a gradual exponential decline in activity, and hence support our assumption of an abrupt truncation event.
Enhancing the sensitivity to new physics in the tt¯ invariant mass distribution
NASA Astrophysics Data System (ADS)
Álvarez, Ezequiel
2012-08-01
We propose selection cuts on the LHC tt¯ production sample which should enhance the sensitivity to new physics signals in the study of the tt¯ invariant mass distribution. We show that selecting events in which the tt¯ object has little transverse and large longitudinal momentum enlarges the quark-fusion fraction of the sample and therefore increases its sensitivity to new physics which couples to quarks and not to gluons. We find that systematic error bars play a fundamental role and assume a simple model for them. We check how a non-visible new particle would become visible after the selection cuts enhance its resonance bump. A final realistic analysis should be done by the experimental groups with a correct evaluation of the systematic error bars.
Uchida, H; Sakai, T; Yamauchi, H; Hakamata, K; Shimizu, K; Yamashita, T
2016-09-21
We propose a novel scintillation detector design for positron emission tomography (PET), which has depth of interaction (DOI) capability and uses a single-ended readout scheme. The DOI detector contains a pair of crystal bars segmented using sub-surface laser engraving (SSLE). The two crystal bars are optically coupled to each other at their top segments and are coupled to two photo-sensors at their bottom segments. Initially, we evaluated the performance of different designs of single crystal bars coupled to photomultiplier tubes at both ends. We found that segmentation by SSLE results in superior performance compared to the conventional method. As the next step, we constructed a crystal unit composed of a 3 × 3 × 20 mm³ crystal bar pair, with each bar containing four layers segmented using the SSLE. We measured the DOI performance by changing the optical conditions for the crystal unit. Based on the experimental results, we then assessed the detector performance in terms of the DOI capability by evaluating the position error, energy resolution, and light collection efficiency for various crystal unit designs with different bar sizes and a different number of layers (four to seven layers). DOI encoding with small position error was achieved for crystal units composed of a 3 × 3 × 20 mm³ LYSO bar pair having up to seven layers, and with those composed of a 2 × 2 × 20 mm³ LYSO bar pair having up to six layers. The energy resolution of the segment in the seven-layer 3 × 3 × 20 mm³ crystal bar pair was 9.3%-15.5% for 662 keV gamma-rays, where the segments closer to the photo-sensors provided better energy resolution. SSLE provides high geometrical accuracy at low production cost due to the simplicity of the crystal assembly. Therefore, the proposed DOI detector is expected to be an attractive choice for practical small-bore PET systems dedicated to imaging of the brain, breast, and small animals.
Medication errors in the emergency department: a systems approach to minimizing risk.
Peth, Howard A
2003-02-01
Adverse drug events caused by medication errors represent a common cause of patient injury in the practice of medicine. Many medication errors are preventable and hence particularly tragic when they occur, often with serious consequences. The enormous increase in the number of available drugs on the market makes it all but impossible for physicians, nurses, and pharmacists to possess the knowledge base necessary for fail-safe medication practice. Indeed, the greatest single systemic factor associated with medication errors is a deficiency in the knowledge requisite to the safe use of drugs. It is vital that physicians, nurses, and pharmacists have at their immediate disposal up-to-date drug references. Patients presenting for care in EDs are usually unfamiliar to their EPs and nurses, and the unique patient factors affecting medication response and toxicity are obscured. An appropriate history, physical examination, and diagnostic workup will assist EPs, nurses, and pharmacists in selecting the safest and most optimum therapeutic regimen for each patient. EDs deliver care "24/7" and are open when valuable information resources, such as hospital pharmacists and previously treating physicians, may not be available for consultation. A systems approach to the complex problem of medication errors will help emergency clinicians eliminate preventable adverse drug events and achieve a goal of a zero-defects system, in which medication errors are a thing of the past. New developments in information technology and the advent of electronic medical records with computerized physician order entry, ward-based clinical pharmacists, and standardized bar codes promise substantial reductions in the incidence of medication errors and adverse drug events. ED patients expect and deserve nothing less than the safest possible emergency medicine service.
1994-09-01
650 B.C. in Asia Minor, coins were developed and used in acquiring goods and services. In France, during the eighteenth century, paper money made its... counterfeited. [INFO94, p. 23] Other weaknesses of bar code technology include limited data storage capability based on the bar code symbology used when...extremely accurate, with calculated error rates as low as 1 in 100 trillion, and are difficult to counterfeit. Strong magnetic fields cannot erase RF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miftakov, V
The BABAR experiment at SLAC provides an opportunity for measurement of the Standard Model parameters describing CP violation. A method of measuring the CKM matrix element $|V_{cb}|$ using inclusive semileptonic B decays in events tagged by a fully reconstructed decay of one of the B mesons is presented here. This mode is considered to be one of the most powerful approaches due to its large branching fraction, simplicity of the theoretical description, and very clean experimental signatures. Using fully reconstructed B mesons to tag the $B\bar{B}$ event, we were able to produce the spectrum and branching fraction for electron momenta $p_{C.M.S.} > 0.5$ GeV/c. Extrapolation to lower momenta has been carried out with Heavy Quark Effective Theory. The branching fractions are measured separately for charged and neutral B mesons. For 82 fb⁻¹ of data collected at BABAR we obtain: BR($B^{\pm} \to X e\bar{\nu}$) = 10.63 ± 0.24 ± 0.29%, BR($B^{0} \to X e\bar{\nu}$) = 10.68 ± 0.34 ± 0.31%, averaged BR($B \to X e\bar{\nu}$) = 10.65 ± 0.19 ± 0.27%, and a ratio of branching fractions BR($B^{\pm}$)/BR($B^{0}$) = 0.996 ± 0.039 ± 0.015 (errors are statistical and systematic, respectively). We also obtain $|V_{cb}|$ = 0.0409 ± 0.00074 ± 0.0010 ± 0.000858 (errors are statistical, systematic, and theoretical, respectively).
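The averaged branching fraction quoted in this record is roughly what an inverse-variance weighted average of the charged and neutral measurements gives when weighting by the statistical errors alone (a sketch of the standard combination, not necessarily the exact procedure used in the analysis):

```python
import math

# Inverse-variance weighted average of measurements given as (value, statistical error).
def weighted_average(measurements):
    weights = [1 / sigma**2 for _, sigma in measurements]
    avg = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
    sigma = math.sqrt(1 / sum(weights))
    return avg, sigma

# BR(B+-) and BR(B0) with their statistical errors, in percent.
avg, sigma = weighted_average([(10.63, 0.24), (10.68, 0.34)])
print(f"{avg:.2f} +- {sigma:.2f} %")
```

The central value reproduces the quoted average of 10.65%, and the combined statistical error comes out close to the quoted 0.19%.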
Errors of Measurement, Theory, and Public Policy. William H. Angoff Memorial Lecture Series
ERIC Educational Resources Information Center
Kane, Michael
2010-01-01
The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…
Pioneer-Venus radio occultation (ORO) data reduction: Profiles of 13 cm absorptivity
NASA Technical Reports Server (NTRS)
Steffes, Paul G.
1990-01-01
In order to characterize possible variations in the abundance and distribution of subcloud sulfuric acid vapor, 13 cm radio occultation signals from 23 orbits that occurred in late 1986 and 1987 (Season 10) and 7 orbits that occurred in 1979 (Season 1) were processed. The data were inverted via inverse Abel transform to produce 13 cm absorptivity profiles. Pressure and temperature profiles obtained with the Pioneer-Venus night probe and the northern probe were used along with the absorptivity profiles to infer upper limits for vertical profiles of the abundance of gaseous H2SO4. In addition to inverting the data, error bars were placed on the absorptivity profiles and H2SO4 abundance profiles using the standard propagation of errors. These error bars were developed by considering the effects of statistical errors only. The profiles show a distinct pattern with regard to latitude which is consistent with latitude variations observed in data obtained during the occultation seasons nos. 1 and 2. However, when compared with the earlier data, the recent occultation studies suggest that the amount of sulfuric acid vapor occurring at and below the main cloud layer may have decreased between early 1979 and late 1986.
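The "standard propagation of errors" mentioned above combines independent uncertainties in quadrature through the partial derivatives of the derived quantity. A generic numerical sketch (the function and values are illustrative placeholders, not the actual absorptivity retrieval):

```python
import math

# First-order propagation of independent errors through f(x1, ..., xn):
# sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2, with derivatives taken numerically
# by central differences.
def propagate(f, values, sigmas, h=1e-6):
    var = 0.0
    for i, s in enumerate(sigmas):
        hi = list(values); lo = list(values)
        hi[i] += h; lo[i] -= h
        dfdx = (f(*hi) - f(*lo)) / (2 * h)
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# Illustrative example: for a ratio, relative errors add in quadrature.
f = lambda a, b: a / b
sigma = propagate(f, [10.0, 2.0], [0.1, 0.05])
```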
Miller, Daniel F; Fortier, Christopher R; Garrison, Kelli L
2011-02-01
Bar code medication administration (BCMA) technology is gaining acceptance for its ability to prevent medication administration errors. However, studies suggest that improper use of BCMA technology can yield unsatisfactory error prevention and introduce new potential medication errors. To evaluate the incidence of high-alert medication BCMA triggers and alert types and discuss the types of nursing and pharmacy workarounds occurring with the use of BCMA technology and the electronic medication administration record (eMAR). Medication scanning and override reports from January 1, 2008, through November 30, 2008, for all adult medical/surgical units were retrospectively evaluated for high-alert medication system triggers, alert types, and override reason documentation. An observational study of nursing workarounds on an adult medicine step-down unit was performed, and an analysis of potential pharmacy workarounds affecting BCMA and the eMAR was also conducted. Seventeen percent of scanned medications triggered an error alert, of which 55% were for high-alert medications. Insulin aspart, NPH insulin, hydromorphone, potassium chloride, and morphine were the top 5 high-alert medications that generated alert messages. Clinician override reasons for alerts were documented in only 23% of administrations. Observational studies assessing nursing workarounds revealed a median of 3 clinician workarounds per administration. Specific nursing workarounds included failure to scan medications or the patient's armband and scanning the bar code once the dose had been removed from the unit-dose packaging. Analysis of pharmacy order-entry process workarounds revealed the potential for missed doses, duplicate doses, and doses being scheduled at the wrong time. BCMA has the potential to prevent high-alert medication errors by alerting clinicians through alert messages.
Nursing and pharmacy workarounds can limit the recognition of optimal safety outcomes and therefore workflow processes must be continually analyzed and restructured to yield the intended full benefits of BCMA technology. © 2011 SAGE Publications.
Bracketing mid-pliocene sea surface temperature: maximum and minimum possible warming
Dowsett, Harry
2004-01-01
Estimates of sea surface temperature (SST) from ocean cores reveal a warm phase of the Pliocene between about 3.3 and 3.0 million years ago (Ma). Pollen records from land-based cores and sections, although not as well dated, also show evidence for a warmer climate at about the same time. Increased greenhouse forcing and altered ocean heat transport are the leading candidates for the underlying cause of Pliocene global warmth. However, despite being a period of global warmth, there exists considerable variability within this interval. Two new SST reconstructions have been created to provide a climatological error bar for peak warm phases of the Pliocene. These data represent the maximum and minimum possible warming recorded within the 3.3 to 3.0 Ma interval.
Precise Determination of the 1s Lamb Shift in Hydrogen-Like Lead and Gold Using Microcalorimeters
NASA Technical Reports Server (NTRS)
Kraft-Bermuth, S.; Andrianov, V.; Bleile, A.; Echler, A.; Egelhof, P.; Grabitz, P.; Ilieva, S.; Kiselev, O.; Kilbourne, C.; McCammon, D.;
2017-01-01
Quantum electrodynamics in very strong Coulomb fields is one scope which has not yet been tested experimentally with sufficient accuracy to really determine whether the perturbative approach is valid. One sensitive test is the determination of the 1s Lamb shift in highly-charged very heavy ions. The 1s Lamb shift of hydrogen-like lead (Pb81+) and gold (Au78+) has been determined using the novel detector concept of silicon microcalorimeters for the detection of hard x-rays. The results of (260 +/- 53) eV for lead and (211 +/- 42) eV for gold are within the error bars in good agreement with theoretical predictions. To our knowledge, for hydrogen-like lead, this represents the most accurate determination of the 1s Lamb shift.
ERIC Educational Resources Information Center
Lee, Kerry; Khng, Kiat Hui; Ng, Swee Fong; Ng Lan Kong, Jeremy
2013-01-01
In Singapore, primary school students are taught to use bar diagrams to represent known and unknown values in algebraic word problems. However, little is known about students' understanding of these graphical representations. We investigated whether students use and think of the bar diagrams in a concrete or a more abstract fashion. We also…
Radial basis function network learns ceramic processing and predicts related strength and density
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.
1993-01-01
Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
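The "nodes-at-data-points" scheme described above places one basis function at each training input. A minimal sketch with toy data (the paper trained the output layer by gradient descent; for brevity this version solves the linear output weights directly by least squares, which reaches the same fit):

```python
import numpy as np

# Minimal RBF network with "nodes at data points": one Gaussian basis function
# centered on each training input; output weights solved by least squares.
def rbf_fit(X, y, width=1.0):
    # Pairwise squared distances between inputs and centers (the inputs themselves).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width**2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(Xnew, X, w, width=1.0):
    d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2)) @ w

# Toy data standing in for processing parameters (e.g. milling time, gas
# pressure) mapped to a property such as flexural strength.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [0.5, 1.5]])
y = np.array([10.0, 12.0, 9.0, 11.0])
w = rbf_fit(X, y)
```

With distinct centers the Gaussian interpolation matrix is positive definite, so the network reproduces the training targets exactly at the data points.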
Laser damage metrology in biaxial nonlinear crystals using different test beams
NASA Astrophysics Data System (ADS)
Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille
2008-01-01
Laser damage measurements in nonlinear optical crystals, in particular in biaxial crystals, may be influenced by several effects specific to these materials or greatly enhanced in them. Before discussing these effects, we address the topic of error bar determination for probability measurements. Error bars for the damage probabilities are important because nonlinear crystals are often small and expensive, so only a few sites are used for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that can modify the maximum intensity in a biaxial nonlinear crystal are focusing aberration, walk-off, and self-focusing. Depending on focusing conditions, propagation direction, polarization of the light, and the position of the focal point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing are briefly reviewed for the sake of completeness. Finally, parasitic second harmonic generation may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
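A damage probability estimated from a handful of test sites is a binomial proportion, so its error bar at a chosen confidence level can be computed with a standard binomial interval. The abstract does not spell out the exact procedure used; as an illustration, a Wilson score interval at roughly 95% confidence:

```python
import math

# Binomial confidence interval for a damage probability estimated from few
# test sites. The Wilson score interval is used here as a stand-in for the
# paper's own numerical procedure; z = 1.96 corresponds to ~95% confidence.
def wilson_interval(damaged, sites, z=1.96):
    p = damaged / sites
    denom = 1 + z**2 / sites
    center = (p + z**2 / (2 * sites)) / denom
    half = z * math.sqrt(p * (1 - p) / sites + z**2 / (4 * sites**2)) / denom
    return center - half, center + half

# 3 damaged sites out of 10 tested: the interval is wide, reflecting the
# small number of sites available on a small, expensive crystal.
lo, hi = wilson_interval(3, 10)
```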
Matus, Bethany A; Bridges, Kayla M; Logomarsino, John V
2018-06-21
Individualized feeding care plans and safe handling of milk (human or formula) are critical in promoting growth, immune function, and neurodevelopment in the preterm infant. Feeding errors and disruptions or limitations to feeding processes in the neonatal intensive care unit (NICU) are associated with negative safety events. Feeding errors include contamination of milk and delivery of incorrect or expired milk and may result in adverse gastrointestinal illnesses. The purpose of this review was to evaluate the effect(s) of centralized milk preparation, use of trained technicians, use of bar code-scanning software, and collaboration between registered dietitians and registered nurses on feeding safety in the NICU. A systematic review of the literature was completed, and 12 articles were selected as relevant to search criteria. Study quality was evaluated using the Downs and Black scoring tool. An evaluation of human studies indicated that the use of centralized milk preparation, trained technicians, bar code-scanning software, and possible registered dietitian involvement decreased feeding-associated error in the NICU. A state-of-the-art NICU includes a centralized milk preparation area staffed by trained technicians, care supported by bar code-scanning software, and utilization of a registered dietitian to improve patient safety. These resources will provide nurses more time to focus on nursing-specific neonatal care. Further research is needed to evaluate the impact of factors related to feeding safety in the NICU as well as potential financial benefits of these quality improvement opportunities.
Ali, Nadia; Peebles, David
2013-02-01
We report three experiments investigating the ability of undergraduate college students to comprehend 2 x 2 "interaction" graphs from two-way factorial research designs. Factorial research designs are an invaluable research tool widely used in all branches of the natural and social sciences, and the teaching of such designs lies at the core of many college curricula. Such data can be represented in bar or line graph form. Previous studies have shown, however, that people interpret these two graphical forms differently. In Experiment 1, participants were required to interpret interaction data in either bar or line graphs while thinking aloud. Verbal protocol analysis revealed that line graph users were significantly more likely to misinterpret the data or fail to interpret the graph altogether. The patterns of errors line graph users made were interpreted as arising from the operation of Gestalt principles of perceptual organization, and this interpretation was used to develop two modified versions of the line graph, which were then tested in two further experiments. One of the modifications resulted in a significant improvement in performance. Results of the three experiments support the proposed explanation and demonstrate the effects (both positive and negative) of Gestalt principles of perceptual organization on graph comprehension. We propose that our new design provides a more balanced representation of the data than the standard line graph for nonexpert users to comprehend the full range of relationships in two-way factorial research designs and may therefore be considered a more appropriate representation for use in educational and other nonexpert contexts.
Development of self-sensing BFRP bars with distributed optic fiber sensors
NASA Astrophysics Data System (ADS)
Tang, Yongsheng; Wu, Zhishen; Yang, Caiqian; Shen, Sheng; Wu, Gang; Hong, Wan
2009-03-01
In this paper, a new type of self-sensing basalt fiber reinforced polymer (BFRP) bar is developed using the Brillouin scattering-based distributed optic fiber sensing technique. During fabrication, an optic fiber without buffer and sheath is first reinforced as a core by braiding a dry continuous basalt fiber sheath around it so that it survives the pulling-shoving process of manufacturing the BFRP bars. The optic fiber with its dry basalt fiber sheath, embedded in the BFRP bar, is impregnated with epoxy resin during the pulling-shoving process. The bond between the optic fiber and the basalt fiber sheath, as well as between the basalt fiber sheath and the FRP bar, can thus be controlled and ensured, reducing the measuring error due to slippage between the optic fiber core and the coating. Moreover, the epoxy resin of the segments where the connection of optic fibers will be performed is left uncured by isolating these parts of the bar from heat during manufacture. Consequently, the optic fiber in these segments can easily be taken out, and the connection between optic fibers can be smoothly carried out. Finally, a series of experiments is performed to study the sensing and mechanical properties of the proposed BFRP bars. The experimental results show that the self-sensing BFRP bar is characterized not only by excellent accuracy, repeatability, and linearity for strain measurement but also by good mechanical properties.
Remote Sensing Global Surface Air Pressure Using Differential Absorption BArometric Radar (DiBAR)
NASA Technical Reports Server (NTRS)
Lin, Bing; Harrah, Steven; Lawrence, Wes; Hu, Yongxiang; Min, Qilong
2016-01-01
Tropical storms and severe weather are among the core events needing improved observations and predictions in World Meteorological Organization and NASA Decadal Survey (DS) documents, and they have major impacts on public safety and national security. This effort aims to observe surface air pressure, especially over open seas, from space using a Differential-absorption BArometric Radar (DiBAR) operating in the 50-55 gigahertz O2 absorption band. Air pressure is among the most important variables that affect atmospheric dynamics, and currently it can only be measured by limited in-situ observations over oceans. Analyses show that with the proposed space radar the errors in instantaneous (averaged) pressure estimates can be as low as approximately 4 millibars (approximately 1 millibar) under all weather conditions. With these sea level pressure measurements, forecasts of severe weather such as hurricanes will be significantly improved. Since the development of the DiBAR concept about a decade ago, the NASA Langley DiBAR research team has made substantial progress in advancing the concept. The feasibility assessment clearly shows the potential of sea surface barometry using existing radar technologies. The team has developed a DiBAR system design, fabricated a Prototype DiBAR (P-DiBAR) for proof of concept, and conducted lab, ground, and airborne P-DiBAR tests. The flight test results are consistent with the instrumentation goals. Observational system simulation experiments for space DiBAR performance, based on the existing DiBAR technology and capability, show substantial improvements in tropical storm predictions, not only for hurricane track and position but also for hurricane intensity. DiBAR measurements will lead to an unprecedented level of prediction of, and knowledge about, global extreme weather and climate conditions.
SU-E-J-192: Comparative Effect of Different Respiratory Motion Management Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakajima, Y; Kadoya, N; Ito, K
Purpose: Irregular breathing can influence the outcome of four-dimensional computed tomography imaging by causing artifacts. Audio-visual biofeedback systems with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), a simpler visual coaching technique without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing the two respiratory management systems. Methods: We collected data from eleven healthy volunteers. Bar and wave models were used as audio-visual biofeedback systems. Abches consists of a respiratory indicator marking the end of each expiration and inspiration. Respiratory variations were quantified as the root mean squared error (RMSE) of the displacement and period of the breathing cycles. Results: All coaching techniques improved respiratory variation compared to free breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86, and 0.98 ± 0.47 mm for free breathing, Abches, bar model, and wave model, respectively. Free breathing and wave model differed significantly (p < 0.05). Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18, and 0.17 ± 0.05 s for free breathing, Abches, bar model, and wave model, respectively. Free breathing and all coaching techniques differed significantly (p < 0.05). For variation in both displacement and period, the wave model was superior to free breathing, the bar model, and Abches; its average reductions in displacement and period RMSE were 27% and 47%, respectively. Conclusion: Audio-visual biofeedback reduced respiratory irregularity more effectively than Abches. Our results showed that audio-visual biofeedback combined with a wave model can potentially provide clinical benefits in respiratory management, although all techniques reduced respiratory irregularities.
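The RMSE metric used in this study is simple to reproduce. A minimal sketch, with hypothetical breathing traces standing in for the volunteers' data (the reference waveform, periods, and amplitudes are illustrative, not taken from the study):

```python
import numpy as np

def rmse(signal, reference):
    """Root mean squared error between a breathing trace and a reference."""
    signal, reference = np.asarray(signal), np.asarray(reference)
    return np.sqrt(np.mean((signal - reference) ** 2))

t = np.linspace(0.0, 20.0, 2000)                 # 20 s of motion
reference = np.sin(2 * np.pi * t / 4.0)          # guiding waveform, 4 s cycle
free = 1.2 * np.sin(2 * np.pi * t / 4.3)         # free breathing: drifting period
coached = 1.05 * np.sin(2 * np.pi * t / 4.0)     # coached: closer to the guide
```

Applying `rmse` to the two traces reproduces the qualitative finding: the coached trace deviates less from the guiding waveform than the free-breathing trace.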
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Croft, Stephen; Jarman, Kenneth D.
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without "error bars," which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of "random" and "systematic" components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the Guide to the Expression of Uncertainty in Measurement (GUM) can be used for NDA. We also propose improvements over the GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
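The random/systematic decomposition described above can be sketched numerically. The item masses and uncertainty components below are invented for illustration; the point is only that independent per-item random errors add in quadrature, while a correlated systematic (e.g. calibration) error scales with the summed mass:

```python
import numpy as np

masses = np.array([4.1, 3.7, 5.2])        # hypothetical item mass estimates, kg
u_random = np.array([0.10, 0.08, 0.12])   # per-item random standard uncertainties, kg
u_sys_rel = 0.02                          # 2% relative systematic (calibration) error

total = masses.sum()
var_random = np.sum(u_random ** 2)        # independent errors: variances add
var_sys = (u_sys_rel * total) ** 2        # fully correlated across the items
u_total = np.sqrt(var_random + var_sys)   # combined standard uncertainty ("error bar")
```

Note that the systematic term dominates here even at 2%: because it is correlated, it does not average down as more items are summed, which is exactly why the random/systematic split matters for total-mass error bars.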
Interpolating Spherical Harmonics for Computing Antenna Patterns
2011-07-01
If g_NF denotes the spline computed from the uniform partition of NF + 1 frequency points, the splines converge as O(NF^-4): ||g_NF - g||_inf <= C0 ||g^(4)||_inf NF^-4. It is also possible to estimate the error ||g - g_NF||_inf even though the function g is unknown. Table 1 compares these unknown errors ||g - g_NF||_inf to the computable estimates ||g_NF - g_2NF||_inf; the latter is a strong predictor of the unknown error. The triple bar denotes the sup-norm error over all the frequencies.
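The computable estimate described above can be reproduced with any spline library. A sketch using SciPy's `CubicSpline`; the test function and grid sizes are illustrative (the report's antenna-pattern data are not used):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_error_estimate(g, a, b, nf):
    """Return (computable estimate, true error) for the cubic spline fitted on a
    uniform partition of nf+1 points. The estimate compares that spline against
    the spline on the refined 2*nf partition, so g itself is never needed."""
    x_dense = np.linspace(a, b, 4001)            # dense grid for the sup norm
    def fit(n):
        x = np.linspace(a, b, n + 1)
        return CubicSpline(x, g(x))
    g_n, g_2n = fit(nf), fit(2 * nf)
    estimate = np.max(np.abs(g_n(x_dense) - g_2n(x_dense)))
    true_err = np.max(np.abs(g_n(x_dense) - g(x_dense)))  # known only for test g
    return estimate, true_err
```

For a smooth g both quantities shrink as O(nf^-4), and since the refined spline's error is roughly 16 times smaller, the computable estimate tracks the unknown error closely, as in the report's Table 1.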
Precision modelling of M dwarf stars: the magnetic components of CM Draconis
NASA Astrophysics Data System (ADS)
MacDonald, J.; Mullan, D. J.
2012-04-01
The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models of appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we first attempt to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundance (Y), heavy element abundance (Z), opacities and mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can the inclusion of magnetic effects in stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify.
An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ ≈ B^2/(4πγ p_gas), where B is the strength of the local vertical magnetic field. In the context of δ models in which B is not allowed to exceed a 'ceiling' of 10^6 G, we find that the revised R and L can also be fitted, within the error bars, in a finite region of the f-δ plane. The permitted values of δ near the surface lead us to estimate that the vertical field strength on the surface of CM Dra A is about 500 G, in good agreement with independent observational evidence for similar low-mass stars. Recent results for another binary with parameters close to those of CM Dra suggest that metallicity differences cannot be the dominant explanation for the bloating of the two components of CM Dra.
Stone, Will J R; Campo, Joseph J; Ouédraogo, André Lin; Meerstein-Kessel, Lisette; Morlais, Isabelle; Da, Dari; Cohuet, Anna; Nsango, Sandrine; Sutherland, Colin J; van de Vegte-Bolmer, Marga; Siebelink-Stoter, Rianne; van Gemert, Geert-Jan; Graumans, Wouter; Lanke, Kjerstin; Shandling, Adam D; Pablo, Jozelyn V; Teng, Andy A; Jones, Sophie; de Jong, Roos M; Fabra-García, Amanda; Bradley, John; Roeffen, Will; Lasonder, Edwin; Gremo, Giuliana; Schwarzer, Evelin; Janse, Chris J; Singh, Susheel K; Theisen, Michael; Felgner, Phil; Marti, Matthias; Drakeley, Chris; Sauerwein, Robert; Bousema, Teun; Jore, Matthijs M
2018-04-11
The original version of this Article contained errors in Fig. 3. In panel a, bars from a chart depicting the percentage of antibody-positive individuals in non-infectious and infectious groups were inadvertently included in place of bars depicting the percentage of infectious individuals, as described in the Article and figure legend. However, the p values reported in the Figure and the resulting conclusions were based on the correct dataset. The corrected Fig. 3a now shows the percentage of infectious individuals in antibody-negative and -positive groups, in both the PDF and HTML versions of the Article. The incorrect and correct versions of Figure 3a are also presented for comparison in the accompanying Publisher Correction as Figure 1.The HTML version of the Article also omitted a link to Supplementary Data 6. The error has now been fixed and Supplementary Data 6 is available to download.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, W. B.; Militzer, B.
In anticipation of new observational results for Jupiter's axial moment of inertia and gravitational zonal harmonic coefficients from the forthcoming Juno orbiter, we present a number of preliminary Jupiter interior models. We combine results from ab initio computer simulations of hydrogen–helium mixtures, including immiscibility calculations, with a new nonperturbative calculation of Jupiter's zonal harmonic coefficients, to derive a self-consistent model for the planet's external gravity and moment of inertia. We assume helium rain modified the interior temperature and composition profiles. Our calculation predicts zonal harmonic values to which measurements can be compared. Although some models fit the observed (pre-Juno) second- and fourth-order zonal harmonics to within their error bars, our preferred reference model predicts a fourth-order zonal harmonic whose absolute value lies above the pre-Juno error bars. This model has a dense core of about 12 Earth masses and a hydrogen–helium-rich envelope with approximately three times solar metallicity.
Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado
NASA Astrophysics Data System (ADS)
Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.
2013-05-01
The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed. The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards.
Constituent loading within the study reach is dominated by effluent from the Pennsylvania Mine, with over 50% of the Cd, Cu, Fe, Mn, and Zn loads attributable to a collapsed adit near the top of the study reach. These estimates of mass load may underestimate the effect of the Pennsylvania Mine as leakage from underground mine workings may contribute to metal loads that are currently attributed to the wetland area. This potential leakage confounds the evaluation of remedial options and additional research is needed to determine the magnitude and location of the leakage.
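The replicate load calculation lends itself to a short sketch. All numbers below are hypothetical (two sites, three replicate campaigns); loads follow the usual convention load = Q × C, and the replicate standard deviation serves as the aggregate error bar on the spatial loading profile:

```python
import numpy as np

Q = np.array([[0.12, 0.13, 0.12],        # streamflow, m^3/s (sites x replicates)
              [0.15, 0.16, 0.15]])
C = np.array([[310.0, 295.0, 320.0],     # dissolved Zn concentration, ug/L
              [520.0, 560.0, 540.0]])

# 1 ug/L * 1 m^3/s = 1 mg/s = 86.4 g/day, i.e. 0.0864 kg/day.
load = Q * C * 0.0864                    # kg/day, one value per replicate
mean_load = load.mean(axis=1)            # spatial loading profile
err_bar = load.std(axis=1, ddof=1)       # replicate std dev = aggregate error bar
```

Because each replicate campaign repeats the lab analysis, the field sampling, and samples the diel cycle at a different moment, the spread across replicates captures all three error sources in aggregate, which is the central idea of the approach.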
Contribution to the theory of propeller vibrations
NASA Technical Reports Server (NTRS)
Liebers, F
1930-01-01
This report presents a calculation of the torsional frequencies of revolving bars with allowance for the air forces, and of the flexural or bending frequencies of revolving straight or tapered bars in terms of the angular velocity of revolution. The calculation is based on Rayleigh's principle of variation. There is also a discussion of error estimation and the accuracy of the results. The author then applies the theory to screw propellers for airplanes and discusses the susceptibility of propellers to damage through vibrations due to lack of uniform loading.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biassoni, Pietro
2009-01-01
In this thesis work we have measured the following upper limits at 90% confidence level for B meson decays (in units of 10^-6), using a sample of 465.0 × 10^6 B B̄ pairs: B(B0 → ηK0) < 1.6, B(B0 → ηη) < 1.4, B(B0 → η'η') < 2.1, B(B0 → ηΦ) < 0.52, B(B0 → ηω) < 1.6, B(B0 → η'Φ) < 1.2, and B(B0 → η'ω) < 1.7. We observe no decay mode; the statistical significance of our measurements is in the range 1.3-3.5 standard deviations. We have a 3.5σ evidence for B → ηω and a 3.1σ evidence for B → η'ω. The absence of an observation of B0 → ηK0 opens an issue related to the large difference compared to the charged-mode B+ → ηK+ branching fraction, which is measured to be 3.7 ± 0.4 ± 0.1 [118]. Our results represent substantial improvements of the previous ones [109, 110, 111] and are consistent with theoretical predictions. All these results were presented at the Flavor Physics and CP Violation (FPCP) 2008 Conference, which took place in Taipei, Taiwan, and will soon be included in a paper to be submitted to Physical Review D. For the time-dependent analysis, we have reconstructed 1820 ± 48 flavor-tagged B0 → η'K0 events, using the final BABAR sample of 467.4 × 10^6 B B̄ pairs. We use these events to measure the time-dependent asymmetry parameters S and C. We find S = 0.59 ± 0.08 ± 0.02, and C = -0.06 ± 0.06 ± 0.02. A non-zero value of C would represent a directly CP-non-conserving component in B0 → η'K0, while S would equal sin2β as measured in B0 → J/ψK0_S [108], a mixing-decay interference effect, provided the decay is dominated by amplitudes with a single weak phase. The newly measured value of S can be considered in agreement with Standard Model expectations, within the experimental and theoretical uncertainties. The inconsistency of our result for S with CP conservation (S = 0) has a significance of 7.1 standard deviations (statistical and systematic included).
Our result for the direct-CP-violation parameter C is 0.9 standard deviations from zero (statistical and systematic included). Our results are in agreement with the previous ones [18]. Although the data sample is only 20% larger than that used in the previous measurement, we improved the error on S by 20% and the error on C by 14%. This is the smallest error ever achieved, by either BABAR or Belle, in a measurement of time-dependent CP-violation parameters in a b → s transition.
Defining robustness protocols: a method to include and evaluate robustness in clinical plans
NASA Astrophysics Data System (ADS)
McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.
2015-04-01
We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols aiding plan assessment. Additionally, an example of how to clinically use the resulting robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up errors in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes, resulting in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. This process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve plan robustness to set-up and range uncertainties.
Success and High Predictability of Intraorally Welded Titanium Bar in the Immediate Loading Implants
Fogli, Vaniel; Camerini, Michele; Carinci, Francesco
2014-01-01
Implant failure may be caused by micromotion and stress exerted on implants during the phase of bone healing. This is especially true for implants placed in atrophic ridges, so the primary stabilization and fixation of implants is an important goal that can also allow immediate loading and oral rehabilitation on the same day as surgery. This goal may be achieved by welding titanium bars onto implant abutments; the procedure can be performed directly in the mouth, eliminating the possibility of errors or distortions due to impressions. This paper describes a case report and the most recent data on the long-term success and high predictability of intraorally welded titanium bars in immediate loading implants. PMID:24963419
Quark fragmentation into spin-triplet S -wave quarkonium
Bodwin, Geoffrey T.; Chung, Hee Sok; Kim, U-Rae; ...
2015-04-08
We compute fragmentation functions for a quark to fragment to a quarkonium through an S-wave spin-triplet heavy quark-antiquark pair. We consider both color-singlet and color-octet heavy quark-antiquark (QQ̄) pairs. We give results for the case in which the fragmenting quark and the quark that is a constituent of the quarkonium have different flavors and for the case in which these quarks have the same flavor. Our results for the sum over all spin polarizations of the QQ̄ pairs confirm previous results. Our results for longitudinally polarized QQ̄ pairs agree with previous calculations for the same-flavor case and correct an error in a previous calculation for the different-flavor case.
A model for flexi-bar to evaluate intervertebral disc and muscle forces in exercises.
Abdollahi, Masoud; Nikkhoo, Mohammad; Ashouri, Sajad; Asghari, Mohsen; Parnianpour, Mohamad; Khalaf, Kinda
2016-10-01
This study developed and validated a lumped parameter model for the FLEXI-BAR, a popular training instrument that provides vibration stimulation. The model, which can be used in conjunction with musculoskeletal-modeling software for quantitative biomechanical analyses, consists of 3 rigid segments, 2 torsional springs, and 2 torsional dashpots. Two different sets of experiments were conducted to determine the model's key parameters, including the stiffness of the springs and the damping ratio of the dashpots. In the first set of experiments, the free vibration of the FLEXI-BAR with an initial displacement at its end was considered, while in the second set, forced oscillations of the bar were studied. The properties of the mechanical elements in the lumped parameter model were derived using a non-linear optimization algorithm that minimized the difference between the model's predictions and the experimental data. The results showed that the model is valid (8% error) and can be used for simulating exercises with the FLEXI-BAR for excitations in the range of the natural frequency. The model was then validated in combination with the AnyBody musculoskeletal modeling software, where lumbar disc, spinal muscle, and hand muscle forces were determined during different FLEXI-BAR exercise simulations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
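The parameter-identification step described above (fitting spring and damper constants by minimizing the model-vs-experiment mismatch on a free-vibration trace) can be sketched with a reduced one-segment analogue. The inertia, initial displacement, and "measured" data below are invented for illustration; the paper's model has 3 segments and 2 spring-dashpot pairs:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

I = 0.05  # kg*m^2, assumed inertia of the single rigid segment

def simulate(k, c, t):
    """Free vibration theta(t) of the segment on a torsional spring k and
    dashpot c, released from an initial angular displacement of 0.2 rad."""
    rhs = lambda _, y: [y[1], -(k * y[0] + c * y[1]) / I]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.2, 0.0], t_eval=t, rtol=1e-8)
    return sol.y[0]

t = np.linspace(0.0, 2.0, 400)
measured = simulate(25.0, 0.08, t)   # synthetic stand-in for the free-vibration test

# A coarse grid over the stiffness avoids local minima in the oscillatory
# residual; non-linear least squares then refines both parameters.
k_grid = np.linspace(5.0, 100.0, 40)
k0 = min(k_grid, key=lambda k: np.sum((simulate(k, 0.1, t) - measured) ** 2))
fit = least_squares(lambda p: simulate(p[0], p[1], t) - measured,
                    x0=[k0, 0.1], bounds=([0.0, 0.0], [1e3, 10.0]))
k_hat, c_hat = fit.x
```

On noiseless synthetic data the fit recovers the true stiffness and damping; with real traces the residual at the optimum quantifies model validity, analogous to the paper's 8% figure.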
NASA Astrophysics Data System (ADS)
Ahmadi, Bahman; Nariman-zadeh, Nader; Jamali, Ali
2017-06-01
In this article, a novel approach based on game theory is presented for multi-objective optimal synthesis of four-bar mechanisms. The multi-objective optimization problem is modelled as a Stackelberg game. The more important objective function, tracking error, is considered as the leader, and the other objective function, deviation of the transmission angle from 90° (TA), is considered as the follower. In a new approach, a group method of data handling (GMDH)-type neural network is also utilized to construct an approximate model for the rational reaction set (RRS) of the follower. Using the proposed game-theoretic approach, the multi-objective optimal synthesis of a four-bar mechanism is then cast into a single-objective optimal synthesis using the leader variables and the obtained RRS of the follower. The superiority of using the synergy game-theoretic method of Stackelberg with a GMDH-type neural network is demonstrated for two case studies on the synthesis of four-bar mechanisms.
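The Stackelberg reduction described above (solve the follower's problem to obtain its rational reaction set, then optimize the leader alone) can be illustrated with a toy bilevel problem. The quadratic objectives are invented stand-ins for the tracking error and transmission-angle deviation; the real synthesis problem has many design variables and uses a GMDH neural-network surrogate for the RRS:

```python
from scipy.optimize import minimize_scalar

# Leader objective (stand-in for tracking error):      f_L(x, y) = (x - 1)^2 + x*y
# Follower objective (stand-in for the TA deviation):  f_F(x, y) = (y - x)^2 + y^2

def follower_best_response(x):
    """Rational reaction set y*(x): the follower minimizes f_F for fixed x."""
    return minimize_scalar(lambda y: (y - x) ** 2 + y ** 2).x  # analytically x/2

def leader_cost(x):
    """Leader's single-objective cost after substituting the follower's RRS."""
    y = follower_best_response(x)
    return (x - 1.0) ** 2 + x * y

leader_opt = minimize_scalar(leader_cost).x          # Stackelberg solution x*
follower_opt = follower_best_response(leader_opt)    # follower's reply y*
```

With y*(x) = x/2 the leader cost becomes 1.5x^2 - 2x + 1, minimized at x* = 2/3 with y* = 1/3; the paper replaces the inner exact solve with the learned RRS surrogate to make the outer problem cheap.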
The intrinsic three-dimensional shape of galactic bars
NASA Astrophysics Data System (ADS)
Méndez-Abreu, J.; Costantin, L.; Aguerri, J. A. L.; de Lorenzo-Cáceres, A.; Corsini, E. M.
2018-06-01
We present the first statistical study of the intrinsic three-dimensional (3D) shape of a sample of 83 galactic bars extracted from the CALIFA survey. We use the galaXYZ code to derive the bar intrinsic shape with a statistical approach. The method uses only the geometric information (ellipticities and position angles) of bars and discs obtained from a multi-component photometric decomposition of the galaxy surface-brightness distributions. We find that bars are predominantly prolate-triaxial ellipsoids (68%), with a small fraction of oblate-triaxial ellipsoids (32%). The typical flattening (intrinsic C/A semiaxis ratio) of the bars in our sample is 0.34, which matches the typical intrinsic flattening of stellar discs at these galaxy masses well. We demonstrate that, for prolate-triaxial bars, the intrinsic shape of bars depends on the galaxy Hubble type and stellar mass (bars in massive S0 galaxies are thicker and more circular than those in less massive spirals). The bar intrinsic shape correlates with bulge, disc, and bar parameters, in particular with the bulge-to-total (B/T) luminosity ratio, disc g - r color, and central surface brightness of the bar, confirming the tight link between bars and their host galaxies. Combining the probability distributions of the intrinsic shapes of bulges and bars in our sample, we show that 52% (16%) of bulges are thicker (flatter) than the surrounding bar at the 1σ level. We suggest that these percentages might be representative of the fractions of classical and disc-like bulges in our sample, respectively.
Impact of the smoking ban on the volume of bar sales in Ireland: evidence from time series analysis.
Cornelsen, Laura; Normand, Charles
2012-05-01
This paper is the first to estimate the economic impact of a comprehensive smoking ban in all enclosed public places of work on bars in Ireland. Demand in bars, represented by a monthly index of sales volume, is explained by relative prices in bars, prices of alcohol sold in off-licences, and aggregate retail sales (ARS) as a proxy for general economic activity and incomes. The smoking ban enters the model as a step dummy, and the modelling uses an ARIMAX strategy. The results show a reduction in the volume of sales in bars of 4.6% (p<0.01) following the ban. Copyright © 2011 John Wiley & Sons, Ltd.
Critical error fields for locked mode instability in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Haye, R.J.; Fitzpatrick, R.; Hender, T.C.
1992-07-01
Otherwise stable discharges can become nonlinearly unstable to disruptive locked modes when subjected to a resonant m=2, n=1 error field from irregular poloidal field coils, as in DIII-D [Nucl. Fusion 31, 875 (1991)], or from resonant magnetic perturbation coils, as in COMPASS-C [Proceedings of the 18th European Conference on Controlled Fusion and Plasma Physics, Berlin (EPS, Petit-Lancy, Switzerland, 1991), Vol. 15C, Part II, p. 61]. Experiments in Ohmically heated deuterium discharges with q ≈ 3.5, n̄ ≈ 2 × 10^19 m^-3 and B_T ≈ 1.2 T show that a much larger relative error field (B_r21/B_T ≈ 1 × 10^-3) is required to produce a locked mode in the small, rapidly rotating plasma of COMPASS-C (R_0 = 0.56 m, f ≈ 13 kHz) than in the medium-sized plasmas of DIII-D (R_0 = 1.67 m, f ≈ 1.6 kHz), where the critical relative error field is B_r21/B_T ≈ 2 × 10^-4. This dependence of the instability threshold is explained by a nonlinear tearing theory of the interaction of resonant magnetic perturbations with rotating plasmas, which predicts that the critical error field scales as (f R_0/B_T)^(4/3) n̄^(2/3). Extrapolating from existing devices, the predicted critical field for locked modes in Ohmic discharges on the International Thermonuclear Experimental Reactor (ITER) [Nucl. Fusion 30, 1183 (1990)] (f = 0.17 kHz, R_0 = 6.0 m, B_T = 4.9 T, n̄ = 2 × 10^19 m^-3) is B_r21/B_T ≈ 2 × 10^-5.
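The quoted scaling can be checked for rough self-consistency in a few lines. Treating B_T ≈ 1.2 T as the DIII-D field is an assumption (the abstract quotes it for the experiments generally), so this is only an order-of-magnitude exercise:

```python
# Critical relative error field scaling, B_crit/B_T ∝ (f R0 / B_T)^(4/3) * nbar^(2/3),
# normalized to the DIII-D threshold of 2e-4 and extrapolated to ITER.
def scale(f_khz, r0_m, bt_t, n19):
    return (f_khz * r0_m / bt_t) ** (4.0 / 3.0) * n19 ** (2.0 / 3.0)

diii_d = scale(1.6, 1.67, 1.2, 2.0)      # assumes B_T = 1.2 T applies to DIII-D
iter_p = scale(0.17, 6.0, 4.9, 2.0)      # ITER parameters from the abstract

predicted_iter = 2e-4 * iter_p / diii_d  # same order as the quoted 2e-5
```

With these assumed inputs the extrapolation lands within a factor of a few of the abstract's quoted 2 × 10^-5 threshold, consistent with slower rotation and larger size making ITER far more sensitive to error fields.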
Intraocular foreign-body hazard during vitrectomy.
Bovino, J A; Marcus, D F
1982-03-01
We noted two instances of forceps-induced fragmentation of the bar used for scleral plug storage during vitreous surgery. The silicone bar material was adherent to the plug in both cases. Because this represents a significant intraocular foreign body hazard, the scleral plug should be carefully inspected before insertion.
Galaxy Zoo: Observing secular evolution through bars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Edmond; Faber, S. M.; Koo, David C.
In this paper, we use the Galaxy Zoo 2 data set to study the behavior of bars in disk galaxies as a function of specific star formation rate (SSFR) and bulge prominence. Our sample consists of 13,295 disk galaxies, with an overall (strong) bar fraction of 23.6% ± 0.4%, of which 1154 barred galaxies also have bar length (BL) measurements. These samples are the largest ever used to study the role of bars in galaxy evolution. We find that the likelihood of a galaxy hosting a bar is anticorrelated with SSFR, regardless of stellar mass or bulge prominence. We find that the trends of bar likelihood and BL with bulge prominence are bimodal with SSFR. We interpret these observations using state-of-the-art simulations of bar evolution that include live halos and the effects of gas and star formation. We suggest our observed trends of bar likelihood with SSFR are driven by the gas fraction of the disks, a factor demonstrated to significantly retard both bar formation and evolution in models. We interpret the bimodal relationship between bulge prominence and bar properties as being due to the complicated effects of classical bulges and central mass concentrations on bar evolution and also to the growth of disky pseudobulges by bar evolution. These results represent empirical evidence for secular evolution driven by bars in disk galaxies. This work suggests that bars are not stagnant structures within disk galaxies but are a critical evolutionary driver of their host galaxies in the local universe (z < 1).
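As a quick plausibility check (ours, not the paper's), the quoted ±0.4% uncertainty on the bar fraction is what a simple binomial standard error gives for a sample of 13,295 galaxies:

```python
import math

# Binomial standard error on a measured fraction p from n objects:
# sqrt(p * (1 - p) / n). Values taken from the abstract.
n_disks = 13295
bar_fraction = 0.236
std_err = math.sqrt(bar_fraction * (1 - bar_fraction) / n_disks)
print(f"{bar_fraction:.1%} ± {std_err:.1%}")  # 23.6% ± 0.4%
```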
NASA Technical Reports Server (NTRS)
Lin, Bing; Harrah, Steven; Lawrence, R. Wes; Hu, Yongxiang; Min, Qilong
2015-01-01
This work studies the potential of monitoring changes in tropical extreme rainfall events such as tropical storms from space using a Differential-absorption BArometric Radar (DiBAR) operating in the 50-55 gigahertz O2 absorption band to remotely measure sea surface air pressure. Air pressure is among the most important variables that affect atmospheric dynamics, and currently can only be measured by limited in-situ observations over oceans. Analyses show that with the proposed radar the errors in instantaneous (averaged) pressure estimates can be as low as approximately 5 millibars (approximately 1 millibar) under all weather conditions. With these sea level pressure measurements, the forecasts, analyses and understanding of these extreme events on both short and long time scales can be improved. Severe weather, especially hurricanes, is listed as one of the core areas needing improved observations and predictions in the WCRP (World Climate Research Program) and the NASA Decadal Survey (DS), and has major impacts on public safety and national security through disaster mitigation. Since the development of the DiBAR concept about a decade ago, our team has made substantial progress in advancing the concept. Our feasibility assessment clearly shows the potential of sea surface barometry using existing radar technologies. We have developed a DiBAR system design, fabricated a Prototype-DiBAR (P-DiBAR) for proof-of-concept, and conducted lab, ground and airborne P-DiBAR tests. The flight test results are consistent with our instrumentation goals. Observational system simulation experiments for space DiBAR performance show substantial improvements in tropical storm predictions, not only for hurricane track and position but also for hurricane intensity. DiBAR measurements will lead to an unprecedented level of prediction capability and knowledge of tropical extreme rainfall weather and climate conditions.
Hubble Space Telescope secondary mirror vertex radius/conic constant test
NASA Technical Reports Server (NTRS)
Parks, Robert
1991-01-01
The Hubble Space Telescope backup secondary mirror was tested to determine the vertex radius and conic constant. Three completely independent tests (to the same procedure) were performed. Similar measurements in the three tests were highly consistent. The values obtained for the vertex radius and conic constant were the nominal design values within the error bars associated with the tests. Visual examination of the interferometric data did not show any measurable zonal figure error in the secondary mirror.
ERIC Educational Resources Information Center
Wilkerson, Trena L.; Bryan, Tommy; Curry, Jane
2012-01-01
This article describes how using candy bars as models gives sixth-grade students a taste for learning to represent fractions whose denominators are factors of twelve. Using paper models of the candy bars, students explored and compared fractions. They noticed fewer different representations for one-third than for one-half. The authors conclude…
Vis-A-Plan /visualize a plan/ management technique provides performance-time scale
NASA Technical Reports Server (NTRS)
Ranck, N. H.
1967-01-01
Vis-A-Plan is a bar-charting technique for representing and evaluating project activities on a performance-time basis. This rectilinear method presents the logic diagram of a project as a series of horizontal time bars. It may be used supplementary to PERT or independently.
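The rectilinear bar-chart idea can be sketched in a few lines. The snippet below is a hypothetical illustration, with invented activity names and time units, rendering each project activity as a horizontal time bar on a shared scale:

```python
# Render project activities as horizontal time bars (Vis-A-Plan style sketch).
# Each activity is (name, start, end) in arbitrary time units; names and
# durations here are invented for illustration.
activities = [("design", 0, 4), ("fabricate", 3, 9), ("test", 8, 12)]

def render(acts, width=12):
    lines = []
    for name, start, end in acts:
        bar = " " * start + "#" * (end - start)  # offset, then duration
        lines.append(f"{name:<10}|{bar:<{width}}|")
    return "\n".join(lines)

print(render(activities))
```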
Burr, Tom; Croft, Stephen; Jarman, Kenneth D.
2015-09-05
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without "error bars," which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of "random" and "systematic" components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
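The split into "random" and "systematic" components can be illustrated with a standard propagation sketch (our notation, not the guide's): per-item random uncertainties add in quadrature, while a shared systematic uncertainty is fully correlated across items and therefore adds linearly before squaring.

```python
import math

# Sketch: uncertainty on the total mass of N assayed items.
# Random parts are independent -> variances add.
# Systematic parts are fully correlated -> standard deviations add,
# then the sum is squared. Numbers below are toy values.
def total_mass_uncertainty(random_sds, systematic_sds):
    random_var = sum(s ** 2 for s in random_sds)
    systematic_var = sum(systematic_sds) ** 2
    return math.sqrt(random_var + systematic_var)

# 4 items, each with 1.0 g random and 0.5 g systematic uncertainty:
sd = total_mass_uncertainty([1.0] * 4, [0.5] * 4)
print(f"{sd:.3f} g")  # 2.828 g
```

Note how the correlated systematic term dominates as the number of items grows, which is why the two components are reported separately.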
NLO QCD corrections to tt̄bb̄ production at the LHC: 1. quark-antiquark annihilation
NASA Astrophysics Data System (ADS)
Bredenstein, A.; Denner, A.; Dittmaier, S.; Pozzorini, S.
2008-08-01
The process pp → tt̄bb̄ + X represents a very important background reaction to searches at the LHC, in particular to tt̄H production where the Higgs boson decays into a bb̄ pair. A successful analysis of tt̄H at the LHC requires the knowledge of direct tt̄bb̄ production at next-to-leading order in QCD. We take the first step in this direction by calculating the next-to-leading-order QCD corrections to the subprocess initiated by qq̄ annihilation. We devote an appendix to the general issue of rational terms resulting from ultraviolet or infrared (soft or collinear) singularities within dimensional regularization. There we show that, for arbitrary processes, in the Feynman gauge, rational terms of infrared origin cancel in truncated one-loop diagrams and result only from trivial self-energy corrections.
Predicting vertical jump height from bar velocity.
García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén
2015-06-01
The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration fell below gravity (-9.81 m·s(-2)). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the National Slovenian swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) was able to explain 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) was able to explain 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity in the differences between FPV and Vtake-off was observed (r(2) = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r(2) = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: (1) Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. (2) The relationship between the point at which bar acceleration is less than -9.81 m·s(-2) and the real take-off is affected by the velocity of movement. (3) Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal for monitoring ballistic exercise performance.
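The reported Vmax regression is straightforward to apply. This sketch simply evaluates y = 16.577x - 16.384 (standard error of the estimate 1.47 cm) for a hypothetical bar velocity; the input value is invented for illustration:

```python
# Predict vertical jump height (cm) from maximum bar velocity Vmax (m/s)
# using the regression reported in the abstract (SEE = 1.47 cm).
def jump_height_from_vmax(vmax_ms):
    return 16.577 * vmax_ms - 16.384

# Hypothetical Vmax of 2.0 m/s:
print(f"{jump_height_from_vmax(2.0):.2f} cm")  # 16.77 cm
```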
Poon, Eric G; Cina, Jennifer L; Churchill, William W; Mitton, Patricia; McCrea, Michelle L; Featherstone, Erica; Keohane, Carol A; Rothschild, Jeffrey M; Bates, David W; Gandhi, Tejal K
2005-01-01
We performed a direct observation pre-post study to evaluate the impact of barcode technology on medication dispensing errors and potential adverse drug events in the pharmacy of a tertiary-academic medical center. We found that barcode technology significantly reduced the rate of target dispensing errors leaving the pharmacy by 85%, from 0.37% to 0.06%. The rate of potential adverse drug events (ADEs) due to dispensing errors was also significantly reduced by 63%, from 0.19% to 0.069%. In a 735-bed hospital where 6 million doses of medications are dispensed per year, this technology is expected to prevent about 13,000 dispensing errors and 6,000 potential ADEs per year. PMID:16779372
An infrared image based methodology for breast lesions screening
NASA Astrophysics Data System (ADS)
Morais, K. C. C.; Vargas, J. V. C.; Reisemberger, G. G.; Freitas, F. N. P.; Oliari, S. H.; Brioschi, M. L.; Louveira, M. H.; Spautz, C.; Dias, F. G.; Gasperin, P.; Budel, V. M.; Cordeiro, R. A. G.; Schittini, A. P. P.; Neto, C. D.
2016-05-01
The objective of this paper is to evaluate the potential of utilizing a structured methodology for breast lesions screening, based on infrared imaging temperature measurements of a healthy control group, to establish expected normality ranges, and of breast cancer patients previously diagnosed through biopsies of the affected regions. An analysis of the systematic error of the infrared camera skin temperature measurements was conducted in several different regions of the body, by direct comparison to high-precision thermistor temperature measurements, showing that infrared camera temperatures are consistently around 2 °C above the thermistor temperatures. Therefore, a method of conjugated gradients is proposed to eliminate the infrared camera direct temperature measurement imprecision, by calculating the temperature difference between two points to cancel out the error. The method takes into account the approximate bilateral symmetry of the human body, and compares measured dimensionless temperature difference values (Δθ̄) between two symmetric regions of the patient's breast, accounting for the breast region, the surrounding ambient temperature and the individual core temperature; in doing so, the interpretation of results for different individuals becomes simple and non-subjective. The range of normal whole-breast average dimensionless temperature differences for 101 healthy individuals was determined, and assuming that the breast temperatures exhibit a unimodal normal distribution, the healthy normal range for each region was taken to be the mean dimensionless temperature difference plus or minus twice the standard deviation of the measurements, Δθ̄ ± 2σ_Δθ̄, in order to represent 95% of the population. Forty-seven patients with breast cancer previously diagnosed through biopsies were examined with the method, which was capable of detecting breast abnormalities in 45 cases (96%).
Therefore, the conjugated gradients method was considered effective in breast lesions screening through infrared imaging in order to recommend a biopsy, even with the use of a low optical resolution camera (160 × 120 pixels) and a thermal resolution of 0.1 °C, whose results were compared to the results of a higher resolution camera (320 × 240 pixels). The main conclusion is that the results demonstrate that the method has potential for utilization as a noninvasive screening exam for individuals with breast complaints, indicating whether the patient should be submitted to a biopsy or not.
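The screening rule described above reduces to a simple interval test: a region is flagged when its dimensionless temperature difference falls outside the healthy-group mean ± 2 standard deviations (covering about 95% of a normal population). The numeric values below are invented for illustration:

```python
# Flag a breast region as abnormal (candidate for biopsy) when its
# dimensionless temperature difference lies outside mean ± 2*sd of the
# healthy control group. All numbers here are illustrative, not the
# study's measured values.
def flag_region(delta_theta, healthy_mean, healthy_sd):
    lo = healthy_mean - 2 * healthy_sd
    hi = healthy_mean + 2 * healthy_sd
    return not (lo <= delta_theta <= hi)

print(flag_region(0.30, healthy_mean=0.05, healthy_sd=0.04))  # True -> flag
print(flag_region(0.06, healthy_mean=0.05, healthy_sd=0.04))  # False
```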
Visual short-term memory deficits associated with GBA mutation and Parkinson's disease.
Zokaei, Nahid; McNeill, Alisdair; Proukakis, Christos; Beavan, Michelle; Jarman, Paul; Korlipara, Prasad; Hughes, Derralynn; Mehta, Atul; Hu, Michele T M; Schapira, Anthony H V; Husain, Masud
2014-08-01
Individuals with mutation in the lysosomal enzyme glucocerebrosidase (GBA) gene are at significantly elevated risk of developing Parkinson's disease with cognitive deficit. We examined whether visual short-term memory impairments, long associated with patients with Parkinson's disease, are also present in GBA-positive individuals, both with and without Parkinson's disease. Precision of visual working memory was measured using a serial order task in which participants observed four bars, each of a different colour and orientation, presented sequentially at screen centre. Afterwards, they were asked to adjust a coloured probe bar's orientation to match the orientation of the bar of the same colour in the sequence. An additional attentional 'filtering' condition tested patients' ability to selectively encode one of the four bars while ignoring the others. A sensorimotor task using the same stimuli controlled for perceptual and motor factors. There was a significant deficit in memory precision in GBA-positive individuals, with or without Parkinson's disease, as well as in GBA-negative patients with Parkinson's disease, compared to healthy controls. The worst recall was observed in GBA-positive cases with Parkinson's disease. Although all groups were impaired in visual short-term memory, there was a double dissociation between sources of error associated with GBA mutation and Parkinson's disease. The deficit observed in GBA-positive individuals, regardless of whether they had Parkinson's disease, was explained by a systematic increase in interference from features of other items in memory: misbinding errors. In contrast, impairments in patients with Parkinson's disease, regardless of GBA status, were explained by increased random responses. Individuals who were GBA-positive and also had Parkinson's disease suffered from both types of error, demonstrating the worst performance.
These findings provide evidence for dissociable signature deficits within the domain of visual short-term memory associated with GBA mutation and with Parkinson's disease. Identification of the specific pattern of cognitive impairment in GBA mutation versus Parkinson's disease is potentially important as it might help to identify individuals at risk of developing Parkinson's disease. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abulencia, A.; Acosta, D.; Adelman, Jahred A.
2006-01-01
The authors present the first observation of the baryon decay Λ_b^0 → Λ_c^+ π^-, followed by Λ_c^+ → p K^- π^+, in 106 pb^-1 of p p̄ collisions at √s = 1.96 TeV in the CDF experiment. In order to reduce systematic error, the measured rate for Λ_b^0 decay is normalized to the kinematically similar meson decay B̄^0 → D^+ π^-, followed by D^+ → π^+ K^- π^+. They report the ratio of production cross sections (σ) times the ratio of branching fractions (B) for the momentum region integrated above p_T > 6 GeV/c and pseudorapidity range |η| < 1.3: σ(p p̄ → Λ_b^0 X)/σ(p p̄ → B̄^0 X) × B(Λ_b^0 → Λ_c^+ π^-)/B(B̄^0 → D^+ π^-) = 0.82 ± 0.08(stat) ± 0.11(syst) ± 0.22(B(Λ_c^+ → p K^- π^+)).
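The three quoted uncertainty components (statistical, systematic, and the branching-fraction term) are independent, so a common way to quote a single overall error bar is to add them in quadrature. This combination is our illustration, not something the abstract itself performs:

```python
import math

# Combine independent uncertainty components in quadrature to form a
# single error bar on the measured ratio 0.82.
stat, syst, branching = 0.08, 0.11, 0.22
total = math.sqrt(stat ** 2 + syst ** 2 + branching ** 2)
print(f"0.82 ± {total:.2f}")  # 0.82 ± 0.26
```

The branching-fraction term dominates, which is why it is quoted separately: it can be updated as the Λc branching fraction becomes better measured.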
Partial entrainment of gravel bars during floods
Konrad, Christopher P.; Booth, Derek B.; Burges, Stephen J.; Montgomery, David R.
2002-01-01
Spatial patterns of bed material entrainment by floods were documented at seven gravel bars using arrays of metal washers (bed tags) placed in the streambed. The observed patterns were used to test a general stochastic model that bed material entrainment is a spatially independent, random process where the probability of entrainment is uniform over a gravel bar and a function of the peak dimensionless shear stress τ0* of the flood. The fraction of tags missing from a gravel bar during a flood, or partial entrainment, had an approximately normal distribution with respect to τ0* with a mean value (50% of the tags entrained) of 0.085 and standard deviation of 0.022 (root‐mean‐square error of 0.09). Variation in partial entrainment for a given τ0* demonstrated the effects of flow conditioning on bed strength, with lower values of partial entrainment after intermediate magnitude floods (0.065 < τ0* < 0.08) than after higher magnitude floods. Although the probability of bed material entrainment was approximately uniform over a gravel bar during individual floods and independent from flood to flood, regions of preferential stability and instability emerged at some bars over the course of a wet season. Deviations from spatially uniform and independent bed material entrainment were most pronounced for reaches with varied flow and in consecutive floods with small to intermediate magnitudes.
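The stochastic model tested above implies an expected entrainment fraction given by a normal cumulative distribution in τ0*. A minimal sketch using the fitted mean (0.085) and standard deviation (0.022) from the abstract:

```python
import math

# Expected partial entrainment (fraction of bed tags mobilized) as a
# normal CDF of the flood's peak dimensionless shear stress τ0*,
# with the mean and standard deviation reported in the abstract.
def partial_entrainment(tau_star, mu=0.085, sigma=0.022):
    return 0.5 * (1.0 + math.erf((tau_star - mu) / (sigma * math.sqrt(2.0))))

print(partial_entrainment(0.085))            # 0.5 at the mean
print(round(partial_entrainment(0.107), 3))  # one sigma above: 0.841
```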
NASA Astrophysics Data System (ADS)
Prokocki, E.; Best, J.; Ashworth, P. J.; Parsons, D. R.; Sambrook Smith, G.; Nicholas, A. P.; Simpson, C.; Wang, H.; Sandbach, S.; Keevil, C.
2015-12-01
Optically stimulated luminescence (OSL) dating of four deep sediment cores (≤ 20 m depth), in conjunction with shallow vibracores (≤ 6 m depth), obtained from mid-channel bars in the lower Columbia River (LCR), USA, provides new insights into the mid-Holocene to present geomorphic and coupled sedimentological evolution of the LCR fluvial-tidal zone. These data reveal that the relatively coarse-grained basal sediments of mid-channel bars positioned across the LCR tidal-fluvial hydraulic regime were deposited at c. 2.5 to 2.0 ka, and not at c. 8.0 ka as previously reported. These younger depositional ages of basal sediments, coupled with the overall sedimentary architecture of the bars and the absence of a temporal lag in the timing of basal sedimentation between bars located from river kilometer 51.1 to 29.3, challenge existing models holding that these bars represent: (a) estuarine tidal bars, or (b) bay-head deltaic deposits. Within the context of post-glacial Holocene sea-level rise, our results suggest these bars represent vertical construction of an LCR fluvial top-set from c. 2.5-2.0 ka to the present, as the regional rate of sea-level rise slowed to ≤ 1.4 mm yr^-1. Within this geomorphic context, two tidal-fluvial sedimentological signatures can be identified: (i) in the downstream direction, basal bar deposits incorporate a larger percentage of finer-grained interbeds, and (ii) vertically stacked, silt/very-fine-sand-draped current ripple cross-laminae become prevalent from approximately 5 m in depth to the bar surfaces. The preservation of finer-grained interbeds within basal bar deposits is reasoned to be caused by the flocculation and settling of suspended sediment enhanced by the turbidity maximum.
The stacked draped current ripple cross-laminae are interpreted to result from tidal currents generating asymmetric current ripples that were draped by fine sediment entrained by wind waves, which fell out of suspension during reduced wave activity, slackwater intervals, and periods when the turbidity maximum was active.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peche, Roberto, E-mail: roberto.peche@ehu.es; Rodriguez, Esther, E-mail: esther.rodriguez@ehu.e
This study shows the practical application of the EIA method based on fuzzy logic proposed by the authors (Peche and Rodriguez, 2009) to a simplified case study: the activity of a petrol station throughout its exploitation. The intensity (p_1), the extent (p_2) and the persistence (p_3) were the properties selected to describe the impacts, and their respective assessment functions v̄_i = f(p̄_i) were determined. The main actions (A) and potentially affected environmental factors (F) were selected. Every impact was identified by a pair A-F, and the values of the three impact properties were estimated for each of them by means of triangular fuzzy numbers. Subsequently, the fuzzy estimation of every impact was carried out, the estimation of the impact A_1-F_2 (V̄_1) being explained in detail. Every impact was simultaneously represented by its corresponding generalised confidence interval and membership function. Since the membership functions of all impacts were similar to triangular fuzzy numbers, a triangular approach (TA) was used to describe every impact. A triangular approach coefficient (TAC) was introduced to quantify the similarity of each fuzzy number and its corresponding triangular approach, where TAC(V̄) ∈ (0, 1], TAC being 1 when the fuzzy number is triangular. The TACs, ranging from 0.96 to 0.99, proved that TAs were valid in all cases. Next, the total positive and negative impacts, TV̄^+ and TV̄^-, were calculated, and the fuzzy value of the total environmental impact TV̄ was then determined from them. Finally, the defuzzification of TV̄ led to the punctual impact estimator TV^(1) = -88.50 and its corresponding uncertainty interval [δ_l(TV̄), δ_r(TV̄)] = [6.52, 6.96], which together represent the total value of the EI.
In conclusion, the EIA method enabled the integration of heterogeneous impacts, which exerted influence on environmental factors of a very diverse nature in very different ways, into a global impact indicator.
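A minimal sketch of the triangular fuzzy numbers used above, together with a simple centroid defuzzification (our illustration; the authors' exact defuzzification scheme may differ). The numeric values are invented, chosen only to echo the scale of the reported TV^(1):

```python
# A triangular fuzzy number (a, b, c) has membership 0 outside [a, c],
# rising linearly to 1 at the peak b. Centroid defuzzification collapses
# it to a single point estimate, (a + b + c) / 3.
def membership(x, a, b, c):
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 1.0 if x == b else 0.0

def centroid(a, b, c):
    return (a + b + c) / 3.0

print(centroid(-95.0, -88.5, -82.0))            # -88.5 (symmetric triangle)
print(membership(-91.75, -95.0, -88.5, -82.0))  # 0.5, halfway up left side
```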
NASA Astrophysics Data System (ADS)
Hu, C.; Zhang, Y.; Jiang, Z.; Algeo, T. J.; Wang, M.; Lei, H.
2017-12-01
Poyang Lake formed as a continental faulted basin along with the changing geological environment of the Quaternary. Songmenshan Island lies within the lake, and its shore offers many examples of modern coastal deposits: typical modern beach bar deposits are abundant, and the plan shapes of the beach bars are clearly visible on the Songmenshan Island shore in the center of Poyang Lake. This article comprehensively investigates these modern coastal beach bar deposits through geological surveying, drawing on Komar's research on rhythmic topography and on the littoral-zone wave model of Friedman and Sanders. The controlling factors of the modern coastal beach bar sedimentary system and the transformation relationships among differently shaped beach bars are analyzed. The study shows that the beach bar can be divided into five microfacies based on the differently shaped sand bodies of the modern coast. Waves, generated by the wind, are the main controlling factor of the modern coastal beach bar deposits, based on environmental, climatic and wind data from Poyang Lake. Among the five types of beach bar, 35 types of transformation relationship under different waves were identified. The resulting modern coastal sedimentary model, which includes a beach bar influenced by waves and the transformation relationships among the five kinds of beach bar, is representative of continental faulted lake basins.
Driving out errors through tight integration between software and automation.
Reifsteck, Mark; Swanson, Thomas; Dallas, Mary
2006-01-01
A clear case has been made for using clinical IT to improve medication safety, particularly bar-code point-of-care medication administration and computerized practitioner order entry (CPOE) with clinical decision support. The equally important role of automation has been overlooked. When the two are tightly integrated, with pharmacy information serving as a hub, the distinctions between software and automation become blurred. A true end-to-end medication management system drives out errors from the dockside to the bedside. Presbyterian Healthcare Services in Albuquerque has been building such a system since 1999, beginning by automating pharmacy operations to support bar-coded medication administration. Encouraged by those results, it then began layering on software to further support clinician workflow and improve communication, culminating with the deployment of CPOE and clinical decision support. This combination, plus a hard-wired culture of safety, has resulted in a dramatically lower mortality and harm rate that could not have been achieved with a partial solution.
Machine learning models for lipophilicity and their domain of applicability.
Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-01-01
Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity usually have been trained on small and neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm, a Gaussian process model, this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from recent months (including compounds from new projects), 81% were predicted correctly within 1 log unit, compared to only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model-based, ensemble-based, and distance-based approaches), and investigate how well they quantify the domain of applicability of each model.
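The headline metric quoted above, the fraction of predictions within 1 log unit of the measured value, can be sketched as follows (the data values are invented, not the study's):

```python
# Fraction of predictions within a tolerance of the measured logD.
# Toy predicted/measured pairs for illustration only.
def fraction_within(preds, measured, tol=1.0):
    hits = sum(1 for p, m in zip(preds, measured) if abs(p - m) <= tol)
    return hits / len(preds)

print(fraction_within([1.0, 2.0, 3.5], [1.5, 3.2, 3.6]))  # 2 of 3 within 1 log unit
```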
Active Sensing Air Pressure Using Differential Absorption Barometric Radar
NASA Astrophysics Data System (ADS)
Lin, B.
2016-12-01
Tropical storms and other severe weather cause huge losses of life and property and have major impacts on public safety and national security. Their observation and prediction need to be significantly improved. This effort aims to develop a feasible active microwave approach that measures surface air pressure, especially over open seas, from space using a Differential-absorption BArometric Radar (DiBAR) operating in the 50-55 GHz O2 absorption band, in order to constrain the assimilated dynamic fields of numerical weather prediction (NWP) models close to actual conditions. Air pressure is the most important variable that drives atmospheric dynamics, and currently can only be measured by limited in-situ observations over oceans. Even over land there is no uniform coverage of surface air pressure measurements. Analyses show that with the proposed space radar the errors in instantaneous (averaged) pressure estimates can be as low as 4 mb (1 mb) under all weather conditions. The NASA Langley research team has made substantial progress in advancing the DiBAR concept. The feasibility assessment clearly shows the potential of surface barometry using existing radar technologies. The team has also developed a DiBAR system design, fabricated a Prototype-DiBAR (P-DiBAR) for proof-of-concept, and conducted laboratory, ground and airborne P-DiBAR tests. The flight test results are consistent with the instrumentation goals. The precision and accuracy of radar surface pressure measurements are within the range of the theoretical analysis of the DiBAR concept. Observational system simulation experiments for space DiBAR performance, based on the existing DiBAR technology and capability, show substantial improvements in tropical storm predictions, not only for hurricane track and position but also for hurricane intensity. DiBAR measurements will provide an unprecedented level of prediction capability and knowledge of global extreme weather and climate conditions.
Marion, G.M.; Kargel, J.S.; Catling, D.C.; Jakubowski, S.D.
2005-01-01
Pressure plays a critical role in controlling aqueous geochemical processes in deep oceans and deep ice. The putative ocean of Europa could have pressures of 1200 bars or higher on the seafloor, a pressure not dissimilar to the deepest ocean basin on Earth (the Mariana Trench, at 1100 bars of pressure). At such high pressures, chemical thermodynamic relations need to explicitly consider pressure. A number of papers have addressed the role of pressure on equilibrium constants, activity coefficients, and the activity of water. None of these models deal, however, with processes at subzero temperatures, which may be important in cold environments on Earth and other planetary bodies. The objectives of this work were to (1) incorporate a pressure dependence into an existing geochemical model parameterized for subzero temperatures (FREZCHEM), (2) validate the model, and (3) simulate pressure-dependent processes on Europa. As part of objective 1, we examined two models for quantifying the volumetric properties of liquid water at subzero temperatures: one model is based on the measured properties of supercooled water, and the other is based on the properties of liquid water in equilibrium with ice. The relative effect of pressure on solution properties falls in the order: equilibrium constants (K) > activity coefficients (γ) > activity of water (a_w). The errors (%) in our model associated with these properties, however, fall in the order: γ > K > a_w. The transposition between K and γ is due to a more accurate model for estimating K than for estimating γ. Only activity coefficients are likely to be significantly in error. However, even in this case, the errors are likely to be only in the range of 2 to 5% up to 1000 bars of pressure.
Evidence based on the pressure/temperature melting of ice and salt solution densities argues in favor of the equilibrium water model, which depends on extrapolations, for characterizing the properties of liquid water in electrolyte solutions at subzero temperatures, rather than the supercooled water model. Model-derived estimates of mixed salt solution densities and chemical equilibria as a function of pressure are in reasonably good agreement with experimental measurements. To demonstrate the usefulness of this low-temperature, high-pressure model, we examined two hypothetical cases for Europa. Case 1 dealt with the ice cover of Europa, where we asked the question: How far above the putative ocean in the ice layer could we expect to find thermodynamically stable brine pockets that could serve as habitats for life? For a hypothetical nonconvecting 20 km icy shell, this potential life zone extends only 2.8 km into the icy shell before the eutectic is reached. For the case of a nonconvecting icy shell, the cold surface of Europa precludes stable aqueous phases (habitats for life) anywhere near the surface. Case 2 compared chemical equilibria at 1 bar (based on previous work) with a more realistic 1460 bars of pressure at the base of a 100 km Europan ocean. A pressure of 1460 bars, compared to 1 bar, caused a 12 K decrease in the temperature at which ice first formed and an 11 K increase in the temperature at which MgSO4·12H2O first formed. Remarkably, there was only a 1.2 K decrease in the eutectic temperatures between 1 and 1460 bars of pressure. Chemical systems and their response to pressure depend, ultimately, on the volumetric properties of individual constituents, which makes every system response highly individualistic. Copyright © 2005 Elsevier Ltd.
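The pressure dependence of equilibrium constants discussed above can be sketched with the generic leading-order thermodynamic relation ln K(P) = ln K(P0) - ΔV°(P - P0)/(RT). This is only an illustration: FREZCHEM also carries compressibility terms, and the numbers below are invented, not taken from the paper.

```python
def ln_k_at_pressure(ln_k_ref, delta_v_cm3_mol, pressure_bar, temp_k, p_ref_bar=1.0):
    """Leading-order pressure correction of an equilibrium constant.

    Neglects compressibility: ln K(P) = ln K(P0) - dV*(P - P0)/(R*T).
    delta_v_cm3_mol is the standard molar volume change of reaction.
    """
    R = 83.145  # gas constant in cm^3 bar / (mol K)
    return ln_k_ref - delta_v_cm3_mol * (pressure_bar - p_ref_bar) / (R * temp_k)

# Hypothetical dissolution reaction with dV = -40 cm^3/mol at 260 K and 1000 bar:
# a negative volume change makes dissolution more favorable at depth.
lnk = ln_k_at_pressure(ln_k_ref=-5.0, delta_v_cm3_mol=-40.0,
                       pressure_bar=1000.0, temp_k=260.0)
```

The sign behavior matches the physical expectation: reactions that reduce volume are promoted by pressure, which is why seafloor chemistry at 1200+ bars differs from 1-bar equilibria.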
Behavior intentions of the public after bans on smoking in restaurants and bars.
Biener, L; Siegel, M
1997-01-01
OBJECTIVES: This study assessed the potential effect of smoke-free policies on bar and restaurant patronage. METHODS: Random-digit dialing techniques were used in surveying a representative sample of Massachusetts adults (n = 2356) by telephone. RESULTS: Approximately 61% of the respondents predicted no change in their use of restaurants in response to smoke-free policies, 30% predicted increased use, and 8% predicted decreased use. In turn, 69% of the respondents predicted no change in their patronage of bars, while 20% predicted increased use and 11% predicted decreased use. CONCLUSIONS: These results suggest that smoke-free policies are likely to increase overall patronage of bars and restaurants. PMID:9431301
What makes the family of barred disc galaxies so rich: damping stellar bars in spinning haloes
NASA Astrophysics Data System (ADS)
Collier, Angela; Shlosman, Isaac; Heller, Clayton
2018-05-01
We model and analyse the secular evolution of stellar bars in spinning dark matter (DM) haloes with the cosmological spin λ ~ 0-0.09. Using high-resolution stellar and DM numerical simulations, we focus on angular momentum exchange between stellar discs and DM haloes of various axisymmetric shapes - spherical, oblate, and prolate. We find that stellar bars experience a diverse evolution that is guided by the ability of parent haloes to absorb angular momentum, J, lost by the disc through the action of gravitational torques, resonant and non-resonant. We confirm that dynamical bar instability is accelerated via resonant J-transfer to the halo. Our main findings relate to the long-term secular evolution of disc-halo systems: with increasing λ, bars experience less growth and essentially dissolve after they pass through the vertical buckling instability. Specifically, with increasing λ, (1) the vertical buckling instability in stellar bars colludes with the inability of the inner halo to absorb J - this emerges as the main factor weakening or destroying bars in spinning haloes; (2) bars lose progressively less J, and their pattern speeds level off; (3) bars are smaller, and for λ ≳ 0.06 cease their growth completely following buckling; (4) bars in λ > 0.03 haloes have a ratio of corotation-to-bar radii RCR/Rb > 2, and represent so-called slow bars without offset dust lanes. We provide a quantitative analysis of J-transfer in disc-halo systems, and explain the absence of bar growth in fast-spinning haloes and its observational corollaries. We conclude that stellar bar evolution is substantially more complex than anticipated, and bars are not as resilient as has been considered so far.
The economic impact of a smoke-free bylaw on restaurant and bar sales in Ottawa, Canada.
Luk, Rita; Ferrence, Roberta; Gmel, Gerhard
2006-05-01
On 1 August 2001, the City of Ottawa (Canada's capital) implemented a smoke-free bylaw that completely prohibited smoking in workplaces and public places, including restaurants and bars, with no exemption for separately ventilated smoking rooms. This paper evaluates the effects of this bylaw on restaurant and bar sales. DATA AND MEASURES: We used retail sales tax data from March 1998 to June 2002 to construct two outcome measures: the ratio of licensed restaurant and bar sales to total retail sales and the ratio of unlicensed restaurant sales to total retail sales. Restaurant and bar sales were subtracted from total retail sales in the denominator of these measures. We employed an interrupted time-series design. Autoregressive integrated moving average (ARIMA) intervention analysis was used to test for three possible impacts that the bylaw might have had on the sales of restaurants and bars. We repeated the analysis using regression with autoregressive moving average (ARMA) errors to triangulate our results. Outcome measures showed declining trends at baseline before the bylaw went into effect. Results from the ARIMA intervention and regression analyses did not support the hypotheses that the smoke-free bylaw had an impact resulting in (1) abrupt permanent, (2) gradual permanent, or (3) abrupt temporary changes in restaurant and bar sales. While a large body of research has found no significant adverse impact of smoke-free legislation on restaurant and bar sales in the United States, Australia and elsewhere, our study confirms these results in a northern region with a bilingual population, which has important implications for impending policy in Europe and other areas.
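The interrupted time-series test described above can be illustrated on synthetic data. The sketch below substitutes ordinary least squares with a trend and an abrupt-permanent step dummy for the study's ARIMA/ARMA-errors models, and all numbers (52 months, a sales ratio near 0.045, AR(1) noise) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly "bar sales / total retail sales" ratio with a mild
# downward trend, AR(1) noise, and NO true effect of the bylaw.
n, break_at = 52, 40                      # months; bylaw takes effect at month 40
t = np.arange(n)
step = (t >= break_at).astype(float)      # abrupt-permanent intervention dummy
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.5 * e[i - 1] + rng.normal(0, 0.002)
y = 0.045 - 0.00005 * t + e               # true step effect is zero

# Simplified stand-in for ARIMA intervention analysis: OLS on trend + step.
X = np.column_stack([np.ones(n), t, step])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, trend, step_effect = beta
```

A null result, as in the study, corresponds to an estimated step effect statistically indistinguishable from zero; the real analysis additionally models the autocorrelated errors explicitly rather than ignoring them.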
Nurses' attitudes toward the use of the bar-coding medication administration system.
Marini, Sana Daya; Hasman, Arie; Huijer, Huda Abu-Saad; Dimassi, Hani
2010-01-01
This study determines nurses' attitudes toward the use of bar-coding medication administration systems. Several factors underlying successful system use, viewed as connotative indicators of users' attitudes, were used to gather data describing the attitudinal basis for system adoption and use decisions in terms of subjective satisfaction. Sixty-seven nurses in the United States responded to the e-questionnaire posted on the CARING list server during June and July 2007. Participants rated their satisfaction with bar-coding medication administration system use based on system functionality, usability, and its positive or negative impact on nursing practice. Results showed a somewhat positive attitude overall, but the image profile draws attention to nurses' concerns about improving certain system characteristics. Nurses with greater bar-coding medication administration skills perceived the system more negatively. The reasons underlying skillful users' dissatisfaction are an important source of knowledge for both system development and system deployment. Accordingly, strengthening system usability by improving its ability to eliminate medication errors and their contributing factors, maximizing system functionality by establishing its value as an extra eye in the medication administration process, and making the system genuinely helpful to nurses, faster, and user-friendly can create a congenial setting for establishing positive attitudes toward system use, which in turn leads to successful bar-coding medication administration system use.
Investigating The Nuclear Activity Of Barred Spirals: The case of NGC 1672
NASA Astrophysics Data System (ADS)
Jenkins, Leigh; Brandt, N.; Colbert, E.; Levan, A.; Roberts, T.; Ward, M.; Zezas, A.
2008-03-01
We present new results from Chandra and XMM-Newton X-ray observations of the nearby barred spiral galaxy NGC 1672. It shows dramatic nuclear and extra-nuclear star formation activity, including starburst regions located at either end of its prominent bar. Using new X-ray imaging and spectral information, together with supporting multiwavelength data, we show for the first time that NGC 1672 possesses a faint, hard, central X-ray source surrounded by a circumnuclear starburst ring that dominates the X-ray emission in the region, presumably triggered and sustained by gas and dust driven inwards along the galactic bar. The faint central source may represent low-level AGN activity, or alternatively emission associated with star formation in the nucleus. More generally, we present some preliminary results from a Chandra archival search for low-luminosity AGN activity in barred galaxies.
Hamilton, S J
2017-05-22
Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making the image reconstruction task challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.
NASA Astrophysics Data System (ADS)
Vairamuthu, G.; Thangagiri, B.; Sundarapandian, S.
2018-01-01
The present work investigates the effect of varying Nozzle Opening Pressure (NOP) from 220 bar to 250 bar on the performance, emissions and combustion characteristics of Calophyllum inophyllum Methyl Ester (CIME) in a constant speed, Direct Injection (DI) diesel engine using an Artificial Neural Network (ANN) approach. An ANN model has been developed to predict specific fuel consumption (SFC), brake thermal efficiency (BTE), exhaust gas temperature (EGT), unburnt hydrocarbon (UBHC), CO, CO2, NOx and smoke density using load, blend (B0 and B100) and NOP as input data. A standard back-propagation algorithm (BPA) is used to train the model. A multilayer perceptron (MLP) network is used for nonlinear mapping between the input and the output parameters. The ANN model predicts the diesel engine performance and exhaust emissions with correlation coefficients (R2) in the range of 0.98-1. Mean Relative Error (MRE) values are in the range of 0.46-5.8%, while the Mean Square Errors (MSE) are found to be very low. It is evident that ANN models are reliable tools for the prediction of DI diesel engine performance and emissions. The test results show that the optimum NOP is 250 bar with B100.
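A minimal version of this modelling approach, with a hand-rolled one-hidden-layer back-propagation network, can be sketched as follows. The study's engine measurements are not public, so the data below are a synthetic stand-in: smooth invented responses over load, blend and NOP, used only to exercise the network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data over load (%), blend (0 = B0, 1 = B100)
# and nozzle opening pressure (bar), normalized to [0, 1].
n = 300
load = rng.uniform(0, 100, n)
blend = rng.integers(0, 2, n).astype(float)
nop = rng.uniform(220, 250, n)
X = np.column_stack([load / 100.0, blend, (nop - 220.0) / 30.0])
bte = 0.30 * X[:, 0] + 0.05 * X[:, 1] - 0.15 * (X[:, 2] - 0.8) ** 2
y = (bte - bte.mean()) / bte.std()        # standardized target

# One-hidden-layer MLP with tanh units, trained by full-batch backprop.
H = 8
W1 = rng.normal(0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(6000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    pred = (h @ W2 + b2).ravel()
    g = 2.0 * (pred - y) / n              # dL/dpred for mean-squared error
    gW2 = h.T @ g[:, None]; gb2 = g.sum(keepdims=True)
    gh = g[:, None] @ W2.T * (1 - h ** 2) # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
r2 = 1 - np.mean((pred - y) ** 2) / np.var(y)
```

On smooth noise-free data like this, the fit is good; the R2 values of 0.98-1 reported in the abstract come from the authors' trained network on real engine data, not from this sketch.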
Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes
NASA Astrophysics Data System (ADS)
Crowell, J.; Gosnold, W. D.
2012-12-01
Standard procedure for measuring thermal conductivity using a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column, and thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers and using the basic relationship Ksample=(Kstandard*(ΔT1+ΔT2)/2)/(ΔTsample). Sometimes samples are not large enough or of the correct proportions to match the surface of the heat sink/source; however, using the equations presented here, the thermal conductivity of these samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratories' stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of -20C to 150C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks with a surface area of 1,772 mm2 (2.74 in2). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4 inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample: Ksample=(Kstandard*Astandard*Thsample*(ΔT1+ΔT3))/(ΔTsample*Asample*2*Thstandard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. 
Measuring samples with surfaces smaller than 50% of the heat sink/source surface resulted in thermal conductivity values that were too high. The cause of the error with the smaller samples is being examined, as is the relationship between the amount of error in the thermal conductivity and the difference in surface areas. As more measurements are made, an equation to mathematically correct for the error is being developed in case a way to physically correct the problem cannot be determined.
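The two relationships quoted above translate directly into code. The symbol reading (ΔT1, ΔT3 across the two standards, ΔTsample across the sample) follows the abstract; when the sample matches the standards in area and thickness, the adjusted formula reduces to the basic one.

```python
def k_sample_matched(k_standard, dT1, dT2, dT_sample):
    """Full-size sample: Ksample = Kstandard * (dT1 + dT2)/2 / dTsample."""
    return k_standard * (dT1 + dT2) / 2.0 / dT_sample

def k_sample_adjusted(k_standard, a_standard, a_sample,
                      th_standard, th_sample, dT1, dT3, dT_sample):
    """Undersized sample: includes thicknesses (Th) and surface areas (A).

    Ksample = (Kstandard * Astandard * Thsample * (dT1 + dT3))
              / (dTsample * Asample * 2 * Thstandard)
    """
    return (k_standard * a_standard * th_sample * (dT1 + dT3)) / (
        dT_sample * a_sample * 2.0 * th_standard)

# Illustrative numbers (polycarbonate standard ~0.2 W/m/K, the 1772 mm^2
# disk area and 6 mm standard thickness quoted above, invented dT values):
k_full = k_sample_matched(0.2, 2.0, 2.0, 0.5)
k_adj = k_sample_adjusted(0.2, 1772.0, 1772.0, 6.0, 6.0, 2.0, 2.0, 0.5)
```

As a sanity check, a sample with the same area and thickness as the standards gives the same conductivity from both formulas.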
Yoon, Youngdae; Zhang, Xiuqi; Cho, Wonhwa
2012-01-01
Cellular proteins containing Bin/amphiphysin/Rvs (BAR) domains play a key role in clathrin-mediated endocytosis. Despite extensive structural and functional studies of BAR domains, it is still unknown how exactly these domains interact with the plasma membrane containing phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P2) and whether they function by a universal mechanism or by different mechanisms. Here we report that PtdIns(4,5)P2 specifically induces partial membrane penetration of the N-terminal amphiphilic α-helix (H0) of two representative N-BAR domains from Drosophila amphiphysin (dAmp-BAR) and rat endophilin A1 (EndoA1-BAR). Our quantitative fluorescence imaging analysis shows that PtdIns(4,5)P2-dependent membrane penetration of H0 is important for self-association of membrane-bound dAmp-BAR and EndoA1-BAR and their membrane deformation activity. EndoA1-BAR behaves differently from dAmp-BAR because the former has an additional amphiphilic α-helix that penetrates the membrane in a PtdIns(4,5)P2-independent manner. Depletion of PtdIns(4,5)P2 from the plasma membrane of HEK293 cells abrogated the membrane deforming activity of EndoA1-BAR and dAmp-BAR. Collectively, these studies suggest that the local PtdIns(4,5)P2 concentration in the plasma membrane may regulate the membrane interaction and deformation by N-BAR domain-containing proteins during clathrin-mediated endocytosis. PMID:22888025
A marriage bar of convenience? The BBC and married women's work 1923-39.
Murphy, Kate
2014-01-01
In October 1932 the British Broadcasting Corporation introduced a marriage bar, stemming what had been an enlightened attitude towards married women employees. The policy was in line with the convention of the day; marriage bars were widespread in the inter-war years operating in occupations such as teaching and the civil service and in large companies such as Sainsbury's and ICI. However, once implemented, the BBC displayed an ambivalent attitude towards its marriage bar which had been constructed to allow those married women considered useful to the Corporation to remain on the staff. This article considers why, for its first ten years, the BBC bucked convention and openly employed married women and why, in 1932, it took the decision to introduce a marriage bar, albeit not a full bar, which was not abolished until 1944. It contends that the BBC marriage bar represented a quest for conformity rather than active hostility towards the employment of married women and demonstrates how easily arguments against the acceptability of married women's work could be transgressed, if seen as beneficial to the employer. Overall, the article contemplates how far the BBC's marriage bar reflected inter-war ideology towards the employment of married women.
Synthesis and optimization of four bar mechanism with six design parameters
NASA Astrophysics Data System (ADS)
Jaiswal, Ankur; Jawale, H. P.
2018-04-01
Function generation is the synthesis of a mechanism for a specific task; it becomes complex when more than five precision points of the coupler are required, and it then involves large structural error. The methodology for arriving at a better-precision solution is to use optimization techniques. The work presented herein considers methods for optimizing the structural error in a closed kinematic chain with a single degree of freedom, for generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop a five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative structural-error analysis is presented for errors optimized by the least-squares method and by the extended Freudenstein-Chebyshev method.
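Least-squares synthesis on the Freudenstein equation, as described, reduces to an overdetermined linear system in three link-ratio parameters. The sketch below uses one common form of the equation (sign conventions vary between texts), solves for the parameters, and reports the residual at each precision point as the structural error.

```python
import numpy as np

def freudenstein_least_squares(phi, psi):
    """Solve K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi) for K1..K3
    in the least-squares sense, given input/output angle pairs at the
    precision points. With more than three pairs the system is
    overdetermined and the residuals are the structural errors.
    """
    phi = np.asarray(phi, dtype=float)
    psi = np.asarray(psi, dtype=float)
    A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones_like(phi)])
    b = np.cos(phi - psi)
    K, *_ = np.linalg.lstsq(A, b, rcond=None)
    structural_error = A @ K - b          # residual at each precision point
    return K, structural_error
```

With exactly three precision points the system is square and the structural error vanishes; with the five points used in the paper, minimizing these residuals is precisely the least-squares optimization step.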
Tides on Europa: The membrane paradigm
NASA Astrophysics Data System (ADS)
Beuthe, Mikael
2015-03-01
Jupiter's moon Europa has a thin icy crust which is decoupled from the mantle by a subsurface ocean. The crust thus responds to tidal forcing as a deformed membrane, cold at the top and near melting point at the bottom. In this paper I develop the membrane theory of viscoelastic shells with depth-dependent rheology with the dual goal of predicting tidal tectonics and computing tidal dissipation. Two parameters characterize the tidal response of the membrane: the effective Poisson's ratio ν bar and the membrane spring constant Λ, the latter being proportional to the crust thickness and effective shear modulus. I solve membrane theory in terms of tidal Love numbers, for which I derive analytical formulas depending on Λ, ν bar, the ocean-to-bulk density ratio and the number k2∘ representing the influence of the deep interior. Membrane formulas predict h2 and k2 with an accuracy of a few tenths of a percent if the crust thickness is less than one hundred kilometers, whereas the error on l2 is a few percent. Benchmarking with the thick-shell software SatStress leads to the discovery of an error in the original, uncorrected version of the code that changes stress components by up to 40%. Regarding tectonics, I show that different stress-free states account for the conflicting predictions of thin and thick shell models about the magnitude of tensile stresses due to nonsynchronous rotation. Regarding dissipation, I prove that tidal heating in the crust is proportional to Im (Λ) and that it is equal to the global heat flow (proportional to Im (k2)) minus the core-mantle heat flow (proportional to Im (k2∘)). As an illustration, I compute the equilibrium thickness of a convecting crust. More generally, membrane formulas are useful in any application involving tidal Love numbers such as crust thickness estimates, despinning tectonics or true polar wander.
Schmidtke, Kelly Ann; Poots, Alan J; Carpio, Juan; Vlaev, Ivo; Kandala, Ngianga-Bakwin; Lilford, Richard J
2017-01-01
Hospital board members are asked to consider large amounts of quality and safety data with a duty to act on signals of poor performance. However, in order to do so it is necessary to distinguish signals from noise (chance). This article investigates whether data in English National Health Service (NHS) acute care hospital board papers are presented in a way that helps board members consider the role of chance in their decisions. Thirty English NHS trusts were selected at random and their board papers retrieved. Charts depicting quality and safety were identified. Categorical discriminations were then performed to document the methods used to present quality and safety data in board papers, with particular attention given to whether and how the charts depicted the role of chance, that is, by including control lines or error bars. Thirty board papers, containing a total of 1488 charts, were sampled. Only 88 (6%) of these charts depicted the role of chance, and only 17 of the 30 board papers included any charts depicting the role of chance. Of the 88 charts that attempted to represent the role of chance, 16 included error bars and 72 included control lines. Only 6 (8%) of the 72 control charts indicated where the control lines had been set (eg, 2 vs 3 SDs). Hospital board members are expected to consider large amounts of information. Control charts can help board members distinguish signals from noise, but often boards are not using them. We discuss demand-side and supply-side barriers that could be overcome to increase use of control charts in healthcare.
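A minimal sketch of the control lines the authors advocate, applied to hypothetical monthly incident counts (real SPC practice would choose limits by chart type, e.g. p-charts or u-charts, rather than raw standard deviations of the data):

```python
import statistics

def control_limits(values, n_sd=3):
    """Mean-centred control lines at +/- n_sd sample standard deviations.

    A simplified Shewhart-style rule: points outside the limits are
    treated as signals, points inside as noise (chance variation).
    """
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return mean - n_sd * sd, mean, mean + n_sd * sd

# Hypothetical monthly counts of reported incidents; is the last month
# a signal or just noise? Limits are set from the nine baseline months.
monthly = [12, 9, 14, 11, 10, 13, 12, 8, 11, 30]
lcl, centre, ucl = control_limits(monthly[:-1])
flagged = monthly[-1] > ucl
```

A board chart drawn this way lets a reader see at a glance that the jump to 30 exceeds the upper control line, whereas the ordinary month-to-month wobble does not.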
Stress tracking in thin bars by eigenstrain actuation
NASA Astrophysics Data System (ADS)
Schoeftner, J.; Irschik, H.
2016-11-01
This contribution focuses on stress tracking in slender structures. The axial stress distribution of a linear elastic bar is investigated; in particular, we seek an answer to the following question: in what manner must eigenstrains be distributed such that the axial stress in a bar equals a certain desired stress distribution, even when external forces or support excitations are present? In order to track a certain time- and space-dependent stress function, smart actuators, such as piezoelectric actuators, are needed to realize eigenstrains. Based on the equation of motion and the constitutive relation, which relate stress, strain, displacement and eigenstrains, an analytical solution for the stress tracking problem is derived. The starting point for the derivation is a positive semi-definite integral depending on the error stress, which is the difference between the actual stress and the desired stress. The derived stress tracking theory is verified by two examples: first, a clamped-free bar under harmonic excitation is investigated, and it is shown under which circumstances the axial stress vanishes at every location and at every time instant. The second example is a support-excited bar with an end mass, for which a desired stress profile is prescribed.
33 CFR 165.1325 - Regulated Navigation Areas; Bars Along the Coasts of Oregon and Washington.
Code of Federal Regulations, 2013 CFR
2013-07-01
... type of vessel, sea state, winds, wave period, and tidal currents. When a bar is restricted, the... representative and carrying not more than six passengers. (13) Unsafe condition exists when the wave height... than the maximum wave height determined by the formula L/10 + F = W where: L = Overall length of a...
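The quoted bar-crossing formula can be applied directly. Since the regulation text is truncated above, F is assumed here to denote the vessel's freeboard in feet, with L the overall length in feet and W the maximum wave height in feet.

```python
def max_wave_height_ft(overall_length_ft, freeboard_ft):
    """W = L/10 + F from 33 CFR 165.1325.

    Assumption: F is the vessel's freeboard in feet (the definition is
    cut off in the excerpt above); L is overall length in feet.
    """
    return overall_length_ft / 10.0 + freeboard_ft

# Example: a 40 ft vessel with 4 ft of freeboard should not cross a
# restricted bar when waves exceed 8 ft.
w = max_wave_height_ft(40.0, 4.0)
```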
Technology and medication errors: impact in nursing homes.
Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis
2014-01-01
The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed using: totals errors; medication error type; severity and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technology such as electronic prescriber or bar code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagai, Kei
A measurement of the flavor asymmetry of the antiquarks ($\bar{d}$ and $\bar{u}$) in the proton is described in this thesis. The proton consists of three valence quarks, sea quarks, and gluons. Antiquarks in the proton are sea quarks. They are generated from gluon splitting: g → q + $\bar{q}$. According to QCD (quantum chromodynamics), gluon splitting is independent of quark flavor. This suggests that the amounts of $\bar{d}$ and $\bar{u}$ should be the same in the proton. However, the NMC experiment at CERN, using deep inelastic scattering, found in 1991 that the amount of $\bar{d}$ is larger than that of $\bar{u}$ in the proton. This result was obtained for $\bar{d}$ and $\bar{u}$ integrated over Bjorken x, the fraction of the proton's momentum carried by the parton. The NA51 experiment (x ~ 0.2) at CERN and the E866/NuSea experiment (0.015 < x < 0.35) at Fermilab measured the flavor asymmetry of the antiquarks ($\bar{d}$/$\bar{u}$) in the proton as a function of x using the Drell-Yan process. The experiments reported that the flavor symmetry is broken over all measured x values. Understanding the flavor asymmetry of the antiquarks in the proton is a challenge for QCD. Theoretical investigation from the first principles of QCD, such as lattice QCD calculations, is important. In addition, QCD effective models and hadron models such as the meson cloud model can also be tested against the flavor asymmetry of antiquarks. From the experimental side, it is important to measure with higher accuracy and in a wider x range. The SeaQuest (E906) experiment measures $\bar{d}$/$\bar{u}$ at large x (0.15 < x < 0.45) accurately to understand its behavior. The SeaQuest experiment is a Drell-Yan experiment at Fermi National Accelerator Laboratory (Fermilab). 
In the Drell-Yan process of the proton-proton reaction, an antiquark in one proton and a quark in the other proton annihilate and create a virtual photon, which then decays into a muon pair (q$\bar{q}$ → γ* → μ+μ-). The SeaQuest experiment uses a 120 GeV proton beam extracted from Fermilab's Main Injector. The proton beam interacts with hydrogen and deuterium targets. The SeaQuest spectrometer detects the muon pairs from the Drell-Yan process. The $\bar{d}$/$\bar{u}$ ratio at 0.1 < x < 0.58 is extracted from the number of detected Drell-Yan muon pairs. After detector construction, a commissioning run and a detector upgrade, the SeaQuest experiment started physics data acquisition in 2013. Three periods of physics data acquisition have been completed so far; the fourth period is in progress. The detector construction, detector performance evaluation, data taking and data analysis for the flavor asymmetry of the antiquarks $\bar{d}$/$\bar{u}$ in the proton are my contributions to SeaQuest. The cross-section ratio of the Drell-Yan process in p-p and p-d reactions is obtained from dimuon yields. In an experiment with high beam intensity, it is important to control the tracking efficiency of charged particles through the magnetic spectrometer. The tracking efficiency depends on the chamber occupancy, that is, the number of hits in the drift chambers, and an appropriate correction method is important. A new method of correcting the tracking efficiency is developed based on the occupancy and applied to the data. This method reflects the real response of the drift chambers; therefore, the systematic error is well controlled. The flavor asymmetry of antiquarks is obtained at 0.1 < x < 0.58. At 0.1 < x < 0.45, the result is $\bar{d}$/$\bar{u}$ > 1. The result at 0.1 < x < 0.24 agrees with the E866 result. The result at x > 0.24, however, disagrees with the E866 result. The result at 0.45 < x < 0 the statistical errors. 
The $\bar{d}$/$\bar{u}$ results extracted from experiments are used to investigate the validity of theoretical models. The present experimental result provides data points over a wide x region, which is useful for understanding the proton structure in light of QCD and effective hadron models. The present result has a practical application as well: antiquark distributions are important as inputs to simulations of hadron reactions, such as W± production, in various experiments. The new knowledge of antiquark distributions helps to improve the precision of such simulations.
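At leading order, the Drell-Yan cross-section ratio measured by E866/SeaQuest-style experiments maps onto the flavor asymmetry as sketched below. Acceptance, nuclear and higher-order corrections applied in the real analyses are omitted; this is only the textbook large-xF approximation.

```python
def dbar_over_ubar(ratio_pd_over_2pp):
    """Leading-order large-x_F Drell-Yan relation:
    sigma_pd / (2 * sigma_pp) ~= (1 + dbar/ubar) / 2,
    so dbar/ubar ~= 2*R - 1.  A flavor-symmetric sea gives R = 1.
    """
    return 2.0 * ratio_pd_over_2pp - 1.0

# A measured ratio above 1 (as E866 reported over most of its x range)
# implies an excess of dbar over ubar in the proton sea.
asym = dbar_over_ubar(1.2)
```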
Sediment Dynamics Over a Stable Point bar of the San Pedro River, Southeastern Arizona
NASA Astrophysics Data System (ADS)
Hamblen, J. M.; Conklin, M. H.
2002-12-01
Streams of the Southwest receive enormous inputs of sediment during storm events in the monsoon season due to the high intensity rainfall and large percentages of exposed soil in the semi-arid landscape. In the Upper San Pedro River, with a watershed area of approximately 3600 square kilometers, particle size ranges from clays to boulders with large fractions of sand and gravel. This study focuses on the mechanics of scour and fill on a stable point bar. An innovative technique using seven co-located scour chains and liquid-filled, load-cell scour sensors characterized sediment dynamics over the point bar during the monsoon season of July to September 2002. The sensors were set in two transects to document sediment dynamics near the head and toe of the bar. Scour sensors record area-averaged sediment depths while scour chains measure scour and fill at a point. The average area covered by each scour sensor is 11.1 square meters. Because scour sensors have never been used in a system similar to the San Pedro, one goal of the study was to test their ability to detect changes in sediment load with time in order to determine the extent of scour and fill during monsoonal storms. Because of the predominantly unconsolidated nature of the substrate it was hypothesized that dune bedforms would develop in events less than the 1-year flood. The weak 2002 monsoon season produced only two storms that completely inundated the point bar, both less than the 1-year flood event. The first event, 34 cms, produced net deposition in areas where Johnson grass had been present and was now buried. The scour sensor at the lowest elevation, in a depression which serves as a secondary channel during storm events, recorded scour during the rising limb of the hydrograph followed by pulses we interpret to be the passage of dunes. The second event, although smaller at 28 cms, resulted from rain more than 50 km upstream and had a much longer peak and a slowly declining falling limb. 
During the second flood, several areas with buried vegetation were scoured back to their original bed elevations. Pulses of sediment passed over the sensor in the secondary channel and the sensor in the vegetated zone. Scour sensor measurements agree with data from scour chains (error +/- 3 cm) and surveys (error +/- 0.6 cm) performed before and after the two storm events, within the range of error of each method. All load sensor data were recorded at five minute intervals. Use of a smaller interval could give more details about the shapes of sediment waves and aid in bedform determination. Results suggest that dune migration is the dominant mechanism for scour and backfill in the point bar setting. Scour sensors, when coupled with surveying and/or scour chains, are a tremendous addition to the geomorphologist's toolbox, allowing unattended real-time measurements of sediment depth with time.
Storage and residence time of suspended sediment in gravel bars of Difficult Run, VA
NASA Astrophysics Data System (ADS)
George, J.; Benthem, A.; Pizzuto, J. E.; Skalak, K.
2016-12-01
Reducing the export of suspended sediment is an important consideration for restoring water quality to the Chesapeake Bay, but sediment budgets for in-channel landforms are poorly constrained. We quantified fine (< 2 mm) sediment storage and residence times for gravel bars at two reaches along Difficult Run, a 5th order tributary to the Potomac River. Eight gravel bars were mapped in a 150m headwater reach at Miller Heights (bankfull width 11m; total bar volume 114 m3) and 6 gravel bars were mapped in a 160m reach downstream near Leesburg Pike (bankfull width 19m; total bar volume 210 m3). Grain size analyses of surface and subsurface samples from 2 bars at each reach indicate an average suspended sediment content of 55%, suggesting a total volume of suspended sediment stored in the mapped bars to be 178 m3, or 283000 kg, comprising 5% of the average annual suspended sediment load of the two study reaches. Estimates of the annual bedload flux at Miller Heights based on stream gaging records and the Wilcock-Crowe bedload transport equation imply that the bars are entirely reworked at least annually. Scour chains installed in 2 bars at each site (a total of 50 chains) recorded scour and fill events during the winter and spring of 2016. These data indicate that 38% of the total volume of the bars is exchanged per year, for a residence time of 2.6 ± 1.2 years, a value we interpret as the residence time of suspended sediment stored in the bars. These results are supported by mapping of topographic changes derived from structure-from-motion analyses of digital aerial imagery. Storage in alluvial bars therefore represents a significant component of the suspended sediment budget of mid-Atlantic streams.
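The residence-time estimate in the abstract above follows from a simple storage-over-flux relation at steady state. A minimal sketch (the function name and the steady-state assumption are mine, not the authors'):

```python
def residence_time_years(storage_volume_m3, exchanged_volume_per_year_m3):
    """Residence time as stored volume divided by the annual exchange flux."""
    return storage_volume_m3 / exchanged_volume_per_year_m3

# Figures from the abstract: total bar volume 114 + 210 = 324 m3,
# with 38% of that volume exchanged per year.
total_volume = 114 + 210
annual_exchange = 0.38 * total_volume
t = residence_time_years(total_volume, annual_exchange)
print(round(t, 1))  # ≈ 2.6 years, matching the reported estimate
```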
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.72 Effluent limitations guidelines representing the...
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.73 Effluent limitations guidelines representing the degree of...
Code of Federal Regulations, 2012 CFR
2012-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.72 Effluent limitations guidelines representing the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.72 Effluent limitations guidelines representing the...
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.73 Effluent limitations guidelines representing the degree of...
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.73 Effluent limitations guidelines representing the degree of...
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.73 Effluent limitations guidelines representing the degree of...
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.73 Effluent limitations guidelines representing the degree of...
Code of Federal Regulations, 2013 CFR
2013-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.72 Effluent limitations guidelines representing the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps Subcategory § 417.72 Effluent limitations guidelines representing the...
Information technology and medication safety: what is the benefit?
Kaushal, R; Bates, D
2002-01-01
Medication errors occur frequently and have significant clinical and financial consequences. Several types of information technologies can be used to decrease rates of medication errors. Computerized physician order entry with decision support significantly reduces serious inpatient medication error rates in adults. Other available information technologies that may prove effective for inpatients include computerized medication administration records, robots, automated pharmacy systems, bar coding, "smart" intravenous devices, and computerized discharge prescriptions and instructions. In outpatients, computerization of prescribing and patient oriented approaches such as personalized web pages and delivery of web based information may be important. Public and private mandates for information technology interventions are growing, but further development, application, evaluation, and dissemination are required. PMID:12486992
Shida, Kyoko; Suzuki, Toshiyasu; Sugahara, Kazuhiro; Sobue, Kazuya
2016-05-01
Medication errors are among the more frequent adverse events occurring in hospitals, and effective preventive measures are needed. According to the Japan Society of Anesthesiologists study "Drug incident investigation 2005-2007", "error of a syringe at the selection stage" was the most frequent type (44.2%). A subsequent investigation focused on the current measures and best practices implemented in Japanese hospitals. Representative specialists at anesthesiology-certified hospitals across the country were surveyed via an anonymous web-based questionnaire over a 46-day sampling period. With respect to preventive measures implemented to mitigate the risk of medication errors in perioperative settings, responses included: incident and accident reporting (215 facilities, 70.3%); use of pre-filled syringes (180 facilities, 58.8%); devised arrangement of dangerous drugs (154 facilities, 50.3%); use of products with mechanisms to prevent improper connection (123 facilities, 40.2%); double-checking (116 facilities, 37.9%); use of color-barreled syringes (115 facilities, 37.6%); use of color labels or color tape (89 facilities, 29.1%); presentation of medications by placing the ampoule or syringe on a tray color-coded by drug class (54 facilities, 17.6%); discontinuance of handwritten labels (23 facilities, 7.5%); use of a bar-code drug verification system (20 facilities, 6.5%); no measures implemented (11 facilities, 3.6%); others not mentioned (10 facilities, 3.3%); and use of carts that count agents by drug type and automatically record the selection and number picked (6 facilities, 2.0%). For drug-name identification on syringes, responses included: a perforated label torn from the ampoule or vial (245 facilities, 28.1%); handwriting directly on the syringe (208 facilities, 23.8%); use of the label that comes with the product (187 facilities, 21.4%); handwriting on plain tape (87 facilities, 10.0%); printed labels (62 facilities, 7.1%); printed color labels (44 facilities, 5.0%); handwriting on color tape (27 facilities, 3.1%); machinery that prints the drug name by scanning the bar code on the ampoule (10 facilities, 1.1%); others (3 facilities, 0.3%); and no description on the prepared drug (0 facilities, 0%). Awareness of international standard color codes, such as those of the International Organization for Standardization (ISO), was only 18.6%. Among anesthesiology-certified hospitals recognized by the Japan Society of Anesthesiologists, the survey on measures to prevent medication errors during perioperative procedures indicated that various measures are in use. However, many facilities still use handwritten labels, a common cause of errors. The need for improved drug-name recognition on syringes was confirmed.
The Comparative Observational Study of Timescale of Feedback by Bar Structure in Late-type Galaxies
NASA Astrophysics Data System (ADS)
Zee, Woong-bae; Yoon, Suk-jin
2018-01-01
We investigate the star formation activities of ~400 barred and ~1400 unbarred face-on late-type galaxies from SDSS DR13. We find that gas-poor barred galaxies show considerably enhanced central star formation activity, while there is no difference between gas-rich barred and unbarred galaxies, regardless of their HI gas content. This seems counter-intuitive if gas content simply sets the total star formation rate of a galaxy, and it suggests a time delay between the central gas migration/consumption driven by bar structures and the enhancement of star formation activity at the centre. For a subset of the total sample we analysed the distribution of stellar populations using the MaNGA (Mapping Nearby Galaxies at APO) IFU survey. The gas-poor barred galaxies show flatter gradients in metallicity and age with respect to stellar mass than other types of galaxies, in that their centres are more metal-rich and younger. There is an age difference of about 5-6 Gyr between centrally star-forming gas-poor barred galaxies and gas-rich galaxies, and this value is a plausible estimate of the longevity of bar feedback. The results indicate that the gas migration/mixing driven by the bar structure plays a significant role in the evolution of galaxies on a specific timescale.
Study of the Rare Hyperon Decay $\Omega^\mp \to \Xi^\mp\,\pi^+\pi^-$
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamaev, O.; Solomey, N.; Burnstein, R.A.
The authors report a new measurement of the decay Ω⁻ → Ξ⁻π⁺π⁻ with 76 events and a first observation of the decay Ω̄⁺ → Ξ̄⁺π⁺π⁻ with 24 events, yielding a combined branching ratio of (3.74 +0.67/−0.56) × 10⁻⁴. This represents a factor of 25 increase in statistics over the best previous measurement. No evidence is seen for CP violation, with B(Ω⁻ → Ξ⁻π⁺π⁻) = (4.04 +0.83/−0.71) × 10⁻⁴ and B(Ω̄⁺ → Ξ̄⁺π⁺π⁻) = (3.15 +1.12/−0.89) × 10⁻⁴. Contrary to theoretical expectation, they see little evidence for the decays Ω⁻ → Ξ*(1530)⁰π⁻ and Ω̄⁺ → Ξ̄*(1530)⁰π⁺ and place a 90% C.L. upper limit on the combined branching ratio B(Ω⁻(Ω̄⁺) → Ξ*(1530)⁰(Ξ̄*(1530)⁰)π∓) < 7.0 × 10⁻⁵.
Cornelsen, Laura; Normand, Charles
2014-09-01
Ireland introduced comprehensive smoke-free workplace legislation in 2004. This study evaluates the economic impact of the workplace smoking ban on the value of sales in bars. Data on the value of bar sales were derived from a large, nationally representative, annual business-level survey from 1999 to 2007. The economic impact of the smoking ban was evaluated according to geographical region and bar size. Analysis was based on an econometric model which controlled for background changes in population income and wealth and for investments made by the bars during this period. The overall impact of the Irish smoking ban on bar sales appears to be very small. The ban was associated with an increase in sales among medium to large bars in the Border-Midland-West (more rural) region of Ireland, and a small reduction in sales among large bars in the more urban, South-East region. We failed to find any evidence of a change in bar sales in the remaining categories studied. The results indicate that although some bars saw positive effects and some negative, the overall impact of the smoking ban on the value of sales in bars was negligible. These findings provide further supporting evidence that comprehensive smoke-free workplace legislation does not harm hospitality businesses while having positive health effects. Published by the BMJ Publishing Group Limited.
Farhan, Alan; Petersen, Charlotte F; Dhuey, Scott; Anghinolfi, Luca; Qin, Qi Hang; Saccone, Michael; Velten, Sven; Wuth, Clemens; Gliga, Sebastian; Mellado, Paula; Alava, Mikko J; Scholl, Andreas; van Dijken, Sebastiaan
2017-12-12
The original version of this article contained an error in the legend to Figure 4. The yellow scale bar should have been defined as '~600 nm', not '~600 µm'. This has now been corrected in both the PDF and HTML versions of the article.
The Importance of Statistical Modeling in Data Analysis and Inference
ERIC Educational Resources Information Center
Rollins, Derrick, Sr.
2017-01-01
Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…
The relationship of the concentration of air pollutants to wind direction has been determined by nonparametric regression using a Gaussian kernel. The results are smooth curves with error bars that allow for the accurate determination of the wind direction where the concentrat...
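The nonparametric regression described above is, in essence, a Nadaraya-Watson estimator with a Gaussian kernel. A minimal sketch, assuming wind direction in degrees and a hand-picked bandwidth (the circular-distance wrapping and all parameter names are illustrative, not taken from the report):

```python
import numpy as np

def gaussian_kernel_regression(theta_obs, conc_obs, theta_grid, bandwidth=10.0):
    """Nadaraya-Watson estimate of concentration vs. wind direction (degrees).

    Angular distance is wrapped to [-180, 180] so the fit is periodic.
    """
    theta_obs = np.asarray(theta_obs, dtype=float)
    conc_obs = np.asarray(conc_obs, dtype=float)
    est = np.empty(len(theta_grid))
    for i, t in enumerate(theta_grid):
        d = (theta_obs - t + 180.0) % 360.0 - 180.0   # circular distance
        w = np.exp(-0.5 * (d / bandwidth) ** 2)       # Gaussian weights
        est[i] = np.sum(w * conc_obs) / np.sum(w)     # locally weighted mean
    return est
```

Bootstrap resampling of the observations at each grid point would supply the error bars the abstract mentions.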
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rominsky, Mandy Kathleen
2009-01-01
This thesis presents the analysis of the double differential dijet mass cross section, measured at the D0 detector in Batavia, IL, using $p\bar{p}$ collisions at a center-of-mass energy of √s = 1.96 TeV. The dijet mass was calculated using the two highest p_T jets in the event, with approximately 0.7 fb⁻¹ of data collected between 2004 and 2005. The analysis was presented in bins of dijet mass (M_JJ) and rapidity (y), and extends the measurement farther in M_JJ and y than any previous measurement. Corrections due to detector effects were calculated using a Monte Carlo simulation and applied to the data. The errors on the measurement consist of statistical and systematic errors, of which the jet energy scale was the largest. The final result was compared to next-to-leading-order theory and good agreement was found. These results may be used in the determination of the proton parton distribution functions and to set limits on new physics.
Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa
2015-11-15
Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I - high confidence with clear pattern of activation through to Grade IV - non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; window-of-interest<100% of cycle length (CL); <95% tachycardia CL mapped; variability of CL and/or unstable fiducial reference marker; and suboptimal bar height and scar settings. A data collection and map interpretation algorithm has been developed to optimize Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Color Histogram Diffusion for Image Enhancement
NASA Technical Reports Server (NTRS)
Kim, Taemin
2011-01-01
Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) to color images. In this paper, a new method called histogram diffusion, which extends the GHE method to arbitrary dimensions, is proposed. Ranges in a histogram are specified as overlapping bars of uniform height and variable width proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images showed that the approach is effective.
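For context, the baseline GHE method that histogram diffusion extends can be sketched as follows (a standard CDF-based equalization for 8-bit grayscale images; this is not the authors' vistogram algorithm):

```python
import numpy as np

def histogram_equalize(gray):
    """Classical grayscale histogram equalization via the empirical CDF.

    `gray` is a uint8 array; returns a uint8 array with a flattened histogram.
    """
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return np.round(255 * cdf[gray]).astype(np.uint8)  # remap each pixel
```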
Evaluating diffraction based overlay metrology for double patterning technologies
NASA Astrophysics Data System (ADS)
Saravanan, Chandra Saru; Liu, Yongdong; Dasari, Prasad; Kritsun, Oleg; Volkman, Catherine; Acheta, Alden; La Fontaine, Bruno
2008-03-01
Demanding sub-45 nm node lithographic methodologies such as double patterning (DPT) pose significant challenges for overlay metrology. In this paper, we investigate scatterometry methods as an alternative approach to meet these stringent new metrology requirements. We used a spectroscopic diffraction-based overlay (DBO) measurement technique in which registration errors are extracted from specially designed diffraction targets for double patterning. The results of overlay measurements are compared to traditional bar-in-bar targets. A comparison between DBO measurements and CD-SEM measurements is done to show the correlation between the two approaches. We discuss the total measurement uncertainty (TMU) requirements for sub-45 nm nodes and compare TMU from the different overlay approaches.
Cold dust in the giant barred galaxy NGC 1365
NASA Astrophysics Data System (ADS)
Tabatabaei, F. S.; Weiß, A.; Combes, F.; Henkel, C.; Menten, K. M.; Beck, R.; Kovács, A.; Güsten, R.
2013-07-01
Constraining the physical properties of dust requires observations at submm wavelengths, which provide important insight into the gas content of galaxies. We mapped NGC 1365 at 870 μm with LABOCA, the Large APEX Bolometer Camera, allowing us to probe the central mass concentration as well as the rate at which the gas flows to the center. We obtained the dust physical properties both globally and locally for different locations in the galaxy. A 20 K modified black body represents about 98% of the total dust content of the galaxy; the rest can be represented by a warmer dust component at 40 K. The bar exhibits an east-west asymmetry in the dust distribution: the eastern bar is heavier than the western bar by more than a factor of 4. Integrating the dust spectral energy distribution, we derived a total infrared luminosity, L_TIR, of 9.8 × 10^10 L⊙, leading to a dust-enshrouded star formation rate of SFR_TIR ≃ 16.7 M⊙ yr⁻¹ in NGC 1365. We derived the gas mass from the measurements of the dust emission, resulting in a CO-to-H2 conversion factor of X_CO ≃ 1.2 × 10^20 mol cm⁻² (K km s⁻¹)⁻¹ in the central disk, including the bar. Taking into account the metallicity variation, the central gas mass concentration is only ≃20% at R < 40″ (3.6 kpc). On the other hand, the timescale on which the gas flows into the center, ≃300 Myr, is relatively short. This indicates that the current central mass in NGC 1365 is evolving fast because of the strong bar.
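The two-component fit described above rests on a modified (grey) black-body spectrum, ν^β · B_ν(T). A minimal sketch with arbitrary normalization (the emissivity index β = 2 is an assumption here, not a value quoted in the abstract):

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def modified_blackbody(nu, temp, beta=2.0):
    """Greybody spectrum ∝ ν^β · B_ν(T) (arbitrary normalization).

    nu in Hz, temp in K; expm1 keeps the Planck denominator stable at low hν/kT.
    """
    planck = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * temp))
    return nu**beta * planck
```

Fitting the sum of a 20 K and a 40 K component of this form to the observed fluxes is the kind of decomposition the abstract reports.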
Neural network uncertainty assessment using Bayesian statistics: a remote sensing application
NASA Technical Reports Server (NTRS)
Aires, F.; Prigent, C.; Rossow, W. B.
2004-01-01
Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. 
A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
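The Monte Carlo procedure mentioned above can be illustrated on a toy network: perturb the weights, re-evaluate the outputs, and take the spread as an error bar. This is only a loose sketch (the tiny tanh network, the isotropic Gaussian weight perturbation, and the sample count are all illustrative assumptions, not the authors' Bayesian setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """A tiny one-hidden-layer network: tanh hidden layer, linear output."""
    return np.tanh(x @ w1 + b1) @ w2 + b2

def monte_carlo_output_std(x, w1, b1, w2, b2, weight_sigma, n_samples=500):
    """Std. dev. of the output under Gaussian perturbations of the weights.

    Mimics propagating weight uncertainty into output error bars; a full
    Bayesian treatment would use the estimated weight posterior covariance.
    """
    outputs = []
    for _ in range(n_samples):
        outputs.append(mlp(x,
                           w1 + weight_sigma * rng.standard_normal(w1.shape),
                           b1 + weight_sigma * rng.standard_normal(b1.shape),
                           w2 + weight_sigma * rng.standard_normal(w2.shape),
                           b2 + weight_sigma * rng.standard_normal(b2.shape)))
    return np.std(outputs, axis=0)
```

The same resampling loop applied to finite-difference Jacobians of `mlp` gives the Jacobian robustness estimate described in the article.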
Nakajima, Yujiro; Kadoya, Noriyuki; Kanai, Takayuki; Ito, Kengo; Sato, Kiyokazu; Dobashi, Suguru; Yamamoto, Takaya; Ishikawa, Yojiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2016-07-01
Irregular breathing can influence the outcome of 4D computed tomography imaging and cause artifacts. Visual biofeedback systems associated with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches) (representing simpler visual coaching techniques without a guiding waveform) are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing two respiratory management systems. We collected data from 11 healthy volunteers. Bar and wave models were used as visual biofeedback systems. Abches consisted of a respiratory indicator indicating the end of each expiration and inspiration motion. Respiratory variations were quantified as root mean squared error (RMSE) of displacement and period of breathing cycles. All coaching techniques improved respiratory variation, compared with free-breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86 and 0.98 ± 0.47 mm for free-breathing, Abches, bar model and wave model, respectively. Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18 and 0.17 ± 0.05 s for free-breathing, Abches, bar model and wave model, respectively. The average reduction in displacement and period RMSE compared with the wave model were 27% and 47%, respectively. For variation in both displacement and period, wave model was superior to the other techniques. Our results showed that visual biofeedback combined with a wave model could potentially provide clinical benefits in respiratory management, although all techniques were able to reduce respiratory irregularities. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
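The RMSE figures quoted above are computed in the standard way; a minimal sketch (the function name is mine):

```python
import numpy as np

def rmse(observed, reference):
    """Root mean squared error between a breathing trace and its reference."""
    observed = np.asarray(observed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((observed - reference) ** 2)))
```

Applied to displacement traces it yields the millimetre figures above; applied to breathing-cycle periods, the figures in seconds.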
The Red Edge Problem in asteroid band parameter analysis
NASA Astrophysics Data System (ADS)
Lindsay, Sean S.; Dunn, Tasha L.; Emery, Joshua P.; Bowles, Neil E.
2016-04-01
Near-infrared reflectance spectra of S-type asteroids contain two absorptions at 1 and 2 μm (band I and II) that are diagnostic of mineralogy. A parameterization of these two bands is frequently employed to determine the mineralogy of S(IV) asteroids through the use of ordinary chondrite calibration equations that link the mineralogy to band parameters. The most widely used calibration study uses a Band II terminal wavelength point (red edge) at 2.50 μm. However, due to the limitations of the NIR detectors on prominent telescopes used in asteroid research, spectral data for asteroids are typically only reliable out to 2.45 μm. We refer to this discrepancy as "The Red Edge Problem." In this report, we evaluate the associated errors for measured band area ratios (BAR = Area BII/BI) and calculated relative abundance measurements. We find that the Red Edge Problem is often not the dominant source of error for the observationally limited red edge set at 2.45 μm, but it frequently is for a red edge set at 2.40 μm. The error, however, is one sided and therefore systematic. As such, we provide equations to adjust measured BARs to values with a different red edge definition. We also provide new ol/(ol+px) calibration equations for red edges set at 2.40 and 2.45 μm.
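The band area ratio (BAR = Area BII / Area BI) and its sensitivity to the red-edge cutoff can be sketched as follows. The linear continuum removal and trapezoid integration are standard, but the band boundaries used here (0.7-1.4 μm for Band I, 1.4 μm to the red edge for Band II) are illustrative placements, not the calibration study's exact definitions:

```python
import numpy as np

def band_area(wavelength, reflectance, lo, hi):
    """Area between a straight-line continuum and the spectrum over [lo, hi] um."""
    m = (wavelength >= lo) & (wavelength <= hi)
    w, r = wavelength[m], reflectance[m]
    continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])   # linear continuum
    depth = continuum - r
    return float(np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(w)))  # trapezoid rule

def band_area_ratio(wavelength, reflectance, red_edge=2.45):
    """BAR = Band II area / Band I area; Band II is truncated at the red edge."""
    b1 = band_area(wavelength, reflectance, 0.7, 1.4)        # illustrative bounds
    b2 = band_area(wavelength, reflectance, 1.4, red_edge)
    return b2 / b1
```

Moving the red edge from 2.45 to 2.40 μm truncates Band II and systematically lowers the measured BAR, which is the one-sided error the abstract describes.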
NASA Astrophysics Data System (ADS)
Pairan, M. Rasidi; Asmuin, Norzelawati; Isa, Nurasikin Mat; Sies, Farid
2017-04-01
Water mist sprays are used in a wide range of applications; however, the spray characteristics must suit the particular application. This project studies the water droplet velocity and penetration angle generated by a newly developed mist spray nozzle with a flat spray pattern. The research consists of two parts: experiment and simulation. The experiment was conducted using the particle image velocimetry (PIV) method; ANSYS software was used for the simulation, and ImageJ software was used to measure the penetration angle. Three combinations of air and water pressure were tested: 1 bar (case A), 2 bar (case B), and 3 bar (case C). The flat spray generated by the newly developed nozzle was examined along a 9 cm vertical line at 8 cm from the nozzle orifice. The detailed analysis shows that the trend of velocity versus distance gives good agreement between simulation and experiment for all pressure combinations. As the water and air pressure increased from 1 bar to 2 bar, both the velocity and the penetration angle increased; however, for case C, run at 3 bar, the water droplet velocity increased but the penetration angle decreased. All data were then validated by calculating the error between experiment and simulation. Comparing the simulation data to the experimental data for all cases, the standard deviations for cases A, B, and C are relatively small: 5.444, 0.8242, and 6.4023, respectively.
Health and efficiency in trimix versus air breathing in compressed air workers.
Van Rees Vellinga, T P; Verhoeven, A C; Van Dijk, F J H; Sterk, W
2006-01-01
The Western Scheldt Tunneling Project in the Netherlands provided a unique opportunity to evaluate the effects of trimix usage on the health of compressed air workers and the efficiency of the project. Data analysis addressed 318 exposures to compressed air at 3.9-4.4 bar gauge and 52 exposures to trimix (25% oxygen, 25% helium, and 50% nitrogen) at 4.6-4.8 bar gauge. Results revealed three incidents of decompression sickness all of which involved the use of compressed air. During exposure to compressed air, the effects of nitrogen narcosis were manifested in operational errors and increased fatigue among the workers. When using trimix, less effort was required for breathing, and mandatory decompression times for stays of a specific duration and maximum depth were considerably shorter. We conclude that it might be rational--for both medical and operational reasons--to use breathing gases with lower nitrogen fractions (e.g., trimix) for deep-caisson work at pressures exceeding 3 bar gauge, although definitive studies are needed.
NASA Technical Reports Server (NTRS)
Dwyer, J. H., III; Palmer, E. A., III
1975-01-01
A simulator study was conducted to determine the usefulness of adding flight path vector symbology to a head-up display designed to improve glide-slope tracking performance during steep 7.5 deg visual approaches in STOL aircraft. All displays included a fixed attitude symbol, a pitch- and roll-stabilized horizon bar, and a glide-slope reference bar parallel to and 7.5 deg below the horizon bar. The displays differed with respect to the flight-path marker (FPM) symbol: display 1 had no FPM symbol; display 2 had an air-referenced FPM, and display 3 had a ground-referenced FPM. No differences between displays 1 and 2 were found on any of the performance measures. Display 3 was found to decrease height error in the early part of the approach and to reduce descent rate variation over the entire approach. Two measures of workload did not indicate any differences between the displays.
Xu, Yidong; Qian, Chunxiang
2013-01-01
Based on meso-damage mechanics and finite element analysis, the aim of this paper is to describe the feasibility of the Gurson–Tvergaard–Needleman (GTN) constitutive model in describing the tensile behavior of corroded reinforcing bars. The orthogonal test results showed that different fracture patterns and the related damage evolution processes can be simulated by choosing different material parameters of the GTN constitutive model. Compared with the failure parameters, the two constitutive parameters are significant factors affecting the tensile strength. Both the nominal yield and ultimate tensile strengths decrease markedly with increasing constitutive parameters. Combining the latest data with a trial-and-error method, suitable material parameters of the GTN constitutive model were adopted to simulate the tensile behavior of corroded reinforcing bars in concrete under carbonation attack. The numerical predictions not only agree very well with experimental measurements, but also simplify the finite element modeling process. PMID:23342140
Comparison of stellar and gasdynamics of a barred galaxy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Contopoulos, G.; Gottesman, S.T.; Hunter, J.H. Jr.
1989-08-01
The stellar and gas dynamics of several models of barred galaxies were studied, and results for some representative cases are reported for galaxies in which the stars and gas respond to the same potentials. Inside corotation there are two main families of periodic orbits, designated x1 and 4/1. Close to the center, the x1 orbits are like elongated ellipses. As the 4/1 resonance is approached, these orbits become like lozenges, with apices along the bar and perpendicular to it. The family 4/1 consists of orbits like parallelograms which produce the boxy component of the bar. The orbits in spirals outside corotation enhance the spiral between the outer -4/1 resonance and the outer Lindblad resonance. Between corotation and the -4/1 resonance in strong spirals, the orbits are mostly stochastic and fill almost circular rings. A spiral field must be added to gasdynamical models to obtain gaseous arms extending from the end of a bar.
The Partition Function in the Four-Dimensional Schwarz-Type Topological Half-Flat Two-Form Gravity
NASA Astrophysics Data System (ADS)
Abe, Mitsuko
We derive the partition functions of the Schwarz-type four-dimensional topological half-flat two-form gravity model on a K3 surface or T⁴, up to on-shell one-loop corrections. In this model the bosonic moduli spaces describe an equivalence class of a trio of Einstein-Kähler forms (the hyper-Kähler forms). The integrand of the partition function is represented by the product of certain ∂̄-torsions. The ∂̄-torsion is the extension of the R-torsion for the de Rham complex to that for the ∂̄-complex of a complex analytic manifold.
Patient Safety: Moving the Bar in Prison Health Care Standards
Greifinger, Robert B.; Mellow, Jeff
2010-01-01
Improvements in community health care quality through error reduction have been slow to transfer to correctional settings. We convened a panel of correctional experts, which recommended 60 patient safety standards focusing on such issues as creating safety cultures at organizational, supervisory, and staff levels through changes to policy and training and by ensuring staff competency, reducing medication errors, encouraging the seamless transfer of information between and within practice settings, and developing mechanisms to detect errors or near misses and to shift the emphasis from blaming staff to fixing systems. To our knowledge, this is the first published set of standards focusing on patient safety in prisons, adapted from the emerging literature on quality improvement in the community. PMID:20864714
Hutchinson, C.B.; Johnson, Dale M.; Gerhart, James M.
1981-01-01
A two-dimensional finite-difference model was developed for simulation of steady-state ground-water flow in the Floridan aquifer throughout a 932-square-mile area, which contains nine municipal well fields. The overlying surficial aquifer contains a constant-head water table and is coupled to the Floridan aquifer by a leakage term that represents flow through a confining layer separating the two aquifers. Under the steady-state condition, all storage terms are set to zero. Utilization of the head-controlled flux condition allows head and flow to vary at the model-grid boundaries. Procedures are described to calibrate the model, test its sensitivity to input-parameter errors, and verify its accuracy for predictive purposes. Also included are attachments that describe setting up and running the model. An example model-interrogation run shows anticipated drawdowns that should result from pumping at the newly constructed Cross Bar Ranch and Morris Bridge well fields. (USGS)
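The coupled flow problem summarized in this abstract (steady-state confined flow, all storage terms zero, with a leakage term connecting the aquifer to an overlying constant-head water table) can be sketched with a minimal finite-difference relaxation. The grid, transmissivity, leakance, and pumping values below are illustrative assumptions, not the calibrated inputs of the USGS model:

```python
# Minimal 2-D steady-state finite-difference sketch: a 5-point Laplacian
# for the confined aquifer plus a leakage term C*(h_top - h) coupling it
# to a constant-head water table above, and one pumping-well node.
# All parameter values are assumptions for illustration only.
N = 21          # grid points per side
T = 5000.0      # transmissivity, m^2/day (assumed)
C = 0.002       # leakance through the confining layer, 1/day (assumed)
h_top = 10.0    # constant-head water table, m (assumed)
dx = 1000.0     # grid spacing, m (assumed)
Q = 4000.0      # well withdrawal, m^3/day (assumed), at the grid center

h = [[h_top] * N for _ in range(N)]     # fixed-head boundary all around
lam = C * dx * dx / T                   # dimensionless leakage weight
for _ in range(5000):                   # Gauss-Seidel relaxation to steady state
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            w = Q / T if (i, j) == (N // 2, N // 2) else 0.0
            h[i][j] = (h[i-1][j] + h[i+1][j] + h[i][j-1] + h[i][j+1]
                       + lam * h_top - w) / (4.0 + lam)

drawdown = h_top - h[N // 2][N // 2]    # drawdown at the pumped node, m
print(round(drawdown, 2))
```

Each interior node balances flow to its four neighbors against leakage from above and any local withdrawal; the head-controlled boundary is idealized here as a fixed head.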
Characterizing the Performance of the Wheel Electrostatic Spectrometer
NASA Technical Reports Server (NTRS)
Johansen, Michael R.; Mackey, P. J.; Holbert, E.; Calle, C. I.; Clements, J. S.
2013-01-01
Findings: insulators need to be discharged after each wheel revolution; sensor responses are repeatable within one standard deviation of the signal noise; insulators may not need to be cleaned after each revolution. Parent technology: the Mars Environmental Compatibility Assessment electrometer, with electrostatic sensors carrying dissimilar cover insulators that tribocharge against regolith simulant, developed for use on the scoop of the 2001 Mars Odyssey lander. The Wheel Electrostatic Spectrometer (WES) embeds electrostatic sensors in a prototype Martian rover wheel; if successful, this technology will enable continuous electrostatic testing on Mars. The WES was rolled on JSC-1A lunar simulant. In the control experiment, static elimination was not conducted between trials and the capacitor was discharged after each experiment. In the charge-neutralization experiment, static elimination was conducted between trials, an air-ionizing fan was used to neutralize the surface charge on the cover insulators after each wheel revolution, and the capacitor was discharged after each trial. Care was taken to roll the WES with the same speed and pressure. Error bars represent one standard deviation of the noise of each sensor.
PCT theorem for fields with arbitrary high-energy behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luecke, W.
1986-07-01
A neutral scalar field A(x) is considered that has to be smeared by Fourier transforms of C^∞ functions with compact support but otherwise fulfills all the Wightman axioms, except strict local commutativity. It is shown to fulfill the PCT symmetry condition (where Ω denotes the vacuum state vector) ⟨Ω|A(x₁) ⋯ A(xₙ)Ω⟩ = ⟨Ω|A(-xₙ) ⋯ A(-x₁)Ω⟩ if and only if ⟨Ω|A(x₁) ⋯ A(xₙ)Ω⟩ - ⟨Ω|A(xₙ) ⋯ A(x₁)Ω⟩ can be represented, in a sense, as an infinite sum of derivatives of measures with supports containing no Jost points.
Discrete shear-transformation-zone plasticity modeling of notched bars
NASA Astrophysics Data System (ADS)
Kondori, Babak; Amine Benzerga, A.; Needleman, Alan
2018-02-01
Plane strain tension analyses of un-notched and notched bars are carried out using discrete shear transformation zone plasticity. In this framework, the carriers of plastic deformation are shear transformation zones (STZs) which are modeled as Eshelby inclusions. Superposition is used to represent a boundary value problem solution in terms of discretely modeled Eshelby inclusions, given analytically for an infinite elastic medium, and an image solution that enforces the prescribed boundary conditions. The image problem is a standard linear elastic boundary value problem that is solved by the finite element method. Potential STZ activation sites are randomly distributed in the bars and constitutive relations are specified for their evolution. Results are presented for un-notched bars, for bars with blunt notches and for bars with sharp notches. The computed stress-strain curves are serrated with the magnitude of the associated stress-drops depending on bar size, notch acuity and STZ evolution. Cooperative deformation bands (shear bands) emerge upon straining and, in some cases, high stress levels occur within the bands. Effects of specimen geometry and size on the stress-strain curves are explored. Depending on STZ kinetics, notch strengthening, notch insensitivity or notch weakening are obtained. The analyses provide a rationale for some conflicting findings regarding notch effects on the mechanical response of metallic glasses.
Abazov, Victor Mukhamedovich
2015-04-30
The recent paper on the charge asymmetry for electrons from W boson decay contains an error in Tables VII to XI, which show the correlation coefficients of systematic uncertainties. Furthermore, the correlation matrix elements shown in the original publication were the square roots of the calculated values.
Author Correction: Phase-resolved X-ray polarimetry of the Crab pulsar with the AstroSat CZT Imager
NASA Astrophysics Data System (ADS)
Vadawale, S. V.; Chattopadhyay, T.; Mithun, N. P. S.; Rao, A. R.; Bhattacharya, D.; Vibhute, A.; Bhalerao, V. B.; Dewangan, G. C.; Misra, R.; Paul, B.; Basu, A.; Joshi, B. C.; Sreekumar, S.; Samuel, E.; Priya, P.; Vinod, P.; Seetha, S.
2018-05-01
In the Supplementary Information file originally published for this Letter, in Supplementary Fig. 7 the error bars for the polarization fraction were provided as confidence intervals but instead should have been Bayesian credibility intervals. This has been corrected and does not alter the conclusions of the Letter in any way.
National Centers for Environmental Prediction
: Influence of convective parameterization on the systematic errors of Climate Forecast System (CFS) model; Climate Dynamics, 41, 45-61, 2013. Saha, S., S. Pokhrel and H. S. Chaudhari: Influence of Eurasian snow
Author Correction: Circuit dissection of the role of somatostatin in itch and pain.
Huang, Jing; Polgár, Erika; Solinski, Hans Jürgen; Mishra, Santosh K; Tseng, Pang-Yen; Iwagaki, Noboru; Boyle, Kieran A; Dickie, Allen C; Kriegbaum, Mette C; Wildner, Hendrik; Zeilhofer, Hanns Ulrich; Watanabe, Masahiko; Riddell, John S; Todd, Andrew J; Hoon, Mark A
2018-06-01
In the version of this article initially published online, the labels were switched for the right-hand pair of bars in Fig. 4e. The left one of the two should be Chloroquine + veh, the right one Chloroquine + CNO. The error has been corrected in the print, HTML and PDF versions of the article.
Characterization of rock thermal conductivity by high-resolution optical scanning
Popov, Y.A.; Pribnow, D.F.C.; Sass, J.H.; Williams, C.F.; Burkhardt, H.
1999-01-01
We compared three laboratory methods for thermal conductivity measurements: divided-bar, line-source, and optical scanning. These methods are widely used in geothermal and petrophysical studies, particularly as applied to research on cores from deep scientific boreholes. The relatively new optical scanning method has recently been perfected and applied to geophysical problems. A comparison among these methods for determining the thermal conductivity tensor for anisotropic rocks is based on a representative collection of 80 crystalline rock samples from the KTB continental deep borehole (Germany). Despite substantial inhomogeneity of rock thermal conductivity (up to 40-50% variation) and high anisotropy (with ratios of principal values attaining 2 and more), the results of measurements agree very well among the different methods. The discrepancy for measurements along the foliation is negligible (<1%). The component of thermal conductivity normal to the foliation reveals somewhat larger differences (3-4%). Optical scanning allowed us to characterize the thermal inhomogeneity of rocks and to identify a three-dimensional anisotropy in thermal conductivity of some gneiss samples. The merits of optical scanning include minor random errors (1.6%), the ability to record the variation of thermal conductivity along the sample, the ability to sample deeply using a slow scanning rate, freedom from constraints on sample size, shape, and quality of mechanical treatment of the sample surface, a contactless mode of measurement, high speed of operation, and the ability to measure on a cylindrical sample surface. More traditional methods remain superior for characterizing bulk conductivity at elevated temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buras, Andrzej J.; /Munich, Tech. U.; Gorbahn, Martin
The authors calculate the complete next-to-next-to-leading-order (NNLO) QCD corrections to the charm contribution of the rare decay K⁺ → π⁺νν̄. They encounter several new features, which were absent in lower orders. They discuss them in detail and present the results for the two-loop matching conditions of the Wilson coefficients, the three-loop anomalous dimensions, and the two-loop matrix elements of the relevant operators that enter the NNLO renormalization-group analysis of the Z-penguin and the electroweak box contribution. The inclusion of the NNLO QCD corrections leads to a significant reduction of the theoretical uncertainty, from ±9.8% down to ±2.4%, in the relevant parameter P_c(X), implying that the leftover scale uncertainties in B(K⁺ → π⁺νν̄) and in the determination of |V_td|, sin 2β, and γ from the K → πνν̄ system are ±1.3%, ±1.0%, ±0.006, and ±1.2°, respectively. For the charm-quark MS-bar mass m_c(m_c) = (1.30 ± 0.05) GeV and |V_us| = 0.2248, the next-to-leading-order value P_c(X) = 0.37 ± 0.06 is modified to P_c(X) = 0.38 ± 0.04 at the NNLO level, with the latter error fully dominated by the uncertainty in m_c(m_c). They present tables for P_c(X) as a function of m_c(m_c) and α_s(M_Z) and a very accurate analytic formula that summarizes these two dependences as well as the dominant theoretical uncertainties. Adding the recently calculated long-distance contributions, they find B(K⁺ → π⁺νν̄) = (8.0 ± 1.1) × 10⁻¹¹, with the present uncertainties in m_c(m_c) and the Cabibbo-Kobayashi-Maskawa elements being the dominant individual sources in the quoted error. They also emphasize that improved calculations of the long-distance contributions to K⁺ → π⁺νν̄ and of the isospin-breaking corrections in the evaluation of the weak-current matrix elements from K⁺ → π⁰e⁺ν would be valuable in order to increase the potential of the two golden K → πνν̄ decays in the search for new physics.
Landing Gear Components Noise Study - PIV and Hot-Wire Measurements
NASA Technical Reports Server (NTRS)
Hutcheson, Florence V.; Burley, Casey L.; Stead, Daniel J.; Becker, Lawrence E.; Price, Jennifer L.
2010-01-01
PIV and hot-wire measurements of the wake flow from rods and bars are presented. The test models include rods of different diameters and cross sections and a rod juxtaposed to a plate. The latter is representative of the latch door that is attached to an aircraft landing gear when the gear is deployed, while the single and multiple rod configurations tested are representative of some of the various struts and cables configuration present on an aircraft landing gear. The test set up is described and the flow measurements are presented. The effect of model surface treatment and freestream turbulence on the spanwise coherence of the vortex shedding is studied for several rod and bar configurations.
Value-cell bar charts for visualizing large transaction data sets.
Keim, Daniel A; Hao, Ming C; Dayal, Umeshwar; Lyons, Martha
2007-01-01
One of the common problems businesses need to solve is how to use large volumes of sales histories, Web transactions, and other data to understand the behavior of their customers and increase their revenues. Bar charts are widely used for daily analysis, but only show highly aggregated data. Users often need to visualize detailed multidimensional information reflecting the health of their businesses. In this paper, we propose an innovative visualization solution based on the use of value cells within bar charts to represent business metrics. The value of a transaction can be discretized into one or multiple cells: high-value transactions are mapped to multiple value cells, whereas many small-value transactions are combined into one cell. With value-cell bar charts, users can 1) visualize transaction value distributions and correlations, 2) identify high-value transactions and outliers at a glance, and 3) instantly display values at the transaction record level. Value-Cell Bar Charts have been applied with success to different sales and IT service usage applications, demonstrating the benefits of the technique over traditional charting techniques. A comparison with two variants of the well-known Treemap technique and our earlier work on Pixel Bar Charts is also included.
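The cell-mapping rule described above (high-value transactions span multiple value cells, while many small-value transactions pool into a shared cell) can be sketched as follows. This is a minimal illustration with an assumed cell size and made-up transactions, not the authors' implementation:

```python
# Sketch of the value-cell idea: each transaction's value is discretized
# into cells of a fixed size; large transactions map to multiple cells,
# while the sub-cell remainders of small transactions accumulate into
# shared cells. Cell size and data are assumptions for illustration.
def to_cells(transactions, cell_size):
    """Return (per-transaction whole cells, number of pooled shared cells)."""
    cells = []            # (transaction_id, n_cells) for high-value transactions
    remainder_pool = 0.0  # accumulated sub-cell value from small transactions
    pooled = 0            # shared cells formed from the pool
    for tid, value in transactions:
        n, rem = divmod(value, cell_size)
        if n:
            cells.append((tid, int(n)))
        remainder_pool += rem
        while remainder_pool >= cell_size:
            remainder_pool -= cell_size
            pooled += 1   # one cell holding many small values combined
    return cells, pooled

txns = [("t1", 950.0), ("t2", 40.0), ("t3", 75.0), ("t4", 320.0)]
big, shared = to_cells(txns, cell_size=100.0)
print(big)     # high-value transactions mapped to multiple cells
print(shared)  # cells formed by pooling small-value transactions
```

With these assumed numbers, the two large transactions remain individually visible (and queryable at the record level), while the small ones merge, which is exactly the distribution-plus-outlier view the abstract describes.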
NASA Astrophysics Data System (ADS)
González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro
2012-07-01
A low-cost image orthorectification tool based on compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated on three different bridges by comparison with laser scanning data. The surveying process is delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. The results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (for 95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.
Reanalyzing the visible colors of Centaurs and KBOs: what is there and what we might be missing
NASA Astrophysics Data System (ADS)
Peixinho, Nuno; Delsanti, Audrey; Doressoundiram, Alain
2015-05-01
Since the discovery of the Kuiper belt, broadband surface colors were thoroughly studied as a first approximation to the object reflectivity spectra. Visible colors (BVRI) have proven to be a reasonable proxy for real spectra, which are rather linear in this range. In contrast, near-IR colors (JHK bands) could be misleading when absorption features of ices are present in the spectra. Although the physical and chemical information provided by colors are rather limited, broadband photometry remains the best tool for establishing the bulk surface properties of Kuiper belt objects (KBOs) and Centaurs. In this work, we explore for the first time general, recurrent effects in the study of visible colors that could affect the interpretation of the scientific results: i) how a correlation could be missed or weakened as a result of the data error bars; ii) the "risk" of missing an existing trend because of low sampling, and the possibility of making quantified predictions on the sample size needed to detect a trend at a given significance level - assuming the sample is unbiased; iii) the use of partial correlations to distinguish the mutual effect of two or more (physical) parameters; and iv) the sensitivity of the "reddening line" tool to the central wavelength of the filters used. To illustrate and apply these new tools, we have compiled the visible colors and orbital parameters of about 370 objects available in the literature - assumed, by default, as unbiased samples - and carried out a traditional analysis per dynamical family. 
Our results show in particular how a) data error bars impose a limit on the detectable correlations regardless of sample size and that therefore, once that limit is achieved, it is important to diminish the error bars, but it is pointless to enlarge the sampling with the same or larger errors; b) almost all dynamical families still require larger samplings to ensure the detection of correlations stronger than ±0.5, that is, correlations that may explain ~25% or more of the color variability; c) the correlation strength between (V - R) vs. (R - I) is systematically lower than the one between (B - V) vs. (V - R) and is not related with error-bar differences between these colors; d) it is statistically equivalent to use any of the different flavors of orbital excitation or collisional velocity parameters regarding the famous color-inclination correlation among classical KBOs - which no longer appears to be a strong correlation - whereas the inclination and Tisserand parameter relative to Neptune cannot be separated from one another; and e) classical KBOs are the only dynamical family that shows neither (B - V) vs. (V - R) nor (V - R) vs. (R - I) correlations. It therefore is the family with the most unpredictable visible surface reflectivities. Tables 4 and 5 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/A35
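Point ii) above, predicting the sample size needed to detect a trend at a given significance level, is commonly done with the Fisher z-transform of the correlation coefficient. The sketch below uses that standard approximation; the significance and power levels are assumed, and this is not necessarily the authors' exact procedure:

```python
# Standard Fisher z-transform power calculation for correlations:
# z(r) = atanh(r) is approximately normal with SE ~ 1/sqrt(n - 3),
# so the sample size needed to detect a true correlation r follows
# from the usual two-quantile formula. Alpha/power values are assumed.
import math
from statistics import NormalDist

def n_needed(r, alpha=0.05, power=0.80):
    """Approximate sample size to detect a true correlation r
    with a two-sided test at level alpha and the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    fz = 0.5 * math.log((1 + r) / (1 - r))      # Fisher z of r
    return math.ceil(((z_a + z_b) / fz) ** 2 + 3)

# Detecting |rho| >= 0.5 (the threshold discussed above) needs only a
# few tens of objects; weaker trends require sharply larger samples.
print(n_needed(0.5), n_needed(0.3), n_needed(0.1))
```

This also illustrates result a): larger samples only help down to the correlation strength that the error bars allow to exist in the data at all.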
NASA Astrophysics Data System (ADS)
Mendigutía, I.; Oudmaijer, R. D.; Garufi, A.; Lumsden, S. L.; Huélamo, N.; Cheetham, A.; de Wit, W. J.; Norris, B.; Olguin, F. A.; Tuthill, P.
2017-12-01
Context. HD 100546 is one of the few known pre-main-sequence stars that may host a planetary system in its disk. Aims: This work aims to contribute to our understanding of HD 100546 by analyzing new polarimetric images with high spatial resolution. Methods: Using VLT/SPHERE/ZIMPOL with two filters in Hα and the adjacent continuum, we have probed the disk gap and the surface layers of the outer disk, covering a region <500 mas (<55 au at 109 pc) from the central star, at an angular resolution of 20 mas. Results: Our data show an asymmetry: the SE and NW regions of the outer disk are more polarized than the SW and NE regions. This asymmetry can be explained from a preferential scattering angle close to 90° and is consistent with previous polarization images. The outer disk in our observations extends from 13 ± 2 to 45 ± 9 au, with a position angle and inclination of 137 ± 5° and 44 ± 8°, respectively. The comparison with previous estimates suggests that the disk inclination could increase with the stellocentric distance, although the different measurements are still consistent within the error bars. In addition, no direct signature of the innermost candidate companion is detected from the polarimetric data, confirming recent results that were based on intensity imagery. We set an upper limit to its mass accretion rate <10-8 M⊙ yr-1 for a substellar mass of 15 MJup. Finally, we report the first detection (>3σ) of a 20 au bar-like structure that crosses the gap through the central region of HD 100546. Conclusions: In the absence of additional data, it is tentatively suggested that the bar could be dust dragged by infalling gas that radially flows from the outer disk to the inner region. This could represent an exceptional case in which a small-scale radial inflow is observed in a single system. 
If this scenario is confirmed, it could explain the presence of atomic gas in the inner disk that would otherwise accrete on to the central star on a timescale of a few months/years, as previously indicated from spectro-interferometric data, and could be related with additional (undetected) planets.
A UNIFIED FRAMEWORK FOR THE ORBITAL STRUCTURE OF BARS AND TRIAXIAL ELLIPSOIDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valluri, Monica; Abbott, Caleb; Shen, Juntai
We examine a large random sample of orbits in two self-consistent simulations of N-body bars. Orbits in these bars are classified both visually and with a new automated orbit classification method based on frequency analysis. The well-known prograde x1 orbit family originates from the same parent orbit as the box orbits in stationary and rotating triaxial ellipsoids. However, only a small fraction of bar orbits (∼4%) have predominately prograde motion like their periodic parent orbit. Most bar orbits arising from the x1 orbit have little net angular momentum in the bar frame, making them equivalent to box orbits in rotating triaxial potentials. In these simulations a small fraction of bar orbits (∼7%) are long-axis tubes that behave exactly like those in triaxial ellipsoids: they are tipped about the intermediate axis owing to the Coriolis force, with the sense of tipping determined by the sign of their angular momentum about the long axis. No orbits parented by prograde periodic x2 orbits are found in the pure bar model, but a tiny population (∼2%) of short-axis tube orbits parented by retrograde x4 orbits are found. When a central point mass representing a supermassive black hole (SMBH) is grown adiabatically at the center of the bar, those orbits that lie in the immediate vicinity of the SMBH are transformed into precessing Keplerian orbits that belong to the same major families (short-axis tubes, long-axis tubes and boxes) occupying the bar at larger radii. During the growth of an SMBH, the inflow of mass and outward transport of angular momentum transform some x1 and long-axis tube orbits into prograde short-axis tubes. This study has important implications for future attempts to constrain the masses of SMBHs in barred galaxies using orbit-based methods like the Schwarzschild orbit superposition scheme and for understanding the observed features in barred galaxies.
Thin family: a new barcode concept
NASA Astrophysics Data System (ADS)
Allais, David C.
1991-02-01
This paper describes a new space-efficient family of thin bar code symbologies appropriate for representing small amounts of information. The proposed structure is 30 to 50 percent more compact than the narrowest existing bar code when 12 or fewer bits of information are to be encoded in each symbol. Potential applications for these symbologies include menus, catalogs, automated test and survey scoring, and biological research such as the tracking of honey bees.
Chemical Evolution and History of Star Formation in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Gustafsson, Bengt
1995-07-01
Large-scale processes controlling star formation and nucleosynthesis are fundamental but poorly understood. This is especially true for external galaxies. A detailed study of individual main-sequence stars in the LMC Bar is proposed. The LMC is close enough to allow this, has considerable spread in stellar ages, and has a structure permitting identification of stellar populations and their structural features. The Bar presumably plays a dominant role in the chemical and dynamical evolution of the galaxy. Our knowledge is, at best, based on educated guesses. Still, the major population of the Bar is quite old, and many member stars are relatively evolved. The Bar seems to contain stars similar to those of Intermediate to Extreme Pop II in the Galaxy. We want to study the history of star formation, the chemical evolution, and the initial mass function of the population dominating the Bar. We will use field stars close to the turn-off point in the HR diagram. From earlier studies, we know that 250-500 such stars are available for uvby photometry in the PC field. We aim at an accuracy of 0.1-0.2 dex in [Me/H] and 25% or better in relative ages. This requires an accuracy of about 0.02 mag in the uvby indices, which can be reached, taking into account errors in calibration, flat fielding, and guiding, and problems due to crowding. For a study of the luminosity function, fainter stars will be included as well. Calibration fields are available in Omega Cen and M 67.
Shape of LOSVDs in Barred Disks: Implications for Future IFU Surveys
NASA Astrophysics Data System (ADS)
Li, Zhao-Yu; Shen, Juntai; Bureau, Martin; Zhou, Yingying; Du, Min; Debattista, Victor P.
2018-02-01
The shape of line-of-sight velocity distributions (LOSVDs) carries important information about the internal dynamics of galaxies. The skewness of LOSVDs represents their asymmetric deviation from a Gaussian profile. Correlations between the skewness parameter (h₃) and the mean velocity (V̄) of a Gauss–Hermite series reflect the underlying stellar orbital configurations of different morphological components. Using two self-consistent N-body simulations of disk galaxies with different bar strengths, we investigate h₃–V̄ correlations at different inclination angles. Similar to previous studies, we find anticorrelations in the disk area, and positive correlations in the bar area when viewed edge-on. However, at intermediate inclinations, the outer parts of bars exhibit anticorrelations, while the core areas dominated by the boxy/peanut-shaped (B/PS) bulges still maintain weak positive correlations. When viewed edge-on, particles in the foreground/background disk (the wing region) in the bar area constitute the main velocity peak, whereas the particles in the bar contribute to the high-velocity tail, generating the h₃–V̄ correlation. If we remove the wing particles, the LOSVDs of the particles in the outer part of the bar exhibit only a low-velocity tail, resulting in a negative h₃–V̄ correlation, whereas the core areas in the central region still show weakly positive correlations. We discuss implications for IFU observations of bars, and show that the variation of the h₃–V̄ correlation in a disk galaxy may be used as a kinematic indicator of the bar and the B/PS bulge.
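The mechanism described above (a main velocity peak from the disk "wing" plus a high-velocity tail from bar stars skewing the LOSVD) can be illustrated with a toy two-Gaussian LOSVD. The sketch below uses the classical sample skewness rather than a full Gauss–Hermite fit; for small deviations the two are proportional (skewness ≈ 4√3·h₃), so the sign behavior is the same. All numbers here are assumptions for illustration:

```python
# Toy LOSVD: a dominant Gaussian peak (the "wing" of foreground/background
# disk stars) plus a smaller high-velocity component (bar stars). The sign
# of the skewness then mirrors the sign of h3 discussed in the abstract.
# All velocities, widths, and weights are illustrative assumptions.
import random

def skewness(vs):
    """Classical standardized third moment of a sample."""
    n = len(vs)
    m = sum(vs) / n
    s2 = sum((v - m) ** 2 for v in vs) / n
    s3 = sum((v - m) ** 3 for v in vs) / n
    return s3 / s2 ** 1.5

random.seed(1)
wing = [random.gauss(100.0, 30.0) for _ in range(20000)]  # main peak
tail = [random.gauss(220.0, 40.0) for _ in range(4000)]   # bar tail
losvd = wing + tail

print(skewness(losvd) > 0)  # tail above the peak -> positive skew (h3 > 0)
print(skewness(wing))       # single Gaussian -> skewness near 0
```

Removing the tail component recovers a near-symmetric profile, mimicking the abstract's experiment of removing the wing particles to isolate one component.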
The psychomechanics of simulated sound sources: Material properties of impacted bars
NASA Astrophysics Data System (ADS)
McAdams, Stephen; Chaigne, Antoine; Roussarie, Vincent
2004-03-01
Sound can convey information about the materials composing an object that are often not directly available to the visual system. Material and geometric properties of synthesized impacted bars with a tube resonator were varied, their perceptual structure was inferred from multidimensional scaling of dissimilarity judgments, and the psychophysical relations between the two were quantified. Constant cross-section bars varying in mass density and viscoelastic damping coefficient were synthesized with a physical model in experiment 1. A two-dimensional perceptual space resulted, and the dimensions were correlated with the mechanical parameters after applying a power-law transformation. Variable cross-section bars varying in length and viscoelastic damping coefficient were synthesized in experiment 2 with two sets of lengths creating high- and low-pitched bars. In the low-pitched bars, there was a coupling between the bar and the resonator that modified the decay characteristics. Perceptual dimensions again corresponded to the mechanical parameters. A set of potential temporal, spectral, and spectrotemporal correlates of the auditory representation were derived from the signal. The dimensions related to mass density and bar length were correlated with the frequency of the lowest partial and are related to pitch perception. The correlate most likely to represent the viscoelastic damping coefficient across all three stimulus sets is a linear combination of a decay constant derived from the temporal envelope and the spectral center of gravity derived from a cochlear representation of the signal. These results attest to the perceptual salience of energy-loss phenomena in sound source behavior.
Bars and spirals in tidal interactions with an ensemble of galaxy mass models
NASA Astrophysics Data System (ADS)
Pettitt, Alex R.; Wadsley, J. W.
2018-03-01
We present simulations of the gaseous and stellar material in several different galaxy mass models under the influence of different tidal fly-bys to assess the changes in their bar and spiral morphology. Five different mass models are chosen to represent the variety of rotation curves seen in nature. We find that a multitude of different spiral and bar structures can be created, with their properties dependent on the strength of the interaction. We calculate pattern speeds, spiral wind-up rates, bar lengths, and angular momentum exchange to quantify the changes in disc morphology in each scenario. The wind-up rates of the tidal spirals follow the 2:1 resonance very closely for the flat and dark matter-dominated rotation curves, whereas the more baryon-dominated curves tend to wind up faster, influenced by their inner bars. Clear spurs are seen in most of the tidal spirals, most noticeably in the flat rotation curve models. Bars formed both in isolation and in interactions agree well with those seen in real galaxies, with a mixture of `fast' and `slow' rotators. We find no strong correlation between bar length or pattern speed and the interaction strength. Bar formation is, however, accelerated or induced in four of our five models. We close by briefly comparing the morphology of our models to real galaxies, easily finding analogues for nearly all simulations presented here, showing that passages of small companions can easily reproduce an ensemble of observed morphologies.
Simulation of the oscillation regimes of bowed bars: a non-linear modal approach
NASA Astrophysics Data System (ADS)
Inácio, Octávio; Henrique, Luís.; Antunes, José
2003-06-01
It is still a challenge to properly simulate the complex stick-slip behavior of multi-degree-of-freedom systems. In the present paper we investigate the self-excited non-linear responses of bowed bars, using a time-domain modal approach, coupled with an explicit model for the frictional forces, which is able to emulate stick-slip behavior. This computational approach can provide very detailed simulations and is well suited to deal with systems presenting a dispersive behavior. The effects of the bar supporting fixture are included in the model, as well as a velocity-dependent friction coefficient. We present the results of numerical simulations, for representative ranges of the bowing velocity and normal force. Computations have been performed for constant-section aluminum bars, as well as for real vibraphone bars, which display a central undercutting intended to help tuning the first modes. Our results show limiting values of the normal force F_N and bowing velocity ẏ_bow for which the "musical" self-sustained solutions exist. Beyond this "playability space", double-period and even chaotic regimes were found for specific ranges of the input parameters F_N and ẏ_bow. As with bowed strings, the vibration amplitudes of bowed bars increase with the bow velocity. However, in contrast to string instruments, bowed bars "slip" during most of the motion cycle. Another important difference is that, in bowed bars, the self-excited motions are dominated by the system's first mode. Our numerical results are qualitatively supported by preliminary experimental results.
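The stick-slip mechanism described in this record can be illustrated with a minimal single-degree-of-freedom sketch. This is not the paper's multi-modal model: the mass, stiffness, bow parameters, and the exponential velocity-dependent friction law below are all illustrative assumptions, chosen only to exhibit a self-sustained stick-slip cycle.

```python
import math

# Minimal 1-DOF stick-slip sketch of a bowed bar (illustrative parameters;
# the paper couples many modes and uses a measured friction characteristic)
m, k = 0.01, 4000.0             # modal mass (kg) and stiffness (N/m)
mu_s, mu_d, v0 = 0.8, 0.3, 0.1  # static/dynamic friction, decay velocity (m/s)
FN, v_bow = 2.0, 0.2            # bow normal force (N) and bow velocity (m/s)
dt, steps = 1e-4, 20000

def mu(v_rel):
    """Friction coefficient falling from mu_s toward mu_d with sliding speed."""
    return mu_d + (mu_s - mu_d) * math.exp(-abs(v_rel) / v0)

x, v, stuck, stick_events = 0.0, 0.0, False, 0
xs = []
for _ in range(steps):
    if stuck and abs(k * x) > mu_s * FN:
        stuck = False                  # spring force beats static friction: release
    if stuck:
        v = v_bow                      # stick phase: contact point follows the bow
    else:
        v_rel = v - v_bow
        f = -math.copysign(mu(v_rel) * FN, v_rel) - k * x   # kinetic friction + spring
        v_new = v + dt * f / m
        if (v - v_bow) * (v_new - v_bow) < 0:   # velocity crossed the bow's:
            stuck, v = True, v_bow              # capture into the stick phase
            stick_events += 1
        else:
            v = v_new
    x += dt * v
    xs.append(x)

# Self-sustained oscillation: stick events keep occurring and the
# displacement keeps cycling in the second half of the run.
tail = xs[len(xs) // 2:]
print(stick_events > 0, max(tail) - min(tail) > 1e-5)
```

Because mu falls with sliding speed, steady sliding is unstable and the motion settles into the alternating stick/slip limit cycle the abstract describes; raising FN or lowering v_bow in this toy model changes where that cycle lives, loosely mirroring the paper's "playability space".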
ERIC Educational Resources Information Center
Grandjean, Burke D.; Taylor, Patricia A.; Weiner, Jay
2002-01-01
During the women's all-around gymnastics final at the 2000 Olympics, the vault was inadvertently set 5 cm too low for a random half of the gymnasts. The error was widely viewed as undermining their confidence and subsequent performance. However, data from pretest and posttest scores on the vault, bars, beam, and floor indicated that the vault…
Implementing Material Surfaces with an Adhesive Switch
2014-02-28
[Figure caption fragment: data series for M15 (solid triangles), M13 (open circles), M11 (solid circles), and NC14 (open triangles) DNA primary targets, with error bars indicated. Partial sequence list: 5'–ATCAGGCGCAA–3'; M13 = 5'–ATCAGCGGCAATC–3'; M15 = 5'–ATCAGCCCCAATCCA–3'; L3M9 = 5'–ATLCACLCCGLC–3'; L3M11 (truncated).]
Kinematic parameter estimation using close range photogrammetry for sport applications
NASA Astrophysics Data System (ADS)
Magre Colorado, Luz Alejandra; Martínez Santos, Juan Carlos
2015-12-01
In this article, we show the development of a low-cost hardware/software system based on close range photogrammetry to track the movement of a person performing weightlifting. The goal is to reduce the costs to the trainers and athletes dedicated to this sport when it comes to analyzing the athlete's performance and avoiding injuries or accidents. We used a webcam as the data acquisition hardware and developed the software stack in Processing using the OpenCV library. Our algorithm extracts size, position, velocity, and acceleration measurements of the bar along the course of the exercise. We present detailed characteristics of the system with their results in a controlled setting. The current work improves the detection and tracking capabilities of a previous version of this system by using the HSV color model instead of RGB. Preliminary results show that the system is able to profile the movement of the bar as well as determine the size, position, velocity, and acceleration values of a marker/target in the scene. The average error in finding the size of an object at four meters of distance is less than 4%, and the error in the acceleration value is 1.01% on average.
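The record's switch from RGB to HSV thresholding can be sketched minimally with Python's standard-library colorsys (the actual system was built in Processing with OpenCV; the function name and threshold values below are illustrative). Thresholding on hue is more tolerant of lighting changes because brightness variation mostly moves the V channel, not H.

```python
import colorsys

def hue_mask(pixels, target_hue, tol=0.05, min_sat=0.4, min_val=0.2):
    """Flag pixels whose hue is within tol of target_hue.

    pixels: list of (r, g, b) tuples with components in [0, 1].
    Hue is circular in [0, 1), so the distance wraps around.
    """
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        d = min(abs(h - target_hue), 1.0 - abs(h - target_hue))
        mask.append(d <= tol and s >= min_sat and v >= min_val)
    return mask

# A bright and a dim red pixel share nearly the same hue (~0.0), so one
# hue threshold catches both; an RGB threshold tuned to the bright pixel
# would likely miss the dim one. A blue pixel is rejected either way.
bright_red = (0.9, 0.1, 0.1)
dim_red    = (0.45, 0.05, 0.05)   # same marker under weaker light
blue       = (0.1, 0.1, 0.9)
print(hue_mask([bright_red, dim_red, blue], target_hue=0.0))
# → [True, True, False]
```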
Analysis of D0 -> K anti-K X Decays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessop, Colin P.
2003-06-06
Using data taken with the CLEO II detector, they have studied the decays of the D{sup 0} to K{sup +}K{sup -}, K{sup 0}{bar K}{sup 0}, K{sub S}{sup 0}K{sub S}{sup 0}, K{sub S}{sup 0}K{sub S}{sup 0}{pi}{sup 0}, K{sup +}K{sup -}{pi}{sup 0}. The authors present significantly improved results for B(D{sup 0} {yields} K{sup +}K{sup -}) = (0.454 {+-} 0.028 {+-} 0.035)%, B(D{sup 0} {yields} K{sup 0}{bar K}{sup 0}) = (0.054 {+-} 0.012 {+-} 0.010)% and B(D{sup 0} {yields} K{sub S}{sup 0}K{sub S}{sup 0}K{sub S}{sup 0}) = (0.074 {+-} 0.010 {+-} 0.015)%, where the first errors are statistical and the second errors are the estimate of their systematic uncertainty. They also present a new upper limit B(D{sup 0} {yields} K{sub S}{sup 0}K{sub S}{sup 0}{pi}{sup 0}) < 0.059% at the 90% confidence level and the first measurement of B(D{sup 0} {yields} K{sup +}K{sup -}{pi}{sup 0}) = (0.14 {+-} 0.04)%.
NASA Astrophysics Data System (ADS)
Panasenko, N. N.; Sinelschikov, A. V.
2017-11-01
The finite element method is considered the most effective for calculating the strength and stability of buildings and engineering structures. As a rule, for the modelling of supporting 3-D frameworks, finite elements with six degrees of freedom at each node are used. In practice, such supporting frameworks consist of thin-walled welded bars and hot-rolled bars of open and closed profiles, in which cross-sectional deplanation must be taken into account. This idea was first introduced by L N Vorobjev and led to one of the simplest variants of thin-walled bar theory. The development of this approach is based on taking into account the middle-surface shear deformation and adding the deformations of a thin-walled open bar to the formulas for potential and kinetic energy; these deformations depend on shearing stress and can reduce the frequency of the first vibration mode by up to 13%. The authors of the article recommend taking this effect into account when calculating fail-safe dynamic systems.
First observation of forward Z → b b bar production in pp collisions at √{ s } = 8 TeV
NASA Astrophysics Data System (ADS)
Aaij, R.; Adeva, B.; Adinolfi, M.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Alfonso Albero, A.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Archilli, F.; d'Argent, P.; Arnau Romeu, J.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Babuschkin, I.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baker, S.; Balagura, V.; Baldini, W.; Baranov, A.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Baryshnikov, F.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Beiter, A.; Bel, L. J.; Beliy, N.; Bellee, V.; Belloli, N.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Beranek, S.; Berezhnoy, A.; Bernet, R.; Berninghoff, D.; Bertholet, E.; Bertolin, A.; Betancourt, C.; Betti, F.; Bettler, M.-O.; van Beuzekom, M.; Bezshyiko, Ia.; Bifani, S.; Billoir, P.; Birnkraut, A.; Bitadze, A.; Bizzeti, A.; Bjørn, M.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Boettcher, T.; Bondar, A.; Bondar, N.; Bonivento, W.; Bordyuzhin, I.; Borgheresi, A.; Borghi, S.; Borisyak, M.; Borsato, M.; Bossu, F.; Boubdir, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Britton, T.; Brodzicka, J.; Brundu, D.; Buchanan, E.; Burr, C.; Bursche, A.; Buytaert, J.; Byczynski, W.; Cadeddu, S.; Cai, H.; Calabrese, R.; Calladine, R.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D. H.; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cassina, L.; Castillo Garcia, L.; Cattaneo, M.; Cavallero, G.; Cenci, R.; Chamont, D.; Chapman, M. G.; Charles, M.; Charpentier, Ph.; Chatzikonstantinidis, G.; Chefdeville, M.; Chen, S.; Cheung, S. F.; Chitic, S.-G.; Chobanova, V.; Chrzaszcz, M.; Chubykin, A.; Ciambrone, P.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. 
L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collins, P.; Colombo, T.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombs, G.; Coquereau, S.; Corti, G.; Corvo, M.; Costa Sobral, C. M.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Crocombe, A.; Cruz Torres, M.; Currie, R.; D'Ambrosio, C.; Da Cunha Marinho, F.; Dall'Occo, E.; Dalseno, J.; Davis, A.; De Aguiar Francisco, O.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Serio, M.; De Simone, P.; Dean, C. T.; Decamp, D.; Del Buono, L.; Dembinski, H.-P.; Demmer, M.; Dendek, A.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Nezza, P.; Dijkstra, H.; Dordei, F.; Dorigo, M.; Dosil Suárez, A.; Douglas, L.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Durante, P.; Dzhelyadin, R.; Dziewiecki, M.; Dziurda, A.; Dzyuba, A.; Easo, S.; Ebert, M.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Farley, N.; Farry, S.; Fazzini, D.; Federici, L.; Ferguson, D.; Fernandez, G.; Fernandez Declara, P.; Fernandez Prieto, A.; Ferrari, F.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fini, R. A.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fleuret, F.; Fohl, K.; Fontana, M.; Fontanelli, F.; Forshaw, D. C.; Forty, R.; Franco Lima, V.; Frank, M.; Frei, C.; Fu, J.; Funk, W.; Furfaro, E.; Färber, C.; Gabriel, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; Garcia Martin, L. M.; García Pardiñas, J.; Garra Tico, J.; Garrido, L.; Garsed, P. J.; Gascon, D.; Gaspar, C.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gianì, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gizdov, K.; Gligorov, V. V.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gorelov, I. V.; Gotti, C.; Govorkova, E.; Grabowski, J. 
P.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graverini, E.; Graziani, G.; Grecu, A.; Greim, R.; Griffith, P.; Grillo, L.; Gruber, L.; Gruberg Cazon, B. R.; Grünberg, O.; Gushchin, E.; Guz, Yu.; Gys, T.; Göbel, C.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hamilton, B.; Han, X.; Hancock, T. H.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; Hasse, C.; Hatch, M.; He, J.; Hecker, M.; Heinicke, K.; Heister, A.; Hennessy, K.; Henrard, P.; Henry, L.; van Herwijnen, E.; Heß, M.; Hicheur, A.; Hill, D.; Hombach, C.; Hopchev, P. H.; Huard, Z. C.; Hulsbergen, W.; Humair, T.; Hushchyn, M.; Hutchcroft, D.; Ibis, P.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jalocha, J.; Jans, E.; Jawahery, A.; Jiang, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Karacson, M.; Kariuki, J. M.; Karodia, S.; Kazeev, N.; Kecke, M.; Kelsey, M.; Kenzie, M.; Ketel, T.; Khairullin, E.; Khanji, B.; Khurewathanakul, C.; Kirn, T.; Klaver, S.; Klimaszewski, K.; Klimkovich, T.; Koliiev, S.; Kolpin, M.; Komarov, I.; Kopecna, R.; Koppenburg, P.; Kosmyntseva, A.; Kotriakhova, S.; Kozeiha, M.; Kravchuk, L.; Kreps, M.; Krokovny, P.; Kruse, F.; Krzemien, W.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kurek, K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lanfranchi, G.; Langenbruch, C.; Latham, T.; Lazzeroni, C.; Le Gac, R.; Leflat, A.; Lefrançois, J.; Lefèvre, R.; Lemaitre, F.; Lemos Cid, E.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, P.-R.; Li, T.; Li, Y.; Li, Z.; Likhomanenko, T.; Lindner, R.; Lionetto, F.; Lisovskyi, V.; Liu, X.; Loh, D.; Loi, A.; Longstaff, I.; Lopes, J. H.; Lucchesi, D.; Lucio Martinez, M.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Lusiani, A.; Lyu, X.; Machefert, F.; Maciuc, F.; Macko, V.; Mackowiak, P.; Maddrell-Mander, S.; Maev, O.; Maguire, K.; Maisuzenko, D.; Majewski, M. 
W.; Malde, S.; Malinin, A.; Maltsev, T.; Manca, G.; Mancinelli, G.; Manning, P.; Marangotto, D.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marinangeli, M.; Marino, P.; Marks, J.; Martellotti, G.; Martin, M.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Martins Tostes, D.; Massacrier, L. M.; Massafferri, A.; Matev, R.; Mathad, A.; Mathe, Z.; Matteuzzi, C.; Mauri, A.; Maurice, E.; Maurin, B.; Mazurov, A.; McCann, M.; McNab, A.; McNulty, R.; Mead, J. V.; Meadows, B.; Meaux, C.; Meier, F.; Meinert, N.; Melnychuk, D.; Merk, M.; Merli, A.; Michielin, E.; Milanes, D. A.; Millard, E.; Minard, M.-N.; Minzoni, L.; Mitzel, D. S.; Mogini, A.; Molina Rodriguez, J.; Mombächer, T.; Monroy, I. A.; Monteil, S.; Morandin, M.; Morello, M. J.; Morgunova, O.; Moron, J.; Morris, A. B.; Mountain, R.; Muheim, F.; Mulder, M.; Müller, D.; Müller, J.; Müller, K.; Müller, V.; Naik, P.; Nakada, T.; Nandakumar, R.; Nandi, A.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, T. D.; Nguyen-Mau, C.; Nieswand, S.; Niet, R.; Nikitin, N.; Nikodem, T.; Nogay, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Ogilvy, S.; Oldeman, R.; Onderwater, C. J. G.; Ossowska, A.; Otalora Goicochea, J. M.; Owen, P.; Oyanguren, A.; Pais, P. R.; Palano, A.; Palutan, M.; Papanestis, A.; Pappagallo, M.; Pappalardo, L. L.; Parker, W.; Parkes, C.; Passaleva, G.; Pastore, A.; Patel, M.; Patrignani, C.; Pearce, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Perret, P.; Pescatore, L.; Petridis, K.; Petrolini, A.; Petrov, A.; Petruzzo, M.; Picatoste Olloqui, E.; Pietrzyk, B.; Pikies, M.; Pinci, D.; Pisani, F.; Pistone, A.; Piucci, A.; Placinta, V.; Playfer, S.; Plo Casasus, M.; Polci, F.; Poli Lener, M.; Poluektov, A.; Polyakov, I.; Polycarpo, E.; Pomery, G. 
J.; Ponce, S.; Popov, A.; Popov, D.; Poslavskii, S.; Potterat, C.; Price, E.; Prisciandaro, J.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Pullen, H.; Punzi, G.; Qian, W.; Quagliani, R.; Quintana, B.; Rachwal, B.; Rademacker, J. H.; Rama, M.; Ramos Pernas, M.; Rangel, M. S.; Raniuk, I.; Ratnikov, F.; Raven, G.; Ravonel Salzgeber, M.; Reboud, M.; Redi, F.; Reichert, S.; dos Reis, A. C.; Remon Alepuz, C.; Renaudin, V.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Rives Molina, V.; Robbe, P.; Robert, A.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Lopez, J. A.; Rodriguez Perez, P.; Rogozhnikov, A.; Roiser, S.; Rollings, A.; Romanovskiy, V.; Romero Vidal, A.; Ronayne, J. W.; Rotondo, M.; Rudolph, M. S.; Ruf, T.; Ruiz Valls, P.; Ruiz Vidal, J.; Saborido Silva, J. J.; Sadykhov, E.; Sagidova, N.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santimaria, M.; Santovetti, E.; Sarpis, G.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrina, D.; Schael, S.; Schellenberg, M.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmelzer, T.; Schmidt, B.; Schneider, O.; Schopper, A.; Schreiner, H. F.; Schubert, K.; Schubiger, M.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Semennikov, A.; Sepulveda, E. S.; Sergi, A.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Siddi, B. G.; Silva Coutinho, R.; Silva de Oliveira, L.; Simi, G.; Simone, S.; Sirendi, M.; Skidmore, N.; Skwarnicki, T.; Smith, E.; Smith, I. T.; Smith, J.; Smith, M.; Soares Lavra, l.; Sokoloff, M. D.; Soler, F. J. P.; Souza De Paula, B.; Spaan, B.; Spradlin, P.; Sridharan, S.; Stagni, F.; Stahl, M.; Stahl, S.; Stefko, P.; Stefkova, S.; Steinkamp, O.; Stemmle, S.; Stenyakin, O.; Stepanova, M.; Stevens, H.; Stone, S.; Storaci, B.; Stracka, S.; Stramaglia, M. 
E.; Straticiuc, M.; Straumann, U.; Sun, J.; Sun, L.; Sutcliffe, W.; Swientek, K.; Syropoulos, V.; Szczekowski, M.; Szumlak, T.; Szymanski, M.; T'Jampens, S.; Tayduganov, A.; Tekampe, T.; Tellarini, G.; Teubert, F.; Thomas, E.; van Tilburg, J.; Tilley, M. J.; Tisserand, V.; Tobin, M.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Toriello, F.; Tourinho Jadallah Aoude, R.; Tournefier, E.; Traill, M.; Tran, M. T.; Tresch, M.; Trisovic, A.; Tsaregorodtsev, A.; Tsopelas, P.; Tully, A.; Tuning, N.; Ukleja, A.; Usachov, A.; Ustyuzhanin, A.; Uwer, U.; Vacca, C.; Vagner, A.; Vagnoni, V.; Valassi, A.; Valat, S.; Valenti, G.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vecchi, S.; van Veghel, M.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Venkateswaran, A.; Verlage, T. A.; Vernet, M.; Vesterinen, M.; Viana Barbosa, J. V.; Viaud, B.; Vieira, D.; Vieites Diaz, M.; Viemann, H.; Vilasis-Cardona, X.; Vitti, M.; Volkov, V.; Vollhardt, A.; Voneki, B.; Vorobyev, A.; Vorobyev, V.; Voß, C.; de Vries, J. A.; Vázquez Sierra, C.; Waldi, R.; Wallace, C.; Wallace, R.; Walsh, J.; Wang, J.; Ward, D. R.; Wark, H. M.; Watson, N. K.; Websdale, D.; Weiden, A.; Whitehead, M.; Wicht, J.; Wilkinson, G.; Wilkinson, M.; Williams, M.; Williams, M. P.; Williams, M.; Williams, T.; Wilson, F. F.; Wimberley, J.; Winn, M.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wraight, K.; Wyllie, K.; Xie, Y.; Xu, Z.; Yang, Z.; Yang, Z.; Yao, Y.; Yin, H.; Yu, J.; Yuan, X.; Yushchenko, O.; Zarebski, K. A.; Zavertyaev, M.; Zhang, L.; Zhang, Y.; Zhelezov, A.; Zheng, Y.; Zhu, X.; Zhukov, V.; Zonneveld, J. B.; Zucchelli, S.; LHCb Collaboration
2018-01-01
The decay Z → b b bar is reconstructed in pp collision data, corresponding to 2 fb^{-1} of integrated luminosity, collected by the LHCb experiment at a centre-of-mass energy of √{ s } = 8 TeV. The product of the Z production cross-section and the Z → b b bar branching fraction is measured for candidates in the fiducial region defined by two particle-level b-quark jets with pseudorapidities in the range 2.2 < η < 4.2, with transverse momenta pT > 20 GeV and dijet invariant mass in the range 45
NASA Astrophysics Data System (ADS)
Ackerman, T. R.; Pizzuto, J. E.
2016-12-01
Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank) representing the youngest 50% of eroded sediment. Then (until an age of 12 ky, representing the next 48% (point bar) or 45% (overbank) of eroding sediment), the distributions are well fit by heavy tailed power functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions become exponential (light tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. 
Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power law distribution with a truncated tail may be the most reasonable functional fit.
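A survivor function with the segment structure reported above (exponential to ~500 years, a power law with log-log slope −1 to ~12 ky, then an exponential tail) can be sketched for the point-bar case as follows. The constants are chosen only so the pieces join continuously at the reported boundaries; they are not fitted values from the study, and the tail rate in particular is an illustrative assumption.

```python
import math

T1, T2 = 500.0, 12_000.0   # segment boundaries (years), from the abstract
S1 = 0.5                   # survivor value at T1: the youngest 50% of sediment

def survivor_point_bar(t):
    """Piecewise survivor function S(t) = P(storage time > t).

    Exponential until T1, power law with exponent -1 until T2, exponential
    (light) tail after T2. Constants make the pieces join continuously;
    they are illustrative, not fitted values from the study.
    """
    lam = -math.log(S1) / T1          # exponential rate matching S(T1) = S1
    if t <= T1:
        return math.exp(-lam * t)
    if t <= T2:
        return S1 * (T1 / t)          # heavy-tailed power-law segment
    S2 = S1 * (T1 / T2)               # survivor value carried to T2
    mu = 1.0 / T2                     # illustrative (assumed) tail rate
    return S2 * math.exp(-mu * (t - T2))

# The exponential and power-law pieces meet at the 500-year boundary:
print(round(survivor_point_bar(T1), 3))   # → 0.5
```

Plotting such a function on log-log axes reproduces the straight power-law segment between the two exponential regimes, which is the signature the authors use to argue that depositional environment matters mainly for storage times shorter than a few centuries.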
Second-order closure PBL model with new third-order moments: Comparison with LES data
NASA Technical Reports Server (NTRS)
Canuto, V. M.; Minotti, F.; Ronchi, C.; Ypma, R. M.; Zeman, O.
1994-01-01
This paper contains two parts. In the first part, a new set of diagnostic equations is derived for the third-order moments for a buoyancy-driven flow, by exact inversion of the prognostic equations for the third-order moments in the stationary case. The third-order moments exhibit a universal structure: they are all linear combinations of the derivatives of the second-order moments $\overline{w^2}$, $\overline{w\theta}$, $\overline{\theta^2}$, and $\overline{q^2}$. Each term of the sum contains a turbulent diffusivity $D_t$, which also exhibits a universal structure of the form $D_t = a\nu_t + b\,\overline{w\theta}$. Since the sign of the convective flux changes depending on stable or unstable stratification, $D_t$ varies according to the type of stratification. Here $\nu_t \approx wl$ ($l$ is a mixing length and $w$ is an rms velocity) represents the 'mechanical' part, while the 'buoyancy' part is represented by the convective flux $\overline{w\theta}$. The quantities $a$ and $b$ are functions of the variable $(N\tau)^2$, where $N^2 = g\alpha\,\partial\Theta/\partial z$ and $\tau$ is the turbulence time scale. The new expressions for the third-order moments generalize those of Zeman and Lumley, which were subsequently adopted by Sun and Ogura, Chen and Cotton, and Finger and Schmidt in their treatments of the convective boundary layer. In the second part, the new expressions for the third-order moments are used to solve the ensemble-average equations describing a purely convective boundary layer heated from below at a constant rate. The computed second- and third-order moments are then compared with the corresponding Large Eddy Simulation (LES) results, most of which are obtained by running a new LES code, and part of which are taken from published results. The ensemble-average results compare favorably with the LES data.
Vasiljevic, Milica; Pechey, Rachel; Marteau, Theresa M.
2015-01-01
Recent studies report that using green labels to denote healthier foods, and red to denote less healthy foods increases consumption of green- and decreases consumption of red-labelled foods. Other symbols (e.g. emoticons conveying normative approval and disapproval) could also be used to signal the healthiness and/or acceptability of consuming such products. The present study tested the combined effects of using emoticons and colours on labels amongst a nationally representative sample of the UK population (n = 955). In a 3 (emoticon expression: smiling vs. frowning vs. no emoticon) × 3 (colour label: green vs. red vs. white) × 2 (food option: chocolate bar vs. cereal bar) between-subjects experiment, participants rated the level of desirability, healthiness, tastiness, and calorific content of a snack bar they had been randomised to view. At the end they were further randomised to view one of nine possible combinations of colour and emoticon labels and asked to choose between a chocolate and a cereal bar. Regardless of label, participants rated the chocolate as tastier and more desirable when compared to the cereal bar, and the cereal bar as healthier than the chocolate bar. A series of interactions revealed that a frowning emoticon on a white background decreased perceptions of healthiness and tastiness of the cereal bar, but not the chocolate bar. In the explicit choice task, selection was unaffected by label. Overall, nutritional labels had limited effects on perceptions and no effects on choice of snack foods. Emoticon labels yielded stronger effects on perceptions of taste and healthiness of snacks than colour labels. Frowning emoticons may be more potent than smiling emoticons at influencing the perceived healthiness and tastiness of foods carrying health halos. PMID:25841647
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Adam Paul
The authors present a measurement of the mass of the top quark. The event sample is selected from proton-antiproton collisions at 1.96 TeV center-of-mass energy, observed with the CDF detector at Fermilab's Tevatron. They consider a 318 pb^{-1} dataset collected between March 2002 and August 2004. They select events that contain one energetic lepton, large missing transverse energy, exactly four energetic jets, and at least one displaced-vertex b tag. The analysis uses leading-order $t\bar{t}$ and background matrix elements along with parameterized parton showering to construct event-by-event likelihoods as a function of top quark mass. From the 63 events observed in the 318 pb^{-1} dataset they extract a top quark mass of 172.0 ± 2.6(stat) ± 3.3(syst) GeV/c^2 from the joint likelihood. The mean expected statistical uncertainty is 3.2 GeV/c^2 for m_t = 178 GeV/c^2 and 3.1 GeV/c^2 for m_t = 172.5 GeV/c^2. The systematic error is dominated by the uncertainty of the jet energy scale.
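The record quotes statistical and systematic uncertainties separately. If they are treated as independent and combined in quadrature (a common convention, though not stated in this record), the total uncertainty is roughly ±4.2 GeV/c²:

```python
import math

def combine_in_quadrature(*errors):
    """Total uncertainty from independent error sources."""
    return math.sqrt(sum(e * e for e in errors))

stat, syst = 2.6, 3.3   # GeV/c^2, from the measurement above
total = combine_in_quadrature(stat, syst)
print(f"m_t = 172.0 +/- {total:.1f} GeV/c^2")   # → m_t = 172.0 +/- 4.2 GeV/c^2
```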
A novel automated rat catalepsy bar test system based on a RISC microcontroller.
Alvarez-Cervera, Fernando J; Villanueva-Toledo, Jairo; Moo-Puc, Rosa E; Heredia-López, Francisco J; Alvarez-Cervera, Margarita; Pineda, Juan C; Góngora-Alfaro, José L
2005-07-15
Catalepsy tests performed in rodents treated with drugs that interfere with dopaminergic transmission have been widely used for the screening of drugs with therapeutic potential in the treatment of Parkinson's disease. The basic method for measuring catalepsy intensity is the "standard" bar test. We present here an easy to use microcontroller-based automatic system for recording bar test experiments. The design is simple, compact, and has a low cost. Recording intervals and total experimental time can be programmed within a wide range of values. The resulting catalepsy times are stored, and up to five simultaneous experiments can be recorded. A standard personal computer interface is included. The automated system also permits the elimination of human error associated with factors such as fatigue, distraction, and data transcription, occurring during manual recording. Furthermore, a uniform criterion for timing the cataleptic condition can be achieved. Correlation values between the results obtained with the automated system and those reported by two independent observers ranged between 0.88 and 0.99 (P<0.0001; three treatments, nine animals, 144 catalepsy time measurements).
NASA Technical Reports Server (NTRS)
Hur-Diaz, Sun; Wirzburger, John; Smith, Dan
2008-01-01
The Hubble Space Telescope (HST) is renowned for its superb pointing accuracy of less than 10 milli-arcseconds absolute pointing error. To accomplish this, the HST relies on its complement of four reaction wheel assemblies (RWAs) for attitude control and four magnetic torquer bars (MTBs) for momentum management. As with most satellites with reaction wheel control, the fourth RWA provides for fault tolerance to maintain three-axis pointing capability should a failure occur and a wheel is lost from operations. If an additional failure is encountered, the ability to maintain three-axis pointing is jeopardized. In order to prepare for this potential situation, HST Pointing Control Subsystem (PCS) Team developed a Two Reaction Wheel Science (TRS) control mode. This mode utilizes two RWAs and four magnetic torquer bars to achieve three-axis stabilization and pointing accuracy necessary for a continued science observing program. This paper presents the design of the TRS mode and operational considerations necessary to protect the spacecraft while allowing for a substantial science program.
FastSim: A Fast Simulation for the SuperB Detector
NASA Astrophysics Data System (ADS)
Andreassen, R.; Arnaud, N.; Brown, D. N.; Burmistrov, L.; Carlson, J.; Cheng, C.-h.; Di Simone, A.; Gaponenko, I.; Manoni, E.; Perez, A.; Rama, M.; Roberts, D.; Rotondo, M.; Simi, G.; Sokoloff, M.; Suzuki, A.; Walsh, J.
2011-12-01
We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using xml files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as an event output. Hadronic B meson pair events can be simulated at roughly 10 Hz.
Near-IR period-luminosity relations for pulsating stars in ω Centauri (NGC 5139)
NASA Astrophysics Data System (ADS)
Navarrete, C.; Catelan, M.; Contreras Ramos, R.; Alonso-García, J.; Gran, F.; Dékány, I.; Minniti, D.
2017-08-01
Aims: The globular cluster ω Centauri (NGC 5139) hosts hundreds of pulsating variable stars of different types, thus representing a treasure trove for studies of their corresponding period-luminosity (PL) relations. Our goal in this study is to obtain the PL relations for RR Lyrae and SX Phoenicis stars in the field of the cluster, based on high-quality, well-sampled light curves in the near-infrared (IR). Methods: Observations were carried out using the VISTA InfraRed CAMera (VIRCAM) mounted on the Visible and Infrared Survey Telescope for Astronomy (VISTA). A total of 42 epochs in J and 100 epochs in KS were obtained, spanning 352 days. Point-spread function photometry was performed using DoPhot and DAOPHOT crowded-field photometry packages in the outer and inner regions of the cluster, respectively. Results: Based on the comprehensive catalog of near-IR light curves thus secured, PL relations were obtained for the different types of pulsators in the cluster, both in the J and KS bands. This includes the first PL relations in the near-IR for fundamental-mode SX Phoenicis stars. The near-IR magnitudes and periods of Type II Cepheids and RR Lyrae stars were used to derive an updated true distance modulus to the cluster, with a resulting value of (m - M)0 = 13.708 ± 0.035 ± 0.10 mag, where the error bars correspond to the adopted statistical and systematic errors, respectively. Adding the errors in quadrature, this is equivalent to a heliocentric distance of 5.52 ± 0.27 kpc. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, with the VISTA telescope (project ID 087.D-0472, PI R. Angeloni).
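The quoted heliocentric distance follows from the true distance modulus via d = 10^((m−M)₀/5 + 1) pc, with the statistical and systematic errors added in quadrature as the abstract describes. A quick first-order check of the numbers:

```python
import math

def distance_kpc(mu, mu_err):
    """Distance (kpc) and first-order propagated error from a true
    distance modulus mu = (m - M)_0 with uncertainty mu_err (mag)."""
    d_pc = 10 ** (mu / 5.0 + 1.0)
    d_err = d_pc * math.log(10) / 5.0 * mu_err   # |dd/dmu| * mu_err
    return d_pc / 1e3, d_err / 1e3

mu = 13.708
mu_err = math.hypot(0.035, 0.10)    # statistical and systematic, in quadrature
d, err = distance_kpc(mu, mu_err)
print(f"{d:.2f} +/- {err:.2f} kpc")  # → 5.52 +/- 0.27 kpc
```

This reproduces the 5.52 ± 0.27 kpc quoted in the record.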
Morrison, Christopher; Lee, Juliet P.; Gruenewald, Paul J.; Marzell, Miesha
2015-01-01
Location-based sampling is a method to obtain samples of people within ecological contexts relevant to specific public health outcomes. Random selection increases generalizability; however, in some circumstances (such as surveying bar patrons) recruitment conditions increase the risk of sample bias. We attempted to recruit representative samples of bars and patrons in six California cities, but low response rates precluded meaningful analysis. A systematic review of 24 similar studies revealed that none addressed the key shortcomings of our study. We recommend steps to improve studies that use location-based sampling: (i) purposively sample places of interest, (ii) utilize recruitment strategies appropriate to the environment, and (iii) provide full information on response rates at all levels of sampling. PMID:26574657
Visual Technology Research Simulator (VTRS) Human Performance Research: Phase III.
1981-11-01
you will intercept the glideslope and a centered meatball at approximately 4500 feet from the ramp. When the meatball approaches centerball you are to... this system with two horizontal bars (to represent the datum bars) and a moving dot (referred to as the ball or the meatball). The system is... At two balls low the meatball starts to flash. Plus or minus two balls is the maximum effective range of the system. The ball will be lost off the top
1981-05-01
represented as a Winkler foundation. The program can treat any number of slabs connected by steel bars or other load transfer devices at the joints... dimensional finite element method. The inherent flexibility of such an approach permits the analysis of a rigid pavement with steel bars and stabilized... layers and provides an efficient tool for analyzing stress conditions at the joint. Unfortunately, such a procedure would require a tremendously
Ionospheric Modeling: Development, Verification and Validation
2007-08-15
The University of Massachusetts (UMass), Lowell, has introduced a new version of their ionogram autoscaling program ARTIST, Version 5. A very... Investigation of the Reliability of the ESIR Ionogram Autoscaling Method (Expert System for Ionogram Reduction) ESIR.book.pdf Dec 06 Quality... Figures and Error Bars for Autoscaled Vertical Incidence Ionograms. Background and User Documentation for QualScan V2007.2 AFRL_QualScan.book.pdf Feb
DataPlus - a revolutionary applications generator for DOS hand-held computers
David Dean; Linda Dean
2000-01-01
DataPlus allows the user to easily design data collection templates for DOS-based hand-held computers that mimic clipboard data sheets. The user designs and tests the application on the desktop PC and then transfers it to a DOS field computer. Other features include: error checking, missing data checks, and sensor input from RS-232 devices such as bar code wands,...
Improving radiopharmaceutical supply chain safety by implementing bar code technology.
Matanza, David; Hallouard, François; Rioufol, Catherine; Fessi, Hatem; Fraysse, Marc
2014-11-01
The aim of this study was to describe and evaluate an approach for improving radiopharmaceutical supply chain safety by implementing bar code technology. We first evaluated the current state of our radiopharmaceutical supply chain and, by means of the ALARM protocol, analysed two dispensing errors that occurred in our department. Thereafter, we implemented a bar code system to secure selected key stages of the radiopharmaceutical supply chain. Finally, we evaluated the cost of this implementation, from overtime to overheads to additional radiation exposure to workers. An analysis of the events that occurred revealed a lack of identification of prepared or dispensed drugs. Moreover, the evaluation of the current radiopharmaceutical supply chain showed that the dispensing and injection steps needed to be further secured. The bar code system was used to reinforce product identification at three selected key stages: at usable stock entry; at preparation-dispensing; and during administration, allowing conformity between the labelling of the delivered product (identity and activity) and the prescription to be checked. The extra time needed for these steps had no impact on the number or successful conduct of examinations. The investment cost was low (2600 euros for new material and 30 euros per year for additional supplies) because of pre-existing computing equipment. With regard to radiation exposure, workers incurred an insignificant additional dose to the hands under the new organization, owing to the labelling and scanning of radiolabelled preparation vials. Implementation of bar code technology is now an essential part of a global approach to securing optimum patient management.
Dalitz plot analysis of the decay B 0 ( B ¯ 0 ) → K ± π ∓ π 0
Aubert, B.; Bona, M.; Karyotakis, Y.; ...
2008-09-12
Here, we report a Dalitz-plot analysis of the charmless hadronic decays of neutral B mesons to K± π∓ π0. With a sample of (231.8 ± 2.6) × 10^6 Υ(4S) → B B̄ decays collected by the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC, we measure the magnitudes and phases of the intermediate resonant and nonresonant amplitudes for B0 and B̄0 decays and determine the corresponding CP-averaged branching fractions and charge asymmetries. Furthermore, we measure the inclusive branching fraction and CP-violating charge asymmetry, B(B0 → K+ π− π0) = (35.7 +2.6/−1.5 ± 2.2) × 10^−6 and A_CP = −0.030 +0.045/−0.051 ± 0.055, where the first errors are statistical and the second systematic. We observe the decay B0 → K*0(892) π0 with the branching fraction B(B0 → K*0(892) π0) = (3.6 +0.7/−0.8 ± 0.4) × 10^−6. This measurement differs from zero by 5.6 standard deviations (including the systematic uncertainties). The selected sample also contains B0 → D̄0 π0 decays, where D̄0 → K+ π−, and we measure B(B0 → D̄0 π0) = (2.93 ± 0.17 ± 0.18) × 10^−4.
Elias, Gabriel A.; Bieszczad, Kasia M.; Weinberger, Norman M.
2015-01-01
Primary sensory cortical fields develop highly specific associative representational plasticity, notably enlarged area of representation of reinforced signal stimuli within their topographic maps. However, overtraining subjects after they have solved an instrumental task can reduce or eliminate the expansion while the successful behavior remains. As the development of this plasticity depends on the learning strategy used to solve a task, we asked whether the loss of expansion is due to the strategy used during overtraining. Adult male rats were trained in a three-tone auditory discrimination task to bar-press to the CS+ for water reward and refrain from doing so during the CS− tones and silent intertrial intervals; errors were punished by a flashing light and time-out penalty. Groups acquired this task to a criterion within seven training sessions by relying on a strategy that was “bar-press from tone-onset-to-error signal” (“TOTE”). Three groups then received different levels of overtraining: Group ST, none; Group RT, one week; Group OT, three weeks. Post-training mapping of their primary auditory fields (A1) showed that Groups ST and RT had developed significantly expanded representational areas, specifically restricted to the frequency band of the CS+ tone. In contrast, the A1 of Group OT was no different from naïve controls. Analysis of learning strategy revealed this group had shifted strategy to a refinement of TOTE in which they self-terminated bar-presses before making an error (“iTOTE”). Across all animals, the greater the use of iTOTE, the smaller was the representation of the CS+ in A1. Thus, the loss of cortical expansion is attributable to a shift or refinement in strategy. 
This reversal of expansion was considered in light of a novel theoretical framework (CONCERTO) highlighting four basic principles of brain function that resolve anomalous findings and explaining why even a minor change in strategy would involve concomitant shifts of involved brain sites, including reversal of cortical expansion. PMID:26596700
Globular Clusters: Absolute Proper Motions and Galactic Orbits
NASA Astrophysics Data System (ADS)
Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.
2018-04-01
We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas/yr. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys, with up to ten positions spanning an epoch difference of up to about 65 years, and are reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant-branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters, with a typical formal error of about 0.4 mas/yr, are computed by averaging the proper motions of the selected members. The inferred absolute proper motions of the clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, a Hernquist spheroid, and a modified isothermal dark-matter halo (an axisymmetric model without a bar), and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transverse velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find that the bar substantially affects the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.
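The quoted formal error of about 0.4 mas/yr is what averaging member stars yields; a minimal sketch of an inverse-variance weighted mean (the member count and individual errors below are hypothetical, chosen only to match the catalog-level accuracies quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cluster: 25 photometrically selected members, each with a
# 2 mas/yr individual proper-motion error (the 1-2 mas/yr quoted above).
n_members = 25
pm_true = -3.1                          # assumed true cluster motion, mas/yr
sigma = np.full(n_members, 2.0)
pm = pm_true + rng.normal(0.0, sigma)   # simulated member proper motions

# Inverse-variance weighted mean and its formal error
w = 1.0 / sigma**2
pm_mean = np.sum(w * pm) / np.sum(w)
pm_err = 1.0 / np.sqrt(np.sum(w))       # 2.0 / sqrt(25) = 0.4 mas/yr

print(f"{pm_mean:.2f} +/- {pm_err:.2f} mas/yr")
```

With equal errors the weighted mean reduces to the ordinary average, and the formal error shrinks as 1/sqrt(N): 2 mas/yr over 25 members gives the ~0.4 mas/yr typical of the paper.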
THE HST/ACS COMA CLUSTER SURVEY. VIII. BARRED DISK GALAXIES IN THE CORE OF THE COMA CLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marinova, Irina; Jogee, Shardha; Weinzirl, Tim
2012-02-20
We use high-resolution (~0.1 arcsec) F814W Advanced Camera for Surveys (ACS) images from the Hubble Space Telescope ACS Treasury survey of the Coma cluster at z ~ 0.02 to study bars in massive disk galaxies (S0s), as well as low-mass dwarf galaxies, in the core of the Coma cluster, the densest environment in the nearby universe. Our study helps to constrain the evolution of bars and disks in dense environments and provides a comparison point for studies in lower density environments and at higher redshifts. Our results are: (1) We characterize the fraction and properties of bars in a sample of 32 bright (M_V ≲ -18, M* > 10^9.5 M_Sun) S0 galaxies, which dominate the population of massive disk galaxies in the Coma core. We find that the measurement of a bar fraction among S0 galaxies must be handled with special care due to the difficulty in separating unbarred S0s from ellipticals, and the potential dilution of the bar signature by light from a relatively large, bright bulge. The results depend sensitively on the method used: the bar fraction for bright S0s in the Coma core is 50% ± 11%, 65% ± 11%, and 60% ± 11% based on three methods of bar detection, namely, strict ellipse fit criteria, relaxed ellipse fit criteria, and visual classification. (2) We compare the S0 bar fraction across different environments (the Coma core, A901/902, and Virgo), adopting the critical step of using matched samples and matched methods in order to ensure robust comparisons. We find that the bar fraction among bright S0 galaxies does not show a statistically significant variation (within the error bars of ±11%) across environments which span two orders of magnitude in galaxy number density (n ~ 300-10,000 galaxies Mpc^-3) and include rich and poor clusters, such as the core of Coma, the A901/902 cluster, and Virgo. 
We speculate that the bar fraction among S0s is not significantly enhanced in rich clusters compared to low-density environments for two reasons. First, S0s in rich clusters are less prone to bar instabilities as they are dynamically heated by harassment and are gas poor as a result of ram pressure stripping and accelerated star formation. Second, high-speed encounters in rich clusters may be less effective than slow, strong encounters in inducing bars. (3) We also take advantage of the high resolution of the ACS (~50 pc) to analyze a sample of 333 faint (M_V > -18) dwarf galaxies in the Coma core. Using visual inspection of unsharp-masked images, we find only 13 galaxies with bar and/or spiral structure. An additional eight galaxies show evidence for an inclined disk. The paucity of disk structures in Coma dwarfs suggests that either disks are not common in these galaxies or that any disks present are too hot to develop instabilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... also prohibited from conducting business with FDIC as agents or representatives of other contractors... bars to contracting are shown to exist, the existence of a cause for exclusion does not necessarily...
Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.
Hamilton, Sarah J; Mueller, J L; Alsaker, M
2017-02-01
Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise.
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18-channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters, and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
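The Monte Carlo parameter variation described here can be illustrated with a toy stand-in for the unfold (the real 18-channel Dante unfold is far more involved; the single-channel "unfold", voltages, and calibration numbers below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the unfold algorithm: flux from one channel's voltage
# and photometric calibration constant.
def unfold(voltage, calibration):
    return voltage / calibration

v0, sig_v = 1.00, 0.03    # measured voltage and its 1-sigma uncertainty
c0, sig_c = 0.50, 0.02    # calibration constant and its 1-sigma uncertainty

# One thousand test voltage/calibration sets drawn from the one-sigma
# Gaussian error functions, each processed by the unfold to give a flux
fluxes = unfold(rng.normal(v0, sig_v, 1000), rng.normal(c0, sig_c, 1000))

# Statistics of the resulting flux set provide the error bar
flux, err = fluxes.mean(), fluxes.std(ddof=1)
print(f"flux = {flux:.3f} +/- {err:.3f}")
```

The spread of the 1000 output fluxes directly gives the error bar, automatically combining the unfold and calibration uncertainties without an analytic propagation formula.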
Uncertainty Analysis Technique for OMEGA Dante Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M J; Widmann, K; Sorce, C
2010-05-07
The Dante is an 18-channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters, and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
Wiens, J. David; Dugger, Katie M.; Lesmeister, Damon B.; Dilione, Krista E.; Simon, David C.
2018-05-21
Populations of Northern Spotted Owls (Strix occidentalis caurina; hereinafter referred to as Spotted Owl) are declining throughout this subspecies’ geographic range. Evidence indicates that competition with invading populations of Barred Owls (S. varia) has contributed significantly to those declines. A pilot study in California showed that localized removal of Barred Owls coupled with conservation of suitable forest conditions can slow or even reverse population declines of Spotted Owls. It remains unknown, however, whether similar results can be obtained in areas with different forest conditions, greater densities of Barred Owls, and fewer remaining Spotted Owls. During 2015–17, we initiated a before-after-control-impact (BACI) experiment at three study areas in Oregon and Washington to determine if removal of Barred Owls can improve population trends of Spotted Owls. Each study area had at least 20 years of pre-treatment demographic data on Spotted Owls, and represented different forest conditions occupied by the two owl species in the Pacific Northwest. This report describes research accomplishments and preliminary results from the first 2.5 years (March 2015–August 2017) of the planned 5-year experiment.
Why hard-nosed executives should care about management theory.
Christensen, Clayton M; Raynor, Michael E
2003-09-01
Theory often gets a bum rap among managers because it's associated with the word "theoretical," which connotes "impractical." But it shouldn't. Because experience is solely about the past, solid theories are the only way managers can plan future actions with any degree of confidence. The key word here is "solid." Gravity is a solid theory. As such, it lets us predict that if we step off a cliff we will fall, without actually having to do so. But business literature is replete with theories that don't seem to work in practice or actually contradict each other. How can a manager tell a good business theory from a bad one? The first step is understanding how good theories are built. They develop in three stages: gathering data, organizing it into categories highlighting significant differences, then making generalizations explaining what causes what, under which circumstances. For instance, professor Ananth Raman and his colleagues collected data showing that bar code-scanning systems generated notoriously inaccurate inventory records. These observations led them to classify the types of errors the scanning systems produced and the types of shops in which those errors most often occurred. Recently, some of Raman's doctoral students have worked as clerks to see exactly what kinds of behavior cause the errors. From this foundation, a solid theory predicting under which circumstances bar code systems work, and don't work, is beginning to emerge. Once we forgo one-size-fits-all explanations and insist that a theory describes the circumstances under which it does and doesn't work, we can bring predictable success to the world of management.
Bar and club tobacco promotions in the alternative press: targeting young adults.
Sepe, Edward; Glantz, Stanton A
2002-01-01
This study examined changes in tobacco promotions in the alternative press in San Francisco and Philadelphia from 1994 to 1999. A random sample of alternative newspapers was analyzed, and a content analysis was conducted. Between 1994 and 1999, numbers of tobacco advertisements increased from 8 to 337 in San Francisco and from 8 to 351 in Philadelphia. Product advertisements represented only 45% to 50% of the total; the remaining advertisements were entertainment-focused promotions, mostly bar-club and event promotions. The tobacco industry has increased its use of bars and clubs as promotional venues and has used the alternative press to reach the young adults who frequent these establishments. This increased targeting of young adults may be associated with an increase in smoking among this group.
New insights into the X-ray properties of nearby barred spiral galaxy NGC 1672
NASA Astrophysics Data System (ADS)
Jenkins, L. P.; Brandt, W. N.; Colbert, E. J. M.; Levan, A. J.; Roberts, T. P.; Ward, M. J.; Zezas, A.
2008-02-01
We present preliminary results from new Chandra and XMM-Newton X-ray observations of the nearby barred spiral galaxy NGC 1672. It shows dramatic nuclear and extra-nuclear star formation activity, including starburst regions located near each end of its strong bar, both of which host ultraluminous X-ray sources (ULXs). With the new high-spatial-resolution Chandra imaging, we show for the first time that NGC 1672 possesses a faint (L_X ~ 10^39 erg/s), hard central X-ray source surrounded by an X-ray-bright circumnuclear starburst ring that dominates the X-ray emission in the region. The central source may represent low-level AGN activity, or alternatively emission from X-ray binaries associated with star formation in the nucleus.
ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
ANDREWS, J.W.
1998-12-31
One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are strictly required, so that the result is overconstrained. This report develops a method for combining data from two different tests for air leakage in residential duct systems in this way. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
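The idea of exploiting an overconstrained measurement set can be sketched with a weighted least-squares reconciliation (the numbers and the redundant "total leakage" measurement below are hypothetical; the report's actual algorithm and uncertainty model may differ):

```python
import numpy as np

# Hypothetical duct-leakage data (cfm): direct supply and return tests,
# plus a redundant measurement constraining their sum, with 1-sigma errors.
m     = np.array([110.0, 60.0, 185.0])   # supply, return, supply + return
sigma = np.array([15.0, 12.0, 8.0])

# Each measurement is a linear combination of the two unknown leak rates.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Weighted least squares: minimize sum(((A x - m) / sigma)^2)
W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(A.T @ W @ A)
x = cov @ (A.T @ W @ m)

supply, ret = x
err_supply, err_return = np.sqrt(np.diag(cov))
print(f"supply = {supply:.1f} +/- {err_supply:.1f} cfm")
print(f"return = {ret:.1f} +/- {err_return:.1f} cfm")
```

Because the third measurement overconstrains the system, the reconciled error bars (about 10.4 and 9.8 cfm here) come out smaller than those of the direct tests alone, and conflicting readings are reconciled in proportion to their stated uncertainties.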
NASA Astrophysics Data System (ADS)
Reid, H. E.; Williams, R. D.; Coleman, S.; Brierley, G. J.
2012-04-01
Bars are key morphological units within river systems, fashioning the sediment regime and bedload transport processes within a reach. Reworking of these features underpins channel adjustment at larger scales, thereby acting as a key determinant of channel stability. Yet, despite their fundamental importance to channel evolution, few investigations have acquired spatially continuous data on bar morphology and sediment particle size to facilitate detailed investigations of bar reworking. To this end, four bars along a 10 km reach of a wandering gravel-bed river were surveyed, capturing downstream changes in slope, bed material size, and channel planform. High-resolution surveys of bar topography and grain-size roughness were acquired using Terrestrial Laser Scanning (TLS). The resulting point clouds were filtered to a quasi-uniform point spacing of 0.05 m, and statistical attributes were extracted at a 1 m resolution. The detrended standard deviations from the TLS data were then correlated with the underlying median grain size (D50), which was measured using the Wolman transect method. The resulting linear regression model had a strong relationship (R2 = 0.92) and was used to map median sediment size across each bar. Representative cross-sections were used to interpolate water surfaces across each bar for flood events with recurrence intervals (RI) of 2.33, 10, 20, 50, and 100 years, enabling flow depth to be calculated. The ratio of dimensionless shear stress (from the depth raster and slope) to critical shear stress (from the D50 raster) was used to map entrainment across each bar at 1 m resolution for each flood event; this ratio is referred to as 'relative erodibility'. The two downstream bars, which are characterised by low slope and smaller bed material, underwent greater entrainment during the more frequent 2.33-year RI flood than the higher-energy upstream bars, which required floods with an RI of 10 years or greater.
Reworking was also assessed for within-bar geomorphic units. This work demonstrated that floods with a 2.33-year RI flush material on the bar tail, while 10-year RI floods rework the supra-platform and back-channel deposits, and only the largest flows (RI ≥ 50 years) are able to entrain the bar-head materials. Interestingly, despite dramatic differences in slope, grain size, and planform, all bar heads were found to undergo minimal entrainment (between 10 and 20%) during the frequent 2.33-year RI flood. This indicates that resistance at the bar head during frequent floods promotes the deposition of finer-grained, more transient units in their lee. This process-based appraisal explains channel adjustment at the reach scale, whereby the proportion of the bar composed of more frequently entrained units (tail, back channel, supra-platform) relative to the more static units at the bar head exerts a direct influence on the extent of adjustment of the bar and the reach as a whole.
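The relative-erodibility mapping described above amounts to a cell-by-cell Shields-stress calculation; a minimal sketch follows (the density values are standard, while the critical Shields number of 0.045 and all raster values are assumptions for illustration, not from the study):

```python
import numpy as np

rho_w, rho_s, g = 1000.0, 2650.0, 9.81   # water density, sediment density, gravity
tau_star_crit = 0.045                    # assumed critical Shields number

def relative_erodibility(depth_m, slope, d50_m):
    """Dimensionless (Shields) shear stress over its critical value.
    Cells with values >= 1 are predicted to entrain the median grain size."""
    tau_star = (rho_w * g * depth_m * slope) / ((rho_s - rho_w) * g * d50_m)
    return tau_star / tau_star_crit

# Hypothetical 1 m resolution rasters for a small bar patch at one flood stage
depth = np.array([[0.4, 0.8],            # flow depth (m): interpolated water
                  [1.2, 0.2]])           # surface minus bar topography
d50   = np.array([[0.03, 0.03],          # median grain size (m) mapped from
                  [0.06, 0.09]])         # the TLS-roughness regression
slope = 0.004

erod = relative_erodibility(depth, slope, d50)
entrained_fraction = (erod >= 1.0).mean()
print(erod.round(2), entrained_fraction)
```

Applied over full rasters for each recurrence-interval flood, the fraction of cells with erodibility ≥ 1 gives per-unit entrainment percentages like those reported above.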
A Comparison of Sleep and Performance of Sailors on an Operationally Deployed U.S. Navy Warship
2013-09-01
The crew's mission on a deployed warship is inherently dangerous. The nature of the job means navigating restricted waters and conducting underway replenishments with less than 200 feet of lateral separation from... concentration equivalent. Error bars ± s.e. (From Dawson & Reid, 1997). Figure 4. Mean psychomotor vigilance task speed (and
Optimisation of gellan gum edible coating for ready-to-eat mango (Mangifera indica L.) bars.
Danalache, Florina; Carvalho, Claudia Y; Alves, Vitor D; Moldão-Martins, Margarida; Mata, Paulina
2016-03-01
The optimisation of an edible coating based on low-acyl (L)/high-acyl (H) gellan gum for ready-to-eat mango bars was performed through a central composite rotatable design (CCRD). The independent variables were the concentration of gellan (L/H 90/10) and the concentration of Ca(2+) in the coating solution, as well as the storage time after coating application. The response variables studied were the coating thickness, mango bar firmness, syneresis, and colour alterations. Gellan concentration was the independent variable that most influenced the thickness of the coating. Syneresis was quite low for the conditions tested (<1.64%). Similarly, the colour alterations were low during the entire storage time (ΔE<5). Considering the model predictions, 1.0 wt% L/H 90/10 with the addition of 6 mM Ca(2+) could represent the optimal coating formulation for the mango bars. The release of eight volatile compounds from the uncoated and coated mango bars with the selected formulation was analysed by headspace solid-phase microextraction-gas chromatography during 9 days of refrigerated storage. This work showed that the coating can improve the mango bars' sensory characteristics (appearance and firmness) and their stability in terms of syneresis, colour, and volatile content during storage, increasing the commercial value of the final product. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Protein Folding Mechanism of the Dimeric AmphiphysinII/Bin1 N-BAR Domain
Gruber, Tobias; Balbach, Jochen
2015-01-01
The human AmphiphysinII/Bin1 N-BAR domain belongs to the BAR domain superfamily, whose members sense and generate membrane curvatures. The N-BAR domain is a 57 kDa homodimeric protein comprising a six-helix bundle. Here we report the protein folding mechanism of this protein as a representative of the superfamily. The concentration-dependent thermodynamic stability was studied by urea equilibrium transition curves followed by fluorescence and far-UV CD spectroscopy. Kinetic unfolding and refolding experiments, including rapid double- and triple-mixing techniques, allowed us to unravel the complex folding behavior of N-BAR. The equilibrium unfolding transition curve can be described by a two-state process, while the folding kinetics show four refolding phases, an additional burst reaction, and two unfolding phases. All fast refolding phases show a rollover in the chevron plot, but only one of these phases depends on the protein concentration, reporting the dimerization step. Secondary structure formation occurs during the three fast refolding phases. The slowest phase can be assigned to a proline isomerization. All kinetic experiments were also followed by fluorescence anisotropy detection to verify the assignment of the dimerization step to the respective folding phase. Based on these experiments we propose for N-BAR two parallel folding pathways towards the homodimeric native state, depending on the proline conformation in the unfolded state. PMID:26368922
Reservoir sedimentology of the Big Injun sandstone in Granny Creek field, West Virginia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Xiangdong; Donaldson, K.; Donaldson, A.C.
1992-01-01
Big Injun sandstones of the Granny Creek oil field (WV) are interpreted as fluvial-deltaic deposits from core and geophysical log data. The reservoir consists of two distinctive lithologies throughout the field: fine-grained sandstones overlain by pebbly and coarse-grained sandstones. The lower fine-grained sandstones were deposited in westward-prograding river-mouth bars, where distal, marine-dominant proximal, and fluvial-dominant proximal bar subfacies are recognized. The principal pay is the marine-influenced proximal bar, where porosity ranges from 13 to 23% and permeability up to 24 md. Thin marine transgressive shales and their laterally equivalent low-permeability sandstones bound time-rock sequences generally less than 10 meters thick. Where mapped in the field, the width of the prograding bar sequence is approximately 2.7 km (dip trend), measured from the truncated eastern edge (pre-coarse-grained-member erosional surface) to the distal western margin. Dip-trending elongate lobes occur within the marine-influenced proximal mouth-bar area, representing the thickest part of the tidally influenced preserved bar. The upper coarse-grained part of the reservoir consists of pebbly sandstones of channel fill from bedload streams. A laterally persistent, low-permeability cemented interval in the lower part commonly caps the underlying pay zone and probably serves as a seal to vertical oil migration. Southwest paleoflow trends based on thickness maps of the unit portend emergence of the West Virginia dome, which influenced erosion patterns of the pre-Greenbrier unconformity for this combination oil trap.
NASA Astrophysics Data System (ADS)
Okazaki, Hiroko; Kwak, Youngjoo; Tamura, Toru
2015-07-01
We conducted a ground-penetrating radar (GPR) survey of gravelly braid bars in the Abe River, central Japan, to clarify the three-dimensional (3D) variations in their depositional facies under various geomorphologic conditions. In September 2011, a ten-year return-period flood in the study area reworked and deposited braid bars. After the flood, we surveyed three bars with different geomorphologies using a GPR system with a 250-MHz antenna and identified seven fundamental radar depositional facies: inclined reflections (facies Ia and Ib), horizontal to subhorizontal reflections (facies IIa and IIb), discontinuous reflections (facies IIIa and IIIb), and a facies assemblage with a large-scale channel-shaped lower boundary (facies IV). Combinations of these facies indicate bar formation processes: channel filling, lateral aggradation, and lateral and downstream accretion. In the Abe River, aerial photographs and airborne laser scanning data were obtained before and after the flood. The observed changes of the surface topography are consistent with the subsurface results seen in the GPR sections. This study demonstrated that the erosional and depositional architecture observed among bars with different channel styles was related to river width and represented depositional processes under high sediment discharge. The quantitative characterizations of the sedimentary architecture will be useful for interpreting gravelly fluvial deposits in the rock record.
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Scotti, S. J.
1989-01-01
The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight, as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem: the total truss weight is the objective function, the tube diameter and truss height are design variables, and stress and Euler buckling are considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
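The two-bar truss formulation described above can be sketched directly in code. This is a hedged illustration, not the SOL implementation: the load, span, and material values are invented, the thin-walled-tube stress and Euler buckling expressions follow the classic textbook formulation, and a brute-force search stands in for an optimizer such as CONMIN, ADS, or NPSOL:

```python
import math

# Two-bar truss as an optimization problem (illustrative values only):
# objective = truss weight, design variables = tube diameter d and
# truss height H, constraints = yield stress and Euler buckling.
P = 33e3         # applied load, N (assumed)
B = 1.5          # span between supports, m (assumed)
t = 0.0025       # fixed tube wall thickness, m (assumed)
rho = 7850.0     # steel density, kg/m^3
E = 200e9        # Young's modulus, Pa
sigma_y = 250e6  # yield stress, Pa

def weight(d, H):
    L = math.sqrt((B / 2) ** 2 + H ** 2)   # member length
    return 2 * rho * math.pi * d * t * L   # two thin-walled tubes

def constraints_ok(d, H):
    L = math.sqrt((B / 2) ** 2 + H ** 2)
    sigma = P * L / (2 * math.pi * d * t * H)              # member stress
    sigma_buckle = math.pi**2 * E * (d**2 + t**2) / (8 * L**2)
    return sigma <= sigma_y and sigma <= sigma_buckle

# brute-force grid search in place of a formal optimizer
best = None
for i in range(1, 200):            # d from 1 mm to 199 mm
    for j in range(1, 200):        # H from 1 cm to 1.99 m
        d, H = i * 1e-3, j * 1e-2
        if constraints_ok(d, H) and (best is None or weight(d, H) < best[0]):
            best = (weight(d, H), d, H)

print("min weight %.2f kg at d=%.3f m, H=%.2f m" % best)
```

In SOL the same problem would be stated declaratively and handed to an optimizer; the point here is only the mapping from design problem to objective, variables, and constraints.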
Effects of salt secretion on psychrometric determinations of water potential of cotton leaves.
Klepper, B; Barrs, H D
1968-07-01
Thermocouple psychrometers gave lower estimates of water potential of cotton leaves than did a pressure chamber. This difference was considerable for turgid leaves, but progressively decreased for leaves with lower water potentials and fell to zero at water potentials below about -10 bars. The conductivity of washings from cotton leaves removed from the psychrometric equilibration chambers was related to the magnitude of this discrepancy in water potential, indicating that the discrepancy is due to salts on the leaf surface which make the psychrometric estimates too low. This error, which may be as great as 400 to 500%, cannot be eliminated by washing the leaves because salts may be secreted during the equilibration period. Therefore, a thermocouple psychrometer is not suitable for measuring the water potential of cotton leaves when it is above about -10 bars.
NASA Astrophysics Data System (ADS)
Syvitski, J. P.; Hutton, E. W.
2001-12-01
A new numerical approach (HydroTrend, v.2) allows the daily flux of sediment to be estimated for any river, whether gauged or not. The model can be driven by actual climate measurements (precipitation, temperature) or with statistical estimates of climate (modeled climate, remotely sensed climate). In both cases, the character (e.g. soil depth, relief, vegetation index) of the drainage terrain is needed to complete the model domain. The HydroTrend approach allows us to examine the effects of climate on the supply of sediment to continental margins, and the nature of supply variability. A new relationship is defined: Qs = ψ · Qs-bar · (Q/Q-bar)^(c ± σ), where Qs-bar is the long-term sediment load, Q-bar is the long-term discharge, c and σ are the mean and standard deviation of the inter-annual variability of the rating coefficient, and ψ captures the measurement errors associated with Q and Qs and the annual transients affecting the supply of sediment, including sediment and water source and river (flood-wave) dynamics; the scatter of ψ is characterized by a parameter s. Smaller-discharge rivers have larger values of s, and s asymptotes to a small but consistent value for larger-discharge rivers. The coefficient c is directly proportional to the long-term suspended load (Qs-bar) and basin relief (R), and inversely proportional to mean annual temperature (T). σ is directly proportional to the mean annual discharge. The long-term sediment load is given by: Qs-bar = a · R^1.5 · A^0.5 · T_T, where a is a global constant, A is basin area, and T_T is a function of mean annual temperature. This new approach provides estimates of sediment flux at the dynamic (daily) level and provides a means to experiment on the sensitivity of marine sedimentary deposits in recording a paleoclimate signal. In addition, the method provides spatial estimates of the flux of sediment to the coastal zone at the global scale.
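The rating relation above can be sketched numerically. A hedged illustration with invented parameter values (Q-bar, Qs-bar, c and σ are not from any real river), in which ψ is simply set to 1 and the exponent is drawn once per year from N(c, σ):

```python
import math
import random

# Hedged sketch of the sediment-rating relation
# Qs = psi * Qs_bar * (Q / Q_bar)**C, with C ~ N(c, sigma) per year.
# All parameter values are illustrative assumptions.
random.seed(0)
Q_bar = 500.0      # long-term mean discharge, m^3/s (assumed)
Qs_bar = 30.0      # long-term mean sediment load, kg/s (assumed)
c, sigma = 1.4, 0.2

def daily_sediment_load(Q, C, psi=1.0):
    # psi lumps measurement error and supply transients; set to 1 here
    return psi * Qs_bar * (Q / Q_bar) ** C

C_year = random.gauss(c, sigma)          # inter-annual variability
flood, low_flow = 2500.0, 100.0
print(daily_sediment_load(flood, C_year),
      daily_sediment_load(low_flow, C_year))
```

Because the exponent exceeds zero, floods carry disproportionately more sediment than the mean day, which is why daily resolution matters for deposit formation.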
A new photometric model of the Galactic bar using red clump giants
NASA Astrophysics Data System (ADS)
Cao, Liang; Mao, Shude; Nataf, David; Rattenbury, Nicholas J.; Gould, Andrew
2013-09-01
We present a study of the luminosity density distribution of the Galactic bar using number counts of red clump (RC) giants from the Optical Gravitational Lensing Experiment (OGLE) III survey. The data were recently published by Nataf et al. for 9019 fields towards the bulge and comprise 2.94 × 10^6 RC stars over a viewing area of 90.25 deg^2. The data include the number counts, mean distance modulus (μ), dispersion in μ and full error matrix, from which we fit the data with several triaxial parametric models. We use the Markov chain Monte Carlo method to explore the parameter space and find that the best-fitting model is the E3 model, with a distance to the Galactic Centre of 8.13 kpc and with the semimajor and semiminor bar axis scalelengths in the Galactic plane (x0, y0) and the vertical bar scalelength (z0) in the ratio x0 : y0 : z0 ≈ 1.00 : 0.43 : 0.40 (close to prolate). The scalelength of the stellar density profile along the bar's major axis is ˜0.67 kpc, and the bar is oriented at an angle of 29.4°, slightly larger than the value obtained from a similar study based on OGLE-II data. The number of estimated RC stars within the field of view is 2.78 × 10^6, systematically lower than the observed value. We subtract the smooth parametric model from the observed counts and find that the residuals are consistent with the presence of an X-shaped structure in the Galactic Centre; the excess over the estimated mass content is ˜5.8 per cent. We estimate that the total mass of the bar is ˜1.8 × 10^10 M⊙. Our results can be used as a key ingredient in constructing new density models of the Milky Way and will have implications for the predictions of the optical depth to gravitational microlensing and the patterns of hydrodynamical gas flow in the Milky Way.
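For illustration, an E3-type density law (as commonly written following Dwek et al.; treat the exact functional form here as an assumption rather than the paper's fitted model) can be evaluated with the axis ratios and major-axis scalelength quoted above:

```python
import math

# Hedged sketch of a triaxial E3-type bar density model:
# rho = rho0 * exp(-r_s), with
# r_s = (((x/x0)**2 + (y/y0)**2)**2 + (z/z0)**4)**0.25.
# Scalelengths use the quoted ratios with x0 ~ 0.67 kpc;
# rho0 is an arbitrary normalisation here.
x0 = 0.67                  # kpc, major-axis scalelength
y0, z0 = 0.43 * x0, 0.40 * x0

def rho_E3(x, y, z, rho0=1.0):
    r_s = (((x / x0) ** 2 + (y / y0) ** 2) ** 2 + (z / z0) ** 4) ** 0.25
    return rho0 * math.exp(-r_s)

# density falls off fastest along the shortest (vertical) axis
print(rho_E3(1.0, 0, 0), rho_E3(0, 1.0, 0), rho_E3(0, 0, 1.0))
```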
Nanoforging - Innovation in three-dimensional processing and shaping of nanoscaled structures.
Landefeld, Andreas; Rösler, Joachim
2014-01-01
This paper describes the shaping of freestanding objects from metallic structures at the nano- and submicron scale. The technique used, called nanoforging, is very similar to the macroscopic forging process. With spring-actuated tools produced by focused ion beam milling, controlled forging is demonstrated: in only three steps, a conical bar stock is transformed into a flat, semicircularly bent bar stock. Compared with other forming techniques at this reduced scale, nanoforging represents a beneficial approach to forming freestanding metallic structures due to its simplicity, and it supplements other forming techniques.
Uy, Raymonde Charles Y; Kury, Fabricio P; Fontelo, Paul A
2015-01-01
The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar-code, RFID, biometric and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating encouraging growth in the adoption of these patient safety solutions.
User-centered design of quality of life reports for clinical care of patients with prostate cancer
Izard, Jason; Hartzler, Andrea; Avery, Daniel I.; Shih, Cheryl; Dalkin, Bruce L.; Gore, John L.
2014-01-01
Background: Primary treatment of localized prostate cancer can result in bothersome urinary, sexual, and bowel symptoms. Yet clinical application of health-related quality-of-life (HRQOL) questionnaires is rare. We employed user-centered design to develop graphic dashboards of questionnaire responses from patients with prostate cancer to facilitate clinical integration of HRQOL measurement. Methods: We interviewed 50 prostate cancer patients and 50 providers, assessed literacy with validated instruments (Rapid Estimate of Adult Literacy in Medicine short form, Subjective Numeracy Scale, Graphical Literacy Scale), and presented participants with prototype dashboards that display prostate cancer-specific HRQOL with graphic elements derived from patient focus groups. We assessed dashboard comprehension and preferences in table, bar, line, and pictograph formats with patient scores contextualized with HRQOL scores of similar patients serving as a comparison group. Results: Health literacy (mean score, 6.8/7) and numeracy (mean score, 4.5/6) of patient participants was high. Patients favored the bar chart (mean rank, 1.8 [P = .12] vs line graph [P < .01] vs table and pictograph); providers demonstrated similar preference for table, bar, and line formats (ranked first by 30%, 34%, and 34% of providers, respectively). Providers expressed unsolicited concerns over presentation of comparison group scores (n = 19; 38%) and impact on clinic efficiency (n = 16; 32%). Conclusion: Based on preferences of prostate cancer patients and providers, we developed the design concept of a dynamic HRQOL dashboard that permits a base patient-centered report in bar chart format that can be toggled to other formats and include error bars that frame comparison group scores. Inclusion of lower literacy patients may yield different preferences. PMID:24787105
NASA Astrophysics Data System (ADS)
Wang, Rong; Andrews, Elisabeth; Balkanski, Yves; Boucher, Olivier; Myhre, Gunnar; Samset, Bjørn Hallvard; Schulz, Michael; Schuster, Gregory L.; Valari, Myrto; Tao, Shu
2018-02-01
There is high uncertainty in the direct radiative forcing of black carbon (BC), an aerosol that strongly absorbs solar radiation. The observation-constrained estimate, which is several times larger than the bottom-up estimate, is influenced by the spatial representativeness error due to the mesoscale inhomogeneity of the aerosol fields and the relatively low resolution of global chemistry-transport models. Here we evaluated the spatial representativeness error for two widely used observational networks (AErosol RObotic NETwork and Global Atmosphere Watch) by downscaling the geospatial grid in a global model of BC aerosol absorption optical depth to 0.1° × 0.1°. Comparing models run at a spatial resolution of 2° × 2° with BC aerosol absorption at AErosol RObotic NETwork sites (which are commonly located near emission hot spots) tends to cause a global spatial representativeness error of 30%, which enters as a positive bias in the current top-down estimate of global BC direct radiative forcing. By contrast, the global spatial representativeness error is only 7% for the Global Atmosphere Watch network, because its sites are located in such a way that there are almost equal numbers of sites with positive and negative representativeness errors.
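The mechanism can be shown with a toy calculation (all numbers invented): a coarse model cell averages over mesoscale structure, so a site sitting on an emission hot spot inside the cell reads higher than the cell mean, and scaling the model up to match that site biases a top-down estimate high:

```python
# Toy illustration of spatial representativeness error. A 2x2-degree
# model cell is resolved here as 20 x 20 sub-cells of 0.1 degree; one
# sub-cell is a hot spot where the measurement site sits. Values are
# invented stand-ins for aerosol absorption optical depth.
fine = [[0.002 for _ in range(20)] for _ in range(20)]
fine[10][10] = 0.020                     # hot spot hosting the site

cell_mean = sum(map(sum, fine)) / 400.0  # what the coarse model resolves
site_value = fine[10][10]                # what the instrument samples

# negative: the coarse cell under-represents the hot-spot site, so
# forcing the model to match the site inflates the model everywhere else
rep_error = (cell_mean - site_value) / site_value
print("representativeness error: %.0f%%" % (100 * rep_error))
```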
Vasiljevic, Milica; Pechey, Rachel; Marteau, Theresa M
2015-08-01
Recent studies report that using green labels to denote healthier foods, and red to denote less healthy foods, increases consumption of green-labelled and decreases consumption of red-labelled foods. Other symbols (e.g. emoticons conveying normative approval and disapproval) could also be used to signal the healthiness and/or acceptability of consuming such products. The present study tested the combined effects of using emoticons and colours on labels amongst a nationally representative sample of the UK population (n = 955). In a 3 (emoticon expression: smiling vs. frowning vs. no emoticon) × 3 (colour label: green vs. red vs. white) × 2 (food option: chocolate bar vs. cereal bar) between-subjects experiment, participants rated the level of desirability, healthiness, tastiness, and calorific content of a snack bar they had been randomised to view. At the end they were further randomised to view one of nine possible combinations of colour and emoticon labels and asked to choose between a chocolate and a cereal bar. Regardless of label, participants rated the chocolate bar as tastier and more desirable than the cereal bar, and the cereal bar as healthier than the chocolate bar. A series of interactions revealed that a frowning emoticon on a white background decreased perceptions of healthiness and tastiness of the cereal bar, but not the chocolate bar. In the explicit choice task, selection was unaffected by label. Overall, nutritional labels had limited effects on perceptions and no effects on choice of snack foods. Emoticon labels yielded stronger effects on perceptions of taste and healthiness of snacks than colour labels. Frowning emoticons may be more potent than smiling emoticons at influencing the perceived healthiness and tastiness of foods carrying health halos. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Preliminary test results in support of integrated EPP and SMT design methods development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yanli; Jetter, Robert I.; Sham, T. -L.
2016-02-09
The proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology consists of incorporating an SMT data-based approach for creep-fatigue damage evaluation into the EPP methodology, to avoid using the creep-fatigue interaction diagram (the D diagram) and to minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed code rules and to verify their applicability, a series of thermomechanical tests has been initiated. One test concept, the Simplified Model Test (SMT), takes into account the stress and strain redistribution in real structures by including representative follow-up characteristics in the test specimen. The second test concept is the two-bar thermal ratcheting test, with cyclic loading at high temperatures using specimens representing key features of potential component designs. This report summarizes the previous SMT results on Alloy 617, SS316H and SS304H and presents the recent development of the SMT approach for Alloy 617. These SMT specimen data are also representative of component loading conditions and have been used as part of the verification of the proposed integrated EPP and SMT design methods development. The previous two-bar thermal ratcheting test results on Alloy 617 and SS316H are also summarized, and new results from two-bar thermal ratcheting tests on SS316H at a lower temperature range are reported.
van der Veen, Willem; van den Bemt, Patricia M L A; Wouters, Hans; Bates, David W; Twisk, Jos W R; de Gier, Johan J; Taxis, Katja; Duyvendak, Michiel; Luttikhuis, Karen Oude; Ros, Johannes J W; Vasbinder, Erwin C; Atrafi, Maryam; Brasse, Bjorn; Mangelaars, Iris
2018-04-01
To study the association of workarounds with medication administration errors when using barcode-assisted medication administration (BCMA), and to determine the frequency and types of workarounds and medication administration errors. A prospective observational study in Dutch hospitals using BCMA to administer medication. Direct observation was used to collect data. The primary outcome measure was the proportion of medication administrations with one or more medication administration errors. The secondary outcome was the frequency and types of workarounds and medication administration errors. Univariate and multivariate multilevel logistic regression analyses were used to assess the association between workarounds and medication administration errors. Descriptive statistics were used for the secondary outcomes. We included 5793 medication administrations for 1230 inpatients. Workarounds were associated with medication administration errors (adjusted odds ratio 3.06 [95% CI: 2.49-3.78]). The most commonly observed workarounds were procedural, such as not scanning at all (36%), not scanning patients because they did not wear a wristband (28%), incorrect medication scanning, multiple medication scanning, and ignoring alert signals (11%). Common types of medication administration errors were omissions (78%), administration of non-ordered drugs (8.0%), and wrong doses given (6.0%). Workarounds are associated with medication administration errors in hospitals using BCMA. These data suggest that BCMA needs more post-implementation evaluation if it is to achieve the intended benefits for medication safety. In hospitals using barcode-assisted medication administration, workarounds occurred in 66% of medication administrations and were associated with large numbers of medication administration errors.
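As a reminder of what an odds ratio of this kind expresses, here is a minimal sketch with an invented 2×2 table; the study's adjusted OR of 3.06 came from multilevel logistic regression, which this crude unadjusted calculation does not reproduce. The confidence interval uses the standard Woolf log-odds method:

```python
import math

# Unadjusted odds ratio and Woolf 95% CI from a 2x2 table.
# Counts are invented for illustration only.
#                  error    no error
a, b = 120, 880  # administrations with a workaround
c, d = 45, 955   # administrations without a workaround

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print("OR %.2f (95%% CI %.2f-%.2f)" % (odds_ratio, lo, hi))
```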
A method for velocity signal reconstruction of AFDISAR/PDV based on crazy-climber algorithm
NASA Astrophysics Data System (ADS)
Peng, Ying-cheng; Guo, Xian; Xing, Yuan-ding; Chen, Rong; Li, Yan-jie; Bai, Ting
2017-10-01
The resolution of the continuous wavelet transform (CWT) differs as the frequency differs. Exploiting this property, the time-frequency signal of the coherent signal obtained by the All Fiber Displacement Interferometer System for Any Reflector (AFDISAR) is extracted. The crazy-climber algorithm is adopted to extract the wavelet ridge, from which the velocity history of the measured object is obtained. Numerical simulation was carried out; the reconstructed signal is completely consistent with the original signal, which verifies the accuracy of the algorithm. Vibration of a loudspeaker and of the free end of a Hopkinson incident bar under impact loading were measured by AFDISAR, and the measured coherent signals were processed. Velocity signals of the loudspeaker and of the free end of the Hopkinson incident bar were reconstructed. Compared with the theoretical calculation, the error in the particle-vibration arrival-time difference at the free end of the Hopkinson incident bar is 2 μs. The results indicate that the algorithm is highly accurate and highly adaptable to signals with different time-frequency features. The algorithm overcomes the limitation of manually adjusting the time window according to the signal variation, as required with the STFT, and is suitable for extracting signals measured by AFDISAR.
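To make "ridge extraction" concrete, here is a much-simplified stand-in on a synthetic time-frequency map: a per-time-step argmax finds the ridge of a clean map, whereas the crazy-climber algorithm is designed to do the same job robustly in noise and for crossing components. The map and ridge below are invented, not AFDISAR data:

```python
import math

# Toy time-frequency magnitude map with a known rising ridge, standing
# in for a CWT of a PDV/AFDISAR coherent signal. The ridge frequency is
# proportional to the target velocity in such systems.
n_t, n_f = 64, 32
true_ridge = [int(4 + 20 * t / (n_t - 1)) for t in range(n_t)]

# Gaussian-shaped energy concentrated around the true ridge
tf_map = [[math.exp(-((f - true_ridge[t]) ** 2) / 4.0)
           for f in range(n_f)] for t in range(n_t)]

# naive ridge extraction: maximum-energy frequency bin per time step
ridge = [max(range(n_f), key=lambda f: tf_map[t][f]) for t in range(n_t)]
print(ridge == true_ridge)  # → True
```

On a noisy map with crossing components this argmax would jump between ridges, which is exactly the failure mode the crazy-climber's stochastic relaxation avoids.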
Analysis of the charmed semileptonic decay D +→ ρ 0 μ + v
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luiggi, Eduardo E.
2008-12-01
The search for the fundamental constituents of matter has been pursued and studied since the dawn of civilization. As early as the fourth century BCE, Democritus, expanding the teachings of Leucippus, proposed small, indivisible entities called atoms, interacting with each other to form the Universe. Democritus was convinced of this by observing the environment around him. He observed, for example, how a collection of tiny grains of sand can make up smooth beaches. Today, following the lead set by Democritus more than 2500 years ago, at the heart of particle physics is the hypothesis that everything we can observe in the Universe is made of a small number of fundamental particles interacting with each other. In contrast to Democritus, for the last hundred years we have been able to perform experiments that probe deeper and deeper into matter in the search for the fundamental particles of nature. Today's knowledge is encapsulated in the Standard Model of particle physics, a model describing the fundamental particles and their interactions. It is within this model that the work in this thesis is presented. This work attempts to add to the understanding of the Standard Model by measuring the relative branching fraction of the charmed semileptonic decay D+ → ρ0 μ+ ν with respect to D+ → K̄*0 μ+ ν. Many theoretical models that describe hadronic interactions predict the value of this relative branching fraction, but only a handful of experiments have been able to measure it with any precision. By making a precise measurement of this relative branching fraction, theorists can distinguish between viable models as well as refine existing ones. In this thesis we presented the measurement of the branching fraction ratio of the Cabibbo-suppressed semileptonic decay mode D+ → ρ0 μ+ ν with respect to the Cabibbo-favored mode D+ → K̄*0 μ+ ν using data collected by the FOCUS collaboration.
We used a binned maximum log-likelihood fit that included all known semileptonic backgrounds as well as combinatorial and muon-misidentification backgrounds to extract the yields for both the signal and normalization modes. We reconstructed 320 ± 44 D+ → ρ0 μ+ ν events and 11372 ± 161 D+ → K- π+ μ+ ν events. Taking into account the non-resonant contribution to the D+ → K- π+ μ+ ν yield due to an s-wave interference first measured by FOCUS, the branching fraction ratio is: Γ(D+ → ρ0 μ+ ν)/Γ(D+ → K̄*0 μ+ ν) = 0.0412 ± 0.0057 ± 0.0040, where the first error is statistical and the second error is the systematic uncertainty. This represents a substantial improvement over the previous world average. More importantly, the new world average for Γ(D+ → ρ0 μ+ ν)/Γ(D+ → K̄*0 μ+ ν), along with the improved measurements in the electronic mode, can be used to discriminate among different theoretical approaches that aim to understand the hadronic current involved in the charm to light quark decay process. The average of the electronic and muonic modes indicates that predictions for the partial decay width Γ(D+ → ρ0 ℓ+ ν) and the ratio Γ(D+ → ρ0 ℓ+ ν)/Γ(D+ → K̄*0 ℓ+ ν) based on sum rules are too low. Using the same data used to extract Γ(D+ → ρ0 μ+ ν)/Γ(D+ → K̄*0 μ+ ν), we studied the feasibility of measuring the form factors for the D+ → ρ0 μ+ ν decay. We found that the need to further reduce the combinatorial and muon-misidentification backgrounds left us with a much smaller sample of 52 ± 12 D+ → ρ0 μ+ ν events; not enough to make a statistically significant measurement of the form factors.
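When the statistical and systematic uncertainties quoted above are independent, the standard convention is to combine them in quadrature to give a single total uncertainty:

```python
import math

# combine the quoted statistical and systematic errors of the
# branching fraction ratio in quadrature (assumes independence)
ratio = 0.0412
stat, syst = 0.0057, 0.0040
total = math.sqrt(stat ** 2 + syst ** 2)
print("%.4f +/- %.4f" % (ratio, total))  # prints 0.0412 +/- 0.0070
```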
[Perception over smoke-free policies amongst bar and restaurant representatives in central Mexico].
Barrientos-Gutiérrez, Tonatiuh; Gimeno, David; Thrasher, James F; Reynales-Shigematsu, Luz Myriam; Amick, Benjamin C; Lazcano-Ponce, Eduardo; Hernández-Ávila, Mauricio
2010-01-01
To analyze the perceptions and opinions of restaurant and bar managers in four cities in central Mexico regarding smoke-free environments. Managers of 219 restaurants and bars in Mexico City, Colima, Cuernavaca and Toluca were surveyed about their opinions on, and implementation of, smoke-free environments. Simultaneously, environmental nicotine was monitored. The majority of the surveyed managers considered that public places should be smoke-free, although more than half were concerned about potential economic losses. Implementation of smoke-free environments was more frequent in Mexico City (85.4%) than in the other cities (15.3% overall), with correspondingly lower environmental nicotine concentrations. Managers acknowledge the need to create smoke-free environments. Concerns over negative economic effects of the prohibition could explain, at least in part, this sector's resistance to the implementation of such policies.
Ohri-Vachaspati, Punam; Turner, Lindsey; Adams, Marc A; Bruening, Meg; Chaloupka, Frank J
2016-03-01
Salad bars have been promoted as a strategy for increasing fruit and vegetable consumption in schools. To examine school-level resources and programs associated with the presence of salad bars in elementary schools, and to assess whether there were differential changes in salad bar prevalence, based on school-level resources and programs, before and after the new US Department of Agriculture school meals standards were proposed (January 2011) and implemented (July 2012). Repeated cross-sectional design. Data were collected annually between 2006-2007 and 2012-2013. Nationally representative sample of 3,956 elementary schools participating in the National School Lunch Program. School personnel (ie, administrators and foodservice staff) provided data using a mail-back survey. Presence of a salad bar in the school was the primary outcome variable. School-level programs and resources were investigated as independent variables. Weighted logistic regression analyses examined associations between dependent and independent variables, controlling for school demographic characteristics. Prevalence of salad bars increased significantly from 17.1% in 2006-2007 to 29.6% in 2012-2013. The prevalence of salad bars was significantly higher among schools that participated in the Team Nutrition program (odds ratio [OR] 1.37, 95% CI 1.10 to 1.70), the Fresh Fruit and Vegetable Program (OR 1.48, 95% CI 1.13 to 1.95), a Farm to School program (OR 1.77, 95% CI 1.36 to 2.33), and where school meals were provided by a foodservice management company (OR 1.46, 95% CI 1.08 to 1.97). No association was found for schools with a full-service kitchen, school gardens, those offering nutrition education, or those with dietitians/nutritionists on staff. Prevalence of salad bars increased significantly after the US Department of Agriculture school meal guidelines were proposed and implemented.
It is likely that schools are using salad bars to offer a variety of fruits and vegetables to students, and schools with greater numbers of school-level resources and programs are better positioned for having salad bars. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Holmqvist, Kristian; Svensson, Mats Y; Davidsson, Johan; Gutsche, Andreas; Tomasch, Ernst; Darok, Mario; Ravnik, Dean
2016-02-01
The chest response of the human body has been studied for several load conditions, but it is not well known for steering wheel rim-to-chest impacts in heavy goods vehicle frontal collisions. The aim of this study was to determine the response of the human chest in a set of simulated steering wheel impacts. Post-mortem human subject (PMHS) tests were carried out and analysed. The steering wheel load pattern was represented by a rigid pendulum with a straight, bar-shaped front; a crash test dummy chest calibration pendulum was used for comparison. In this study, a set of rigid bar impacts was directed at various heights of the chest, spanning approximately 120 mm around the fourth intercostal space. The impact energy was set below a level estimated to cause rib fracture. The analysis evaluated how the impactor shape and impact height affected the compression and viscous-criterion chest injury responses. The results showed that the bar impacts consistently produced smaller scaled chest compressions than the hub; the middle bar responses were around 90% of the hub responses. A superior bar impact produced less chest compression, with an average response 86% of the middle bar response, while for inferior bar impacts the chest compression response was 116% of that in the middle. The damping properties of the chest caused the compression in the high-speed bar impacts to decrease to 88% of that in the low-speed impacts. From the analysis it could be concluded that the bar impact shape produces lower chest criteria responses than the hub, and that the bar responses depend on the impact location on the chest; inertial and viscous effects of the upper body affect the responses. The results can be used to assess the responses of human substitutes such as anthropomorphic test devices and finite element human body models, which will benefit the development process of heavy goods vehicle safety systems.
Copyright © 2015 Elsevier Ltd. All rights reserved.
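The two injury measures named above can be written down directly: compression C(t) = deflection / initial chest depth, and the viscous criterion VC(t) = V(t) · C(t), with V(t) the deflection rate. A hedged sketch with a synthetic deflection pulse (the chest depth and pulse shape are invented, not values from the PMHS tests):

```python
# Chest injury criteria from a deflection time history:
# C(t) = deflection / chest depth, VC(t) = V(t) * C(t).
# All numbers below are illustrative assumptions.
chest_depth = 0.23   # m, assumed adult chest depth
dt = 0.001           # s, sampling interval

# synthetic deflection pulse (m): ramp up to 40 mm, then recover
defl = [0.040 * min(t / 20.0, max(0.0, (50 - t) / 30.0))
        for t in range(51)]

C = [d / chest_depth for d in defl]                         # compression
V = [(defl[i + 1] - defl[i - 1]) / (2 * dt)                 # deflection rate
     for i in range(1, len(defl) - 1)]                      # central difference
VC = [V[i - 1] * C[i] for i in range(1, len(defl) - 1)]     # viscous criterion

print("Cmax = %.3f, VCmax = %.2f m/s" % (max(C), max(VC)))
```

Peak C and peak VC generally occur at different instants, which is why the two criteria capture different (quasi-static vs. rate-dependent) injury mechanisms.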
Image decomposition of barred galaxies and AGN hosts
NASA Astrophysics Data System (ADS)
Gadotti, Dimitri Alexei
2008-02-01
I present the results of multicomponent decomposition of V and R broad-band images of a sample of 17 nearby galaxies, most of them hosting bars and active galactic nuclei (AGN). I use BUDDA v2.1 to produce the fits, allowing the inclusion of bars and AGN in the models. A comparison with previous results from the literature shows a fairly good agreement. It is found that the axial ratio of bars, as measured from ellipse fits, can be severely underestimated if the galaxy axisymmetric component is relatively luminous. Thus, reliable bar axial ratios can only be determined by taking into account the contributions of bulge and disc to the light distribution in the galaxy image. Through a number of tests, I show that neglecting bars when modelling barred galaxies can result in an overestimation of the bulge-to-total luminosity ratio by a factor of 2. Similar effects result when bright, type 1 AGN are not considered in the models. By artificially redshifting the images, I show that the structural parameters of more distant galaxies can in general be reliably retrieved through image fitting, at least up to the point where the physical spatial resolution is ~1.5 kpc. This corresponds, for instance, to images of galaxies at z = 0.05 with a seeing full width at half-maximum (FWHM) of 1.5 arcsec, typical of the Sloan Digital Sky Survey (SDSS). In addition, such a resolution is also similar to what can be achieved with the Hubble Space Telescope (HST), and ground-based telescopes with adaptive optics, at z ~ 1-2. Thus, these results also concern deeper studies such as COSMOS and SINS. This exercise shows that disc parameters are particularly robust, but bulge parameters are prone to errors if the bulge effective radius is small compared to the seeing radius, and might suffer from systematic effects. For instance, the bulge-to-total luminosity ratio is systematically overestimated, on average, by 0.05 (i.e. 5 per cent of the galaxy total luminosity). 
In this low-resolution regime, the effects of ignoring bars are still present, but AGN light is smeared out. I briefly discuss the consequences of these results for studies of the structural properties of galaxies, in particular for the stellar mass budget in the local Universe. With reasonable assumptions, it is possible to show that the stellar content in bars can be similar to that in classical bulges and elliptical galaxies. Finally, I revisit the cases of NGC 4608 and NGC 5701 and show that the lack of stars in the disc region inside the bar radius is significant. Accordingly, the best-fitting model for the former uses a Freeman type II disc.
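The factor-of-2 overestimation of B/T when a bar is ignored can be illustrated with a back-of-the-envelope flux budget. The sketch below uses hypothetical component parameters (not the paper's fits; BUDDA itself fits full 2-D images), combining the analytic total flux of a Sérsic profile with that of an exponential disc:

```python
import math

def sersic_total_flux(i_e, r_e, n):
    """Total flux of a Sersic profile I(r) = I_e * exp(-b_n*((r/r_e)**(1/n) - 1)).
    Uses the common approximation b_n ~ 1.9992*n - 0.3271 (valid for n >~ 0.36)."""
    b_n = 1.9992 * n - 0.3271
    return 2 * math.pi * n * i_e * r_e**2 * math.exp(b_n) * math.gamma(2 * n) / b_n**(2 * n)

# Hypothetical component parameters (arbitrary flux units and scale lengths)
bulge = sersic_total_flux(i_e=1.0, r_e=0.5, n=2.0)   # classical bulge
bar   = sersic_total_flux(i_e=0.4, r_e=1.0, n=0.7)   # bars are well fit by low-n Sersic profiles
disc  = 2 * math.pi * 1.0 * 3.0**2                   # exponential disc: L = 2*pi*I_0*h^2

bt_with_bar    = bulge / (bulge + bar + disc)          # bar fitted as its own component
bt_without_bar = (bulge + bar) / (bulge + bar + disc)  # bar light wrongly absorbed into the bulge

print(f"B/T with bar modelled:  {bt_with_bar:.3f}")
print(f"B/T with bar neglected: {bt_without_bar:.3f}")
print(f"overestimation factor:  {bt_without_bar / bt_with_bar:.2f}")
```

With these invented numbers the bar carries about as much light as the bulge, so absorbing it into the bulge roughly doubles B/T, in line with the factor of 2 quoted above.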
Temporal expectation in focal hand dystonia.
Avanzino, Laura; Martino, Davide; Martino, Isadora; Pelosin, Elisa; Vicario, Carmelo M; Bove, Marco; Defazio, Gianni; Abbruzzese, Giovanni
2013-02-01
Patients with writer's cramp present sensory and representational abnormalities relevant to motor control, such as impairment in the temporal discrimination between tactile stimuli and in pure motor imagery tasks, like the mental rotation of corporeal and inanimate objects. However, only limited information is available on the ability of patients with dystonia to process the time-dependent features (e.g. speed) of movement in real time. The processing of time-dependent features of movement has a crucial role in predicting whether the outcome of a complex motor sequence, such as handwriting or playing a musical passage, will be consistent with its ultimate goal, or will instead result in an execution error. In this study, we sought to evaluate the implicit ability to perceive the temporal outcome of different movements in a group of patients with writer's cramp. Fourteen patients affected by writer's cramp in the right hand and 17 age- and gender-matched healthy subjects were recruited for the study. Subjects were asked to perform a temporal expectation task by predicting the end of visually perceived human body motion (handwriting, i.e. the action performed by the human body segment specifically affected by writer's cramp) or inanimate object motion (a moving circle reaching a spatial target). Videos representing movements were shown in full before experimental trials; the actual tasks consisted of watching the same videos, but interrupted after a variable interval ('pre-dark') from onset by a dark interval of variable duration. During the 'dark' interval, subjects were asked to indicate when the movement represented in the video reached its end by clicking on the space bar of the keyboard. We also included a visual working memory task. Performance on the timing task was analysed measuring the absolute value of timing error, the coefficient of variability and the percentage of anticipation responses. 
Patients with writer's cramp exhibited greater absolute timing error compared with control subjects in the human body motion task (whereas no difference was observed in the inanimate object motion task). No effect of group was documented on the visual working memory tasks. Absolute timing error on the human body motion task did not significantly correlate with symptom severity, disease duration or writing speed. Our findings suggest an alteration of the writing movement representation at a central level and are consistent with the view that dystonia is not a purely motor disorder, but it also involves non-motor (sensory, cognitive) aspects related to movement processing and planning.
A pilot study to assess the bacterial contaminants in hookah pipes in a community setting.
Martinasek, M; Rivera, Z; Ferrer, A; Freundt, E
2018-05-01
Hookah smoking among young adults remains a public health threat. Increasing research has uncovered the deleterious effects of hookah smoking, including both acute and chronic health conditions. In the current absence of regulation, hookah bars/lounges lack protocols for equipment sanitation. The aim was to examine evidence of bacterial contamination in hookah pipes in the absence of sanitation regulations. For this field/laboratory study, 10 hookah bars/lounges were studied. Isolated bacteria were characterized and identified by species using 16S ribosomal RNA gene sequencing. At the 10 hookah bars sampled, the mouthpiece had the highest bacterial prevalence and diversity. Some of the bacterial isolates were found to be antibiotic-resistant. Ten of the isolated bacteria were Gram-positive and two were identified as Gram-negative. Levels of bacterial contamination vary widely from one hookah bar to the next, and reflect a lack of industry standards for cleaning these devices. Bacterial contamination of hookah pipes may represent a fomite for transmission of infectious diseases. Our results warrant future surveillance of hookahs to monitor for potential human pathogens.
US college students' exposure to tobacco promotions: prevalence and association with tobacco use.
Rigotti, Nancy A; Moran, Susan E; Wechsler, Henry
2005-01-01
We assessed young adults' exposure to the tobacco industry marketing strategy of sponsoring social events at bars, nightclubs, and college campuses. We analyzed data from the 2001 Harvard College Alcohol Study, a random sample of 10,904 students enrolled in 119 nationally representative 4-year colleges and universities. During the 2000-2001 school year, 8.5% of respondents attended a bar, nightclub, or campus social event where free cigarettes were distributed. Events were reported by students attending 118 of the 119 schools (99.2%). Attendance was associated with a higher student smoking prevalence after we adjusted for demographic factors, alcohol use, and recent bar/nightclub attendance. This association remained for students who did not smoke regularly before 19 years of age but not for students who smoked regularly by 19 years of age. Attendance at a tobacco industry-sponsored event at a bar, nightclub, or campus party was associated with a higher smoking prevalence among college students. Promotional events may encourage the initiation or the progression of tobacco use among college students who are not smoking regularly when they enter college.
The dynamics of stellar discs in live dark-matter haloes
NASA Astrophysics Data System (ADS)
Fujii, M. S.; Bédorf, J.; Baba, J.; Portegies Zwart, S.
2018-06-01
Recent developments in computer hardware and software enable researchers to simulate the self-gravitating evolution of galaxies at a resolution comparable to the actual number of stars. Here we present the results of a series of such simulations. We performed N-body simulations of disc galaxies with between 100 and 500 million particles over a wide range of initial conditions. Our calculations include a live bulge, disc, and dark-matter halo, each of which is represented by self-gravitating particles in the N-body code. The simulations are performed using the gravitational N-body tree-code BONSAI running on the Piz Daint supercomputer. We find that the time-scale over which the bar forms increases exponentially with decreasing disc-mass fraction and that the bar formation epoch exceeds a Hubble time when the disc-mass fraction is ~0.35. These results can be explained with the swing-amplification theory. The condition for the formation of m = 2 spirals is consistent with that for the formation of the bar, which is also an m = 2 phenomenon. We further argue that the non-barred grand-design spiral galaxies are transitional, and that they evolve to barred galaxies on a dynamical time-scale. We also confirm that the disc-mass fraction and shear rate are important parameters for the morphology of disc galaxies. The former affects the number of spiral arms and the bar formation epoch, and the latter determines the pitch angle of the spiral arms.
Andrews, Elisabeth; Balkanski, Yves; Boucher, Olivier; Myhre, Gunnar; Samset, Bjørn Hallvard; Schulz, Michael; Schuster, Gregory L.; Valari, Myrto; Tao, Shu
2018-01-01
Abstract There is high uncertainty in the direct radiative forcing of black carbon (BC), an aerosol that strongly absorbs solar radiation. The observation‐constrained estimate, which is several times larger than the bottom‐up estimate, is influenced by the spatial representativeness error due to the mesoscale inhomogeneity of the aerosol fields and the relatively low resolution of global chemistry‐transport models. Here we evaluated the spatial representativeness error for two widely used observational networks (AErosol RObotic NETwork and Global Atmosphere Watch) by downscaling the geospatial grid in a global model of BC aerosol absorption optical depth to 0.1° × 0.1°. Comparing the models at a spatial resolution of 2° × 2° with BC aerosol absorption at AErosol RObotic NETwork sites (which are commonly located near emission hot spots) tends to cause a global spatial representativeness error of 30%, which acts as a positive bias on the current top‐down estimate of global BC direct radiative forcing. By contrast, the global spatial representativeness error will be 7% for the Global Atmosphere Watch network, because the sites are located in such a way that there are almost equal numbers of sites with positive and negative representativeness error. PMID:29937603
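The representativeness error can be illustrated with a toy downscaling experiment: a monitoring site sitting on an emission hot spot inside a single coarse model grid cell. All numbers below are invented for illustration; the paper's analysis uses real 0.1° × 0.1° downscaled AAOD fields:

```python
import random

random.seed(0)
N = 20  # fine grid cells per side inside one coarse model grid cell

# Hypothetical fine-resolution aerosol absorption field: smooth background
# plus one emission hot spot (arbitrary units).
field = [[0.01 + 0.002 * random.random() for _ in range(N)] for _ in range(N)]
field[10][10] += 0.05  # hot spot where a monitoring site happens to sit

coarse_mean = sum(sum(row) for row in field) / N**2  # what a ~2x2-degree model resolves
site_value = field[10][10]                           # what the site actually observes

# A model evaluated against this site looks biased low, so a top-down
# estimate scaled to match the site is biased high: that is the spatial
# representativeness error.
rep_error = (site_value - coarse_mean) / site_value
print(f"coarse-cell mean: {coarse_mean:.4f}")
print(f"site value:       {site_value:.4f}")
print(f"representativeness error: {rep_error:.0%}")
```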
Effects of Salt Secretion on Psychrometric Determinations of Water Potential of Cotton Leaves
Klepper, Betty; Barrs, H. D.
1968-01-01
Thermocouple psychrometers gave lower estimates of water potential of cotton leaves than did a pressure chamber. This difference was considerable for turgid leaves, but progressively decreased for leaves with lower water potentials and fell to zero at water potentials below about −10 bars. The conductivity of washings from cotton leaves removed from the psychrometric equilibration chambers was related to the magnitude of this discrepancy in water potential, indicating that the discrepancy is due to salts on the leaf surface which make the psychrometric estimates too low. This error, which may be as great as 400 to 500%, cannot be eliminated by washing the leaves because salts may be secreted during the equilibration period. Therefore, a thermocouple psychrometer is not suitable for measuring the water potential of cotton leaves when it is above about −10 bars. PMID:16656895
Relationship between visual binding, reentry and awareness.
Koivisto, Mika; Silvanto, Juha
2011-12-01
Visual feature binding has been suggested to depend on reentrant processing. We addressed the relationship between binding, reentry, and visual awareness by asking the participants to discriminate the color and orientation of a colored bar (presented either alone or simultaneously with a white distractor bar) and to report their phenomenal awareness of the target features. The success of reentry was manipulated with object substitution masking and backward masking. The results showed that late reentrant processes are necessary for successful binding but not for phenomenal awareness of the bound features. Binding errors were accompanied by phenomenal awareness of the misbound feature conjunctions, demonstrating that they were experienced as real properties of the stimuli (i.e., illusory conjunctions). Our results suggest that early preattentive binding and local recurrent processing enable features to reach phenomenal awareness, while later attention-related reentrant iterations modulate the way in which the features are bound and experienced in awareness. Copyright © 2011 Elsevier Inc. All rights reserved.
A wireless passive pressure microsensor fabricated in HTCC MEMS technology for harsh environments.
Tan, Qiulin; Kang, Hao; Xiong, Jijun; Qin, Li; Zhang, Wendong; Li, Chen; Ding, Liqiong; Zhang, Xiansheng; Yang, Mingliang
2013-08-02
A wireless passive high-temperature pressure sensor without an evacuation channel, fabricated in high-temperature co-fired ceramics (HTCC) technology, is proposed. The properties of the HTCC material ensure the sensor can be applied in harsh environments. The sensor without an evacuation channel can be completely gastight. The wireless data is obtained with a reader antenna by mutual inductance coupling. Experimental systems are designed to obtain the frequency-pressure characteristic, frequency-temperature characteristic and coupling distance. Experimental results show that the sensor can be coupled with an antenna at 600 °C, and at a maximum distance of 2.8 cm at room temperature. The sensor sensitivity is about 860 Hz/bar, and the hysteresis error and repeatability error are quite low.
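The sensor is read out as an LC resonance whose frequency shifts with pressure. A minimal sketch of the readout arithmetic follows; the tank values and the assumption of a linear 860 Hz/bar calibration are mine, not from the paper:

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of the sensor's LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical tank values; pressure deflects the membrane, increasing C
# and therefore lowering the resonant frequency.
L = 2e-6               # H
C0 = 10e-12            # F at reference pressure
f0 = resonant_frequency(L, C0)
f1 = resonant_frequency(L, C0 * 1.001)  # capacitance up 0.1% under load

SENSITIVITY = 860.0    # Hz/bar, from the paper; linearity is assumed here

pressure_change = (f0 - f1) / SENSITIVITY  # bar
print(f"f0 = {f0/1e6:.3f} MHz, shift = {f0 - f1:.0f} Hz")
print(f"inferred pressure change ~ {pressure_change:.1f} bar")
```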
Waspe, Adam C; McErlain, David D; Pitelka, Vasek; Holdsworth, David W; Lacefield, James C; Fenster, Aaron
2010-04-01
Preclinical research protocols often require insertion of needles to specific targets within small animal brains. To target biologically relevant locations in rodent brains more effectively, a robotic device has been developed that is capable of positioning a needle along oblique trajectories through a single burr hole in the skull under volumetric microcomputed tomography (micro-CT) guidance. An x-ray compatible stereotactic frame secures the head throughout the procedure using a bite bar, nose clamp, and ear bars. CT-to-robot registration enables structures identified in the image to be mapped to physical coordinates in the brain. Registration is accomplished by injecting a barium sulfate contrast agent as the robot withdraws the needle from predefined points in a phantom. Registration accuracy is affected by the robot-positioning error and is assessed by measuring the surface registration error for the fiducial and target needle tracks (FRE and TRE). This system was demonstrated in situ by injecting 200 microm tungsten beads into rat brains along oblique trajectories through a single burr hole on the top of the skull under micro-CT image guidance. Postintervention micro-CT images of each skull were registered with preintervention high-field magnetic resonance images of the brain to infer the anatomical locations of the beads. Registration using four fiducial needle tracks and one target track produced a FRE and a TRE of 96 and 210 microm, respectively. Evaluation with tissue-mimicking gelatin phantoms showed that locations could be targeted with a mean error of 154 +/- 113 microm. The integration of a robotic needle-positioning device with volumetric micro-CT image guidance should increase the accuracy and reduce the invasiveness of stereotactic needle interventions in small animals.
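The FRE/TRE distinction can be illustrated with a simple point-based rigid registration. The sketch below uses a closed-form 2-D solution on invented fiducial coordinates; it is not the authors' CT-to-robot pipeline, which works in 3-D:

```python
import math

def register_rigid_2d(src, dst):
    """Closed-form least-squares rigid (rotation + translation) registration
    mapping src points onto dst points (2-D Procrustes)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Optimal rotation angle from centred coordinates
    dot = sum((sx - csx) * (dx - cdx) + (sy - csy) * (dy - cdy)
              for (sx, sy), (dx, dy) in zip(src, dst))
    cross = sum((sx - csx) * (dy - cdy) - (sy - csy) * (dx - cdx)
                for (sx, sy), (dx, dy) in zip(src, dst))
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def rms_error(pts_a, pts_b):
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(pts_a, pts_b)) / len(pts_a))

# Hypothetical fiducials in image space and their noisy robot-space positions:
# the true transform is a 10-degree rotation plus a (5, -3) shift.
th = math.radians(10.0)
def true_map(p, noise):
    return (math.cos(th) * p[0] - math.sin(th) * p[1] + 5 + noise[0],
            math.sin(th) * p[0] + math.cos(th) * p[1] - 3 + noise[1])

fiducials = [(0, 0), (10, 0), (0, 10), (10, 10)]
noise = [(0.1, -0.1), (-0.1, 0.05), (0.05, 0.1), (-0.05, -0.05)]
measured = [true_map(p, e) for p, e in zip(fiducials, noise)]

f = register_rigid_2d(fiducials, measured)

# FRE: residual at the fiducials themselves.
# TRE: error at a separate target point not used in the registration.
fre = rms_error([f(p) for p in fiducials], measured)
target = (5, 5)
tre = rms_error([f(target)], [true_map(target, (0, 0))])
print(f"FRE = {fre:.3f}, TRE = {tre:.3f}")
```

As in the paper, TRE is the quantity that matters clinically, and it is generally not equal to FRE: a small fiducial residual does not guarantee equally small error at the target.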
A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration
NASA Astrophysics Data System (ADS)
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2018-05-01
Using continuum extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales from about 4 to 128 GeV. The scale is set by the finite space-time volume so that recursive finite-size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L_0 ≈ 4 GeV through ḡ²(L_0) = 2.012, we quote L_0 Λ_MSbar^{N_f=3} = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to infinite renormalization scale from different scales 2^n/L_0 for n = 0, 1, …, 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al. in Phys Rev Lett 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L_0 Λ_MSbar^{N_f=3}, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = ḡ²/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.
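For orientation, the qualitative behaviour of the coupling over this 4-128 GeV window can be sketched with the textbook one-loop formula. This is emphatically not the paper's non-perturbative determination, and the Λ value used below is purely illustrative:

```python
import math

def alpha_s_one_loop(mu, Lambda=0.34, nf=3):
    """One-loop running coupling alpha_s(mu) = 1 / (b0 * ln(mu^2 / Lambda^2)),
    with b0 = (33 - 2*nf) / (12*pi).  Lambda (GeV) is an illustrative value."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return 1.0 / (b0 * math.log(mu**2 / Lambda**2))

for mu in (4, 8, 16, 32, 64, 128):  # GeV, the scale range covered by the paper
    print(f"mu = {mu:4d} GeV  alpha_s ~ {alpha_s_one_loop(mu):.3f}")
```

Even at one loop, the coupling falls to roughly α_s ≈ 0.1 at the upper end of the range, matching the regime the paper reaches non-perturbatively.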
Spontaneous symmetry breaking in a two-lane model for bidirectional overtaking traffic
NASA Astrophysics Data System (ADS)
Appert-Rolland, C.; Hilhorst, H. J.; Schehr, G.
2010-08-01
Firstly, we consider a unidirectional flux ω̄ of vehicles, each of which is characterized by its 'natural' velocity v drawn from a distribution P(v). The traffic flow is modeled as a collection of straight 'world lines' in the time-space plane, with overtaking events represented by a fixed queuing time τ imposed on the overtaking vehicle. This geometrical model exhibits platoon formation and allows, among many other things, for the calculation of the effective average velocity w ≡ φ(v) of a vehicle of natural velocity v. Secondly, we extend the model to two opposite lanes, A and B. We argue that the queuing time τ in one lane is determined by the traffic density in the opposite lane. On the basis of reasonable additional assumptions we establish a set of equations that couple the two lanes and can be solved numerically. It appears that above a critical value ω̄_c of the control parameter ω̄ the symmetry between the lanes is spontaneously broken: there is a slow lane where long platoons form behind the slowest vehicles, and a fast lane where overtaking is easy due to the wide spacing between the platoons in the opposite direction. A variant of the model is studied in which the spatial vehicle density ρ̄ rather than the flux ω̄ is the control parameter. Unequal fluxes ω̄_A and ω̄_B in the two lanes are also considered. The symmetry breaking phenomenon exhibited by this model, even though no doubt hard to observe in pure form in real-life traffic, nevertheless indicates a tendency of such traffic.
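Platoon formation in this kind of model is easiest to demonstrate in the no-overtaking limit (queuing time τ → ∞), where platoon leaders are exactly the running minima of the natural velocities scanned from the front. The sketch below is my own simplification of that limit, not the paper's full two-lane model:

```python
import random

random.seed(42)

def platoon_leaders(velocities):
    """Vehicles are listed front to back.  With overtaking forbidden
    (tau -> infinity), each vehicle eventually travels at the minimum
    natural velocity of itself and everyone ahead of it, so platoon
    leaders are exactly the running minima scanned from the front."""
    leaders, v_min = [], float("inf")
    for i, v in enumerate(velocities):
        if v < v_min:
            leaders.append(i)
            v_min = v
    return leaders

# Natural velocities drawn from a uniform P(v) (arbitrary units)
v = [random.uniform(20, 30) for _ in range(1000)]
leaders = platoon_leaders(v)
print(f"{len(leaders)} platoons among {len(v)} vehicles")
# For i.i.d. velocities the expected platoon count is the harmonic
# number H_N ~ ln(N) + 0.577, i.e. about 7.5 for N = 1000.
```

A finite τ, as in the paper, lets fast vehicles escape platoons at a cost, which is what couples the two lanes and drives the symmetry breaking.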
Mills, Britain A; Caetano, Raul; Vaeth, Patrice A C; Reingle Gonzalez, Jennifer M
2015-11-01
Levels of drinking are unusually elevated among young adults on the U.S.-Mexico border, and this elevation can be largely explained by young border residents' unusually high frequency of bar attendance. However, this explanation complicates interpretation of high alcohol problem rates that have also been observed in this group. Because bar environments can lower the threshold for many types of problems, the extent to which elevated alcohol problems among young border residents can be attributed to drinking per se-versus this common drinking context-is not clear. Data were collected from multistage cluster samples of adult Mexican Americans on and off the U.S.-Mexico border (current drinker N = 1,351). After developing structural models of acute alcohol problems, estimates were subjected to path decompositions to disentangle the common and distinct contributions of drinking and bar attendance to problem disparities on and off the border. Additionally, models were estimated under varying degrees of adjustment to gauge the sensitivity of the results to sociodemographic, social-cognitive, and environmental sources of confounding. Consistent with previous findings for both drinking and other problem measures, acute alcohol problems were particularly elevated among young adults on the border. This elevation was entirely explained by a single common pathway involving bar attendance frequency and drinking. Bar attendance did not predict acute alcohol problems independently of drinking, and its effect was not moderated by border proximity or age. The common indirect effect and its component effects (of border youth on bar attendance, of bar attendance on drinking, and of drinking on problems) were surprisingly robust to adjustment for confounding in all parts of the model (e.g., fully adjusted indirect effect: b = 0.11, SE = 0.04, p < 0.01). 
Bar attendance and associated increases in drinking play a key, unique role in the high levels of acute alcohol problems among the border's young adult population that cannot be entirely explained by sociodemographic or social-cognitive characteristics of young border residents, by contextual effects of bars on problems, or by broader neighborhood factors. Bar attendance in particular may represent an early modifiable risk factor that can be targeted to reduce alcohol problem disparities in the region. Copyright © 2015 by the Research Society on Alcoholism.
NASA Astrophysics Data System (ADS)
Ho, Shirley; Agarwal, Nishant; Myers, Adam D.; Lyons, Richard; Disbrow, Ashley; Seo, Hee-Jong; Ross, Ashley; Hirata, Christopher; Padmanabhan, Nikhil; O'Connell, Ross; Huff, Eric; Schlegel, David; Slosar, Anže; Weinberg, David; Strauss, Michael; Ross, Nicholas P.; Schneider, Donald P.; Bahcall, Neta; Brinkmann, J.; Palanque-Delabrouille, Nathalie; Yèche, Christophe
2015-05-01
The Sloan Digital Sky Survey has surveyed 14,555 square degrees of the sky, and delivered over a trillion pixels of imaging data. We present the large-scale clustering of 1.6 million quasars between z = 0.5 and z = 2.5 that have been classified from this imaging, representing the highest density of quasars ever studied for clustering measurements. This data set spans ~11,000 square degrees and probes a volume of 80 h⁻³ Gpc³. In principle, such a large volume and medium density of tracers should facilitate high-precision cosmological constraints. We measure the angular clustering of photometrically classified quasars using an optimal quadratic estimator in four redshift slices with an accuracy of ~25% over a bin width of δℓ ~ 10-15 on scales corresponding to matter-radiation equality and larger (ℓ ~ 2-3). Observational systematics can strongly bias clustering measurements on large scales, which can mimic cosmologically relevant signals such as deviations from Gaussianity in the spectrum of primordial perturbations. We account for systematics by applying a new method recently proposed by Agarwal et al. (2014) to the clustering of photometrically classified quasars. We carefully apply our methodology to mitigate known observational systematics and further remove angular bins that are contaminated by unknown systematics. Combining quasar data with the photometric luminous red galaxy (LRG) sample of Ross et al. (2011) and Ho et al. (2012), and marginalizing over all bias and shot noise-like parameters, we obtain a constraint on local primordial non-Gaussianity of f_NL = -113 ± 154 (1σ error). We next assume that the bias of quasar and galaxy distributions can be obtained independently from quasar/galaxy-CMB lensing cross-correlation measurements (such as those in Sherwin et al. (2013)). 
This can be facilitated by spectroscopic observations of the sources, enabling the redshift distribution to be completely determined, and allowing precise estimates of the bias parameters. In this paper, if the bias and shot noise parameters are fixed to their known values (which we model by fixing them to their best-fit Gaussian values), we find that the error bar reduces to 1σ ≃ 65. We expect this error bar to reduce further by at least another factor of five if the data is free of any observational systematics. We therefore emphasize that in order to make best use of large scale structure data we need an accurate modeling of known systematics, a method to mitigate unknown systematics, and additionally independent theoretical models or observations to probe the bias of dark matter halos.
Optimal control of parametric oscillations of compressed flexible bars
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
In this paper, the problem of damping linear-system oscillations with piecewise-constant control is solved. The motion of the bar construction is reduced to the form described by Hill's differential equation using the Bubnov-Galerkin method. To calculate the switching moments of the one-sided control, the method of sequential linear programming is used. The elements of the fundamental matrix of Hill's equation are approximated by trigonometric series. Examples of the optimal control of the systems for various initial conditions and different numbers of control stages have been calculated. The corresponding phase trajectories and transient processes are presented.
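The role of the fundamental matrix of Hill's equation can be sketched numerically for the Mathieu form x″ + (a − 2q cos 2t)x = 0: integrating two independent solutions over one period gives the monodromy matrix, and |trace| < 2 is the classical stability test for parametric oscillations. The parameter values below are my own illustrations, not those of the bar problem in the paper:

```python
import math

def monodromy_trace(a, q, steps=4000):
    """Integrate x'' + (a - 2*q*cos(2t)) x = 0 over one period T = pi by RK4
    for two independent initial conditions and return the trace of the
    fundamental (monodromy) matrix; |trace| < 2 means stable motion."""
    T = math.pi
    h = T / steps

    def deriv(t, x, v):
        return v, -(a - 2 * q * math.cos(2 * t)) * x

    def integrate(x, v):
        t = 0.0
        for _ in range(steps):
            k1x, k1v = deriv(t, x, v)
            k2x, k2v = deriv(t + h/2, x + h/2 * k1x, v + h/2 * k1v)
            k3x, k3v = deriv(t + h/2, x + h/2 * k2x, v + h/2 * k2v)
            k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
            x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
            v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
            t += h
        return x, v

    x1, _ = integrate(1.0, 0.0)   # column 1 of the monodromy matrix
    _, v2 = integrate(0.0, 1.0)   # column 2
    return x1 + v2

print(monodromy_trace(3.0, 0.2))   # between resonance tongues: stable
print(monodromy_trace(1.0, 0.2))   # inside the first tongue: unstable
```

The control problem in the paper then amounts to choosing switching moments that steer such a system toward the stable regime while damping the oscillation.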
Nanoforging – Innovation in three-dimensional processing and shaping of nanoscaled structures
Rösler, Joachim
2014-01-01
Summary Background: This paper describes the shaping of freestanding objects out of metallic structures of nano- and submicron size. The technique used, called nanoforging, is very similar to the macroscopic forging process. Results: With spring-actuated tools produced by focused ion beam milling, controlled forging is demonstrated. In only three steps, a conical bar stock is transformed into a flattened, semicircularly bent bar stock. Conclusion: Compared with other forming techniques at the reduced scale, nanoforging represents a beneficial approach to forming freestanding metallic structures, due to its simplicity, and supplements other forming techniques. PMID:25161840
Vecchio, Elizabeth A; Chuo, Chung Hui; Baltos, Jo-Anne; Ford, Leigh; Scammells, Peter J; Wang, Bing H; Christopoulos, Arthur; White, Paul J; May, Lauren T
2016-10-01
We have recently described the rationally-designed adenosine receptor agonist, 4-(5-amino-4-benzoyl-3-(3-(trifluoromethyl)phenyl)thiophen-2-yl)-N-(6-(9-((2R,3R,4S,5R)-3,4-dihydroxy-5-(hydroxymethyl)tetrahydro-furan-2-yl)-9H-purin-6-ylamino)hexyl)benzamide (VCP746), a hybrid molecule consisting of an adenosine moiety linked to an adenosine A1 receptor (A1AR) allosteric modulator moiety. At the A1AR, VCP746 mediated cardioprotection in the absence of haemodynamic side effects such as bradycardia. The current study has now identified VCP746 as an important pharmacological tool for the adenosine A2B receptor (A2BAR). The binding and function of VCP746 at the A2BAR was rigorously characterised in a heterologous expression system, in addition to examination of its anti-fibrotic signalling in cardiac- and renal-derived cells. In FlpInCHO cells stably expressing the human A2BAR, VCP746 was a high affinity, high potency A2BAR agonist that stimulated Gs- and Gq-mediated signal transduction, with an apparent lack of system bias relative to prototypical A2BAR agonists. The distinct agonist profile may result from an atypical binding mode of VCP746 at the A2BAR, which was consistent with a bivalent mechanism of receptor interaction. In isolated neonatal rat cardiac fibroblasts (NCF), VCP746 stimulated potent inhibition of both TGF-β1- and angiotensin II-mediated collagen synthesis. Similar attenuation of TGF-β1-mediated collagen synthesis was observed in renal mesangial cells (RMC). The anti-fibrotic signalling mediated by VCP746 in NCF and RMC was selectively reversed in the presence of an A2BAR antagonist. Thus, we believe VCP746 represents an important tool to further investigate the role of the A2BAR in cardiac (patho)physiology. Copyright © 2016 Elsevier Inc. All rights reserved.
Molecular Analysis of Motility in Metastatic Mammary Adenocarcinoma Cells
1996-09-01
(Only fragments of the report text survive; the recoverable figure legends are:) Figure 2: Rhodamine-actin polymerizes preferentially at the tips of lamellipods in EGF-stimulated MTLn3 cells. (B) Rhodamine-actin intensity at the cell center. Data for each time point are the average and SEM of 15 different cells. Error bars show SEM.
The Effect of Information Level on Human-Agent Interaction for Route Planning
2015-12-01
(Only fragments of the report text survive; the recoverable content is:) Fig. 4, Experiment 1: regression results for time spent at the decision point (DP) predicting posttest trust group membership for the high LOI condition; decision time by pretest trust group membership; bars denote standard error (SE). Decision time (DT) at the DP was evaluated as a predictor of posttest trust group membership; linear regression indicated that DT at DP was not a significant predictor of posttest trust for the Low or the Medium LOI conditions; however, it …
Thermal Conductivities of Some Polymers and Composites
2018-02-01
Full-text snippet: the experimental results for thermal conductivity (Kt) in composites are compared to modeled results as a function of the volume fraction of glass and fabric style. The number of mobile entities in a polymer increases above Tg, so Cp will increase at Tg; for Kt to remain constant, there would have to be a comparable decrease in α. A cited study used a differential scanning calorimetry (DSC) method and has error bars as large as the claimed effect; its Kt values for carbon fiber samples are comparable (snippet truncated).
New Methods for the Computational Fabrication of Appearance
2015-06-01
Full-text snippet: a disadvantage is that it does not model phenomena such as retro-reflection and grazing-angle effects. We find that previously proposed BRDF metrics performed well. Figure 3.15 (right) shows the mean BRDF in blue and the corresponding error bars; in order to interpret the data, a parametric model is fit to slices of the BRDF (snippet truncated).
NASA Astrophysics Data System (ADS)
Llorens-Chiralt, R.; Weiss, P.; Mikonsaari, I.
2014-05-01
Material characterization is one of the key steps when conductive polymers are developed. The dispersion of carbon nanotubes (CNTs) in a polymeric matrix during melt mixing influences the final composite properties, and compounding becomes trial and error, consuming large amounts of material, time, and money to obtain competitive composites. Traditional electrical conductivity characterization relies on compression and injection molding, both of which require extra equipment and moulds to produce standard bars. This study aims to investigate the accuracy of absolute resistance data recorded during melt compounding, using an on-line setup developed by our group, and to correlate these values with off-line characterization and processing parameters (screw/barrel configuration, throughput, screw speed, temperature profile, and CNT percentage). Compounds with different percentages of multi-walled carbon nanotubes (MWCNTs) in polycarbonate have been characterized during and after extrusion. On-line resistance and off-line resistivity measurements showed parallel response and reproducibility, confirming the validity of the method. The significance of the results stems from the fact that we are able to measure resistance on-line and to change compounding parameters during production to achieve reference values, reducing production and testing costs and ensuring material quality. This method also removes errors that can arise in preparing test bars, and shows better correlation with compounding parameters.
[Constructing a database that can input record of use and product-specific information].
Kawai, Satoru; Satoh, Kenichi; Yamamoto, Hideo
2012-01-01
In Japan, patients were widely infected with the hepatitis C virus through administration of a specific fibrinogen injection. However, it has been difficult to identify the patients infected by these injections because of the lack of medical records. Maintaining detailed records is still not common practice in many medical facilities, because manual record keeping is extremely time consuming and subject to human error. For these reasons, the regulator required medical device manufacturers and pharmaceutical companies to attach a bar code called "GS1-128" effective March 28, 2008. Based on this new process, we conceived of constructing a new database whose records can be entered by bar-code scanning to ensure data integrity. Upon examining the efficacy of this new data collection process in terms of time efficiency and data accuracy, "GS1-128" proved to significantly reduce time and record-keeping mistakes. Patients became easily identifiable by a lot number and a serial number when immediate care was required, and "GS1-128" enhanced the ability to pinpoint manufacturing errors in the event that trouble or side effects are reported. These data can be shared with and utilized by the entire medical industry and will help improve products and record keeping. I believe this new process is extremely important.
VizieR Online Data Catalog: R absolute magnitudes of Kuiper Belt objects (Peixinho+, 2012)
NASA Astrophysics Data System (ADS)
Peixinho, N.; Delsanti, A.; Guilbert-Lepoutre, A.; Gafeira, R.; Lacerda, P.
2012-06-01
Compilation of the absolute magnitudes HRα, B-R colors, and spectral features used in this work. For each object, we computed the average color index from the different papers presenting data obtained simultaneously in B and R bands (e.g. contiguous observations within a same night). When an individual R apparent magnitude and date were available, we computed HRα = R - 5 log10(r·Δ), where R is the R-band magnitude and r and Δ are the helio- and geocentric distances at the time of observation in AU, respectively. When V and V-R colors were available, we derived an R and then an HRα value. We did not correct for the phase-angle (α) effect. This table also includes spectral information on the presence of water ice, methanol, methane, or confirmed featureless spectra, as available in the literature. We highlight only the cases with clear bands in the spectrum, which were reported or confirmed by some other work. The 1st column indicates the object identification number and name or provisional designation; the 2nd, the dynamical class; the 3rd, the average HRα value and 1-σ error bars; the 4th, the average B-R color and 1-σ error bars; the 5th, the most important spectral features detected; and the 6th points to the bibliographic references used for each object. (3 data files).
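The HRα computation described above reduces to a one-line formula. A minimal sketch, assuming base-10 logarithms and, as in the catalog, no phase-angle correction (function names are illustrative):

```python
import math

def absolute_magnitude_R(R, r, delta):
    """R-band absolute magnitude HR(alpha) = R - 5*log10(r * delta),
    where r and delta are the helio- and geocentric distances in AU.
    No phase-angle correction is applied (as in the catalog)."""
    return R - 5.0 * math.log10(r * delta)

def R_from_V(V, V_minus_R):
    """Derive an R magnitude from V and a V-R color, as described."""
    return V - V_minus_R

# Illustrative object: R = 22.5 observed at r = 43.2 AU, delta = 42.3 AU
H = absolute_magnitude_R(22.5, 43.2, 42.3)  # ~6.2
```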
The use of information technology to enhance patient safety and nursing efficiency.
Lee, Tso-Ying; Sun, Gi-Tseng; Kou, Li-Tseng; Yeh, Mei-Ling
2017-10-23
Issues in patient safety and nursing efficiency have long been of concern. Advancing the role of nursing informatics is seen as the best way to address this. The aim of this study was to determine if the use, outcomes and satisfaction with a nursing information system (NIS) improved patient safety and the quality of nursing care in a hospital in Taiwan. This study adopts a quasi-experimental design. Nurses and patients were surveyed by questionnaire and data retrieval before and after the implementation of NIS in terms of blood drawing, nursing process, drug administration, bar code scanning, shift handover, and information and communication integration. Physiologic values were easier to read and interpret; it took less time to complete electronic records (3.7 vs. 9.1 min); the number of errors in drug administration was reduced (0.08% vs. 0.39%); bar codes reduced the number of errors in blood drawing (0 vs. 10) and transportation of specimens (0 vs. 0.42%); satisfaction with electronic shift handover increased significantly; there was a reduction in nursing turnover (14.9% vs. 16%); patient satisfaction increased significantly (3.46 vs. 3.34). Introduction of NIS improved patient safety and nursing efficiency and increased nurse and patient satisfaction. Medical organizations must continually improve the nursing information system if they are to provide patients with high quality service in a competitive environment.
Marine Mammal Demographics Off the Outer Washington Coast and Near Hawaii
2012-04-01
and June 2009 recorded at the inshore site. Black bars represent the fraction of days blue whales were detected in a month, and blue...point FFT and 98% overlap.) Figure 12: Occurrence of fin whale calls between June 2008 and June 2009 recorded at the inshore site. Black bars...Figure 14: Occurrence of humpback whale sounds (song and non‐song) between June 2008 and June 2009 recorded at the inshore site. Black bars
Debiasing affective forecasting errors with targeted, but not representative, experience narratives.
Shaffer, Victoria A; Focella, Elizabeth S; Scherer, Laura D; Zikmund-Fisher, Brian J
2016-10-01
To determine whether representative experience narratives (describing a range of possible experiences) or targeted experience narratives (targeting the direction of forecasting bias) can reduce affective forecasting errors, or errors in predictions of experiences. In Study 1, participants (N=366) were surveyed about their experiences with 10 common medical events. Those who had never experienced the event provided ratings of predicted discomfort and those who had experienced the event provided ratings of actual discomfort. Participants making predictions were randomly assigned to either the representative experience narrative condition or the control condition in which they made predictions without reading narratives. In Study 2, participants (N=196) were again surveyed about their experiences with these 10 medical events, but participants making predictions were randomly assigned to either the targeted experience narrative condition or the control condition. Affective forecasting errors were observed in both studies. These forecasting errors were reduced with the use of targeted experience narratives (Study 2) but not representative experience narratives (Study 1). Targeted, but not representative, narratives improved the accuracy of predicted discomfort. Public collections of patient experiences should favor stories that target affective forecasting biases over stories representing the range of possible experiences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Lesion correlates of impairments in actual tool use following unilateral brain damage.
Salazar-López, E; Schwaiger, B J; Hermsdörfer, J
2016-04-01
To understand how the brain controls actions involving tools, tests have been developed employing different paradigms such as pantomime, imitation and real tool use. The relevant areas have been localized in the premotor cortex, the middle temporal gyrus and the superior and inferior parietal lobe. This study employs Voxel Lesion Symptom Mapping to relate the functional impairment in actual tool use with extent and localization of the structural damage in the left (LBD, N=31) and right (RBD, N=19) hemisphere in chronic stroke patients. A series of 12 tools was presented to participants in a carousel. In addition, a non-tool condition tested the prescribed manipulation of a bar. The execution was scored according to an apraxic error scale based on the dimensions grasp, movement, direction and space. Results in the LBD group show that the ventro-dorsal stream constitutes the core of the defective network responsible for impaired tool use; it is composed of the inferior parietal lobe, the supramarginal and angular gyrus and the dorsal premotor cortex. In addition, involvement of regions in the temporal lobe, the rolandic operculum, the ventral premotor cortex and the middle occipital gyrus provides evidence of the role of the ventral stream in this task. Brain areas related to the use of the bar largely overlapped with this network. For patients with RBD, data were less conclusive; however, a trend for the involvement of the temporal lobe in apraxic errors was evident. Skilled bar manipulation depended on the same temporal area in these patients. Therefore, actual tool use depends on a well-described left fronto-parietal-temporal network. RBD affects actual tool use; however, the underlying neural processes may be more widely distributed and more heterogeneous. Goal-directed manipulation of non-tool objects seems to involve very similar brain areas as tool use, suggesting that both types of manipulation share identical processes and neural representations. Copyright © 2016 Elsevier Ltd. All rights reserved.
A quality assessment of 3D video analysis for full scale rockfall experiments
NASA Astrophysics Data System (ADS)
Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.
2012-04-01
The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques capturing rock fall velocity in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated. They differ mainly in the way the scaling factors between the movie frames and reality are determined. For this purpose, scale bars and targets with known dimensions were distributed in advance along the slope. The single scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar, then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster was scaled using the four targets (identified using the frontal video) nearest to the trajectory to be analyzed, taking the average of the scaling factors as the final scaling factor. (iii) The image raster was scaled using the four nearest targets, with the scaling factor for one trajectory calculated by balancing the mean scaling factors associated with the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) Same as the previous method, but with scaling factors varying along the trajectory. It was shown that a direct measure of the scaling target and nearest impact zone is the most accurate. If a constant plane is assumed, the analysis does not account for the lateral deviations of the rock boulder from the fall line, consequently adding error. Thus a combination of scaling methods (i) and (iv) is considered to give the best results.
For best results regarding the lateral rough positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by comparing the vertical trajectory component over time with the theoretical polynomial trend according to gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with error small enough to be acceptable for trajectory analysis. However, when calculating instantaneous velocities, an amplification of this error becomes unacceptable. A regression analysis of the data is helpful to optimize the trajectory and velocity estimates.
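The paper does not give its regression procedure, but the contrast it draws can be sketched: frame-to-frame finite differences amplify tracking noise, while a least-squares slope over a window of frames gives a smoothed velocity estimate. Function names and the sampling rate below are illustrative assumptions:

```python
def lsq_slope(t, z):
    """Least-squares slope of z against t over a window of frames:
    a smoothed velocity estimate."""
    n = len(t)
    tm = sum(t) / n
    zm = sum(z) / n
    num = sum((ti - tm) * (zi - zm) for ti, zi in zip(t, z))
    den = sum((ti - tm) ** 2 for ti in t)
    return num / den

def finite_diff(t, z):
    """Naive frame-to-frame velocities; positional noise is amplified
    by the small time step between frames."""
    return [(z[i + 1] - z[i]) / (t[i + 1] - t[i]) for i in range(len(z) - 1)]

# Free fall sampled at an assumed 25 fps: z(t) = z0 - g/2 * t^2
t = [i / 25.0 for i in range(10)]
z = [100.0 - 4.905 * ti * ti for ti in t]
v_mid = lsq_slope(t[3:8], z[3:8])  # velocity near the window's mean time, t = 0.2 s
```

For noise-free quadratic data and a symmetric window, the least-squares slope equals the true derivative at the window's mean time; with real tracking noise it trades a little time resolution for much lower variance.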
Change in indoor particle levels after a smoking ban in Minnesota bars and restaurants.
Bohac, David L; Hewett, Martha J; Kapphahn, Kristopher I; Grimsrud, David T; Apte, Michael G; Gundel, Lara A
2010-12-01
Smoking bans in bars and restaurants have been shown to improve worker health and reduce hospital admissions for acute myocardial infarction. Several studies have also reported improved indoor air quality, although these studies generally used single visits before and after a ban for a convenience sample of venues. The primary objective of this study was to provide detailed time-of-day and day-of-week secondhand smoke-exposure data for representative bars and restaurants in Minnesota. This study improved on previous approaches by using a statistically representative sample of three venue types (drinking places, limited-service restaurants, and full-service restaurants), conducting repeat visits to the same venue prior to the ban, and matching the day of week and time of day for the before- and after-ban monitoring. The repeat visits included laser photometer fine particulate (PM₂.₅) concentration measurements, lit cigarette counts, and customer counts for 19 drinking places, eight limited-service restaurants, and 35 full-service restaurants in the Minneapolis/St. Paul metropolitan area. The more rigorous design of this study provides improved confidence in the findings and reduces the likelihood of systematic bias. The median reduction in PM₂.₅ was greater than 95% for all three venue types. Examination of data from repeated visits shows that making only one pre-ban visit to each venue would greatly increase the range of computed percentage reductions and lower the statistical power of pre-post tests. Variations in PM₂.₅ concentrations were found based on time of day and day of week when monitoring occurred. These comprehensive measurements confirm that smoking bans provide significant reductions in SHS constituents, protecting customers and workers from PM₂.₅ in bars and restaurants. Copyright © 2010 American Journal of Preventive Medicine. All rights reserved.
Török, T J; Tauxe, R V; Wise, R P; Livengood, J R; Sokolow, R; Mauvais, S; Birkness, K A; Skeels, M R; Horan, J M; Foster, L R
1997-08-06
This large outbreak of foodborne disease highlights the challenge of investigating outbreaks caused by intentional contamination and demonstrates the vulnerability of self-service foods to intentional contamination. To investigate a large community outbreak of Salmonella Typhimurium infections. Epidemiologic investigation of patients with Salmonella gastroenteritis and possible exposures in The Dalles, Oregon. Cohort and case-control investigations were conducted among groups of restaurant patrons and employees to identify exposures associated with illness. A community in Oregon. Outbreak period was September and October 1984. A total of 751 persons with Salmonella gastroenteritis associated with eating or working at area restaurants. Most patients were identified through passive surveillance; active surveillance was conducted for selected groups. A case was defined either by clinical criteria or by a stool culture yielding S Typhimurium. The outbreak occurred in 2 waves, September 9 through 18 and September 19 through October 10. Most cases were associated with 10 restaurants, and epidemiologic studies of customers at 4 restaurants and of employees at all 10 restaurants implicated eating from salad bars as the major risk factor for infection. Eight (80%) of 10 affected restaurants compared with only 3 (11%) of the 28 other restaurants in The Dalles operated salad bars (relative risk, 7.5; 95% confidence interval, 2.4-22.7; P<.001). The implicated food items on the salad bars differed from one restaurant to another. The investigation did not identify any water supply, food item, supplier, or distributor common to all affected restaurants, nor were employees exposed to any single common source. In some instances, infected employees may have contributed to the spread of illness by inadvertently contaminating foods. However, no evidence was found linking ill employees to initiation of the outbreak. 
Errors in food rotation and inadequate refrigeration on ice-chilled salad bars may have facilitated growth of the S Typhimurium but could not have caused the outbreak. A subsequent criminal investigation revealed that members of a religious commune had deliberately contaminated the salad bars. An S Typhimurium strain found in a laboratory at the commune was indistinguishable from the outbreak strain. This outbreak of salmonellosis was caused by intentional contamination of restaurant salad bars by members of a religious commune.
NASA Technical Reports Server (NTRS)
Ceamanos, Xavier; Doute, S.; Fernando, J.; Pinet, P.; Lyapustin, A.
2013-01-01
This article addresses the correction for aerosol effects in near-simultaneous multiangle observations acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter. In the targeted mode, CRISM senses the surface of Mars using 11 viewing angles, which allow it to provide unique information on the scattering properties of surface materials. In order to retrieve these data, however, appropriate strategies must be used to compensate the signal sensed by CRISM for aerosol contribution. This correction is particularly challenging as the photometric curve of these suspended particles is often correlated with the also anisotropic photometric curve of materials at the surface. This article puts forward an innovative radiative-transfer-based method named Multi-angle Approach for Retrieval of Surface Reflectance from CRISM Observations (MARS-ReCO). The proposed method retrieves photometric curves of surface materials in reflectance units after removing aerosol contribution. MARS-ReCO represents a substantial improvement over previous techniques as it takes into consideration the anisotropy of the surface, thus providing more realistic surface products. Furthermore, MARS-ReCO is fast and provides error bars on the retrieved surface reflectance. The validity and accuracy of MARS-ReCO are explored in a sensitivity analysis based on realistic synthetic data. According to experiments, MARS-ReCO provides accurate results (up to 10% reflectance error) under favorable acquisition conditions. In the companion article, photometric properties of Martian materials are retrieved using MARS-ReCO and validated using in situ measurements acquired during the Mars Exploration Rovers mission.
Wetmore, Kelly M.; Price, Morgan N.; Waters, Robert J.; Lamson, Jacob S.; He, Jennifer; Hoover, Cindi A.; Blow, Matthew J.; Bristow, James; Butland, Gareth
2015-01-01
Transposon mutagenesis with next-generation sequencing (TnSeq) is a powerful approach to annotate gene function in bacteria, but existing protocols for TnSeq require laborious preparation of every sample before sequencing. Thus, the existing protocols are not amenable to the throughput necessary to identify phenotypes and functions for the majority of genes in diverse bacteria. Here, we present a method, random bar code transposon-site sequencing (RB-TnSeq), which increases the throughput of mutant fitness profiling by incorporating random DNA bar codes into Tn5 and mariner transposons and by using bar code sequencing (BarSeq) to assay mutant fitness. RB-TnSeq can be used with any transposon, and TnSeq is performed once per organism instead of once per sample. Each BarSeq assay requires only a simple PCR, and 48 to 96 samples can be sequenced on one lane of an Illumina HiSeq system. We demonstrate the reproducibility and biological significance of RB-TnSeq with Escherichia coli, Phaeobacter inhibens, Pseudomonas stutzeri, Shewanella amazonensis, and Shewanella oneidensis. To demonstrate the increased throughput of RB-TnSeq, we performed 387 successful genome-wide mutant fitness assays representing 130 different bacterium-carbon source combinations and identified 5,196 genes with significant phenotypes across the five bacteria. In P. inhibens, we used our mutant fitness data to identify genes important for the utilization of diverse carbon substrates, including a putative d-mannose isomerase that is required for mannitol catabolism. RB-TnSeq will enable the cost-effective functional annotation of diverse bacteria using mutant fitness profiling. PMID:25968644
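The BarSeq fitness readout boils down to comparing bar-code counts before and after growth in a condition. A minimal sketch, assuming per-strain fitness is a log2 count ratio with a pseudocount; the published pipeline adds per-gene and per-sample normalization not shown here, and the function name is hypothetical:

```python
import math

def strain_fitness(count_after, count_before, pseudo=0.5):
    """Per-strain fitness as a log2 ratio of bar-code counts after vs.
    before growth; a small pseudocount stabilizes low-count strains.
    (Sketch only: the real pipeline also normalizes per gene and
    per sample.)"""
    return math.log2((count_after + pseudo) / (count_before + pseudo))

# A strain whose bar code doubles in relative abundance has fitness ~ +1
f_up = strain_fitness(200, 100)
f_flat = strain_fitness(100, 100)  # unchanged abundance -> fitness 0
```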
Kinematic signature of a rotating bar near a resonance
NASA Technical Reports Server (NTRS)
Weinberg, Martin D.
1994-01-01
Recent work based on H I, star count and emission data suggests that the Milky Way has rotating bar-like features. In this paper, I show that such features cause distinctive stellar kinematic signatures near Outer Lindblad Resonance (OLR) and Inner Lindblad Resonance (ILR). The effect of these resonances may be observable far from the peak density of the pattern and relatively near the solar position. The details of the kinematic signatures depend on the evolutionary history of the 'bar' and therefore velocity data, both systematic velocities and velocity dispersions, may be used to probe the evolutionary history as well as the present state of the Galaxy. Kinematic models for a variety of sample scenarios are presented. Models with evolving pattern speeds show significantly stronger dispersion signatures than those with static pattern speeds, suggesting that useful observational constraints are possible. The models are applied to the proposed rotating spheroid and bar models; we find (1) none of the models chosen to represent the proposed large-scale rotating spheroid are consistent with the stellar kinematics and (2) a Galactic bar with semimajor axis of 3 kpc will cause a large increase in velocity dispersion in the vicinity of OLR (approximately 5 kpc) with little change in the net radial motion, and such a signature is suggested by K-giant velocity data. Potential future observations and analyses are discussed.
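For orientation, the resonance locations can be sketched for the simplest textbook case of a flat rotation curve, where Ω = v0/R and the epicyclic frequency is κ = √2·Ω; the OLR satisfies Ωp = Ω + κ/2 and the ILR Ωp = Ω − κ/2. The numbers below are illustrative, not the paper's Galactic model:

```python
import math

def resonance_radii_flat(v0, omega_p):
    """Corotation (CR), ILR, and OLR radii for a flat rotation curve
    V(R) = v0 and bar pattern speed omega_p, using kappa = sqrt(2)*Omega.
    Units: v0 in km/s, omega_p in km/s/kpc -> radii in kpc."""
    cr = v0 / omega_p                       # Omega(R_CR) = omega_p
    olr = cr * (1.0 + 1.0 / math.sqrt(2.0))  # Omega + kappa/2 = omega_p
    ilr = cr * (1.0 - 1.0 / math.sqrt(2.0))  # Omega - kappa/2 = omega_p
    return cr, ilr, olr

# Illustrative values: v0 = 220 km/s, pattern speed 55 km/s/kpc
cr, ilr, olr = resonance_radii_flat(220.0, 55.0)  # CR at 4 kpc, OLR ~ 6.8 kpc
```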
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Medina, L. A.; Pichardo, B.; Moreno, E.
We present a dynamical study of the effect of the bar and spiral arms on the simulated orbits of open clusters in the Galaxy. Specifically, this work is devoted to the puzzling presence of high-altitude open clusters in the Galaxy. For this purpose we employ a very detailed observationally motivated potential model for the Milky Way and a careful set of initial conditions representing the newly born open clusters in the thin disk. We find that the spiral arms are able to raise an important percentage of open clusters (about one-sixth of the total employed in our simulations, depending on the structural parameters of the arms) above the Galactic plane to heights beyond 200 pc, producing a bulge-shaped structure toward the center of the Galaxy. Contrary to what was expected, the spiral arms produce a much greater vertical effect on the clusters than the bar, both in quantity and height; this is due to the sharper concentration of the mass on the spiral arms, when compared to the bar. When a bar and spiral arms are included, spiral arms are still capable of raising an important percentage of the simulated open clusters through chaotic diffusion (as tested from classification analysis of the resultant high-z orbits), but the bar seems to restrain them, diminishing the elevation above the plane by a factor of about two.
Lopes, Hugo; Redig, Pat; Glaser, Amy; Armien, Anibal; Wünschmann, Arno
2007-03-01
West Nile virus (WNV) infection manifests itself clinically and pathologically differently in various species of birds. The clinicopathologic findings and WNV antigen tissue distribution of six great gray owls (Strix nebulosa) and two barred owls (Strix varia) with WNV infection are described in this report. Great gray owls usually live in northern Canada, whereas the phylogenetically related barred owls are native to the midwestern and eastern United States and southern Canada. Naturally acquired WNV infection caused death essentially without previous signs of disease in the six great gray owls during a mortality event. Lesions of WNV infection were dominated by hepatic and splenic necrosis, with evidence of disseminated intravascular coagulation, in the great gray owls. WNV antigen was widely distributed in the organs of the great gray owls and appeared to target endothelial cells, macrophages, and hepatocytes. The barred owls represented two sporadic cases. They had neurologic disease with mental dullness that led to euthanasia. These birds had mild to moderate lymphoplasmacytic encephalitis with glial nodules and lymphoplasmacytic pectenitis. WNV antigen was sparse in the barred owls and only present in a few brain neurons and renal tubular epithelial cells. The cause of the different manifestations of WNV disease in these fairly closely related owl species is uncertain.
Action planning and position sense in children with Developmental Coordination Disorder.
Adams, Imke L J; Ferguson, Gillian D; Lust, Jessica M; Steenbergen, Bert; Smits-Engelsman, Bouwien C M
2016-04-01
The present study examined action planning and position sense in children with Developmental Coordination Disorder (DCD). Participants performed two action planning tasks, the sword task and the bar grasping task, and an active elbow matching task to examine position sense. Thirty children were included in the DCD group (aged 6-10 years) and age-matched to 90 controls. The DCD group had a MABC-2 total score ⩽5th percentile, the control group a total score ⩾25th percentile. Results from the sword task showed that children with DCD planned less for end-state comfort. On the bar grasping task, no significant differences in planning for end-state comfort between the DCD and control group were found. There was also no significant difference in the position sense error between the groups. The present study shows that children with DCD plan less for end-state comfort, but that this result is task-dependent and becomes apparent when more precision is needed at the end of the task. In that respect, the sword task appeared to be a more sensitive task to assess action planning abilities than the bar grasping task. The action planning deficit in children with DCD cannot be explained by an impaired position sense during active movements. Copyright © 2016 Elsevier B.V. All rights reserved.
PARTIAL ENTRAINMENT OF GRAVEL BARS DURING FLOODS. (R825284)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA.
Uy, Raymonde Charles Y.; Kury, Fabricio P.; Fontelo, Paul A.
2015-01-01
The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometrics and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating optimistic growth in the adoption of these patient safety solutions. PMID:26958264
Creating a Satellite-Based Record of Tropospheric Ozone
NASA Technical Reports Server (NTRS)
Oetjen, Hilke; Payne, Vivienne H.; Kulawik, Susan S.; Eldering, Annmarie; Worden, John; Edwards, David P.; Francis, Gene L.; Worden, Helen M.
2013-01-01
The TES retrieval algorithm has been applied to IASI radiances. We compare the retrieved ozone profiles with ozone sonde profiles for mid-latitudes for the year 2008. We find a positive bias in the IASI ozone profiles in the UTLS region of up to 22%. The spatial coverage of the IASI instrument allows sampling of effectively the same air mass with several IASI scenes simultaneously. Comparisons of the root-mean-square of an ensemble of IASI profiles to theoretical errors indicate that the measurement noise and the interference of temperature and water vapour on the retrieval together mostly explain the empirically derived random errors. The total degrees of freedom for signal of the retrieval for ozone are 3.1 ± 0.2 and the tropospheric degrees of freedom are 1.0 ± 0.2 for the described cases. IASI ozone profiles agree within the error bars with coincident ozone profiles derived from a TES stare sequence for the ozone sonde station at Bratt's Lake (50.2 deg N, 104.7 deg W).
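The quoted degrees of freedom for signal follow the standard optimal-estimation definition: the trace of the averaging kernel matrix A. A toy sketch, with an invented 4-level kernel for illustration:

```python
def degrees_of_freedom(avg_kernel):
    """Degrees of freedom for signal of a retrieval: the trace of the
    averaging kernel matrix A (Rodgers-style optimal estimation)."""
    return sum(avg_kernel[i][i] for i in range(len(avg_kernel)))

# Invented 4-level averaging kernel (rows = retrieved levels)
A = [[0.9, 0.05, 0.0, 0.0],
     [0.05, 0.8, 0.1, 0.0],
     [0.0, 0.1, 0.7, 0.1],
     [0.0, 0.0, 0.1, 0.6]]
dfs = degrees_of_freedom(A)  # 3.0 for this toy kernel
```

Restricting the sum to the diagonal entries of the tropospheric levels gives the partial (e.g. tropospheric) degrees of freedom reported in such comparisons.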
Characteristics of manipulator for industrial robot with three rotational pairs having parallel axes
NASA Astrophysics Data System (ADS)
Poteyev, M. I.
1986-01-01
The dynamics of a manipulator with three rotational kinematic pairs having parallel axes are analyzed for application in an industrial robot. The system of Lagrange equations of the second kind, describing the motion of such a mechanism in terms of kinetic energy in generalized coordinates, is reduced to equations of motion in terms of Newton's laws. These are useful not only for determining the moments of force couples that will produce a prescribed motion or, conversely, determining the motion that given force couples will produce, but also for solving optimization problems under constraints in both cases and for estimating dynamic errors. As a specific example, a manipulator with all three axes of rotation vertical is considered. The performance of this manipulator, namely the parameters of its motion as functions of time, is compared with that of a manipulator having one rotational and two translational kinematic pairs. Computer-aided simulation of their motion on the basis of ideal models, with all three links represented by identical homogeneous bars, has yielded velocity-time diagrams which indicate that the manipulator with three rotational pairs is 4.5 times faster.
Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding
NASA Astrophysics Data System (ADS)
Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.
2018-04-01
In this paper, a finite element (FE) joint modelling technique for prediction of the dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. An I cross-section dissimilar flat plate with two series of aluminium alloy, AA7075 and AA6061, joined by TIG welding is used. In order to find the most optimum set for the TIG-welded dissimilar plate, the finite element model was built with three types of joint modelling: bar element (CBAR), beam element, and spot weld element connector (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. Modal properties of the FE model with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the weld model for the TIG joints due to its accurate prediction of mode shapes and because it contains an updating parameter for weld modelling, unlike the other weld models. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was done to select the most sensitive updating parameters. After model updating, the average percentage error of the natural frequencies for the CWELD model is improved significantly.
Adaptive resolution simulation of an atomistic protein in MARTINI water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavadlav, Julija; Melo, Manuel Nuno; Marrink, Siewert J., E-mail: s.j.marrink@rug.nl
2014-02-07
We present an adaptive resolution simulation of protein G in multiscale water. We couple atomistic water around the protein with mesoscopic water, where four water molecules are represented with one coarse-grained bead, farther away. We circumvent the difficulties that arise from coupling to the coarse-grained model via a 4-to-1 molecule coarse-grain mapping by using bundled water models, i.e., we restrict the relative movement of water molecules that are mapped to the same coarse-grained bead employing harmonic springs. The water molecules change their resolution from four molecules to one coarse-grained particle and vice versa adaptively on-the-fly. Having performed 15 ns long molecular dynamics simulations, we observe within our error bars no differences between structural (e.g., root-mean-squared deviation and fluctuations of backbone atoms, radius of gyration, the stability of native contacts and secondary structure, and the solvent accessible surface area) and dynamical properties of the protein in the adaptive resolution approach compared to the fully atomistically solvated model. Our multiscale model is compatible with the widely used MARTINI force field and will therefore significantly enhance the scope of biomolecular simulations.
Modeling of nonequilibrium CO Fourth-Positive and CN Violet emission in CO2-N2 gases
NASA Astrophysics Data System (ADS)
Johnston, C. O.; Brandis, A. M.
2014-12-01
This work develops a chemical kinetic rate model for simulating nonequilibrium radiation from CO2-N2 gases, representative of Mars or Venus entry shock layers. Using recent EAST shock tube measurements of nonequilibrium CO 4th Positive and CN Violet emission at pressures and velocities ranging from 0.10 to 1.0 Torr and 6 to 8 km/s, the rate model is developed through an optimization procedure that minimizes the disagreement between the measured and simulated nonequilibrium radiance profiles. Only the dissociation rates of CO2, CO, and NO, along with the CN + O and CO + N rates, were treated as unknown in this optimization procedure, as the nonequilibrium radiance was found to be most sensitive to them. The other rates were set to recent values from the literature. An increase of over a factor of 5 in the CO dissociation rate relative to the previous widely used value was found to provide the best agreement with the measurements, while the CO2 rate was not changed. The developed model is found to capture the measured nonequilibrium radiance of CO 4th Positive and CN Violet within error bars of ±30%.
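The fitting idea described above can be illustrated with a toy one-parameter scan: scale a baseline rate, simulate the radiance, and keep the multiplier that minimizes the squared mismatch. The forward model below is a hypothetical stand-in, not the paper's coupled kinetics-radiation solver:

```python
import math

# Hypothetical stand-in forward model: a faster dissociation rate gives a
# faster decay of the nonequilibrium radiance overshoot.
def simulate_radiance(rate_factor, times):
    return [math.exp(-rate_factor * t) for t in times]

times = [0.0, 0.5, 1.0, 1.5, 2.0]
measured = simulate_radiance(5.0, times)  # synthetic "measurement"

# Scan multipliers on the baseline rate and keep the one minimizing the
# squared mismatch between simulated and measured radiance profiles.
best_f, best_cost = None, float("inf")
for i in range(1, 101):
    f = i * 0.1  # scan factors 0.1 .. 10.0
    cost = sum((s - m) ** 2
               for s, m in zip(simulate_radiance(f, times), measured))
    if cost < best_cost:
        best_f, best_cost = f, cost
```

A real implementation would replace the grid scan with a proper least-squares optimizer and fit all five sensitive rates simultaneously.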
METALLICITY GRADIENTS THROUGH DISK INSTABILITY: A SIMPLE MODEL FOR THE MILKY WAY'S BOXY BULGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Valpuesta, Inma; Gerhard, Ortwin, E-mail: imv@mpe.mpg.de, E-mail: gerhard@mpe.mpg.de
2013-03-20
Observations show a clear vertical metallicity gradient in the Galactic bulge, which is often taken as a signature of dissipative processes in the formation of a classical bulge. Various evidence shows, however, that the Milky Way is a barred galaxy with a boxy bulge representing the inner three-dimensional part of the bar. Here we show with a secular evolution N-body model that a boxy bulge formed through bar and buckling instabilities can show vertical metallicity gradients similar to the observed gradient if the initial axisymmetric disk had a comparable radial metallicity gradient. In this framework, the range of metallicities in bulge fields constrains the chemical structure of the Galactic disk at early times before bar formation. Our secular evolution model was previously shown to reproduce inner Galaxy star counts and we show here that it also has cylindrical rotation. We use it to predict a full mean metallicity map across the Galactic bulge from a simple metallicity model for the initial disk. This map shows a general outward gradient on the sky as well as longitudinal perspective asymmetries. We also briefly comment on interpreting metallicity gradient observations in external boxy bulges.
Mechanical Behavior of a Low-Cost Ti-6Al-4V Alloy
NASA Astrophysics Data System (ADS)
Casem, D. T.; Weerasooriya, T.; Walter, T. R.
2018-01-01
Mechanical compression tests were performed on an economical Ti-6Al-4V alloy over a range of strain-rates and temperatures. Low rate experiments (0.001-0.1/s) were performed with a servo-hydraulic load frame and high rate experiments (1000-80,000/s) were performed with the Kolsky bar (Split Hopkinson pressure bar). Emphasis is placed on the large strain, high-rate, and high temperature behavior of the material in an effort to develop a predictive capability for adiabatic shear bands. Quasi-isothermal experiments were performed with the Kolsky bar to determine the large strain response at elevated rates, and bars with small diameters (1.59 mm and 794 µm, instrumented optically) were used to study the response at the higher strain-rates. Experiments were also conducted at temperatures ranging from 81 to 673 K. Two constitutive models are used to represent the data. The first is the Zerilli-Armstrong recovery strain model and the second is a modified Johnson-Cook model which uses the recovery strain term from the Zerilli-Armstrong model. In both cases, the recovery strain feature is critical for capturing the instability that precedes localization.
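For reference, the standard Johnson-Cook flow stress that the modified model builds on can be sketched as follows. The parameter values are illustrative placeholders, not the calibrated constants reported for this alloy, and the recovery strain term from the Zerilli-Armstrong model is omitted:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A=1000.0, B=700.0, n=0.5, C=0.014, m=1.0,
                        eps0=1.0, T_ref=298.0, T_melt=1878.0):
    """Standard Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_ref)/(T_melt - T_ref).
    Parameter values here are illustrative placeholders."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return (A + B * strain ** n) \
        * (1.0 + C * math.log(strain_rate / eps0)) \
        * (1.0 - T_star ** m)
```

The multiplicative structure separates strain hardening, rate sensitivity, and thermal softening; the recovery strain modification discussed in the abstract alters the first factor to capture the instability preceding shear localization.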
US College Students’ Exposure to Tobacco Promotions: Prevalence and Association With Tobacco Use
Rigotti, Nancy A.; Moran, Susan E.; Wechsler, Henry
2005-01-01
Objectives. We assessed young adults’ exposure to the tobacco industry marketing strategy of sponsoring social events at bars, nightclubs, and college campuses. Methods. We analyzed data from the 2001 Harvard College Alcohol Study, a random sample of 10904 students enrolled in 119 nationally representative 4-year colleges and universities. Results. During the 2000–2001 school year, 8.5% of respondents attended a bar, nightclub, or campus social event where free cigarettes were distributed. Events were reported by students attending 118 of the 119 schools (99.2%). Attendance was associated with a higher student smoking prevalence after we adjusted for demographic factors, alcohol use, and recent bar/nightclub attendance. This association remained for students who did not smoke regularly before 19 years of age but not for students who smoked regularly by 19 years of age. Conclusions. Attendance at a tobacco industry–sponsored event at a bar, nightclub, or campus party was associated with a higher smoking prevalence among college students. Promotional events may encourage the initiation or the progression of tobacco use among college students who are not smoking regularly when they enter college. PMID:15623874
The Effect of Information Level on Human-Agent Interaction for Route Planning
2015-12-01
χ2 (4, 60) = 11.41, p = 0.022, and Cramer’s V = 0.308, indicating there was no effect of experiment on posttest trust. Pretest trust was not a...decision time by pretest trust group membership. Bars denote standard error (SE). DT at DP was evaluated to see if it predicted posttest trust...0.007, Cramer’s V = 0.344, indicating there was no effect of experiment on posttest trust. Pretest trust was not a significant prediction of total DT
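The reported effect size can be reproduced from the chi-square statistic. The 3 × 3 table shape below is an assumption, consistent with df = 4 and with the reported V:

```python
import math

def cramers_v(chi2, n, r, c):
    """Cramér's V effect size for an r x c contingency table:
    V = sqrt(chi2 / (n * (min(r, c) - 1)))."""
    return math.sqrt(chi2 / (n * (min(r, c) - 1)))

# chi2 = 11.41 with N = 60; a 3 x 3 table is assumed, consistent with
# df = (r - 1)(c - 1) = 4 and min(r, c) - 1 = 2.
v = cramers_v(11.41, 60, 3, 3)  # ~0.308, matching the reported value
```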
Elimination of Sensor Artifacts from Infrared Data.
1984-12-11
channel to compensate detector responsivity nonuniformity. Before inspecting the bar target measurements, it was expected that the preceding sequence of...sample errors and by applying separate gain and offset constants to each channel for nonuniformity compensation. ...FIG. 4 - Postamplifier output waveform for LWIR channel 3, for data frame shown in
Correction of Thermal Gradient Errors in Stem Thermocouple Hygrometers
Michel, Burlyn E.
1979-01-01
Stem thermocouple hygrometers were subjected to transient and stable thermal gradients while in contact with reference solutions of NaCl. Both dew point and psychrometric voltages were directly related to zero offset voltages, the latter reflecting the size of the thermal gradient. Although slopes were affected by absolute temperature, they were not affected by water potential. One hygrometer required a correction of 1.75 bars water potential per microvolt of zero offset, a value that was constant from 20 to 30 C. PMID:16660685
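The stated correction (1.75 bars of water potential per microvolt of zero offset) amounts to a simple linear adjustment. The sign convention below is an assumption for illustration; the actual direction depends on the gradient polarity:

```python
def correct_water_potential(measured_bars, zero_offset_uV,
                            slope_bars_per_uV=1.75):
    """Linear zero-offset correction for a stem thermocouple hygrometer:
    1.75 bars of water potential per microvolt of zero offset, as reported
    for one instrument. Sign convention assumed for illustration."""
    return measured_bars - slope_bars_per_uV * zero_offset_uV
```

For example, a 2 µV zero offset shifts a reading of -10.0 bars to -13.5 bars under this convention.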
Bepko, Robert J; Moore, John R; Coleman, John R
2009-01-01
This article reports an intervention to improve the quality and safety of hospital patient care by introducing the use of pharmacy robotics into the medication distribution process. Medication safety is vitally important. The integration of pharmacy robotics with computerized practitioner order entry and bedside medication bar coding produces a significant reduction in medication errors. The creation of a safe medication process, from initial ordering to bedside administration, provides enormous benefits to patients, to health care providers, and to the organization as well.
Watts Bar Nuclear Plant Title V Applicability
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Volume Phase Masks in Photo-Thermo-Refractive Glass
2014-10-06
development when forming the nanocrystals. Fig. 1.1 shows the refractive index change curves for some common glass melts when exposed to a beam at 325 nm...integral curve to the curve for the ideal phase mask. If there is a deviation in the experimental curve from the ideal curve, whether the overlap...redevelopments of the sample. Note that the third point on the spherical curve and the third and fourth points on the coma y curve have larger error bars than
Coupling constant for N*(1535)Nρ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie Jujun; Graduate University of Chinese Academy of Sciences, Beijing 100049; Wilkin, Colin
2008-05-15
The value of the N*(1535)Nρ coupling constant g_{N*Nρ} derived from the N*(1535) → Nρ → Nππ decay is compared with that deduced from the radiative decay N*(1535) → Nγ using the vector-meson-dominance model. On the basis of an effective Lagrangian approach, we show that the values of g_{N*Nρ} extracted from the available experimental data on the two decays are consistent, though the error bars are rather large.
Khim, Dongyoon; Ryu, Gi-Seong; Park, Won-Tae; Kim, Hyunchul; Lee, Myungwon; Noh, Yong-Young
2016-04-13
A uniform ultrathin polymer film is deposited over a large area with molecular-level precision by the simple wire-wound bar-coating method. The bar-coated ultrathin films not only exhibit high transparency of up to 90% in the visible wavelength range but also high charge carrier mobility with a high degree of percolation through the uniformly covered polymer nanofibrils. They are capable of realizing highly sensitive multigas sensors and represent the first successful report of ethylene detection using a sensor based on organic field-effect transistors. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Two-dimensional multi-component photometric decomposition of CALIFA galaxies
NASA Astrophysics Data System (ADS)
Méndez-Abreu, J.; Ruiz-Lara, T.; Sánchez-Menguiano, L.; de Lorenzo-Cáceres, A.; Costantin, L.; Catalán-Torrecilla, C.; Florido, E.; Aguerri, J. A. L.; Bland-Hawthorn, J.; Corsini, E. M.; Dettmar, R. J.; Galbany, L.; García-Benito, R.; Marino, R. A.; Márquez, I.; Ortega-Minakata, R. A.; Papaderos, P.; Sánchez, S. F.; Sánchez-Blazquez, P.; Spekkens, K.; van de Ven, G.; Wild, V.; Ziegler, B.
2017-02-01
We present a two-dimensional multi-component photometric decomposition of 404 galaxies from the Calar Alto Legacy Integral Field Area data release 3 (CALIFA-DR3). They represent all possible galaxies with no clear signs of interaction and not strongly inclined in the final CALIFA data release. Galaxies are modelled in the g, r, and i Sloan Digital Sky Survey (SDSS) images including, when appropriate, a nuclear point source, bulge, bar, and an exponential or broken disc component. We use a human-supervised approach to determine the optimal number of structures to be included in the fit. The dataset, including the photometric parameters of the CALIFA sample, is released together with statistical errors and a visual analysis of the quality of each fit. The analysis of the photometric components reveals a clear segregation of the structural composition of galaxies with stellar mass. At high masses (log (M⋆/M⊙) > 11), the galaxy population is dominated by galaxies modelled with a single Sérsic or a bulge+disc with a bulge-to-total (B/T) luminosity ratio B/T > 0.2. At intermediate masses (9.5 < log (M⋆/M⊙) < 11), galaxies described with bulge+disc but B/T < 0.2 are preponderant, whereas, at the low-mass end (log (M⋆/M⊙) < 9.5), the prevailing population is constituted by galaxies modelled with either pure discs or nuclear point sources+discs (i.e., no discernible bulge). We find that 57% of the volume-corrected sample of disc galaxies in the CALIFA sample host a bar. This bar fraction shows a significant drop with increasing galaxy mass in the range 9.5 < log (M⋆/M⊙) < 11.5. The analyses of the extended multi-component radial profile result in a volume-corrected distribution of 62%, 28%, and 10% for the so-called Type I (pure exponential), Type II (down-bending), and Type III (up-bending) disc profiles, respectively. These fractions are in discordance with previous findings.
We argue that the different methodologies used to detect the breaks are the main cause for these differences. The catalog of fitted parameters is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A32
Role of Erosion in Shaping Point Bars
NASA Astrophysics Data System (ADS)
Moody, J.; Meade, R.
2012-04-01
A powerful metaphor in fluvial geomorphology has been that depositional features such as point bars (and other floodplain features) constitute the river's historical memory in the form of uniformly thick sedimentary deposits waiting for the geomorphologist to dissect and interpret the past. For the past three decades, along the channel of Powder River (Montana USA) we have documented (with annual cross-sectional surveys and pit trenches) the evolution of the shape of three point bars that were created when an extreme flood in 1978 cut new channels across the necks of two former meander bends and radically shifted the location of a third bend. Subsequent erosion has substantially reshaped, at different time scales, the relic sediment deposits of varying age. At the weekly to monthly time scale (i.e., floods from snowmelt or floods from convective or cyclonic storms), the maximum scour depth was computed (by using a numerical model) at locations spaced 1 m apart across the entire point bar for a couple of the largest floods. The maximum predicted scour is about 0.22 m. At the annual time scale, repeated cross-section topographic surveys (25 during 32 years) indicate that net annual erosion at a single location can be as great as 0.5 m, and that the net erosion is greater than net deposition during 8, 16, and 32% of the years for the three point bars. On average, the median annual net erosion was 21, 36, and 51% of the net deposition. At the decadal time scale, an index of point bar preservation often referred to as completeness was defined for each cross section as the percentage of the initial deposit (older than 10 years) that was still remaining in 2011; computations indicate that 19, 41, and 36% of the initial deposits of sediment were eroded. 
Initial deposits were not uniform in thickness and often represented thicker pods of sediment connected by thin layers of sediment or even isolated pods at different elevations across the point bar in response to multiple floods during a water year. Erosion often was preferential and removed part or all of pods at lower elevations, and in time left what appears to be a random arrangement of sediment pods forming the point bar. Thus, we conclude that the erosional process is as important as the deposition process in shaping the final form of the point bar, and that point bars are not uniformly aggradational or transgressive deposits of sediment in which the age of the deposit increases monotonically downward at all locations across the point bar.
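The completeness index used above (percent of the initial deposit still preserved at a cross section) reduces to a simple ratio. The thicknesses below are hypothetical, not Powder River survey data:

```python
def completeness(initial_thickness_m, remaining_thickness_m):
    """Percent of the initial point-bar deposit still preserved, summed
    over survey stations along one cross section."""
    return 100.0 * sum(remaining_thickness_m) / sum(initial_thickness_m)

# Hypothetical deposit thicknesses (m) at four stations across a point bar:
# initial deposit versus what remains after subsequent erosion.
pct = completeness([1.0, 2.0, 1.5, 0.5], [0.8, 1.2, 1.5, 0.55])  # 81%
```

Note the last station is thicker than its initial value: local net deposition can coexist with net erosion elsewhere on the same bar, which is exactly the pod-by-pod reworking the abstract describes.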
Technology utilization to prevent medication errors.
Forni, Allison; Chu, Hanh T; Fanikos, John
2010-01-01
Medication errors have been increasingly recognized as a major cause of iatrogenic illness and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of illness, need for high-risk medications with a narrow therapeutic index and frequent use of intravenous infusions. Health information technology has been identified as a method to reduce medication errors as well as improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process through improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions and providing a means for standardization of practice. Electronic surveillance, reminders and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target prevention of errors in medication dispensing and administration where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and achievement of a safe medication use process.
Torsional Split Hopkinson Bar Optimization
2012-04-10
EML 4905 Senior Design Project A B.S. THESIS PREPARED IN PARTIAL FULFILLMENT OF THE REQUIREMENT FOR THE DEGREE OF BACHELOR OF...fulfillment of the requirements in EML 4511. The contents represent the opinion of the authors and not the Department of Mechanical and Materials
NASA Astrophysics Data System (ADS)
Richter, J.; Mayer, J.; Weigand, B.
2018-02-01
Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied with regard to misalignment of the interrogation beam and the frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
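The DFT-based beat-frequency extraction can be sketched with a brute-force transform that picks the peak magnitude bin. The damped test signal below is synthetic, not a LITA trace; the frequency resolution is 1/(N·dt):

```python
import math, cmath

def dominant_frequency(signal, dt):
    """Estimate the dominant frequency of a sampled signal from the peak
    of its discrete Fourier transform magnitude (resolution 1/(N*dt))."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC, stay below Nyquist
        coeff = sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k / (n * dt)

# Synthetic damped beat signal: 8 Hz oscillation with exponential decay.
dt = 0.01
sig = [math.exp(-2.0 * j * dt) * math.cos(2 * math.pi * 8.0 * j * dt)
       for j in range(128)]
f_est = dominant_frequency(sig, dt)  # near 8 Hz (bin width ~0.78 Hz)
```

The finite record length, chirp, and damping rate all broaden and shift the spectral peak, which is the origin of the systematic DFT error the abstract quantifies.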
Improved simulation of aerosol, cloud, and density measurements by shuttle lidar
NASA Technical Reports Server (NTRS)
Russell, P. B.; Morley, B. M.; Livingston, J. M.; Grams, G. W.; Patterson, E. W.
1981-01-01
Data retrievals are simulated for a Nd:YAG lidar suitable for early flight on the space shuttle. Maximum assumed vertical and horizontal resolutions are 0.1 and 100 km, respectively, in the boundary layer, increasing to 2 and 2000 km in the mesosphere. Aerosol and cloud retrievals are simulated using 1.06 and 0.53 micron wavelengths independently. Error sources include signal measurement, conventional density information, atmospheric transmission, and lidar calibration. By day, tenuous clouds and Saharan and boundary layer aerosols are retrieved at both wavelengths. By night, these constituents are retrieved, plus upper tropospheric, stratospheric, and mesospheric aerosols and noctilucent clouds. Density, temperature, and improved aerosol and cloud retrievals are simulated by combining signals at 0.35, 1.06, and 0.53 microns. Particulate contamination limits the technique to the cloud-free upper troposphere and above. Error bars automatically show the effect of this contamination, as well as errors in absolute density normalization, reference temperature or pressure, and the sources listed above. For nonvolcanic conditions, relative density profiles have rms errors of 0.54 to 2% in the upper troposphere and stratosphere. Temperature profiles have rms errors of 1.2 to 2.5 K and can define the tropopause to 0.5 km and higher wave structures to 1 or 2 km.
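When the error sources listed above are independent, a total error bar follows from combining them in quadrature. A minimal sketch, with hypothetical fractional error contributions:

```python
import math

def combined_relative_error(*terms):
    """Combine independent relative error sources (e.g. signal measurement,
    density information, transmission, calibration) in quadrature:
    total = sqrt(sum of squared terms)."""
    return math.sqrt(sum(e ** 2 for e in terms))

# Hypothetical fractional error contributions for one altitude bin.
total = combined_relative_error(0.4, 0.3, 0.2, 0.1)
```

The quadrature sum is dominated by the largest term, so reducing the smaller contributions yields little improvement in the overall error bar.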
Looking for trouble? Diagnostics expanding disease and producing patients.
Hofmann, Bjørn
2018-05-23
Novel tests give great opportunities for earlier and more precise diagnostics. At the same time, new tests expand disease, produce patients, and cause unnecessary harm in overdiagnosis and overtreatment. How can we evaluate diagnostics to obtain the benefits and avoid harm? One way is to pay close attention to the diagnostic process and its core concepts. Doing so reveals 3 errors that expand disease and increase overdiagnosis. The first error is to decouple diagnostics from harm, eg, by diagnosing insignificant conditions. The second error is to bypass proper validation of the relationship between test indicator and disease, eg, by introducing biomarkers for Alzheimer's disease before the tests are properly validated. The third error is to couple the name of disease to insignificant or indecisive indicators, eg, by lending the cancer name to preconditions, such as ductal carcinoma in situ. We need to avoid these errors to promote beneficial testing, bar harmful diagnostics, and evade unwarranted expansion of disease. Accordingly, we must stop identifying and testing for conditions that are only remotely associated with harm. We need more stringent verification of tests, and we must avoid naming indicators and indicative conditions after diseases. If not, we will end like ancient tragic heroes, succumbing because of our very best abilities. © 2018 John Wiley & Sons, Ltd.
Stellar mass distribution of S4G disk galaxies and signatures of bar-induced secular evolution
NASA Astrophysics Data System (ADS)
Díaz-García, S.; Salo, H.; Laurikainen, E.
2016-12-01
Context. Models of galaxy formation in a cosmological framework need to be tested against observational constraints, such as the average stellar density profiles (and their dispersion) as a function of fundamental galaxy properties (e.g. the total stellar mass). Simulation models predict that the torques produced by stellar bars efficiently redistribute the stellar and gaseous material inside the disk, pushing it outwards or inwards depending on whether it is beyond or inside the bar corotation resonance radius. Bars themselves are expected to evolve, getting longer and narrower as they trap particles from the disk and slow down their rotation speed. Aims: We use 3.6 μm photometry from the Spitzer Survey of Stellar Structure in Galaxies (S4G) to trace the stellar distribution in nearby disk galaxies (z ≈ 0) with total stellar masses 10^8.5 ≲ M∗/M⊙ ≲ 10^11 and mid-IR Hubble types −3 ≤ T ≤ 10. We characterize the stellar density profiles (Σ∗), the stellar contribution to the rotation curves (V3.6 μm), and the m = 2 Fourier amplitudes (A2) as a function of M∗ and T. We also describe the typical shapes and strengths of stellar bars in the S4G sample and link their properties to the total stellar mass and morphology of their host galaxy. Methods: For 1154 S4G galaxies with disk inclinations lower than 65°, we perform a Fourier decomposition and rescale their images to a common frame determined by the size in physical units, by their disk scalelength, and for 748 barred galaxies by both the length and orientation of their bars. We stack the resized density profiles and images to obtain statistically representative average stellar disks and bars in bins of M∗ and T. Based on the radial force profiles of individual galaxies we calculate the mean stellar contribution to the circular velocity. We also calculate average A2 profiles, where the radius is normalized to R25.5.
Furthermore, we infer the gravitational potentials from the synthetic bars to obtain the tangential-to-radial force ratio (QT) and A2 profiles in the different bins. We also apply ellipse fitting to quantitatively characterize the shape of the bar stacks. Results: For M∗ ≥ 10^9 M⊙, we find a significant difference in the stellar density profiles of barred and non-barred systems: (I) disks in barred galaxies show larger scalelengths (hR) and fainter extrapolated central surface brightnesses (Σ°); (II) the mean surface brightness profiles (Σ∗) of barred and non-barred galaxies intersect each other slightly beyond the mean bar length, most likely at the bar corotation; and (III) the central mass concentration of barred galaxies is higher (by almost a factor 2 when T ≤ 5) than in their non-barred counterparts. The averaged Σ∗ profiles follow an exponential slope down to at least 10 M⊙ pc^-2, which is the typical depth beyond which the sample coverage in the radial direction starts to drop. Central mass concentrations in massive systems (≥10^10 M⊙) are substantially larger than in fainter galaxies, and their prominence scales with T. This segregation also manifests in the inner slope of the mean stellar component of the circular velocity: lenticular (S0) galaxies present the most sharply rising V3.6 μm. Based on the analysis of bar stacks, we show that early- and intermediate-type spirals (0 ≤ T < 5) have intrinsically narrower bars than later types and S0s, whose bars are oval-shaped. We show a clear agreement between galaxy family and quantitative estimates of bar strength. In early- and intermediate-type spirals, A2 is larger within and beyond the typical bar region among barred galaxies than in the non-barred subsample. Strongly barred systems also tend to have larger A2 amplitudes at all radii than their weakly barred counterparts.
Conclusions: Using near-IR wavelengths (S4G 3.6 μm), we provide observational constraints that galaxy formation models can be checked against. In particular, we calculate the mean stellar density profiles, and the disk(+bulge) component of the rotation curve (and their dispersion) in bins of M∗ and T. We find evidence for bar-induced secular evolution of disk galaxies in terms of disk spreading and enhanced central mass concentration. We also obtain average bars (2D), and we show that bars hosted by early-type galaxies are more centrally concentrated and have larger density amplitudes than their late-type counterparts. The FITS files of the synthetic images and the tabulated radial profiles of the mean (and dispersion of) stellar mass density, 3.6 μm surface brightness, Fourier amplitudes, gravitational force, and the stellar contribution to the circular velocity are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/596/A84
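The m = 2 Fourier amplitude A2 used in this analysis can be sketched, for a single radius, as a normalized azimuthal Fourier coefficient of the light distribution. The ring and barred profiles below are synthetic tests, not S4G data:

```python
import math, cmath

def m2_amplitude(intensities, azimuths):
    """Normalized m = 2 Fourier amplitude at one radius:
    A2 = |sum_j I_j * exp(2i * theta_j)| / sum_j I_j."""
    coeff = sum(I * cmath.exp(2j * th) for I, th in zip(intensities, azimuths))
    return abs(coeff) / sum(intensities)

n = 360
thetas = [2 * math.pi * j / n for j in range(n)]
ring = [1.0] * n                                           # axisymmetric
barred = [1.0 + 0.5 * math.cos(2 * th) for th in thetas]   # two-fold, bar-like
a2_ring = m2_amplitude(ring, thetas)   # ~0 for a featureless ring
a2_bar = m2_amplitude(barred, thetas)  # analytic value 0.25 for this profile
```

A radial profile of A2 is obtained by evaluating this coefficient on successive annuli; bars show up as an extended region of elevated A2.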
Nagelhout, Gera E.; Mons, Ute; Allwright, Shane; Guignard, Romain; Beck, Francois; Fong, Geoffrey T.; de Vries, Hein; Willemsen, Marc C.
2015-01-01
National level smoke-free legislation is implemented to protect the public from exposure to second-hand tobacco smoke (SHS). The first aim of this study was to investigate how successful the smoke-free hospitality industry legislation in Ireland (March 2004), France (January 2008), the Netherlands (July 2008), and Germany (between August 2007 and July 2008) was in reducing smoking in bars. The second aim was to assess individual smokers’ predictors of smoking in bars post-ban. The third aim was to examine country differences in predictors and the fourth aim to examine differences between educational levels (as an indicator of socioeconomic status). This study used nationally representative samples of 3,147 adult smokers from the International Tobacco Control (ITC) Europe Surveys who were surveyed pre- and post-ban. The results reveal that while the partial smoke-free legislation in the Netherlands and Germany was effective in reducing smoking in bars (from 88% to 34% and from 87% to 44% respectively), the effectiveness was much lower than the comprehensive legislation in Ireland and France which almost completely eliminated smoking in bars (from 97% to 3% and from 84% to 3% respectively). Smokers who were more supportive of the ban, were more aware of the harm of SHS, and who had negative opinions of smoking were less likely to smoke in bars post-ban. Support for the ban was a stronger predictor in Germany. SHS harm awareness was a stronger predictor among less educated smokers in the Netherlands and Germany. The results indicate the need for strong comprehensive smoke-free legislation without exceptions. This should be accompanied by educational campaigns in which the public health rationale for the legislation is clearly explained. PMID:21497973
Spinophilin Is Indispensable for the α2B Adrenergic Receptor-Elicited Hypertensive Response.
Che, Pulin; Chen, Yunjia; Lu, Roujian; Peng, Ning; Gannon, Mary; Wyss, J Michael; Jiao, Kai; Wang, Qin
2015-01-01
The α2 adrenergic receptor (AR) subtypes are important for blood pressure control. When activated, the α2A subtype elicits a hypotensive response whereas the α2B subtype mediates a hypertensive effect that counteracts the hypotensive response by the α2A subtype. We have previously shown that spinophilin attenuates the α2AAR-dependent hypotensive response; in spinophilin null mice, this response is highly potentiated. In this study, we demonstrate that spinophilin impedes arrestin-dependent phosphorylation and desensitization of the α2BAR subtype by competing against arrestin binding to this receptor subtype. The Del301-303 α2BAR, a human variation that shows impaired phosphorylation and desensitization and is linked to hypertension in certain populations, exhibits preferential interaction with spinophilin over arrestin. Furthermore, Del301-303 α2BAR-induced ERK signaling is quickly desensitized in cells without spinophilin expression, showing a profile similar to that induced by the wild type receptor in these cells. Together, these data suggest a critical role of spinophilin in sustaining α2BAR signaling. Consistent with this notion, our in vivo study reveals that the α2BAR-elicited hypertensive response is diminished in spinophilin deficient mice. In arrestin 3 deficient mice, where the receptor has a stronger binding to spinophilin, the same hypertensive response is enhanced. These data suggest that interaction with spinophilin is indispensable for the α2BAR to elicit the hypertensive response. This is opposite of the negative role of spinophilin in regulating α2AAR-mediated hypotensive response, suggesting that spinophilin regulation of these closely related receptor subtypes can result in distinct functional outcomes in vivo. Thus, spinophilin may represent a useful therapeutic target for treatment of hypertension.
Synthetic aperture imaging in ultrasound calibration
NASA Astrophysics Data System (ADS)
Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.
2014-03-01
Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high-resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
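The relative improvements quoted in this abstract follow directly from the reported error values; a minimal arithmetic check (the numbers are taken from the abstract, the helper function is illustrative):

```python
def relative_reduction(before, after):
    """Percent reduction of an error metric relative to its baseline."""
    return 100.0 * (before - after) / before

# Fiducial localization error: 0.21 mm (B-mode) vs 0.15 mm (synthetic aperture)
fle_improvement = relative_reduction(0.21, 0.15)  # about 29%
# Target registration error: 2.00 mm vs 1.78 mm
tre_improvement = relative_reduction(2.00, 1.78)  # 11%
```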
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
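The figures of merit quoted above (RSD, R², RMSEP) are standard precision and accuracy metrics; a minimal sketch of how such metrics are computed, using illustrative arrays rather than the authors' LIBS spectra:

```python
import numpy as np

def rsd(replicates):
    """Relative standard deviation (%) of repeated measurements."""
    return 100.0 * np.std(replicates, ddof=1) / np.mean(replicates)

def rmsep(predicted, reference):
    """Root-mean-square error of prediction (same units as the inputs)."""
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

def r_squared(predicted, reference):
    """Coefficient of determination of predictions against reference values."""
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    ss_res = np.sum((reference - predicted) ** 2)
    ss_tot = np.sum((reference - np.mean(reference)) ** 2)
    return 1.0 - ss_res / ss_tot
```

Lower RSD across repeated pulses corresponds to better pulse-to-pulse precision, which is the quantity the proposed normalization targets.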
The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center
NASA Astrophysics Data System (ADS)
Tseng, Pai-Chung; Chen, Shen-Len
The geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Therefore, researchers pay attention to thermal error compensation technologies for CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be enhanced. In this research, neural-fuzzy theory is used to derive a thermal prediction model. An IC-type thermometer is used to detect the temperature variation of the heat sources. The thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory based on the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system is also built to provide a user-friendly operation interface with Inprise C++ Builder. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is increased from ±10 µm to ±3 µm.
The joke in the title continues with John Q. Public approaching each character and inquiring "How's the water?" Each response represents a different perspective on assessing water resources. The Governor touts...
A Handheld Point-of-Care Genomic Diagnostic System
Myers, Frank B.; Henrikson, Richard H.; Bone, Jennifer; Lee, Luke P.
2013-01-01
The rapid detection and identification of infectious disease pathogens is a critical need for healthcare in both developed and developing countries. As we gain more insight into the genomic basis of pathogen infectivity and drug resistance, point-of-care nucleic acid testing will likely become an important tool for global health. In this paper, we present an inexpensive, handheld, battery-powered instrument designed to enable pathogen genotyping in the developing world. Our Microfluidic Biomolecular Amplification Reader (µBAR) represents the convergence of molecular biology, microfluidics, optics, and electronics technology. The µBAR is capable of carrying out isothermal nucleic acid amplification assays with real-time fluorescence readout at a fraction of the cost of conventional benchtop thermocyclers. Additionally, the µBAR features cell phone data connectivity and GPS sample geotagging which can enable epidemiological surveying and remote healthcare delivery. The µBAR controls assay temperature through an integrated resistive heater and monitors real-time fluorescence signals from 60 individual reaction chambers using LEDs and phototransistors. Assays are carried out on PDMS disposable microfluidic cartridges which require no external power for sample loading. We characterize the fluorescence detection limits, heater uniformity, and battery life of the instrument. As a proof-of-principle, we demonstrate the detection of the HIV-1 integrase gene with the µBAR using the Loop-Mediated Isothermal Amplification (LAMP) assay. Although we focus on the detection of purified DNA here, LAMP has previously been demonstrated with a range of clinical samples, and our eventual goal is to develop a microfluidic device which includes on-chip sample preparation from raw samples. 
The µBAR is based entirely around open source hardware and software, and in the accompanying online supplement we present a full set of schematics, bill of materials, PCB layouts, CAD drawings, and source code for the µBAR instrument with the goal of spurring further innovation toward low-cost genetic diagnostics. PMID:23936402
Ghrefat, H.A.; Goodell, P.C.; Hubbard, B.E.; Langford, R.P.; Aldouri, R.E.
2007-01-01
Visible and Near-Infrared (VNIR) through Short Wavelength Infrared (SWIR) (0.4-2.5 μm) AVIRIS data, along with laboratory spectral measurements and analyses of field samples, were used to characterize grain size variations in aeolian gypsum deposits across barchan-transverse, parabolic, and barchan dunes at White Sands, New Mexico, USA. All field samples contained a mineralogy of ~100% gypsum. In order to document grain size variations at White Sands, surficial gypsum samples were collected along three transects parallel to the prevailing downwind direction. Grain size analyses were carried out on the samples by sieving them into seven size fractions ranging from 45 to 621 μm, which were subjected to spectral measurements. Absorption band depths of the size fractions were determined after applying an automated continuum-removal procedure to each spectrum. Then, the relationship between absorption band depth and gypsum size fraction was established using a linear regression. Three software processing steps were carried out to measure the grain size variations of gypsum in the Dune Area using AVIRIS data. AVIRIS mapping results, field work and laboratory analysis all show that the interdune areas have lower absorption band depth values and consist of finer grained gypsum deposits. In contrast, the dune crest areas have higher absorption band depth values and consist of coarser grained gypsum deposits. Based on laboratory estimates, a representative barchan-transverse dune (Transect 1) has a mean grain size of 1.16 ϕ (449 μm). The error bar results show that the error ranges from -50 to +50 μm. Mean grain size for a representative parabolic dune (Transect 2) is 1.51 ϕ (352 μm), and 1.52 ϕ (347 μm) for a representative barchan dune (Transect 3). T-test results confirm that there are differences in the grain size distributions between barchan and parabolic dunes and between interdune and dune crest areas.
The t-test results also show that there are no significant differences between modeled and laboratory-measured grain size values. Hyperspectral grain size modeling can help to determine dynamic processes shaping the formation of the dunes such as wind directions, and the relative strengths of winds through time. This has implications for studying such processes on other planetary landforms that have mineralogy with unique absorption bands in VNIR-SWIR hyperspectral data. ?? 2006 Elsevier B.V. All rights reserved.
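The workflow described in this record, continuum removal followed by a linear regression of band depth against grain size, can be sketched as follows. The anchor wavelengths and the depth/size pairs below are illustrative placeholders, not the study's calibration data:

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, right):
    """Continuum-removed absorption-band depth: fit a straight-line
    continuum between the two band shoulders and return max(1 - R/Rc)."""
    w = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    shoulders = np.interp([left, right], w, r)   # reflectance at the shoulders
    mask = (w >= left) & (w <= right)
    continuum = np.interp(w[mask], [left, right], shoulders)
    return float(np.max(1.0 - r[mask] / continuum))

# Calibration: regress sieved grain size on measured band depth (made-up pairs)
depths = np.array([0.10, 0.18, 0.25, 0.31])     # band depths of size fractions
sizes = np.array([45.0, 150.0, 350.0, 621.0])   # grain sizes, micrometres
slope, intercept = np.polyfit(depths, sizes, 1)

def predict_size(depth):
    """Map an image-derived band depth to a grain size via the regression."""
    return slope * depth + intercept
```

Applied pixel-by-pixel to hyperspectral imagery, deeper gypsum absorption bands map to coarser predicted grain sizes, matching the dune-crest versus interdune contrast reported above.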
Automatic Ammunition Identification Technology Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weil, B.
1993-01-01
The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics & Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and a more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.
Automatic Ammunition Identification Technology Project. Ammunition Logistics Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weil, B.
1993-03-01
The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics & Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and a more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.
Universal behavior of the γ*γ → (π⁰, η, η′) transition form factors
Melikhov, Dmitri; Stech, Berthold
2012-01-01
The photon transition form factors of π, η and η′ are discussed in view of recent measurements. It is shown that the exact axial anomaly sum rule allows a precise comparison of all three form factors at high Q², independent of the different structures and distribution amplitudes of the participating pseudoscalar mesons. We conclude: (i) The πγ form factor reported by Belle is in excellent agreement with the nonstrange I=0 component of the η and η′ form factors obtained from the BaBar measurements. (ii) Within errors, the πγ form factor from Belle is compatible with the asymptotic pQCD behavior, similar to the η and η′ form factors from BaBar. Still, the best fits to the data sets of the πγ, ηγ, and η′γ form factors favor a universal small logarithmic rise, Q²F_Pγ(Q²) ∼ log(Q²). PMID:23226917
Interpretation of fast-ion signals during beam modulation experiments
Heidbrink, W. W.; Collins, C. S.; Stagner, L.; ...
2016-07-22
Fast-ion signals produced by a modulated neutral beam are used to infer fast-ion transport. The measured quantity is the divergence of the perturbed fast-ion flux from the phase-space volume measured by the diagnostic, ∇·Γ̄. Since velocity-space transport often contributes to this divergence, the phase-space sensitivity of the diagnostic (or "weight function") plays a crucial role in the interpretation of the signal. The source and sink make major contributions to the signal, but their effects are accurately modeled by calculations that employ an exponential decay term for the sink. Recommendations for optimal design of a fast-ion transport experiment are given, illustrated by results from DIII-D measurements of fast-ion transport by Alfvén eigenmodes. Finally, the signal-to-noise ratio of the diagnostic, systematic uncertainties in the modeling of the source and sink, and the non-linearity of the perturbation all contribute to the error in ∇·Γ̄.
New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data
NASA Astrophysics Data System (ADS)
Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.
2009-11-01
A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^TT_{l=2} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.
NASA Astrophysics Data System (ADS)
Hiramatsu, Takashi; Komatsu, Eiichiro; Hazumi, Masashi; Sasaki, Misao
2018-06-01
Given observations of the B-mode polarization power spectrum of the cosmic microwave background (CMB), we can reconstruct power spectra of primordial tensor modes from the early Universe without assuming their functional form, such as a power-law spectrum. The shape of the reconstructed spectra can then be used to probe the origin of tensor modes in a model-independent manner. We use the Fisher matrix to calculate the covariance matrix of tensor power spectra reconstructed in bins. We find that the power spectra are best reconstructed at wave numbers in the vicinity of k ≈ 6×10⁻⁴ and 5×10⁻³ Mpc⁻¹, which correspond to the "reionization bump" at ℓ ≲ 6 and "recombination bump" at ℓ ≈ 80 of the CMB B-mode power spectrum, respectively. The error bar between these two wave numbers is larger because of the lack of signal between the reionization and recombination bumps. The error bars increase sharply toward smaller (larger) wave numbers because of the cosmic variance (CMB lensing and instrumental noise). To demonstrate the utility of the reconstructed power spectra, we investigate whether we can distinguish between various sources of tensor modes, including those from the vacuum metric fluctuation and SU(2) gauge fields during single-field slow-roll inflation, open inflation, and massive gravity inflation. The results depend on the model parameters, but we find that future CMB experiments are sensitive to differences in these models. We make our calculation tool available online.
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
Shape distortions and Gestalt grouping in anorthoscopic perception
Aydın, Murat; Herzog, Michael H.; Öğmen, Haluk
2011-01-01
When a figure moves behind a stationary narrow slit, observers often report seeing the figure as a whole, a phenomenon called slit viewing or anorthoscopic perception. Interestingly, in slit viewing, the figure is perceived compressed along the axis of motion. As with other perceptual distortions, it is unclear whether the perceptual space in the vicinity of the slit or the representation of the figure itself undergoes compression. In a psychophysical experiment, we tested these two hypotheses. We found that the percept of a stationary bar, presented within the slit, was not distorted even when at the same time a circle underwent compression by moving through the slit. This result suggests that the compression of form results from figural rather than from space compression. In support of this hypothesis, we found that when the bar was perceptually grouped with the circle, the bar appeared compressed. Our results show that, in slit viewing, the distortion occurs at a non-retinotopic level where grouped objects are jointly represented. PMID:19757947
HIV type 1 subtypes among bar and hotel workers in Moshi, Tanzania.
Kiwelu, Ireen E; Renjifo, Boris; Chaplin, Beth; Sam, Noel; Nkya, Watoky M M M; Shao, John; Kapiga, Saidi; Essex, Max
2003-01-01
The HIV-1 prevalence among bar and hotel workers in Tanzania suggests they are a high-risk group for HIV-1 infection. We determined the HIV-1 subtype of 3'-p24/5'-p7 gag and C2-C5 env sequences from 40 individuals representing this population in Moshi. Genetic patterns composed of A(gag)-A(env), C(gag)-C(env), and D(gag)-D(env) were found in 19 (48.0%), 8 (20.0%), and 3 (8.0%) samples, respectively. The remaining 10 samples (25%) had different subtypes in gag and env, indicative of intersubtype recombinants. Among these recombinants, two contained sequences from HIV-1 subsubtype A2, a new genetic variant in Tanzania. Five bar and hotel workers may have been infected with viruses from a common source, based on phylogenetic analysis. The information obtained by surveillance of HIV-1 subtypes in a high-risk population should be useful in the design and evaluation of behavioral, therapeutic, and vaccine trial interventions aimed at reducing HIV-1 transmission.
Modified SPC for short run test and measurement process in multi-stations
NASA Astrophysics Data System (ADS)
Koh, C. K.; Chin, J. F.; Kamaruddin, S.
2018-03-01
Due to short production runs and the measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts (Z̄ chart and W chart) for short runs in T&M processes in multi-stations. The proposed model standardizes the observed value with the measurement target (T) and rationed measurement uncertainty (U). An S-factor (S_f) is introduced to the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.
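A minimal sketch of the standardization step such a model implies. The exact forms of the Z statistic, the U scaling, and the S-factor tightening are assumptions for illustration, not the paper's equations:

```python
def z_statistic(x, target, uncertainty):
    """Standardize an observation by its measurement target T and
    rationed measurement uncertainty U (assumed form: Z = (x - T) / U)."""
    return (x - target) / uncertainty

def control_limits(s_factor, base=3.0):
    """Symmetric control limits tightened by an S-factor < 1 to raise
    sensitivity to small shifts (assumed form: +/- S_f * base)."""
    return (-s_factor * base, s_factor * base)

def in_control(x, target, uncertainty, s_factor=0.8):
    """Check a standardized observation against the tightened limits."""
    lcl, ucl = control_limits(s_factor)
    return lcl <= z_statistic(x, target, uncertainty) <= ucl
```

Because each station's readings are standardized against its own target and uncertainty, observations from multiple stations and short runs can be plotted on one common chart.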
Pressline, N.; Trusdell, F.A.; Gubbins, David
2009-01-01
Radiocarbon dates have been obtained for 30 charcoal samples corresponding to 27 surface lava flows from the Mauna Loa and Kilauea volcanoes on the Island of Hawaii. The submitted charcoal was a mixture of fresh and archived material. Preparation and analysis was undertaken at the NERC Radiocarbon Laboratory in Glasgow, Scotland, and the associated SUERC Accelerator Mass Spectrometry facility. The resulting dates range from 390 years B.P. to 12,910 years B.P. with corresponding error bars an order of magnitude smaller than previously obtained using the gas-counting method. The new and revised 14C data set can aid hazard and risk assessment on the island. The data presented here also have implications for geomagnetic modelling, which at present is limited by large dating errors. Copyright 2009 by the American Geophysical Union.
An accurate ab initio quartic force field for ammonia
NASA Technical Reports Server (NTRS)
Martin, J. M. L.; Lee, Timothy J.; Taylor, Peter R.
1992-01-01
The quartic force field of ammonia is computed using basis sets of spdf/spd and spdfg/spdf quality and an augmented coupled cluster method. After correcting for Fermi resonance, the computed fundamentals and ν₄ overtones agree on average to better than 3 cm⁻¹ with the experimental ones, except for ν₂. The discrepancy for ν₂ is principally due to higher-order anharmonicity effects. The computed ω₁, ω₃, and ω₄ confirm the recent experimental determination by Lehmann and Coy (1988) but are associated with smaller error bars. The discrepancy between the computed and experimental ω₂ is far outside the expected error range, which is also attributed to higher-order anharmonicity effects not accounted for in the experimental determination. Spectroscopic constants are predicted for a number of symmetric and asymmetric top isotopomers of NH3.
Code of Federal Regulations, 2010 CFR
2010-10-01
... percent (60%) of a governing body shall be attorney members. (1) A majority of the members of the governing body shall be attorney members appointed by the governing body(ies) of one or more State, county or municipal bar associations, the membership of which represents a majority of attorneys practicing...
15 CFR 280.205 - Representation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE ACCREDITATION AND ASSESSMENT PROGRAMS FASTENER... respondent is represented by counsel, counsel shall be a member in good standing of the bar of any State, Commonwealth or Territory of the United States, or of the District of Columbia, or be licensed to practice law...
15 CFR 280.205 - Representation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE ACCREDITATION AND ASSESSMENT PROGRAMS FASTENER... respondent is represented by counsel, counsel shall be a member in good standing of the bar of any State, Commonwealth or Territory of the United States, or of the District of Columbia, or be licensed to practice law...
77 FR 38632 - Findings of Research Misconduct
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... counts of nigrostriatal neurons in brains of several mice and rats by copying a single data file from a... Used Herbicide, Atrazine: Altered Function and Loss of Neurons in Brain Monamine Systems.'' Environ... 2004 and 2006; Falsifying a bar graph representing brain proteasomal activity, by selectively altering...
Iron Activation of Cellular Oxidases: Modulation of Neuronal Viability (In Vitro).
2018-04-06
Findings related to each specific aim of the study or project, answering each research or study question and/or hypothesis: The experimentation ...significant (NS) differences between groups when normalized to GAPDH. All groups were compared using one-way ANOVA with Tukey's post-hoc test. Western... p ≤ .0001. All groups were compared using one-way ANOVA with Tukey's post-hoc test. All graphs represent n=6. Bars represent mean +/- SEM. It is
Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge
Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.
2016-01-01
Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett's acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that either the simulations used for TI or the simulations used for BAR, or both, are not fully converged, and the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root mean square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
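Bennett's acceptance ratio method named above estimates a free-energy difference by solving a self-consistency equation over forward and reverse work values. A compact sketch of that estimator (bisection on the standard BAR identity, in kT units); this is a generic illustration, not the authors' CHARMM serial-insertion workflow:

```python
import numpy as np

def bar_delta_f(w_forward, w_reverse, lo=-50.0, hi=50.0, tol=1e-10):
    """Bennett acceptance ratio (BAR) free-energy difference in kT.

    w_forward: work values for the 0 -> 1 transformation, sampled in state 0
    w_reverse: work values for the 1 -> 0 transformation, sampled in state 1
    Solves the BAR self-consistency equation by bisection; the residual
    below increases monotonically with the trial free energy df.
    """
    w_f = np.asarray(w_forward, float)
    w_r = np.asarray(w_reverse, float)
    m = np.log(len(w_f) / len(w_r))  # accounts for unequal sample sizes

    def residual(df):
        fwd = np.sum(1.0 / (1.0 + np.exp(np.clip(m + w_f - df, -500, 500))))
        rev = np.sum(1.0 / (1.0 + np.exp(np.clip(-m + w_r + df, -500, 500))))
        return fwd - rev

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid  # trial df too small
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the dissipation-free limit, where every forward work value equals ΔF and every reverse value equals -ΔF, the estimator recovers ΔF exactly, which makes a convenient sanity check.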
2016-09-23
Ghost-Free APT Analysis of Perturbative QCD Observables
NASA Astrophysics Data System (ADS)
Shirkov, Dmitry V.
A review of the essence and application of the recently devised ghost-free Analytic Perturbation Theory (APT) is presented. First, we discuss the main intrinsic problem of perturbative QCD, ghost singularities, and summarize its resolution within the APT. By examples for diverse energy and momentum-transfer values we show the better convergence of the APT-modified QCD expansion. It is shown that in the APT analysis the three-loop contribution (∼α_s³) is numerically inessential. This gives rise to the hope of a practical solution of the well-known problem of the unsatisfactory convergence of QFT perturbation series due to its asymptotic nature. Our next result is that the usual perturbative analysis of time-like events is not adequate at s ≲ 2 GeV². In particular, this relates to τ decay. Then, for the "high" (f = 5) region it is shown that the common NLO, NLLA perturbation approximation widely used there (at 10 GeV ≲ √s ≲ 170 GeV) yields a systematic negative theoretical error at the level of a couple of percent for the ᾱ_s² values. This leads to the conclusion that the ᾱ_s(M_Z²) value averaged over the f = 5 data, ⟨ᾱ_s(M_Z²)⟩_{f=5} ≃ 0.124, differs appreciably from the currently popular "world average" (0.118).
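For reference, the ghost-free coupling at one loop takes the standard Shirkov-Solovtsov form (conventions for β₀ vary between papers; the normalization below is one common choice): a power-suppressed term cancels the unphysical Landau pole of the perturbative running coupling.

```latex
% One-loop analytic (APT) coupling: the second term removes the
% pole at Q^2 = \Lambda^2 while vanishing at large Q^2, so the
% ultraviolet behavior of perturbative QCD is unchanged.
\bar{\alpha}_s^{\,\mathrm{an}}(Q^2) \;=\; \frac{1}{\beta_0}
  \left[ \frac{1}{\ln\!\left(Q^2/\Lambda^2\right)}
       + \frac{\Lambda^2}{\Lambda^2 - Q^2} \right],
\qquad \beta_0 \;=\; \frac{33 - 2f}{12\pi}.
```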
von Laßberg, Christoph; Rapp, Walter; Krug, Jürgen
2014-06-01
In a prior study with high-level gymnasts we demonstrated that the neuromuscular activation pattern during the "whip-like" leg acceleration phases (LAP) in accelerating movement sequences on the high bar primarily runs in a consecutive succession from the bar (punctum fixum) to the legs (punctum mobile). The current study presents how the neuromuscular activation is represented during movement sequences that immediately follow the LAP by the antagonist muscle chain, generating an effective transfer of momentum for performing specific elements based on the energy generated by the preceding LAP. Thirteen high-level gymnasts were assessed by surface electromyography during high-performance elements on the high bar and parallel bars. The results show that the neuromuscular succession runs primarily from punctum mobile towards punctum fixum to generate the transfer of momentum. Additionally, further principles of neuromuscular interaction between the anterior and posterior muscle chains during such movement sequences are presented. The findings complement the understanding of neuromuscular activation patterns during rotational movements around fixed axes and will help form the basis of more direct and better teaching methods for earlier optimization and facilitation of the motor learning process concerning fundamental movement requirements. Copyright © 2014 Elsevier Ltd. All rights reserved.
Braided fluvial sedimentation in the lower paleozoic cape basin, South Africa
NASA Astrophysics Data System (ADS)
Vos, Richard G.; Tankard, Anthony J.
1981-07-01
Lower Paleozoic braided stream deposits from the Piekenier Formation in the Cape Province, South Africa, provide information on lateral and vertical facies variability in an alluvial plain complex influenced by a moderate to high runoff. Four braided stream facies are recognized on the basis of distinct lithologies and assemblages of sedimentary structures. A lower facies, dominated by upward-fining conglomerate to sandstone and mudstone channel fill sequences, is interpreted as a middle to lower alluvial plain deposit with significant suspended load sedimentation in areas of moderate to low gradients. These deposits are succeeded by longitudinal conglomerate bars which are attributed to middle to upper alluvial plain sedimentation with steeper gradients. This facies is in turn overlain by braid bar complexes of large-scale transverse to linguoid dunes consisting of coarse-grained pebbly sandstones with conglomerate lenses. These bar complexes are compared with environments of the Recent Platte River. They represent a middle to lower alluvial plain facies with moderate gradients and no significant suspended load sedimentation or vegetation to stabilize channels. These bar complexes interfinger basinward with plane bedded medium to coarse-grained sandstones interpreted as sheet flood deposits over the distal portions of an alluvial plain with low gradients and lacking fine-grained detritus or vegetation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coca, Mircea Norocel
2004-01-01
The top quark was discovered by CDF and D0 in 1995, in Run I. This thesis presents the results of a measurement of the top-pair production cross-section in the dilepton channel, using data taken at the Collider Detector at Fermilab (CDF) in pp̄ collisions at a center-of-mass energy of √s = 1.96 TeV. The dataset represents an integrated luminosity of 193 pb⁻¹ and was collected between March 2002 and September 2003. Thirteen events passing the selection requirements were observed (1 e/e, 9 e/μ, 3 μ/μ), with an estimated background of 2.8 ± 0.7 events. These are used to measure a tt̄ cross-section of σ_tt̄ = 8.4 +3.2/−2.7 (stat) +1.5/−1.1 (syst) ± 0.5 (lum) pb. This is in good agreement with the Standard Model prediction of σ_tt̄ = 6.7 +0.71/−0.88 pb for a top quark mass of 175 GeV/c². A few kinematic distributions are also compared with the Standard Model and found to agree well.
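The counting-experiment arithmetic behind such a measurement is simple: σ = (N_observed − N_background) / (ε · L). The acceptance-times-efficiency value below is a back-solved assumption for illustration, not a number from the thesis:

```python
# Illustrative cross-section arithmetic (not the CDF analysis itself).
n_obs = 13.0   # candidate dilepton events
n_bkg = 2.8    # estimated background events
lumi  = 193.0  # integrated luminosity, pb^-1
eff   = 0.0063 # assumed total acceptance x efficiency (incl. branching fractions)

sigma = (n_obs - n_bkg) / (eff * lumi)  # cross-section in pb
```

With these inputs the central value comes out near the quoted 8.4 pb; the asymmetric uncertainties require a likelihood treatment not sketched here.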
Laboratory errors and patient safety.
Miligy, Dawlat A
2015-01-01
Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies to minimize these errors and improve patient safety, are required. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards to patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, the test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures.
This is the first such data published from Arab countries evaluating encountered laboratory errors, and it underscores the great need for universal standardization and benchmarking measures to control laboratory work.
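The "simple percent distribution" used above can be reproduced directly; the per-phase counts below are inferred from the reported 35.7/14.3/50 percent shares of 14 errors and are stated here as an assumption:

```python
# Recomputing the reported percent distribution from the stated counts:
# 14 erroneous tests out of 1,600 procedures, split 5/2/7 across the
# pre-analytic, analytic, and post-analytic phases (counts inferred).
total_tests  = 1600
phase_counts = {"pre-analytic": 5, "analytic": 2, "post-analytic": 7}
total_errors = sum(phase_counts.values())

overall_rate = 100.0 * total_errors / total_tests  # percent of all procedures
phase_share  = {k: 100.0 * v / total_errors for k, v in phase_counts.items()}
```

The overall rate works out to 0.875 percent, which the abstract rounds to 0.87.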
Annual Sediment Budgets for Newly Formed Point Bars on Powder River, Montana, USA
NASA Astrophysics Data System (ADS)
Moody, John; Meade, Robert
2013-04-01
Morphodynamic processes have been monitored for 37 years on Powder River, a large, unregulated meandering river that drains an area of about 35,000 km2 in northeastern Wyoming and southeastern Montana, USA. Cross-sectional surveys of the channel and adjacent floodplains and terraces have been measured nearly annually (30 out of 37 years) by the U.S. Geological Survey (USGS) at 24 locations along 90 kilometers of the river. This long-term data set has provided insights into the natural morphological and sedimentary processes and, most recently, into the annual sediment budgets for three point bars that were created when an extreme flood in 1978 cut new channels across the necks of two former meander bends and radically shifted the location of a third bend. Because our cross-sectional surveys are generally made only once a year (during the low-flow period, usually September-October), we record only the net change in thickness of the annual deposition and erosion, because some areas on a point bar may be scoured and refilled during multiple floods in a year. Point-bar sediment budgets vary spatially as well as annually. The long-term average of the net annual sediment budgets during the post-1978 years (n=26 surveys) indicates that the average annual increment of new sediment deposited on the three point bars has been three to four times the average annual increment of old sediment eroded from the point bars. This annual deposition-to-erosion ratio has varied at one point bar from a minimum of 0.14 (1986) to a maximum of 275 (1995). At the other two point-bar sites the ratio ranged from 0.18 (1991) to 265 (2008) and from 0.023 (1980) to 479 (1987). The lack of correlation from year to year or from one point bar to the next suggests the importance of differences in the planimetric configurations and hydraulic histories of each point bar in the evolutionary process.
All the deposited sediment we measured during an annual survey represents the same sediment year class, whereas the eroded sediment we measured is composed of different proportions of previous sediment year classes. An index of the preservation (completeness) of these sediment year classes was defined for each point-bar as the percent of the initial deposit (older than 10 years) that was still remaining in 2011. The average (n=20 surveys) completeness was 59, 81, and 64%, and in general, deposits had better chances for being preserved if they were deposited higher on the point bar surface, or if they were covered by new deposition in the following year. Net annual deposition correlated only weakly with annual peak water discharge, and we found no correlation between annual peak water discharge and the amount of sediment eroded from the point bars. These low correlations may be the result of our using only net deposition and erosion values, and not the total deposition and erosion. These results illustrate the dynamic nature of point bars that adds an important component to earlier uniform, lateral accretion models of point bars. This dynamic nature produces a range of vegetation year classes, and thus, a rich diverse habitat for terrestrial and aquatic populations. This abstract has described one application of this unique long-term data set, and the authors will be pleased to provide the data set to anyone who might need long-term fluvial geomorphic data to address other research questions such as floodplain contaminant storage, river restoration, and environmental change.
NASA Technical Reports Server (NTRS)
Stramski, Dariusz; Shalapyonok, Alexi; Reynolds, Rick A.
1995-01-01
The optical properties of the oceanic cyanobacterium Synechococcus (clone WH8103) were examined in a nutrient-replete laboratory culture grown under a day-night cycle in natural irradiance. Measurements of the spectral absorption and beam attenuation coefficients, the size distribution of cells in suspension, and microscopic analysis of samples were made at intervals of 2-4 hours for 2 days. These measurements were used to calculate the optical properties at the level of a single 'mean' cell representative of the actual population, specifically the optical cross sections for spectral absorption σ̄_a(λ), scattering σ̄_b(λ), and attenuation σ̄_c(λ). In addition, concurrent determinations of chlorophyll a and particulate organic carbon allowed calculation of the Chl a- and C-specific optical coefficients. The refractive index of cells was derived from the observed data using a theory of light absorption and scattering by homogeneous spheres. Low irradiance because of cloudy skies resulted in slow division rates of cells in the culture. The percentage of dividing cells was unusually high (greater than 30%) throughout the experiment. The optical cross sections varied greatly over the day-night cycle, with a minimum near dawn or midmorning and a maximum near dusk. During daylight hours, σ̄_b and σ̄_c can increase more than twofold and σ̄_a by as much as 45%. The real part of the refractive index n increased during the day; changes in n had an equal or greater effect than the varying size distribution on changes in σ̄_c and σ̄_b. The contribution of changes in n to the increase of σ̄_c(660) during daylight hours was 65.7% and 45.1% on days 1 and 2, respectively. During the dark period, when σ̄_c(660) decreased by a factor of 2.9, the effect of decreasing n was dominant (86.3%).
With the exception of a few hours during the second light period, the imaginary part of the refractive index n' showed little variation over the day-night cycle, and σ̄_a was largely controlled by variations in cell size. The real part of the refractive index at λ = 660 nm was correlated with the intracellular C concentration, and the imaginary part at λ = 678 nm with the intracellular Chl a concentration. The C-specific attenuation coefficient showed significant diel variability, which has implications for the estimation of oceanic primary production from measurements of diel variability in beam attenuation. This study provides strong evidence that diel variability is an important component of the optical characterization of marine phytoplankton.
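The single-cell cross sections above are obtained by dividing a bulk optical coefficient by the cell number concentration, σ = a / N. The numbers in this sketch are order-of-magnitude placeholders, not the measured Synechococcus values:

```python
# Single-cell absorption cross section from a bulk coefficient:
# sigma_a = a / N, where a is in m^-1 and N in cells per m^3.
# Both values below are illustrative, not data from the study.
a_bulk  = 0.5    # bulk absorption coefficient, m^-1 (assumed)
n_cells = 1.0e13 # cell number concentration, cells m^-3 (assumed)

sigma_a = a_bulk / n_cells  # m^2 per cell
```

The same division yields σ̄_b and σ̄_c from the scattering and attenuation coefficients, which is why diel changes in the bulk coefficients at fixed cell concentration map directly onto the per-cell cross sections.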
NASA Astrophysics Data System (ADS)
Wong, Michael H.; Atreya, Sushil K.; Kuhn, William R.; Romani, Paul N.; Mihalka, Kristen M.
2015-01-01
Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are useful for several reasons. These equilibrium cloud condensation models (ECCMs) calculate the wet adiabatic lapse rate, determine saturation-limited mixing ratios of condensing species, calculate the stabilizing effect of latent heat release and molecular weight stratification, and locate cloud base levels. Many ECCMs trace their heritage to Lewis (Lewis, J.S. [1969]. Icarus 10, 365-378) and Weidenschilling and Lewis (Weidenschilling, S.J., Lewis, J.S. [1973]. Icarus 20, 465-476). Calculations of atmospheric structure and gas mixing ratios are correct in these models. We resolve errors affecting the cloud density calculation in these models by first calculating a cloud density rate: the change in cloud density with updraft length scale. The updraft length scale parameterizes the strength of the cloud-forming updraft and converts the cloud density rate from the ECCM into a cloud density. The method is validated by comparison with terrestrial cloud data. Our parameterized updraft method gives a first-order prediction of cloud densities in a “fresh” cloud, where condensation is the dominant microphysical process. Older evolved clouds may be better approximated by another 1-D method, the diffusive-precipitative Ackerman and Marley (Ackerman, A.S., Marley, M.S. [2001]. Astrophys. J. 556, 872-884) model, which represents a steady-state equilibrium between precipitation and condensation of vapor delivered by turbulent diffusion. We re-evaluate observed cloud densities at the Galileo Probe entry site (Ragent, B. et al. [1998]. J. Geophys. Res. 103, 22891-22910), and show that the upper and lower observed clouds at ∼0.5 and ∼3 bars are consistent with weak (cirrus-like) updrafts under conditions of saturated ammonia and water vapor, respectively. The densest observed cloud, near 1.3 bar, requires unexpectedly strong updraft conditions, or higher cloud density rates.
The cloud density rate in this layer may be augmented by a composition with non-NH4SH components (possibly including adsorbed NH3).
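The parameterized-updraft step reduces to a single multiplication: an ECCM supplies a cloud density rate (condensate added per unit of lifting), and an assumed updraft length scale converts it into a cloud density. All numbers below are illustrative, not values from the Galileo analysis:

```python
# Converting an ECCM cloud density *rate* into a cloud density via an
# assumed updraft length scale. Weak (cirrus-like) vs strong updrafts
# differ only in the length scale; all values here are placeholders.
density_rate      = 5.0e-4  # kg m^-3 per km of lifting (assumed ECCM output)
weak_updraft_km   = 2.0     # cirrus-like updraft length scale (assumed)
strong_updraft_km = 20.0    # vigorous convective updraft (assumed)

weak_cloud   = density_rate * weak_updraft_km    # kg m^-3
strong_cloud = density_rate * strong_updraft_km  # kg m^-3
```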
An anthropomorphic phantom for quantitative evaluation of breast MRI.
Freed, Melanie; de Zwart, Jacco A; Loud, Jennifer T; El Khouli, Riham H; Myers, Kyle J; Greene, Mark H; Duyn, Jeff H; Badano, Aldo
2011-02-01
In this study, the authors aim to develop a physical, tissue-mimicking phantom for quantitative evaluation of breast MRI protocols. The objective of this phantom is to address the need for improved standardization in breast MRI and provide a platform for evaluating the influence of image protocol parameters on lesion detection and discrimination. Quantitative comparisons between patient and phantom image properties are presented. The phantom is constructed using a mixture of lard and egg whites, resulting in a random structure with separate adipose- and glandular-mimicking components. T1 and T2 relaxation times of the lard and egg components of the phantom were estimated at 1.5 T from inversion recovery and spin-echo scans, respectively, using maximum-likelihood methods. The image structure was examined quantitatively by calculating and comparing spatial covariance matrices of phantom and patient images. A static, enhancing lesion was introduced by creating a hollow mold with stereolithography and filling it with a gadolinium-doped water solution. Measured phantom relaxation values fall within 2 standard errors of human values from the literature and are reasonably stable over 9 months of testing. Comparison of the covariance matrices of phantom and patient data demonstrates that the phantom and patient data have similar image structure. Their covariance matrices are the same to within error bars in the anterior-posterior direction and to within about two error bars in the right-left direction. The signal from the phantom's adipose-mimicking material can be suppressed using active fat-suppression protocols. A static, enhancing lesion can also be included with the ability to change morphology and contrast agent concentration. The authors have constructed a phantom and demonstrated its ability to mimic human breast images in terms of key physical properties that are relevant to breast MRI. 
This phantom provides a platform for the optimization and standardization of breast MRI imaging protocols for lesion detection and characterization.
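The covariance comparison described above can be sketched as follows: estimate a spatial covariance matrix from pixel patches of each image and check element-wise agreement. The 2-element "patches" below are toy data, not phantom or patient images:

```python
# Sketch of an image-structure comparison via sample covariance matrices.
def cov_matrix(patches):
    """Unbiased sample covariance matrix of a list of equal-length vectors."""
    n = len(patches)
    d = len(patches[0])
    mean = [sum(p[i] for p in patches) / n for i in range(d)]
    return [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in patches) / (n - 1)
             for j in range(d)] for i in range(d)]

# Toy "phantom" and "patient" patch sets with similar spatial structure.
phantom = [[1.0, 2.0], [2.0, 4.1], [3.0, 6.0], [4.0, 7.9]]
patient = [[1.1, 2.1], [2.0, 3.9], [2.9, 6.1], [4.1, 8.0]]

c_phantom = cov_matrix(phantom)
c_patient = cov_matrix(patient)
```

In the study itself the agreement is judged against the estimated error bars of each covariance element, direction by direction; the sketch only shows the matrix estimation step.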
Integrating technology to improve medication administration.
Prusch, Amanda E; Suess, Tina M; Paoletti, Richard D; Olin, Stephen T; Watts, Starann D
2011-05-01
The development, implementation, and evaluation of an i.v. interoperability program to advance medication safety at the bedside are described. I.V. interoperability integrates intelligent infusion devices (IIDs), the bar-code-assisted medication administration system, and the electronic medication administration record system into a bar-code-driven workflow that populates provider-ordered, pharmacist-validated infusion parameters on IIDs. The purpose of this project was to improve medication safety through the integration of these technologies and to decrease the potential for error during i.v. medication administration. Four key phases were essential to developing and implementing i.v. interoperability: (a) preparation, (b) i.v. interoperability pilot, (c) preliminary validation, and (d) expansion. The establishment of pharmacy involvement in i.v. interoperability resulted in two additional safety checks: pharmacist infusion rate oversight and independent nurse validation of the autoprogrammed rate. After instituting i.v. interoperability, monthly compliance with the telemetry drug library increased to a mean ± S.D. of 72.1% ± 2.1% from 56.5% ± 1.5%, and the medical-surgical nursing unit's monthly drug library compliance rate increased to 58.6% ± 2.9% from 34.1% ± 2.6% (p < 0.001 for both comparisons). The number of manual pump edits decreased with both the telemetry and medical-surgical drug libraries, demonstrating a reduction from 56.9 ± 12.8 to 14.2 ± 3.9 and from 61.2 ± 15.4 to 14.7 ± 3.8, respectively (p < 0.001 for both comparisons). Through the integration and the incorporation of pharmacist oversight for rate changes, the telemetry and medical-surgical patient care areas demonstrated a 32% reduction in reported monthly errors involving i.v. administration of heparin. By integrating two stand-alone technologies, i.v. interoperability was implemented to improve medication administration.
Medication errors were reduced, nursing workflow was simplified, and pharmacists became involved in checking infusion rates of i.v. medications.
Does the Newtonian Gravity "Constant" G Vary?
NASA Astrophysics Data System (ADS)
Noerdlinger, Peter D.
2015-08-01
A series of measurements of Newton's gravity constant, G, dating back as far as 1893, yielded widely varying values, the variation greatly exceeding the stated error estimates (Gillies, 1997; Quinn, 2000; Mohr et al., 2008). The value of G is usually said to be unrelated to other physics, but we point out that the 8B solar neutrino rate ought to be very sensitive. Improved pulsar timing could also help settle the issue of whether G really varies. We claim that the variation in measured values over time (1893-2014 C.E.) is a more serious problem than the failure of the error bars to overlap; challenging or adjusting the error bars hardly masks the underlying disagreement in central values. We have assessed whether variations in the gravitational potential due to (for example) local dark matter (DM) could explain the variations. We find that the required potential fluctuations could transiently accelerate the Solar System and nearby stars to speeds in excess of the Galactic escape speed. Previous theories for the variation in G generally deal with supposed secular variation on a cosmological timescale, or very rapid oscillations whose envelope changes on that scale (Steinhardt and Will 1995). Therefore, these analyses fail to support variations on the timescale of years or spatial scales of order parsecs, which would be required by the data for G. We note that true variations in G would be associated with variations in clock rates (Derevianko and Pospelov 2014; Loeb and Maoz 2015), which could mask changes in orbital dynamics. Geringer-Sameth et al. (2014) studied γ-ray emission from the nearby Reticulum dwarf galaxy, which is expected to be free of "ordinary" (stellar, black hole) γ-ray sources, and found evidence for DM decay. Bernabei et al. (2003) also found evidence for DM penetrating deep underground at Gran Sasso. If, indeed, variations in G can be tied to variations in gravitational potential, we have a new tool to assess the DM density.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C.; Pujol, A.; Gaztañaga, E.
We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg2 area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.
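The "consistent at the 2σ level" statement is a quadrature comparison of two independent measurements; a minimal sketch (the second measurement value here is hypothetical, not from the clustering analysis):

```python
import math

def consistent(x1, s1, x2, s2, n_sigma=2.0):
    """True if two independent measurements agree within n_sigma times
    their combined (quadrature) uncertainty."""
    return abs(x1 - x2) <= n_sigma * math.hypot(s1, s2)

# First DES SV redshift bin from the text (1.12 +/- 0.19), compared with a
# hypothetical clustering-based value of 1.30 +/- 0.15.
ok = consistent(1.12, 0.19, 1.30, 0.15, n_sigma=2.0)
```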
2016-04-15
Chandra Source Catalog: User Interface
NASA Astrophysics Data System (ADS)
Bonaventura, Nina; Evans, Ian N.; Rots, Arnold H.; Tibbetts, Michael S.; van Stone, David W.; Zografou, Panagoula; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Helen; Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.
2009-09-01
The Chandra Source Catalog (CSC) is intended to be the definitive catalog of all X-ray sources detected by Chandra. For each source, the CSC provides positions and multi-band fluxes, as well as derived spatial, spectral, and temporal source properties. Full-field and source region data products are also available, including images, photon event lists, light curves, and spectra. The Chandra X-ray Center CSC website (http://cxc.harvard.edu/csc/) is the place to visit for high-level descriptions of each source property and data product included in the catalog, along with other useful information, such as step-by-step catalog tutorials, answers to FAQs, and a thorough summary of the catalog statistical characterization. Eight categories of detailed catalog documents may be accessed from the navigation bar on most of the 50+ CSC pages; these categories are: About the Catalog, Creating the Catalog, Using the Catalog, Catalog Columns, Column Descriptions, Documents, Conferences, and Useful Links. There are also prominent links to CSCview, the CSC data access GUI, and related help documentation, as well as a tutorial for using the new CSC/Google Earth interface. Catalog source properties are presented in seven scientific categories, within two table views: the Master Source and Source Observations tables. Each X-ray source has one "master source" entry and one or more "source observation" entries, the details of which are documented on the CSC "Catalog Columns" pages. The master source properties represent the best estimates of the properties of a source; these are extensively described on the following pages of the website: Position and Position Errors, Source Flags, Source Extent and Errors, Source Fluxes, Source Significance, Spectral Properties, and Source Variability. The eight tutorials ("threads") available on the website serve as a collective guide for accessing, understanding, and manipulating the source properties and data products provided by the catalog.
Changes in Atmospheric CO2 Influence the Allergenicity of Aspergillus fumigatus fungal spore
NASA Astrophysics Data System (ADS)
Lang-Yona, N.; Levin, Y.; Dannemoller, K. C.; Yarden, O.; Peccia, J.; Rudich, Y.
2013-12-01
Increased allergic susceptibility has been documented without a comprehensive understanding of its causes; understanding the trends and mechanisms of allergy-inducing agents is therefore essential. In this study we investigated whether elevated atmospheric CO2 levels can affect the allergenicity of Aspergillus fumigatus, a common allergenic fungal species. Both direct exposure to changing CO2 levels during fungal growth and indirect exposure through changes in the C:N ratios of the growth media were inspected. We determined the allergenicity of the spores through two types of immunoassays, accompanied by gene expression analysis and relative protein quantification. We show that fungi grown under present-day CO2 levels (392 ppm) exhibit 8.5- and 3.5-fold higher allergenicity compared to fungi grown at preindustrial (280 ppm) and doubled (560 ppm) CO2 levels, respectively. A corresponding trend is observed in the expression of genes encoding known allergenic proteins and in the concentration of the major allergen Asp f1, possibly due to physiological changes such as respiration rates and the nitrogen content of the fungus influenced by the CO2 concentrations. Increased carbon and nitrogen levels in the growth medium also lead to a significant increase in allergenicity, for which we propose two different biological mechanisms. We suggest that climatic changes such as increasing atmospheric CO2 levels and changes in the fungal growth medium may impact the ability of allergenic fungi such as Aspergillus fumigatus to induce allergies. [Figure: the effect of changing CO2 concentrations on (A) the total allergenicity per 10^7 spores of A. fumigatus, (B) the major allergen Asp f1 concentration in ng per 10^7 spores, and (C) gene expression by RT-PCR; error bars represent the standard error of the mean.]
Wullschleger, Marcel; Aghlmandi, Soheila; Egger, Marcel; Zwahlen, Marcel
2014-01-01
In biomedical journals, authors sometimes use the standard error of the mean (SEM) for data description, a practice that has been called inappropriate or incorrect. Our aim was to assess the frequency of incorrect use of the SEM in articles in three selected cardiovascular journals. All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure and Circulation Research were assessed by two assessors for inappropriate use of the SEM when providing descriptive information on empirical data. We also assessed whether the authors stated in the methods section that the SEM would be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and "Circulation: Heart Failure" having the lowest value (27%). In 81% of articles with incorrect use of the SEM, the authors had explicitly stated that they used the SEM for data description, and in 89% SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). The selection of the three cardiovascular journals was based on a subjective initial impression of inappropriate SEM use; the observed results are not representative of all cardiovascular journals. In the three selected journals we found a high level of inappropriate SEM use, and explicit methods statements endorsing it, especially in basic science studies. To improve this situation, these and other journals should give authors clear instructions on how to report descriptive information on empirical data.
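The distinction this survey turns on, that the SD describes the spread of the data while the SEM (and a confidence interval built from it) describes the precision of the estimated mean, can be sketched in a few lines. The sample values below are made up purely for illustration:

```python
import math

def descriptive_stats(values):
    """Return mean, SD (spread of the data), SEM, and an approximate
    95% CI half-width for the mean (normal approximation)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    sem = sd / math.sqrt(n)
    ci95 = 1.96 * sem  # use a t quantile instead for small n
    return mean, sd, sem, ci95

# Made-up sample: the SEM shrinks as n grows while the SD does not,
# which is why SEM bars understate the spread of the underlying data.
mean, sd, sem, ci95 = descriptive_stats([4.0, 5.0, 6.0, 5.0, 4.5, 5.5])
```

Reporting `sd` describes the data; reporting `sem` (or `ci95`) describes the mean, which is the distinction the survey found routinely blurred.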
NASA Astrophysics Data System (ADS)
Lazic, V.; De Ninno, A.
2017-11-01
Laser-induced plasma spectroscopy was applied to particles attached to a substrate, a silica wafer covered with a thin oil film. The substrate itself interacts weakly with a ns Nd:YAG laser (1064 nm), while the presence of particles strongly enhances the plasma emission, detected here by a compact spectrometer array. Variations of the sampled mass from one laser spot to another exceed one order of magnitude, as estimated by on-line photography and an initial image calibration for different sample loadings. Consequently, the spectral lines from the particles show extreme intensity fluctuations from one sampling point to another, ranging in some cases from the detection threshold to detector saturation. Under such conditions the common calibration approach based on averaged spectra, even when considering ratios of element lines (i.e., concentrations), produces errors too large for measuring sample compositions. On the other hand, the intensities of an analytical line and a reference line in single-shot spectra are linearly correlated. The corresponding slope depends on the concentration ratio and is only weakly sensitive to fluctuations of the plasma temperature within a data set. Using these slopes to construct the calibration graphs significantly reduces the error bars, but it does not eliminate the point scattering caused by the matrix effect, which is also responsible for large differences in the average plasma temperatures among the samples. Well-aligned calibration points were obtained after identifying pairs of transitions less sensitive to variations of the plasma temperature, achieved through simple theoretical simulations. Such a selection of the analytical lines minimizes the matrix effect and, together with the chosen calibration approach, allows the relative element concentrations to be measured even in highly unstable laser-induced plasmas.
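The slope-based calibration idea can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a zero-intercept least-squares slope of analyte-line versus reference-line single-shot intensities is insensitive to shot-to-shot mass fluctuations, because both intensities scale with the ablated mass. The intensity values are invented.

```python
def calibration_slope(analyte, reference):
    """Zero-intercept least-squares slope of analyte-line versus
    reference-line intensities over single-shot spectra.  Shot-to-shot
    mass fluctuations scale both intensities together, so the slope,
    unlike the averaged spectrum, tracks the concentration ratio."""
    return (sum(a * r for a, r in zip(analyte, reference))
            / sum(r * r for r in reference))

# Invented single-shot intensities: ablated mass varies by more than
# 10x from shot to shot, but the analyte/reference ratio (0.5) is fixed.
ref = [10.0, 25.0, 40.0, 100.0, 7.0]
ana = [0.5 * r for r in ref]
slope = calibration_slope(ana, ref)
```

One such slope per sample, plotted against known concentration ratios, gives the calibration graph described in the abstract.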
Code of Federal Regulations, 2011 CFR
2011-07-01
... CLE credit by any State bar association and, at a minimum, must cover the following topics... and home and business addresses; (ii) Information concerning the applicant's military and civilian...: Director, Management and Administration (01E), Board of Veterans' Appeals, 810 Vermont Avenue, NW...
32 CFR 865.3 - Application procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (DD Form 149) and http://www.e-publishing.af.mil/shared/media/epubs/AFPAM36-2607.pdf (Air Force...) The name under which the member served. (2) The member's social security number or Air Force service... term “counsel” includes members in good standing of the bar of any state, accredited representatives of...
32 CFR 865.3 - Application procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (DD Form 149) and http://www.e-publishing.af.mil/shared/media/epubs/AFPAM36-2607.pdf (Air Force...) The name under which the member served. (2) The member's social security number or Air Force service... term “counsel” includes members in good standing of the bar of any state, accredited representatives of...
32 CFR 865.3 - Application procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (DD Form 149) and http://www.e-publishing.af.mil/shared/media/epubs/AFPAM36-2607.pdf (Air Force...) The name under which the member served. (2) The member's social security number or Air Force service... term “counsel” includes members in good standing of the bar of any state, accredited representatives of...
32 CFR 865.3 - Application procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (DD Form 149) and http://www.e-publishing.af.mil/shared/media/epubs/AFPAM36-2607.pdf (Air Force...) The name under which the member served. (2) The member's social security number or Air Force service... term “counsel” includes members in good standing of the bar of any state, accredited representatives of...
Regulation of the Two Delta Crystallin Genes during Lens Development in the Chicken Embryo
1991-08-22
Stabilization of tubulin mRNA by inhibition of protein synthesis in sea urchin embryos. Mol. Cell. Biol. 8, 3518-3525. Goto, K., Okada, T.S...counts from twenty lens epithelia. Error bars are ± SEM. Symbols: control lens tissue (square), 0.5 ng/ml actinomycin D (inverted triangle), 30 ng...[35S]-methionine for 5 hr in the absence or presence of actinomycin D (0.5 or 30 µg/ml). Values are the means ± SEM for ten groups of three lens
A Search for Periodicity in the X-Ray Spectrum of Black Hole Candidate A0620-00
1991-06-01
They are observed as radio pulsars and as the X-ray emitting components of binary X-ray sources. The limits of stability of neutron stars are not...4 Lo ). The three candidates are CYG X-1, LMC X-3, and A0620. In this section all data such as mass functions, luminosities, distances, periods, etc...1.4. Finally, we discard data for which a/ lo > 1. Such a point is of little statistical significance since its error bars are so large. Figure 2.2d
The nuclear electric quadrupole moment of copper.
Santiago, Régis Tadeu; Teodoro, Tiago Quevedo; Haiduke, Roberto Luiz Andrade
2014-06-21
The nuclear electric quadrupole moment (NQM) of the (63)Cu nucleus was determined by an indirect approach, combining accurate experimental nuclear quadrupole coupling constants (NQCCs) with relativistic Dirac-Coulomb coupled cluster calculations of the electric field gradient (EFG). The data obtained at the highest level of calculation, DC-CCSD-T, from 14 linear molecules containing the copper atom indicate an NQM of -198(10) mbarn. This result deviates slightly from the previously accepted standard value from the muonic method, -220(15) mbarn, although the error bars overlap.
Watts Bar Nuclear Plant Title V Applicability
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1977-01-01
The frequently used rule specifying the relationship between a mean gravity anomaly in a block whose side length is theta degrees and a spherical harmonic representation of these data to degree l-bar is examined in light of the smoothing parameter used by Pellinen (1966). It is found that if the smoothing parameter is not considered, mean anomalies computed from potential coefficients can be in error by about 30% of the rms anomaly value. It is suggested that the above-mentioned rule be considered only a crude approximation.
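The smoothing parameter in question can be sketched with the standard Pellinen cap-averaging factors, beta_n = (P_{n-1}(t) - P_{n+1}(t)) / ((2n+1)(1-t)) with t = cos(psi0), where psi0 is the radius of the spherical cap taken as equivalent to the block. The cap radius used below is an illustrative choice, not a value from the paper:

```python
import math

def legendre(n, t):
    """Evaluate the Legendre polynomial P_n(t) by the usual recurrence."""
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * t * p1 - (k - 1) * p0) / k
    return p1

def pellinen_beta(n, psi0_deg):
    """Pellinen smoothing factor for a spherical cap of radius psi0:
    beta_n = (P_{n-1}(t) - P_{n+1}(t)) / ((2n+1)(1-t)), t = cos(psi0)."""
    t = math.cos(math.radians(psi0_deg))
    return (legendre(n - 1, t) - legendre(n + 1, t)) / ((2 * n + 1) * (1 - t))

# Rule of thumb for 5-degree blocks: l-bar ~ 180/5 = 36.  The smoothing
# factor at that degree is noticeably below 1 (the cap radius here is an
# illustrative choice), which is why ignoring it biases mean anomalies
# computed from potential coefficients.
beta = pellinen_beta(36, 2.5)
```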
Data free inference with processed data products
Chowdhary, K.; Najm, H. N.
2014-07-12
Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
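A minimal sketch of the idea, under simplifying assumptions (a normal model whose only parameter is its mean, a flat prior, and synthetic data sets matched to a reported mean and standard error); the paper's actual maximum-entropy and approximate Bayesian computation machinery is more general:

```python
import random

random.seed(0)  # reproducible sketch

def pooled_posterior(mean, se, n, n_datasets=50, draws=200):
    """Generate synthetic data sets consistent with a reported mean and
    standard error, infer the parameter (the mean of a normal model,
    flat prior) from each, and pool the per-data-set posterior samples
    into a single averaged posterior."""
    sd = se * n ** 0.5                 # sample SD implied by the SE
    pooled = []
    for _ in range(n_datasets):
        data = [random.gauss(mean, sd) for _ in range(n)]
        m = sum(data) / n
        pooled += [random.gauss(m, sd / n ** 0.5) for _ in range(draws)]
    return pooled

samples = pooled_posterior(mean=10.0, se=0.5, n=25)
est = sum(samples) / len(samples)      # sits near the reported mean
```

The pooled samples can then be pushed through the forward model for uncertainty propagation consistent with the reported statistics.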
Aquarius Radiometer Performance: Early On-Orbit Calibration and Results
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey R.; LeVine, David M.; Yueh, Simon H.; Wentz, Frank; Ruf, Christopher
2012-01-01
The Aquarius/SAC-D observatory was launched into a 657-km altitude, 6-PM ascending node, sun-synchronous polar orbit from Vandenberg, California, USA on June 10, 2011. The Aquarius instrument was commissioned two months after launch and began operating in mission mode August 25. The Aquarius radiometer meets all engineering requirements, exhibited initial calibration biases within expected error bars, and continues to operate well. A review of the instrument design, discussion of early on-orbit performance and calibration assessment, and investigation of an on-going calibration drift are summarized in this abstract.
Micro Computer Feedback Report for the Strategic Leader Development Inventory; Source Code
1994-03-01
SEL5 ;exit if error CALL SELECT_SCREEN ;display select screen JC SEL4 ;no files in directory ;------- display the files MOV BX,[BarPos] ;starting...SEL2 ;if not goto next test jmp SEL4 ;Exit SEL2: CMP AL,0Dh ;is it a pick? JZ SEL3 ;if YES exit loop ;------- see if an active control key was...file CALL READCONFIG ;read file into memory JC SEL5 ;exit to main menu CALL OPEN_DATA_FILE ;is data available? SEL4: CALL RELEASE_MEM ;release mem
NASA Astrophysics Data System (ADS)
Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.
2017-12-01
Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. For instance, using published estimates of external reproducibilities we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
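The replicate-count argument can be made concrete with a standard t-based margin of error at 95% CL. The external SD of 0.030 used here is an assumed, illustrative value, not taken from the paper:

```python
import math

# Two-sided 95% t critical values for df = n - 1 (standard tables).
T95 = {2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571, 9: 2.262, 19: 2.093}

def margin_of_error_95(external_sd, n):
    """95%-CL margin of error for the mean of n replicate analyses,
    given the external (between-replicate) standard deviation."""
    return T95[n - 1] * external_sd / math.sqrt(n)

# Assumed, illustrative external SD of 0.030 per replicate: three
# replicates give a wide margin of error that shrinks steadily as
# more replicates are added.
moe3 = margin_of_error_95(0.030, 3)
moe10 = margin_of_error_95(0.030, 10)
```

This is the distinction the authors draw: the margin of error at a stated confidence level, not the bare standard error of a few replicates, conveys the reliability of a measurement.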
Morgan, C.D.; Bereskin, S.R.
2003-01-01
The oil-productive Eocene Green River Formation in the central Uinta Basin of northeastern Utah is divided into five distinct intervals. In stratigraphically ascending order these are: 1) Uteland Butte, 2) Castle Peak, 3) Travis, 4) Monument Butte, and 5) Beluga. The reservoir in the Uteland Butte interval is mainly lacustrine limestone with rare bar sandstone beds, whereas the reservoirs in the other four intervals are mainly channel and lacustrine sandstone beds. The changing depositional environments of Paleocene-Eocene Lake Uinta controlled the characteristics of each interval and the reservoir rock contained within. The Uteland Butte consists of carbonate and rare, thin, shallow-lacustrine sandstone bars deposited during the initial rise of the lake. The Castle Peak interval was deposited during a time of numerous and rapid lake-level fluctuations, which developed a simple drainage pattern across the exposed shallow and gentle shelf with each fall and rise cycle. The Travis interval records a time of active tectonism that created a steeper slope and a pronounced shelf break where thick cut-and-fill valleys developed during lake-level falls and rises. The Monument Butte interval represents a return to a gentle, shallow shelf where channel deposits are stacked in a lowstand delta plain and amalgamated into the most extensive reservoir in the central Uinta Basin. The Beluga interval represents a time of major lake expansion with fewer, less pronounced lake-level falls, resulting in isolated single-storied channel and shallow-bar sandstone deposits.
Hydrological regime as key to the morpho-texture and activity of braided streams
NASA Astrophysics Data System (ADS)
Storz-Peretz, Y.; Laronne, J. B.
2012-04-01
Braided streams are a common fluvial pattern in different climates. However, studies of gravel braided streams have mainly been conducted in humid braided systems or in flume simulations thereof, leaving arid braided streams scarcely investigated. Dryland rivers have bare catchments, rapid flow recession and unarmoured channel beds, which are responsible for very high bedload discharges, thereby increasing the likelihood of braiding. Our main objective is to characterize the morpho-texture of the main morphological elements, mid-channel bars, chutes and anabranches (braid-cells), in dryland braided systems and compare them with their humid counterparts. Selected areas of the dryland braided Wadis Ze'elim, Rahaf and Roded in hyper-arid SE Israel were measured, as were the La-Bleone river in the French pre-alps and the Saisera and Cimoliana rivers in NE Italy, representing humid braided systems. Terrestrial Laser Scanning (TLS) of morphological units produced point clouds from which accurate, high-resolution Digital Elevation Models (DEMs) were extracted. Active braid cells in humid environments were also surveyed by electronic theodolite. Roughness and upper-tail Grain Size Distribution (GSD) quantiles were derived from the scanned point clouds or from Wolman sampling. Results indicate that dryland anabranches tend to be finer-grained and less armoured than the bars, contrary to the humid braided systems, where the main or larger anabranches are coarser-grained and more armoured than the bars. Chutes are commonly similar to, or coarser-grained than, the bars they dissect, in accordance with their steeper gradients due to the considerable bar-anabranch relief. The morpho-texture displayed in the steep braided Saisera River, located in the Italian Dolomites and having the highest annual precipitation, is similar to that of the dryland braided channels.
In drylands, coarse gravel is deposited mainly as bars owing to the high bedload flux, whereas the rapid flow recession is responsible for deposition of finer sediment with minimal winnowing in the branch channels. Channels are therefore finer-grained than the bars. This process is associated with the mid-channel deposition of central bars. However, the steeper chutes and coarser anabranches are associated with erosive braiding processes, such as chute cutoffs and multiple bar dissection, which allow winnowing to occur even during rapid recession; coarser-grained anabranches in drylands are thus essentially chutes. Lengthy flow recession in humid braided channels allows winnowing of fines, thereby generating armoured channels, with the finer sedimentary particles often deposited downstream as unit bars. Channels are therefore coarser-grained than the bars they surround. Even though the steep Saisera is in a humid region, its hydrological regime is ephemeral, with rapid and short recessions, responsible for a morpho-texture similar to that of dryland braided streams. Hence, the hydrological regime is key to understanding the morpho-textural character of braided channels and the higher activity of ephemeral unarmoured channels in sub-bankfull events compared with their humid counterparts.
Impact of Frequent Interruption on Nurses' Patient-Controlled Analgesia Programming Performance.
Campoe, Kristi R; Giuliano, Karen K
2017-12-01
The purpose was to add to the body of knowledge regarding the impact of interruption on acute care nurses' cognitive workload, total task completion times, frustration, and medication administration errors while programming a patient-controlled analgesia (PCA) pump. Data support that the severity of medication administration errors increases with the number of interruptions, which is especially critical during the administration of high-risk medications. Bar code technology, interruption-free zones, and medication safety vests have been shown to decrease administration-related errors. However, there are few published data regarding the impact of the number of interruptions on nurses' clinical performance during PCA programming. Nine acute care nurses completed three PCA pump programming tasks in a simulation laboratory. Programming tasks were completed under three conditions in which the number of interruptions was two, four, or six. Outcome measures included cognitive workload (six NASA Task Load Index [NASA-TLX] subscales), total task completion time (seconds), nurse frustration (NASA-TLX Subscale 6), and PCA medication administration error (incorrect final programming). Increases in the number of interruptions were associated with significant increases in total task completion time (p = .003). We also found increases in nurses' cognitive workload, frustration, and PCA pump programming errors, but these increases were not statistically significant. Complex technology use permeates the acute care nursing practice environment. These results add new knowledge about nurses' clinical performance during PCA pump programming and high-risk medication administration.
Shimazu, Chisato; Hoshino, Satoshi; Furukawa, Taiji
2013-08-01
We constructed an integrated personal identification workflow chart using both bar-code reading and an all-in-one laboratory information system. The information system handles not only test data but also the information needed for patient guidance in the laboratory department. The reception terminals at the entrance, the displays for patient guidance and the patient identification tools at the blood-sampling booths are all controlled by the information system. The number of patient identification errors was greatly reduced by the system. However, identification errors have not been eliminated in the ultrasound department. After re-evaluating the patient identification process in this department, we recognized that the major source of errors was an excessive identification workflow. Ordinarily, an ultrasound test requires patient identification three times, because three different systems are used during the test process: the ultrasound modality system, the laboratory information system and a system for producing reports. We are trying to connect the three systems to develop a one-time identification workflow, but this is not a simple task and has not yet been completed. Utilization of the laboratory information system is effective but not yet sufficient for patient identification. Even today, the most fundamental procedure for patient identification is to ask the person's name. Everyday checks in the ordinary workflow and everyone's participation in safety-management activities are important for the prevention of patient identification errors.
Effects of line fiducial parameters and beamforming on ultrasound calibration
Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Peters, Terry M.; Chen, Elvis C. S.
2017-01-01
Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures.
Geometric errors in 3D optical metrology systems
NASA Astrophysics Data System (ADS)
Harding, Kevin; Nafis, Chris
2008-08-01
The field of 3D optical metrology has seen significant growth in the commercial market in recent years. Methods of using structured light to obtain 3D range data are well documented in the literature and continue to be an area of development in universities. However, the step between getting 3D data and getting geometrically correct 3D data that can be used for metrology is not nearly as well developed. Mechanical metrology systems such as CMMs have long-established standard means of verifying the geometric accuracies of their systems; both local and volumetric measurements are characterized on such systems using tooling balls, grid plates, and ball bars. This paper explores the tools needed to characterize and calibrate an optical metrology system, discusses the nature of the geometric errors often found in such systems, and suggests what may be a viable standard method of characterizing 3D optical systems. Finally, we present a tradeoff analysis of ways to correct geometric errors in optical systems, considering what can be gained by hardware methods versus software corrections.
Optics measurement algorithms and error analysis for the proton energy frontier
NASA Astrophysics Data System (ADS)
Langner, A.; Tomás, R.
2015-03-01
Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine-protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam-size measurements and to determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. It combines more beam position monitor measurements in deriving the optical parameters and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, owing to the improved algorithms, yield a significantly higher precision of the derived optical parameters, decreasing the average error bars by a factor of three to four. This allowed the calculation of β* values and proved fundamental in understanding the emittance evolution during the energy ramp.
Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S
2017-09-01
Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, the third-highest category of events behind cardiac and respiratory related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) of the reported medication errors involved medications prepared as infusions as opposed to one-time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient, and more than half of these events caused patient harm. Fifteen events (5%) required a life-sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable.
Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. This study highlights the effect of minimizing model representativity errors on source estimation. The approach is described in an adjoint modelling framework and proceeds in three steps. First, the point source parameters (location and intensity) are estimated using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then carried out with these modified adjoint functions to analyse the effect of the modification. The process is tested for two well-known inversion techniques, renormalization and least squares. The proposed methodology and inversion techniques are evaluated for a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement in source estimation is observed after minimizing the representativity errors.
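A hypothetical sketch of the second and third steps: fit measured against predicted concentrations, then rescale the adjoint (sensitivity) values by the regression slope before re-running the inversion. The paper's actual modification of the adjoint functions may differ; this only shows the mechanics, with invented receptor data:

```python
def fit_linear(predicted, measured):
    """Ordinary least-squares fit: measured ~ a + b * predicted."""
    n = len(predicted)
    mp = sum(predicted) / n
    mm = sum(measured) / n
    b = (sum((p - mp) * (m - mm) for p, m in zip(predicted, measured))
         / sum((p - mp) ** 2 for p in predicted))
    a = mm - b * mp
    return a, b

def rescale_adjoint(adjoint_values, slope):
    """Illustrative modification: scale the adjoint values by the
    regression slope before re-running the inversion.  This stands in
    for the paper's (unspecified here) modification step."""
    return [slope * v for v in adjoint_values]

# Invented receptor data: the model under-predicts by roughly half.
a, b = fit_linear([1.0, 2.0, 3.0], [2.1, 3.9, 6.0])
scaled = rescale_adjoint([0.2, 0.4], b)
```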
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2016-04-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of them. This study fills that research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors, including the 'direct' weighted least squares approach and 'transformational' approaches such as the logarithmic, Box-Cox (with and without fitting the transformation parameter), log-sinh and inverse transformations. The study reports (1) a theoretical comparison of the heteroscedasticity approaches, (2) an empirical evaluation across multiple catchments, hydrological models and performance metrics, and (3) an interpretation of the empirical results using theory, to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approach, enhancing their ability to provide probabilistic predictions with the best reliability and precision for different catchment types (e.g., high or low degree of ephemerality).
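Two of the transformational approaches named above can be written down directly; the parameter values in the example are arbitrary, and the transforms are applied to flows before residuals are computed:

```python
import math

def box_cox(q, lam, offset=0.0):
    """Box-Cox transform of flow q; lam = 0 gives the log transform."""
    qs = q + offset
    return math.log(qs) if lam == 0 else (qs ** lam - 1.0) / lam

def log_sinh(q, a, b):
    """Log-sinh transform, another of the alternatives compared."""
    return math.log(math.sinh(a + b * q)) / b

# A 10% error on a high flow and on a low flow become the same size
# after a log (lam = 0) transform, illustrating variance stabilization.
hi = box_cox(110.0, 0) - box_cox(100.0, 0)
lo = box_cox(1.10, 0) - box_cox(1.00, 0)
```

Residuals computed in the transformed space are closer to constant variance, which is the property all of these approaches trade on.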
Vajhala, Chakravarthy S K; Sadumpati, Vijaya Kumar; Nunna, Hariprasad Rao; Puligundla, Sateesh Kumar; Vudem, Dashavantha Reddy; Khareedu, Venkateswara Rao
2013-01-01
Mannose-specific Allium sativum leaf agglutinin encoding gene (ASAL) and herbicide tolerance gene (BAR) were introduced into an elite cotton inbred line (NC-601) employing Agrobacterium-mediated genetic transformation. Cotton transformants were produced from the phosphinothricin (PPT)-resistant shoots obtained after co-cultivation of mature embryos with the Agrobacterium strain EHA105 harbouring recombinant binary vector pCAMBIA3300-ASAL-BAR. PCR and Southern blot analysis confirmed the presence and stable integration of ASAL and BAR genes in various transformants of cotton. Basta leaf-dip assay, northern blot, western blot and ELISA analyses disclosed variable expression of BAR and ASAL transgenes in different transformants. Transgenes, ASAL and BAR, were stably inherited and showed co-segregation in T1 generation in a Mendelian fashion for both PPT tolerance and insect resistance. In planta insect bioassays on T2 and T3 homozygous ASAL-transgenic lines revealed potent entomotoxic effects of ASAL on jassid and whitefly insects, as evidenced by significant decreases in the survival, development and fecundity of the insects when compared to the untransformed controls. Furthermore, the transgenic cotton lines conferred higher levels of resistance (1-2 score) with minimal plant damage against these major sucking pests when bioassays were carried out employing standard screening techniques. The developed transgenics could serve as a potential genetic resource in recombination breeding aimed at improving the pest resistance of cotton. This study represents the first report of its kind dealing with the development of transgenic cotton resistant to two major sap-sucking insects.
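Mendelian co-segregation in T1 progeny, as reported above, is typically checked with a chi-square test against the 3:1 ratio expected for a single dominant locus. A hedged sketch of that statistic (the counts below are invented examples, not the paper's data):

```python
def chi_square_3_to_1(resistant, susceptible):
    """Chi-square statistic against the 3:1 segregation ratio expected
    in T1 progeny for a single dominant transgene locus."""
    n = resistant + susceptible
    exp_r, exp_s = 0.75 * n, 0.25 * n
    return ((resistant - exp_r) ** 2 / exp_r
            + (susceptible - exp_s) ** 2 / exp_s)

# A 75:25 split fits 3:1 perfectly; compare the statistic against the
# 5% critical value 3.84 (1 degree of freedom) to accept or reject.
```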
SPENDING TOO MUCH TIME AT THE GALACTIC BAR: CHAOTIC FANNING OF THE OPHIUCHUS STREAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price-Whelan, Adrian M.; Johnston, Kathryn V.; Sesar, Branimir
2016-06-20
The Ophiuchus stellar stream is peculiar: (1) its length is short given the age of its constituent stars, and (2) several probable member stars have dispersions in sky position and velocity that far exceed those seen within the stream. The stream’s proximity to the Galactic center suggests that its dynamical history is significantly influenced by the Galactic bar. We explore this hypothesis with models of stream formation along orbits consistent with Ophiuchus’ properties in a Milky Way potential model that includes a rotating bar. In all choices for the rotation parameters of the bar, orbits fit to the stream are strongly chaotic. Mock streams generated along these orbits qualitatively match the observed properties of the stream: because of chaos, stars stripped early generally form low-density, high-dispersion “fans,” leaving only the most recently disrupted material detectable as a strong over-density. Our models predict that there should be a significant amount of low-surface-brightness tidal debris around the stream with a complex phase-space morphology. The existence or absence of these features could provide interesting constraints on the Milky Way bar and could rule out formation scenarios for the stream. This is the first time that chaos has been used to explain the properties of a stellar stream and is the first demonstration of the dynamical importance of chaos in the Galactic halo. The existence of long, thin streams around the Milky Way, presumably formed along non- or weakly chaotic orbits, may represent only a subset of the total population of disrupted satellites.
Current measurement by Faraday effect on GEPOPU
NASA Astrophysics Data System (ADS)
Correa, N.; Chuaqui, H.; Wyndham, E.; Veloso, F.; Valenzuela, J.; Favre, M.; Bhuyan, H.
2014-05-01
The design and calibration of an optical current sensor using BK7 glass is presented. The current sensor is based on polarization rotation by the Faraday effect. GEPOPU is a pulsed power generator (double transit time 120 ns, 1.5 Ω impedance, coaxial geometry) on which Z-pinch experiments are performed. The measurements were performed at the Optics and Plasma Physics Laboratory of Pontificia Universidad Catolica de Chile. The Verdet constant for two different optical materials was obtained using a He-Ne laser. The values obtained are within the experimental error bars of measurements published in the literature (less than 15% difference). Two different sensor geometries were tried, and we present preliminary results for one of them. The values obtained for the current agree within the measurement error with those obtained by means of a Spice simulation of the generator. The signal traces obtained are completely noise free.
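In a Faraday-effect sensor, the measured polarization rotation is proportional to the line integral of the magnetic field along the optical path; when the path encloses the conductor, Ampère's law reduces that integral to the enclosed current. A minimal sketch of the inversion, with illustrative values (not the sensor's actual calibration constants):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def current_from_rotation(theta_rad, verdet, n_turns=1):
    """Invert theta = verdet * mu0 * n_turns * I, the rotation for an
    optical path enclosing the conductor n_turns times (Ampere's law).
    The verdet and n_turns values used below are placeholders."""
    return theta_rad / (verdet * MU0 * n_turns)
```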
Confidence limits for data mining models of options prices
NASA Astrophysics Data System (ADS)
Healy, J. V.; Dixon, M.; Read, B. J.; Cai, F. F.
2004-12-01
Non-parametric methods such as artificial neural nets can successfully model prices of financial options, out-performing the Black-Scholes analytic model (Eur. Phys. J. B 27 (2002) 219). However, the accuracy of such approaches is usually expressed only by a global fitting/error measure. This paper describes a robust method for determining prediction intervals for models derived by non-linear regression. We have demonstrated it by application to a standard synthetic example (29th Annual Conference of the IEEE Industrial Electronics Society, Special Session on Intelligent Systems, pp. 1926-1931). The method is used here to obtain prediction intervals for option prices using market data for LIFFE “ESX” FTSE 100 index options (http://www.liffe.com/liffedata/contracts/month_onmonth.xls). We avoid special neural net architectures and use standard regression procedures to determine local error bars. The method is appropriate for target data with non-constant variance (or volatility).
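One simple way to obtain local (rather than global) error bars from standard regression residuals is to estimate the residual spread in a neighborhood of each query point. The following is an illustration of that idea only, not the paper's actual procedure; all names are ours:

```python
import statistics

def local_prediction_interval(x0, xs, residuals, k=5, z=1.96):
    """Local error bar at x0 from the k nearest training points'
    residuals; returns (lower, upper) offsets around the prediction.
    The nearest-neighbor scheme is a deliberately simple stand-in."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    s = statistics.pstdev([residuals[i] for i in nearest])
    return (-z * s, z * s)
```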
NASA Technical Reports Server (NTRS)
Weaver, J. S.; Chipman, D. W.; Takahashi, T.
1979-01-01
Phase stability and elasticity data have been used to calculate the Gibbs free energy, enthalpy, and entropy changes at 298 K and 1 bar associated with the quartz-coesite and coesite-stishovite transformations in the system SiO2. For the quartz-coesite transformation, these changes disagree by a factor of two or three with those obtained by calorimetric techniques. The phase boundary for this transformation appears to be well determined by experiment; the discrepancy, therefore, suggests that the calorimetric data for coesite are in error. Although the calorimetric and phase stability data for the coesite-stishovite transformation yield the same transition pressure at 298 K, the phase-boundary slopes disagree by a factor of two. At present, it is not possible to determine which of the data are in error. Thus serious inconsistencies exist in the thermodynamic data for the polymorphic transformations of silica.
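The quantities being cross-checked above are linked by two standard identities: ΔG = ΔH − TΔS at fixed temperature, and the Clausius-Clapeyron slope dP/dT = ΔS/ΔV along a phase boundary. A minimal numeric sketch (the values are placeholders, not the silica data):

```python
def gibbs_change(dH, dS, T=298.15):
    """dG = dH - T*dS; links calorimetric (dH, dS) data to
    phase-equilibrium (dG) data at a fixed temperature."""
    return dH - T * dS

def clapeyron_slope(dS, dV):
    """Clausius-Clapeyron relation: dP/dT = dS/dV, the phase-boundary
    slope whose factor-of-two disagreement the abstract discusses."""
    return dS / dV
```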
Cresswell, Kathrin M; Sheikh, Aziz
2008-05-01
There is increasing interest internationally in ways of reducing the high disease burden resulting from errors in medicine management. Repeat exposure to drugs to which patients have a known allergy has been a repeatedly identified error, often with disastrous consequences. Drug allergies are immunologically mediated reactions that are characterized by specificity and recurrence on reexposure. These repeat reactions should therefore be preventable. We argue that there is insufficient attention being paid to studying and implementing system-based approaches to reducing the risk of such accidental reexposure. Drawing on recent and ongoing research, we discuss a number of information technology-based interventions that can be used to reduce the risk of recurrent exposure. Proven to be effective in this respect are interventions that provide real-time clinical decision support; also promising are interventions aiming to enhance patient recognition, such as bar coding, radiofrequency identification, and biometric technologies.
Finite element normal mode analysis of resistance welding jointed of dissimilar plate hat structure
NASA Astrophysics Data System (ADS)
Nazri, N. A.; Sani, M. S. M.
2017-10-01
Structural joints provide connections between structural elements (beams, plates, solids, etc.) to build a whole assembled structure. The complex behaviour of connecting elements plays a valuable role in dynamic characteristics such as natural frequencies and mode shapes. In automotive structures, the reliability of the structure depends strongly on its joints. In this paper, a top hat structure is modelled and designed with spot welding joints between dissimilar materials, mild steel 1010 and stainless steel 304, using finite element software. Different types of connector elements, such as the rigid body element (RBE2), welding joint element (CWELD), and bar element (CBAR), are applied to represent the real connection between the two dissimilar plates. Normal mode analysis is simulated with each type of joining element in order to determine modal properties. Natural frequencies using RBE2, CBAR and CWELD are compared to the equivalent rigid body method. The connection that gives the lowest percentage error among these three will be selected as the most reliable joining for resistance spot welding. From the analysis, it is shown that CWELD performs better than the others in terms of weld joining between dissimilar plate materials. It is expected that joint modelling in finite elements plays a significant role in structural dynamics.
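The connector ranking rests on a simple relative-error metric between each joint model's natural frequency and the reference value. A short sketch (the frequencies below are invented, not the paper's results):

```python
def percent_error(f_model, f_ref):
    """Relative deviation (%) of a joint model's natural frequency from
    the reference; the connector with the lowest value is preferred."""
    return abs(f_model - f_ref) / f_ref * 100.0

# Hypothetical first-mode frequencies (Hz) for the three connector types
reference = 100.0
candidates = {"RBE2": 108.0, "CBAR": 105.0, "CWELD": 101.0}
best = min(candidates, key=lambda name: percent_error(candidates[name], reference))
```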
Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions
NASA Astrophysics Data System (ADS)
Rasshofer, R. H.; Gresser, K.
2005-05-01
Automotive radar and lidar sensors represent key components for next generation driver assistance functions (Jones, 2001). Today, their use is limited to comfort applications in premium segment vehicles, although an evolution towards more safety-oriented functions is taking place. Radar sensors available on the market today suffer from low angular resolution and poor target detection at medium ranges (30 to 60 m) over azimuth angles larger than ±30°. In contrast, lidar sensors show large sensitivity to environmental influences (e.g. snow, fog, dirt). Both sensor technologies today have a rather high cost level, forbidding their widespread usage in mass markets. A common approach to overcome individual sensor drawbacks is the employment of data fusion techniques (Bar-Shalom, 2001). Raw data fusion requires a common, standardized data interface to easily integrate a variety of asynchronous sensor data into a fusion network. Moreover, next generation sensors should be able to dynamically adapt to new situations and should have the ability to work in cooperative sensor environments. As vehicular function development today is being shifted more and more towards virtual prototyping, mathematical sensor models should be available. These models should take into account the sensor's functional principle as well as all typical measurement errors generated by the sensor.
Nagelhout, Gera E; Willemsen, Marc C; Gebhardt, Winifred A; van den Putte, Bas; Hitchman, Sara C; Crone, Matty R; Fong, Geoffrey T; van der Heiden, Sander; de Vries, Hein
2012-11-01
This study examined whether smokers' perceived level of stigmatization changed after the implementation of smoke-free hospitality industry legislation and whether smokers who smoked outside bars reported more perceived stigmatization. Longitudinal data from the International Tobacco Control (ITC) Netherlands Survey was used, involving a nationally representative sample of 1447 smokers aged 15 years and older. Whether smoke-free legislation increases smokers' perceived stigmatization depends on how smokers feel about smoking outside. The level of perceived stigmatization did not change after the implementation of smoke-free hospitality industry legislation in the Netherlands, possibly because most Dutch smokers do not feel negatively judged when smoking outside. Copyright © 2012 Elsevier Ltd. All rights reserved.
Maximizing return on socioeconomic investment in phase II proof-of-concept trials.
Chen, Cong; Beckman, Robert A
2014-04-01
Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
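The benefit-cost comparison above can be sketched numerically. Assuming a prior probability that a drug is truly active, phase II operating characteristics (α, power), and per-trial sample sizes, a simplified version of the ratio behaves as the abstract describes: small, high-bar POC trials beat large, permissive ones (all numbers are illustrative, and the risk adjustment here is deliberately simplified relative to the paper's definition):

```python
def benefit_cost_ratio(p_active, power, alpha, n_phase2, n_phase3):
    """Simplified sketch of the abstract's metric: expected count of
    truly active drugs advanced, divided by expected total sample size
    across phases II and III."""
    true_go = p_active * power            # active drugs correctly advanced
    false_go = (1.0 - p_active) * alpha   # inactive drugs wrongly advanced
    expected_n = n_phase2 + (true_go + false_go) * n_phase3
    return true_go / expected_n

# Small POC trial with a high Go bar vs. a larger, more permissive one
small_high_bar = benefit_cost_ratio(0.2, 0.80, 0.05, 50, 500)
large_low_bar = benefit_cost_ratio(0.2, 0.90, 0.10, 200, 500)
```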
Dynamics of Nearshore Sand Bars and Infra-gravity Waves: The Optimal Theory Point of View
NASA Astrophysics Data System (ADS)
Bouchette, F.; Mohammadi, B.
2016-12-01
It is well known that the dynamics of near-shore sand bars are partly controlled by the features (location of nodes, amplitude, length, period) of the so-called infra-gravity waves. Reciprocally, changes in the location, size and shape of near-shore sand bars can control wave/wave interactions, which in turn alter the infra-gravity content of the near-shore wave energy spectrum. The coupling between infra-gravity waves and near-shore bars is thus definitely two-way. Regarding numerical modelling, several approaches have already been considered to analyze such coupled dynamics. Most of them are based on the following strategy: 1) define an energy spectrum including infra-gravity, 2) tentatively compute the radiation stresses driven by this energy spectrum, 3) compute sediment transport and changes in the seabottom elevation including sand bars, and 4) loop on the computation of infra-gravity taking into account the morphological changes. In this work, we consider an alternative approach named Nearshore Optimal Theory, a change of viewpoint for the modeling of near-shore hydro-morphodynamics and wave/wave/seabottom interactions. Optimal theory applied to near-shore hydro-morphodynamics arose with the design of solid coastal defense structures by shape optimization methods, and is now being extended to model the dynamics of any near-shore system combining waves and sand. The basics are the following: the near-shore system state is described through a functional J representative, in some way, of the energy of the system. This J is computed from a model embedding only the physics to be studied (here, hydrodynamics forced by simple infra-gravity). The paradigm is then that the system will evolve so that the energy J tends to a minimum, regardless of the complexity of wave propagation or wave/bottom interactions. As soon as J embeds the physics to be explored, the method does not require a comprehensive model.
Near-shore Optimal Theory has already given promising results for the generation of near-shore sand bars from scratch and their growth when forced by fair-weather waves. Here, we use it to explore the coupling between a very simple infra-gravity content and the nucleation of near-shore sand bars. It is shown that even a very poor infra-gravity content strongly improves the generation of sand bars.
Wetmore, Kelly M.; Price, Morgan N.; Waters, Robert J.; ...
2015-05-12
Transposon mutagenesis with next-generation sequencing (TnSeq) is a powerful approach to annotate gene function in bacteria, but existing protocols for TnSeq require laborious preparation of every sample before sequencing. Thus, the existing protocols are not amenable to the throughput necessary to identify phenotypes and functions for the majority of genes in diverse bacteria. Here, we present a method, random bar code transposon-site sequencing (RB-TnSeq), which increases the throughput of mutant fitness profiling by incorporating random DNA bar codes into Tn5 and mariner transposons and by using bar code sequencing (BarSeq) to assay mutant fitness. RB-TnSeq can be used with any transposon, and TnSeq is performed once per organism instead of once per sample. Each BarSeq assay requires only a simple PCR, and 48 to 96 samples can be sequenced on one lane of an Illumina HiSeq system. We demonstrate the reproducibility and biological significance of RB-TnSeq with Escherichia coli, Phaeobacter inhibens, Pseudomonas stutzeri, Shewanella amazonensis, and Shewanella oneidensis. To demonstrate the increased throughput of RB-TnSeq, we performed 387 successful genome-wide mutant fitness assays representing 130 different bacterium-carbon source combinations and identified 5,196 genes with significant phenotypes across the five bacteria. In P. inhibens, we used our mutant fitness data to identify genes important for the utilization of diverse carbon substrates, including a putative D-mannose isomerase that is required for mannitol catabolism. RB-TnSeq will enable the cost-effective functional annotation of diverse bacteria using mutant fitness profiling. A large challenge in microbiology is the functional assessment of the millions of uncharacterized genes identified by genome sequencing. Transposon mutagenesis coupled to next-generation sequencing (TnSeq) is a powerful approach to assign phenotypes and functions to genes. However, the current strategies for TnSeq are too laborious to be applied to hundreds of experimental conditions across multiple bacteria. Here, we describe an approach, random bar code transposon-site sequencing (RB-TnSeq), which greatly simplifies the measurement of gene fitness by using bar code sequencing (BarSeq) to monitor the abundance of mutants. We performed 387 genome-wide fitness assays across five bacteria and identified phenotypes for over 5,000 genes. RB-TnSeq can be applied to diverse bacteria and is a powerful tool to annotate uncharacterized genes using phenotype data.
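The core BarSeq computation is counting each bar code's reads and comparing relative abundances between a time-zero sample and a condition sample. A toy version of the per-strain fitness score (log2 abundance ratio with pseudocounts; a sketch of the idea only, not the published normalization):

```python
import math

def barseq_fitness(counts_t0, counts_cond):
    """Toy per-strain fitness: log2 ratio of a bar code's relative
    abundance in the condition sample vs. the time-zero sample,
    with a +1 pseudocount to guard against zeros."""
    tot0 = sum(counts_t0.values())
    totc = sum(counts_cond.values())
    return {bc: math.log2(((counts_cond.get(bc, 0) + 1) / totc)
                          / ((counts_t0[bc] + 1) / tot0))
            for bc in counts_t0}
```

A strain that grows under the condition gets a positive score; one that is depleted gets a negative score.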
Finding minimum-quotient cuts in planar graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, J.K.; Phillips, C.A.
Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and of Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum-b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem, also a pseudopolynomial-time algorithm, guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
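The quotient of a given cut follows directly from the definition above. A short sketch on an explicit edge-cost/vertex-weight representation (the data-structure choices are ours):

```python
def cut_quotient(edge_costs, weights, S):
    """Quotient of the cut (S, S-bar): c(S, S-bar) / min(w(S), w(S-bar)).
    edge_costs: dict (u, v) -> cost; weights: dict vertex -> weight."""
    S = set(S)
    cut_cost = sum(c for (u, v), c in edge_costs.items()
                   if (u in S) != (v in S))       # edges crossing the cut
    w_S = sum(w for v, w in weights.items() if v in S)
    w_Sbar = sum(w for v, w in weights.items() if v not in S)
    return cut_cost / min(w_S, w_Sbar)

# 4-cycle with unit costs and weights: the cut {1, 2} crosses two edges
quotient = cut_quotient({(1, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0, (4, 1): 1.0},
                        {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}, {1, 2})
```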
TrafficGen Architecture Document
2016-01-01
Fig. 5 shows TrafficGen traffic flows viewed in SDT3D. Scripts contain commands to have the network node listen on specific ports and flows describing the start time, stop time, and specific traffic. Flows are arranged vertically with time presented horizontally; individual traffic flows are represented by horizontal bars indicating the start time and stop time.
Kotov exercises on the SchRED during Expedition 15
2007-05-06
ISS015-E-08320 (6 May 2007) --- Cosmonaut Oleg V. Kotov, Expedition 15 flight engineer representing Russia's Federal Space Agency, uses the short bar for the Interim Resistive Exercise Device (IRED) to perform upper body strengthening pull-ups. The IRED hardware is located in the Unity node of the International Space Station.
High Court Hesitant to Bar Pledge in Schools
ERIC Educational Resources Information Center
Hendrie, Caroline
2004-01-01
This article reports on a lawsuit filed by Michael A. Newdow, a California atheist, on behalf of his daughter, against inclusion of the words "under God" in public schools' recitals of the United States Pledge of Allegiance. He said that the words "under God" represent "religious dogma" that is needlessly divisive.…
Weinberger, Steven E; Hoyt, David B; Lawrence, Hal C; Levin, Saul; Henley, Douglas E; Alden, Errol R; Wilkerson, Dean; Benjamin, Georges C; Hubbard, William C
2015-04-07
Deaths and injuries related to firearms constitute a major public health problem in the United States. In response to firearm violence and other firearm-related injuries and deaths, an interdisciplinary, interprofessional group of leaders of 8 national health professional organizations and the American Bar Association, representing the official policy positions of their organizations, advocate a series of measures aimed at reducing the health and public health consequences of firearms. The specific recommendations include universal background checks of gun purchasers, elimination of physician "gag laws," restricting the manufacture and sale of military-style assault weapons and large-capacity magazines for civilian use, and research to support strategies for reducing firearm-related injuries and deaths. The health professional organizations also advocate for improved access to mental health services and avoidance of stigmatization of persons with mental and substance use disorders through blanket reporting laws. The American Bar Association, acting through its Standing Committee on Gun Violence, confirms that none of these recommendations conflict with the Second Amendment or previous rulings of the U.S. Supreme Court.
The Non-Axisymmetric Milky Way
NASA Technical Reports Server (NTRS)
Spergel, David N.
1996-01-01
The Dwek et al. model represents the current state-of-the-art model for the stellar structure of our Galaxy. The improvements we have made to this model take a number of forms: (1) the construction of a more detailed dust model so that we can extend our modeling to the galactic plane; (2) simultaneous fits to the bulge and the disk; (3) the construction of the first self-consistent model for a galactic bar; and (4) the development and application of algorithms for constructing nonparametric bar models. The improved Galaxy model has enabled a number of exciting science projects. In Zhao et al., we show that the number and duration of microlensing events seen by the OGLE and MACHO collaborations towards the bulge were consistent with the predictions of our bar model. In Malhotra et al., we constructed an infrared Tully-Fisher (TF) relation for the local group. We found the tightest TF relation ever seen in any band and in any group of galaxies. The tightness of the correlation places strong constraints on galaxy formation models and provides an independent check of the Cepheid distance scale.
Texture Evolution in a Ti-Ta-Nb Alloy Processed by Severe Plastic Deformation
NASA Astrophysics Data System (ADS)
Cojocaru, Vasile-Danut; Raducanu, Doina; Gloriant, Thierry; Cinca, Ion
2012-05-01
Titanium alloys are extensively used in a variety of applications because of their good mechanical properties, high biocompatibility, and corrosion resistance. Recently, β-type Ti alloys containing Ta and Nb have received much attention because they feature not only high specific strength but also biocorrosion resistance, no allergy problems, and biocompatibility. A Ti-25Ta-25Nb β-type titanium alloy was subjected to severe plastic deformation (SPD) processing by accumulative roll bonding and investigated with the aim of observing the texture developed during SPD processing. Texture data expressed by pole figures, inverse pole figures, and orientation distribution functions for the (110), (200), and (211) β-Ti peaks were obtained by XRD investigations. The results showed that it is possible to obtain high-intensity shear texture modes ({001}<110>) and well-developed α- and γ-fibers; the most important is the α-fiber ({001}<11̄0> to {114}<11̄0> to {112}<11̄0>). High-intensity texture along certain crystallographic directions represents a way to obtain materials with highly anisotropic properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; Abulencia, A.; /Helsinki Inst. of Phys.
We report the measurements of the tt̄ production cross section and of the top quark mass using 1.02 fb⁻¹ of pp̄ data collected with the CDF II detector at the Fermilab Tevatron. We select events with six or more jets on which a number of kinematical requirements are imposed by means of a neural network algorithm. At least one of these jets must be identified as initiated by a b-quark candidate by the reconstruction of a secondary vertex. The cross section is measured to be σ(tt̄) = 8.3 ± 1.0 (stat.) +2.0/−1.5 (syst.) ± 0.5 (lumi.) pb, which is consistent with the standard model prediction. The top quark mass of 174.0 ± 2.2 (stat.) ± 4.8 (syst.) GeV/c² is derived from a likelihood fit incorporating reconstructed mass distributions representative of signal and background.
NASA Astrophysics Data System (ADS)
Bradley, A. M.; Segall, P.
2012-12-01
We describe software, in development, to calculate elastostatic displacement Green's functions and their derivatives for point and polygonal dislocations in three-dimensional homogeneous elastic layers above an elastic or a viscoelastic halfspace. The steps to calculate a Green's function for a point source at depth zs are as follows. 1. A grid in wavenumber space is chosen. 2. A six-element complex rotated stress-displacement vector x is obtained at each grid point by solving a two-point boundary value problem (2P-BVP). If the halfspace is viscoelastic, the solution is inverse Laplace transformed. 3. For each receiver, x is propagated to the receiver depth zr (often zr = 0) and then, 4, inverse Fourier transformed, with the Fourier component corresponding to the receiver's horizontal position. 5. The six elements are linearly combined into displacements and their derivatives. The dominant work is in step 2. The grid is chosen to represent the wavenumber-space solution with as few points as possible. First, the wavenumber space is transformed to increase sampling density near 0 wavenumber. Second, a tensor-product grid of Chebyshev points of the first kind is constructed in each quadrant of the transformed wavenumber space. Moment-tensor-dependent symmetries further reduce work. The numerical solution of the 2P-BVP in step 2 involves solving a linear equation A x = b. Half of the elements of x are of geophysical interest; the subset depends on whether zr ≤ zs. Denote these x̂. As wavenumber k increases, x̂ can become inaccurate in finite precision arithmetic for two reasons: 1. The condition number of A becomes too large. 2. The norm-wise relative error (NWRE) in x̂ is large even though it is small in x. To address this problem, a number of researchers have used determinants to obtain x. This may be the best approach for 2P-BVPs of dimension six or smaller, where the combinatorial increase in work is still moderate. But there is an alternative.
Let Ā be the matrix A after scaling its columns to unit infinity norm and x̄ the correspondingly scaled x. If Ā is well conditioned, as it often is in (visco)elastostatic problems, then using determinants is unnecessary. Multiply each side of A x = b by a propagator matrix to the computation depth zcd prior to storing the matrix in finite precision; zcd is determined by the rule that zr and zcd must be on opposite sides of zs. Let the resulting matrix be A(zcd). Three facts imply that this rule controls the NWRE in x̂: 1. Diagonally scaling a matrix changes the accuracy of an element of the solution by about one ULP (unit in the last place). 2. If the NWRE of x̄ is small, then the largest elements are accurate. 3. zcd controls the magnitude of elements in x̄. In step 4, to avoid numerically Fourier transforming the (nearly) non-square-integrable functions that arise when the receiver and source depths are (nearly) the same, a function is divided into an analytical part and a numerical part that goes quickly to 0 as k → ∞. Our poster will describe these calculations, present a preliminary interface to a C-language package in development, and show some physical results.
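The column-scaling idea described above can be sketched in a few lines. This is an illustrative example, not the authors' software: it only shows the step of scaling the columns of A to unit infinity norm before solving, which often improves the conditioning of badly scaled systems like these; the propagator-matrix and 2P-BVP machinery is omitted, and the function name is our own.

```python
import numpy as np

def column_scaled_solve(A, b):
    """Solve A x = b after scaling the columns of A to unit infinity norm.

    Illustrative sketch only: with D = diag(1 / ||a_j||_inf), solve
    (A D) y = b and recover x = D y. The scaled matrix Abar = A D often
    has a much smaller condition number when A is badly column-scaled.
    """
    scale = np.abs(A).max(axis=0)   # infinity norm of each column
    Abar = A / scale                # Abar: columns have unit infinity norm
    y = np.linalg.solve(Abar, b)
    return y / scale                # undo the scaling: x = D y
```

Run on a deliberately ill-scaled 2x2 system, the residual of the recovered x is at machine-precision level.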
The paradox of extreme high-altitude migration in bar-headed geese Anser indicus
Hawkes, L. A.; Balachandran, S.; Batbayar, N.; Butler, P. J.; Chua, B.; Douglas, D. C.; Frappell, P. B.; Hou, Y.; Milsom, W. K.; Newman, S. H.; Prosser, D. J.; Sathiyaselvam, P.; Scott, G. R.; Takekawa, J. Y.; Natsagdorj, T.; Wikelski, M.; Witt, M. J.; Yan, B.; Bishop, C. M.
2013-01-01
Bar-headed geese are renowned for migratory flights at extremely high altitudes over the world's tallest mountains, the Himalayas, where partial pressure of oxygen is dramatically reduced while flight costs, in terms of rate of oxygen consumption, are greatly increased. Such a mismatch is paradoxical, and it is not clear why geese might fly higher than is absolutely necessary. In addition, direct empirical measurements of high-altitude flight are lacking. We test whether migrating bar-headed geese actually minimize flight altitude and make use of favourable winds to reduce flight costs. By tracking 91 geese, we show that these birds typically travel through the valleys of the Himalayas and not over the summits. We report maximum flight altitudes of 7290 m and 6540 m for southbound and northbound geese, respectively, but with 95 per cent of locations received from less than 5489 m. Geese travelled along a route that was 112 km longer than the great circle (shortest distance) route, with transit ground speeds suggesting that they rarely profited from tailwinds. Bar-headed geese from these eastern populations generally travel only as high as the terrain beneath them dictates and rarely in profitable winds. Nevertheless, their migration represents an enormous challenge in conditions where humans and other mammals are only able to operate at levels well below their sea-level maxima. PMID:23118436
Mangano, Francesco; Luongo, Fabrizia; Shibli, Jamil Awad; Anil, Sukumaran; Mangano, Carlo
2014-01-01
Purpose. Nowadays, the advancements in direct metal laser sintering (DMLS) technology allow the fabrication of titanium dental implants. The aim of this study was to evaluate implant survival, complications, and peri-implant marginal bone loss of DMLS implants used to support bar-retained maxillary overdentures. Materials and Methods. Over a 2-year period, 120 implants were placed in the maxilla of 30 patients (18 males, 12 females) to support bar-retained maxillary overdentures (ODs). Each OD was supported by 4 implants splinted by a rigid cobalt-chrome bar. At each annual follow-up session, clinical and radiographic parameters were assessed. The outcome measures were implant failure, biological and prosthetic complications, and peri-implant marginal bone loss (distance between the implant shoulder and the first visible bone-to-implant contact, DIB). Results. The 3-year implant survival rate was 97.4% (implant-based) and 92.9% (patient-based). Three implants failed. The incidence of biological complication was 3.5% (implant-based) and 7.1% (patient-based). The incidence of prosthetic complication was 17.8% (patient-based). No detrimental effects on marginal bone level were evidenced. Conclusions. The use of 4 DMLS titanium implants to support bar-retained maxillary ODs seems to represent a safe and successful procedure. Long-term clinical studies on a larger sample of patients are needed to confirm these results.
The CKM Matrix and The Unitarity Triangle: Another Look
NASA Astrophysics Data System (ADS)
Buras, Andrzej J.; Parodi, Fabrizio; Stocchi, Achille
2003-01-01
The unitarity triangle can be determined by means of two measurements of its sides or angles. Assuming the same relative errors on the angles (α, β, γ) and the sides (Rb, Rt), we find that the pairs (γ, β) and (γ, Rb) are most efficient in determining (ρ̄, η̄) that describe the apex of the unitarity triangle. They are followed by (α, β), (α, Rb), (Rt, β), (Rt, Rb) and (Rb, β). As the set |Vus|, |Vcb|, Rt and β appears to be the best candidate for the fundamental set of flavour-violating parameters in the coming years, we show various constraints on the CKM matrix in the (Rt, β) plane. Using the best available input we determine the universal unitarity triangle for models with minimal flavour violation (MFV) and compare it with the one in the Standard Model. We present allowed ranges for sin 2β, sin 2α, γ, Rb, Rt and ΔMs within the Standard Model and MFV models. We also update the allowed range for the function Ftt that parametrizes various MFV models.
NASA Technical Reports Server (NTRS)
Robinson, David; Okajima, Takashi; Serlemitsos, Peter; Soong, Yang
2012-01-01
The Astro-H mission is led by the Japan Aerospace Exploration Agency (JAXA) in collaboration with many other institutions, including the NASA Goddard Space Flight Center. Goddard's contributions include two soft X-ray telescopes (SXTs). The telescopes have an effective area of 562 square cm at 1 keV and 425 square cm at 6 keV, with an image-quality requirement of 1.7 arc-minutes half-power diameter (HPD). The engineering model has demonstrated an HPD of 1.1 arc-minutes. The design of the SXT is based on the successful Suzaku mission mirrors, with some enhancements to improve the image quality. Two major enhancements are bonding the X-ray mirror foils to alignment bars instead of allowing the mirrors to float, and fabricating alignment bars with grooves accurate to within 5 microns. An engineering model SXT was recently built and subjected to several tests, including vibration, thermal, and X-ray performance in a beamline. Several lessons were learned during this testing that will be incorporated in the flight design. Test results and optical performance are discussed, along with a description of the design of the SXT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van den Bergh, Sidney, E-mail: sidney.vandenbergh@nrc.gc.ca
Lenticular galaxies with M_B < -21.5 are almost exclusively unbarred, whereas both barred and unbarred objects occur at fainter luminosity levels. This effect is observed both for objects classified in blue light, and for those that were classified in the infrared. This result suggests that the most luminous (massive) S0 galaxies find it difficult to form bars. As a result, the mean luminosity of unbarred lenticular galaxies in both B and IR light is observed to be ≈0.4 mag brighter than that of barred lenticulars. A small contribution to the observed luminosity difference that is found between SA0 and SB0 galaxies may also be due to the fact that there is an asymmetry between the effects of small classification errors on SA0 and SB0 galaxies. An elliptical (E) galaxy might be misclassified as a lenticular (S0) or an S0 as an E. However, an E will never be misclassified as an SB0, nor will an SB0 ever be called an E. This asymmetry is important because E galaxies are typically twice as luminous as S0 galaxies. The present results suggest that the evolution of luminous lenticular galaxies may be closely linked to that of elliptical galaxies, whereas fainter lenticulars might be more closely associated with ram-pressure-stripped spiral galaxies. Finally, it is pointed out that fine details of the galaxy formation process might account for some of the differences between the classifications of the same galaxy by individual competent morphologists.
The Galileo probe Doppler wind experiment: Measurement of the deep zonal winds on Jupiter
NASA Astrophysics Data System (ADS)
Atkinson, David H.; Pollack, James B.; Seiff, Alvin
1998-09-01
During its descent into the upper atmosphere of Jupiter, the Galileo probe transmitted data to the orbiter for 57.5 min. Accurate measurements of the probe radio frequency, driven by an ultrastable oscillator, allowed an accurate time history of the probe motions to be reconstructed. Removal from the probe radio frequency profile of known Doppler contributions, including the orbiter trajectory, the probe descent velocity, and the rotation of Jupiter, left a measurable frequency residual due to Jupiter's zonal winds, and microdynamical motion of the probe from spin, swing under the parachute, atmospheric turbulence, and aerodynamic buffeting. From the assumption of the dominance of the zonal horizontal winds, the frequency residuals were inverted and resulted in the first in situ measurements of the vertical profile of Jupiter's deep zonal winds. A number of error sources with the capability of corrupting the frequency measurements or the interpretation of the frequency residuals were considered using reasonable assumptions and calibrations from prelaunch and in-flight testing. It is found that beneath the cloud tops (about 700 mbar) the winds are prograde and rise rapidly to 170 m/s at 4 bars. Beyond 4 bars to the depth at which the link with the probe was lost, nearly 21 bars, the winds remain constant and strong. Corrections for the high temperatures encountered by the probe have recently been completed and provide no evidence of diminishing or strengthening of the zonal wind profile in the deeper regions explored by the Galileo probe.
Yamaguchi, Mei S; McCartney, Mitchell M; Linderholm, Angela L; Ebeler, Susan E; Schivo, Michael; Davis, Cristina E
2018-05-12
The human respiratory tract releases volatile metabolites into exhaled breath that can be utilized for noninvasive health diagnostics. To understand the origin of this metabolic process, our group has previously analyzed the headspace above human epithelial cell cultures using solid phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS). In the present work, we improve our model by employing sorbent-covered magnetic stir bars for headspace sorptive extraction (HSSE). Sorbent-coated stir bar analyte recovery increased by 52 times and captured 97 more compounds than SPME. Our data show that HSSE is preferred over liquid extraction via stir bar sorptive extraction (SBSE), which failed to distinguish volatiles unique to the cell samples compared against media controls. Two different cellular media were also compared, and we found that Opti-MEM® is preferred for volatile analysis. We optimized HSSE analytical parameters such as extraction time (24 h), desorption temperature (300 °C) and desorption time (7 min). Finally, we developed an internal standard for cell culture VOC studies by introducing 842 ng of deuterated decane per 5 mL of cell medium to account for error from extraction, desorption, chromatography and detection. This improved model will serve as a platform for future metabolic cell culture studies to examine changes in epithelial VOCs caused by perturbations such as viral or bacterial infections, opening opportunities for improved, noninvasive pulmonary diagnostics. Copyright © 2018 Elsevier B.V. All rights reserved.
The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.
Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark
2018-07-01
The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and the rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge®), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist-detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the numbers of wasted and missing IV doses by 14,176 and 2268 doses, respectively (p < 0.001). The overall cost savings of using the system was $144,019 over 3 months. The total number of errors detected was 1160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety. The implementation of the IVWMS increased patient safety by enforcing standard operating procedures and barcode verification. Published by Elsevier B.V.
Woolford, Lucy; Rector, Annabel; Van Ranst, Marc; Ducki, Andrea; Bennett, Mark D.; Nicholls, Philip K.; Warren, Kristin S.; Swan, Ralph A.; Wilcox, Graham E.; O'Hara, Amanda J.
2007-01-01
Conservation efforts to prevent the extinction of the endangered western barred bandicoot (Perameles bougainville) are currently hindered by a progressively debilitating cutaneous and mucocutaneous papillomatosis and carcinomatosis syndrome observed in captive and wild populations. In this study, we detected a novel virus, designated the bandicoot papillomatosis carcinomatosis virus type 1 (BPCV1), in lesional tissue from affected western barred bandicoots using multiply primed rolling-circle amplification and PCR with the cutaneotropic papillomavirus primer pairs FAP59/FAP64 and AR-L1F8/AR-L1R9. Sequencing of the BPCV1 genome revealed a novel prototype virus exhibiting genomic properties of both the Papillomaviridae and the Polyomaviridae. Papillomaviral properties included a large genome size (∼7.3 kb) and the presence of open reading frames (ORFs) encoding canonical L1 and L2 structural proteins. The genomic organization in which structural and nonstructural proteins were encoded on different strands of the double-stranded genome and the presence of ORFs encoding the nonstructural proteins large T and small t antigens were, on the other hand, typical polyomaviral features. BPCV1 may represent the first member of a novel virus family, descended from a common ancestor of the papillomaviruses and polyomaviruses recognized today. Alternatively, it may represent the product of ancient recombination between members of these two virus families. The discovery of this virus could have implications for the current taxonomic classification of Papillomaviridae and Polyomaviridae and can provide further insight into the evolution of these ancient virus families. PMID:17898069
Evaluating process origins of sand-dominated fluvial stratigraphy
NASA Astrophysics Data System (ADS)
Chamberlin, E.; Hajek, E. A.
2015-12-01
Sand-dominated fluvial stratigraphy is often interpreted as indicating times of relatively slow subsidence because of the assumption that fine sediment (silt and clay) is reworked or bypassed during periods of low accommodation. However, sand-dominated successions may instead represent proximal, coarse-grained reaches of paleo-river basins and/or fluvial systems with a sandy sediment supply. Differentiating between these cases is critical for accurately interpreting mass-extraction profiles, basin-subsidence rates, and paleo-river avulsion and migration behavior from ancient fluvial deposits. We explore the degree to which sand-rich accumulations reflect supply-driven progradation or accommodation-limited reworking, by re-evaluating the Castlegate Sandstone (Utah, USA) and the upper Williams Fork Formation (Colorado, USA) - two Upper Cretaceous sandy fluvial deposits previously interpreted as having formed during periods of relatively low accommodation. Both units comprise amalgamated channel and bar deposits with minor intra-channel and overbank mudstones. To constrain relative reworking, we quantify the preservation of bar deposits in each unit using detailed facies and channel-deposit mapping, and compare bar-deposit preservation to expected preservation statistics generated with object-based models spanning a range of boundary conditions. To estimate the grain-size distribution of paleo-sediment input, we leverage results of experimental work that shows both bed-material deposits and accumulations on the downstream side of bars ("interbar fines") sample suspended and wash loads of active flows. We measure grain-size distributions of bar deposits and interbar fines to reconstruct the relative sandiness of paleo-sediment supplies for both systems. 
By using these novel approaches to test whether sand-rich fluvial deposits reflect river systems with accommodation-limited reworking and/or particularly sand-rich sediment loads, we can gain insight into large-scale downstream-fining and mass-extraction trends in basins with limited exposure.
Nagelhout, Gera E.; Wolfson, Tanya; Zhuang, Yue-Lin; Gamst, Anthony; Willemsen, Marc C.
2015-01-01
Introduction: Several states implemented comprehensive smoke-free laws in workplaces (14 states), restaurants (17 states), and bars (13 states) between 2002 and 2007. We tested the hypothesis that public support for smoke-free laws increases at a higher rate in states that implemented smoke-free laws between 2002 and 2007 (group A) than in states that implemented smoke-free laws after that time or not at all (group B). The period before the implementation (1992–2001) was also considered. Methods: Data was used from the Current Population Survey (CPS) Tobacco Use Supplements (TUS), which is representative for the U.S. adult population. Respondents were asked whether they thought smoking should not be allowed in indoor work areas, restaurants, and bars and cocktail lounges. Differences in trends were analyzed with binomial mixed effects models. Results: Population support for smoke-free restaurants and bars was higher among group A than among group B before 2002. After 2002, support for smoke-free restaurants and bars increased at a higher rate among group A than among group B. Population support for smoke-free workplaces did not differ between group A and B, and the increase in support for smoke-free workplaces also did not differ between these groups. Conclusions: The positive association between the implementation of smoke-free restaurant and bar laws and the rate of increase in support for these laws partly supported the hypothesis. The implementation of the laws may have caused support to increase, but also states that have higher support may have been more likely to implement smoke-free laws. PMID:25143293
Maxillomandibular Fixation by Plastic Surgeons: Cost Analysis and Utilization of Resources.
Farber, Scott J; Snyder-Warwick, Alison K; Skolnick, Gary B; Woo, Albert S; Patel, Kamlesh B
2016-09-01
Maxillomandibular fixation (MMF) can be performed using various techniques. Two common approaches used are arch bars and bone screws. Arch bars are the gold standard and inexpensive, but often require increased procedure time. Bone screws with wire fixation is a popular alternative, but more expensive than arch bars. The differences in costs of care, complications, and operative times between these 2 techniques are analyzed. A chart review was conducted on patients treated over the last 12 years at our institution. Forty-four patients with CPT code 21453 (closed reduction of mandible fracture with interdental fixation) with an isolated mandible fracture were used in our data collection. The operating room (OR) costs, procedure duration, and complications for these patients were analyzed. Operative times were significantly shorter for patients treated with bone screws (P < 0.002). The costs for one trip to the OR for either method of fixation did not show any significant differences (P < 0.840). More patients with arch bar fixation (62%) required a second trip to the OR for removal in comparison to those with screw fixation (31%) (P < 0.068). This additional trip to the OR added significant cost. There were no differences in patient complications between these 2 fixation techniques. The MMF with bone screws represents an attractive alternative to fixation with arch bars in appropriate scenarios. Screw fixation offers reduced costs, fewer trips to the OR, and decreased operative duration without a difference in complications. Cost savings were noted most significantly in a decreased need for secondary procedures in patients who were treated with MMF screws. Screw fixation offers potential for reducing the costs of care in treating patients with minimally displaced or favorable mandible fractures.
3DXRD at the Advanced Photon Source: Orientation Mapping and Deformation Studies
2010-09-01
statistics in the same sample (Hefferan et al. (2010)). This low orientation uncertainty or error bar might be surprising at first since we do measurements...may be a combination of noise and real gradients. Some of the intra-granular disorder in (b) should be interpreted as statistical and only...cooling (AC), but are not present after ice water quenching (IWQ). The presence of SRO domains is known to lead to planar slip bands during tensile
Numerical prediction of a draft tube flow taking into account uncertain inlet conditions
NASA Astrophysics Data System (ADS)
Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy
2012-11-01
The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling in order to take into account in the numerical prediction the physical uncertainties existing on the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also variance of these quantities from which error bars can be deduced on the computed profiles, thus making more significant the comparison between experiment and computation.
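The idea of turning uncertain inlet conditions into error bars on computed profiles can be illustrated with the simplest non-intrusive approach, plain Monte Carlo sampling. The study itself couples a (likely more efficient) non-intrusive UQ method to a RANS solver; here the solver is a stand-in function and all names are our own.

```python
import numpy as np

def nonintrusive_uq(model, mean, std, n=2000, seed=0):
    """Non-intrusive UQ of one uncertain inlet parameter by Monte Carlo.

    Sketch only: sample the uncertain input, run the deterministic
    solver (`model`, a stand-in for a RANS computation) on each sample,
    and report the output mean plus a 1-sigma error bar from the
    output's sample standard deviation.
    """
    rng = np.random.default_rng(seed)
    samples = rng.normal(mean, std, size=n)       # uncertain inlet values
    outputs = np.array([model(s) for s in samples])
    return outputs.mean(), outputs.std(ddof=1)    # (mean, error bar)
```

With a linear stand-in model the propagated mean and error bar match the analytical values, which is a quick sanity check before plugging in an expensive solver.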
NASA Technical Reports Server (NTRS)
Nixon, C. A.; Achterberg, R. K.; Romani, P. N.; Allen, M.; Zhang, X.; Teanby, N. A.; Irwin, P. G. J.; Flasar, F. M.
2010-01-01
The following six tables give the retrieved temperatures and volume mixing ratios of C2H2 and C2H6 and the formal errors on these results from the retrieval, as described in the manuscript. These are in the form of two-dimensional tables, specified on a latitudinal and vertical grid. The first column is the pressure in bar, and the second column gives the altitude in kilometers calculated from hydrostatic equilibrium, and applies to the equatorial profile only. The top row of the table specifies the planetographic latitude.
Multiconfiguration calculations of electronic isotope shift factors in Al I
NASA Astrophysics Data System (ADS)
Filippin, Livio; Beerwerth, Randolf; Ekman, Jörgen; Fritzsche, Stephan; Godefroid, Michel; Jönsson, Per
2016-12-01
The present work reports results from systematic multiconfiguration Dirac-Hartree-Fock calculations of electronic isotope shift factors for a set of transitions between low-lying levels of neutral aluminium. These electronic quantities together with observed isotope shifts between different pairs of isotopes provide the changes in mean-square charge radii of the atomic nuclei. Two computational approaches are adopted for the estimation of the mass- and field-shift factors. Within these approaches, different models for electron correlation are explored in a systematic way to determine a reliable computational strategy and to estimate theoretical error bars of the isotope shift factors.
Critical temperature of the Ising ferromagnet on the fcc, hcp, and dhcp lattices
NASA Astrophysics Data System (ADS)
Yu, Unjong
2015-02-01
By an extensive Monte Carlo calculation together with finite-size scaling and the multiple-histogram method, the critical coupling constant (Kc = J/kBTc) of the Ising ferromagnet on the fcc, hcp, and double-hcp (dhcp) lattices was obtained with unprecedented precision: Kc(fcc) = 0.1020707(2), Kc(hcp) = 0.1020702(1), and Kc(dhcp) = 0.1020706(2). The critical temperature Tc of the hcp lattice is found to be higher than those of the fcc and dhcp lattices. The dhcp lattice seems to have a higher Tc than the fcc lattice, but the difference is within error bars.
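The kind of calculation summarized above can be sketched with a bare-bones Metropolis sampler on the fcc lattice, built here as the even-parity sites of an L³ cubic grid so that each site has 12 nearest neighbours. This is an illustration only, with parameter choices of our own; the paper's precision requires finite-size scaling and histogram reweighting far beyond this sketch.

```python
import numpy as np

def fcc_metropolis(L=4, K=0.102, sweeps=200, seed=0):
    """Metropolis sampling of the Ising ferromagnet on a small fcc lattice.

    Sketch only. fcc sites = even-parity sites of an L^3 cubic grid
    (L must be even for periodic boundaries); each site has the 12
    neighbours at permutations of (+-1, +-1, 0). K = J/(kB*T) is the
    dimensionless coupling. Returns |magnetization per spin|.
    """
    rng = np.random.default_rng(seed)
    sites = [(i, j, k) for i in range(L) for j in range(L) for k in range(L)
             if (i + j + k) % 2 == 0]
    index = {s: n for n, s in enumerate(sites)}
    deltas = [(a, b, 0) for a in (1, -1) for b in (1, -1)] + \
             [(a, 0, b) for a in (1, -1) for b in (1, -1)] + \
             [(0, a, b) for a in (1, -1) for b in (1, -1)]
    nbrs = np.array([[index[((i + di) % L, (j + dj) % L, (k + dk) % L)]
                      for di, dj, dk in deltas] for (i, j, k) in sites])
    spins = rng.choice(np.array([-1, 1]), size=len(sites))
    for _ in range(sweeps):
        for n in rng.permutation(len(sites)):
            dE = 2 * spins[n] * spins[nbrs[n]].sum()   # energy change, units of J
            if dE <= 0 or rng.random() < np.exp(-K * dE):
                spins[n] = -spins[n]
    return abs(spins.mean())
```

Deep in the ordered phase (K well above Kc ≈ 0.102) the magnetization saturates near 1, while at K = 0 it stays near 0; locating Kc itself to the quoted seven digits is what the finite-size-scaling machinery is for.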
Using Perturbative Least Action to Reconstruct Redshift-Space Distortions
NASA Astrophysics Data System (ADS)
Goldberg, David M.
2001-05-01
In this paper, we present a redshift-space reconstruction scheme that is analogous to and extends the perturbative least action (PLA) method described by Goldberg & Spergel. We first show that this scheme is effective in reconstructing even nonlinear observations. We then suggest that by varying the cosmology to minimize the quadrupole moment of a reconstructed density field, it may be possible to lower the error bars on the redshift distortion parameter, β, as well as to break the degeneracy between the linear bias parameter, b, and ΩM. Finally, we discuss how PLA might be applied to realistic redshift surveys.
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
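The core of temporal stability analysis, ranking candidate sampling points by the mean and spread of their relative difference from the grid mean, can be sketched as follows. This is a minimal illustration of classic TSA, not the stratified STSA variant proposed in the paper; all names and data are illustrative.

```python
def temporal_stability(series):
    """Rank candidate sampling points by temporal stability.

    series: dict point_id -> list of soil-moisture values, one per survey.
    Returns (point_id, mean_rel_diff, std_rel_diff) tuples sorted so the
    most representative point (smallest |mean relative difference| from
    the grid mean) comes first.
    """
    n_times = len(next(iter(series.values())))
    # Grid-mean moisture at each survey time
    grid_mean = [sum(v[t] for v in series.values()) / len(series)
                 for t in range(n_times)]
    ranked = []
    for pid, vals in series.items():
        rel = [(vals[t] - grid_mean[t]) / grid_mean[t] for t in range(n_times)]
        mrd = sum(rel) / n_times
        sdrd = (sum((r - mrd) ** 2 for r in rel) / n_times) ** 0.5
        ranked.append((pid, mrd, sdrd))
    ranked.sort(key=lambda x: abs(x[1]))
    return ranked

obs = {
    "p1": [0.20, 0.30, 0.25],   # tracks the grid mean closely
    "p2": [0.10, 0.20, 0.15],   # persistently dry
    "p3": [0.30, 0.40, 0.35],   # persistently wet
}
best = temporal_stability(obs)[0][0]
```

The paper's point is that this ranking breaks down when representativeness is dominated by random events such as irrigation, which motivates stratifying the grid before applying the analysis.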
Accounting for apparent deviations between calorimetric and van't Hoff enthalpies.
Kantonen, Samuel A; Henriksen, Niel M; Gilson, Michael K
2018-03-01
In theory, binding enthalpies directly obtained from calorimetry (such as ITC) and from the temperature dependence of the binding free energy (van't Hoff method) should agree. However, previous studies have often found them to be discrepant. Experimental binding enthalpies (both calorimetric and van't Hoff) are obtained for two host-guest pairs using ITC, and the discrepancy between the two enthalpies is examined. Modeling of artificial ITC data is also used to examine how different sources of error propagate to both types of binding enthalpies. For the host-guest pairs examined here, good agreement, to within about 0.4 kcal/mol, is obtained between the two enthalpies. Additionally, using artificial data, we find that different sources of error propagate to either enthalpy uniquely, with concentration error and heat error propagating primarily to calorimetric and van't Hoff enthalpies, respectively. With modern calorimeters, good agreement between van't Hoff and calorimetric enthalpies should be achievable, barring issues due to non-ideality or unanticipated measurement pathologies. Indeed, disagreement between the two can serve as a flag for error-prone datasets. A review of the underlying theory supports the expectation that these two quantities should be in agreement. We address and arguably resolve long-standing questions regarding the relationship between calorimetric and van't Hoff enthalpies. In addition, we show that comparison of these two quantities can be used as an internal consistency check of a calorimetry study. Copyright © 2017 Elsevier B.V. All rights reserved.
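The van't Hoff relation behind this comparison, ln K = -ΔH/(RT) + ΔS/R, recovers the binding enthalpy from the slope of ln K versus 1/T. A minimal sketch on noise-free synthetic data (not the authors' ITC analysis; names and values are illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def vant_hoff_enthalpy(temps_K, K_assoc):
    """Estimate the van't Hoff enthalpy from binding constants K(T).

    ln K = -dH/(R*T) + dS/R, so the slope of ln K versus 1/T is -dH/R.
    Ordinary least-squares slope over the supplied points.
    """
    x = [1.0 / T for T in temps_K]
    y = [math.log(K) for K in K_assoc]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -R * slope  # dH in J/mol

# Synthetic binding data with dH = -40 kJ/mol and dS = -50 J/(mol*K):
dH_true, dS = -40e3, -50.0
temps = [288.15, 298.15, 308.15]
Ks = [math.exp(-dH_true / (R * T) + dS / R) for T in temps]
dH_est = vant_hoff_enthalpy(temps, Ks)
```

With noise-free data the slope recovers ΔH exactly; the paper's observation is that real heat and concentration errors perturb the two enthalpy estimates differently.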
Optimization of Aimpoints for Coordinate Seeking Weapons
2015-09-01
aiming) and independent (ballistic) errors are taken into account, before utilizing each of the three damage functions representing the weapon. A Monte...characteristics such as the radius of the circle containing the weapon aimpoint, impact angle, dependent (aiming) and independent (ballistic) errors are taken...Dependent (Aiming) Error...Single Weapon Independent (Ballistic) Error
2001-03-01
flame length is about 230 mm. Figure 10 shows three characteristic structures of a cryogenic flame: a first expansion cone of length L1 = 15×Dlox...correctly represented. However, the computed flame length is longer than the experimental data. This phenomenon is due to the droplet injection
The Effectiveness of Using the Model Method to Solve Word Problems
ERIC Educational Resources Information Center
Bao, Lei
2016-01-01
The aim of this study is to investigate whether the model method is effective to assist primary students to solve word problems. The model method not only provides students with an opportunity to interpret the problem by drawing the rectangular bar but also helps students to visually represent problem situations and relevant relationships on the…
Uncertainty and Complexity in Mathematical Modeling
ERIC Educational Resources Information Center
Cannon, Susan O.; Sanders, Mark
2017-01-01
Modeling is an effective tool to help students access mathematical concepts. Finding a math teacher who has not drawn a fraction bar or pie chart on the board would be difficult, as would finding students who have not been asked to draw models and represent numbers in different ways. In this article, the authors will discuss: (1) the properties of…
The Evolution of Roles and Aspirations: Burgeoning Choices for Females.
ERIC Educational Resources Information Center
Scott, Robert A.
Traditional female status, roles, and aspirations and the changes that have occurred in American society are traced. While women were barred from colleges and universities in the 1800s, they now account for more than 50 percent of college students. It is projected that by the year 2000, women will represent an even larger percentage of the college…
Assessing the use of cognitive heuristic representativeness in clinical reasoning.
Payne, Velma L; Crowley, Rebecca S
2008-11-06
We performed a pilot study to investigate use of the cognitive heuristic Representativeness in clinical reasoning. We tested a set of tasks and assessments to determine whether subjects used the heuristics in reasoning, to obtain initial frequencies of heuristic use and related cognitive errors, and to collect cognitive process data using think-aloud techniques. The study investigates two aspects of the Representativeness heuristic - judging by perceived frequency and representativeness as causal beliefs. Results show that subjects apply both aspects of the heuristic during reasoning, and make errors related to misapplication of these heuristics. Subjects in this study rarely used base rates, showed significant variability in their recall of base rates, demonstrated limited ability to use provided base rates, and favored causal data in diagnosis. We conclude that the tasks and assessments we have developed provide a suitable test-bed to study the cognitive processes underlying heuristic errors.
Assessing Use of Cognitive Heuristic Representativeness in Clinical Reasoning
Payne, Velma L.; Crowley, Rebecca S.
2008-01-01
We performed a pilot study to investigate use of the cognitive heuristic Representativeness in clinical reasoning. We tested a set of tasks and assessments to determine whether subjects used the heuristics in reasoning, to obtain initial frequencies of heuristic use and related cognitive errors, and to collect cognitive process data using think-aloud techniques. The study investigates two aspects of the Representativeness heuristic - judging by perceived frequency and representativeness as causal beliefs. Results show that subjects apply both aspects of the heuristic during reasoning, and make errors related to misapplication of these heuristics. Subjects in this study rarely used base rates, showed significant variability in their recall of base rates, demonstrated limited ability to use provided base rates, and favored causal data in diagnosis. We conclude that the tasks and assessments we have developed provide a suitable test-bed to study the cognitive processes underlying heuristic errors. PMID:18999140
Current interactions from the one-form sector of nonlinear higher-spin equations
NASA Astrophysics Data System (ADS)
Gelfond, O. A.; Vasiliev, M. A.
2018-06-01
The form of higher-spin current interactions in the sector of one-forms is derived from the nonlinear higher-spin equations in AdS4. Quadratic corrections to higher-spin equations are shown to be independent of the phase of the parameter η = exp iφ in the full nonlinear higher-spin equations. The current deformation resulting from the nonlinear higher-spin equations is represented in the canonical form with the minimal number of space-time derivatives. The non-zero spin-dependent coupling constants of the resulting currents are determined in terms of the higher-spin coupling constant ηη̄. Our results confirm the conjecture that (anti-)self-dual nonlinear higher-spin equations result from the full system at η = 0 (η̄ = 0).
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.
2016-09-01
This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
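The staged idea, each simple error model improving incrementally on the previous one, can be illustrated with two of the four stages (bias correction and AR(1) updating) on synthetic errors. The transformation and Gaussian-mixture stages are omitted, and all names and data are illustrative, not the paper's code.

```python
def staged_error_reduction(obs, sim):
    """Sketch of two ERRIS-style stages on one-step-ahead forecasts.

    Stage 1: raw forecast errors; Stage 2: mean-bias removal;
    Stage 3: AR(1) updating with a lag-1 regression coefficient.
    Returns the RMSE after each stage.
    """
    def rmse(errs):
        return (sum(e * e for e in errs) / len(errs)) ** 0.5

    err1 = [s - o for s, o in zip(sim, obs)]          # Stage 1: raw errors
    bias = sum(err1) / len(err1)
    err2 = [e - bias for e in err1]                   # Stage 2: bias removed
    # Stage 3: AR(1) coefficient from lag-1 regression of bias-free errors
    num = sum(err2[t] * err2[t - 1] for t in range(1, len(err2)))
    den = sum(e * e for e in err2[:-1])
    phi = num / den
    err3 = [err2[t] - phi * err2[t - 1] for t in range(1, len(err2))]
    return rmse(err1), rmse(err2), rmse(err3)

# Synthetic errors: a constant bias of 2.0 plus an exactly alternating
# component, so each stage's gain is easy to verify by hand.
obs = [10.0] * 60
sim = [10.0 + 2.0 + (0.5 if t % 2 == 0 else -0.5) for t in range(60)]
r1, r2, r3 = staged_error_reduction(obs, sim)
```

Here the bias correction removes most of the error and the AR(1) update absorbs the remaining structure, mirroring the paper's finding that accuracy improves stage by stage.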
El Camino Hospital: using health information technology to promote patient safety.
Bukunt, Susan; Hunter, Christine; Perkins, Sharon; Russell, Diana; Domanico, Lee
2005-10-01
El Camino Hospital is a leader in the use of health information technology to promote patient safety, including bar coding, computerized order entry, electronic medical records, and wireless communications. Each year, El Camino Hospital's board of directors sets performance expectations for the chief executive officer, which are tied to achievement of local, regional, and national safety and quality standards, including the six Institute of Medicine quality dimensions. He then determines a set of explicit quality goals and measurable actions, which serve as guidelines for the overall hospital. The goals and progress reports are widely shared with employees, medical staff, patients and families, and the public. For safety, for example, the medication error reduction team tracks and reviews medication error rates. The hospital has virtually eliminated transcription errors through its 100% use of computerized physician order entry. Clinical pathways and standard order sets have reduced practice variation, providing a safer environment. Many projects focused on timeliness, such as emergency department wait time, lab turnaround time, and pneumonia time to initial antibiotic. Results have been mixed, with projects most successful when a link was established with patient outcomes, such as in reducing time to percutaneous transluminal coronary angioplasty for patients with acute myocardial infarction.
K-band observations of boxy bulges - I. Morphology and surface brightness profiles
NASA Astrophysics Data System (ADS)
Bureau, M.; Aronica, G.; Athanassoula, E.; Dettmar, R.-J.; Bosma, A.; Freeman, K. C.
2006-08-01
In this first paper of a series on the structure of boxy and peanut-shaped (B/PS) bulges, Kn-band observations of a sample of 30 edge-on spiral galaxies are described and discussed. Kn-band observations best trace the dominant luminous galactic mass and are minimally affected by dust. Images, unsharp-masked images, as well as major-axis and vertically summed surface brightness profiles are presented and discussed. Galaxies with a B/PS bulge tend to have a more complex morphology than galaxies with other bulge types, more often showing centred or off-centred X structures, secondary maxima along the major-axis and spiral-like structures. While probably not uniquely related to bars, those features are observed in three-dimensional N-body simulations of barred discs and may trace the main bar orbit families. The surface brightness profiles of galaxies with a B/PS bulge are also more complex, typically containing three or more clearly separated regions, including a shallow or flat intermediate region (Freeman Type II profiles). The breaks in the profiles offer evidence for bar-driven transfer of angular momentum and radial redistribution of material. The profiles further suggest a rapid variation of the scaleheight of the disc material, contrary to conventional wisdom but again as expected from the vertical resonances and instabilities present in barred discs. Interestingly, the steep inner region of the surface brightness profiles is often shorter than the isophotally thick part of the galaxies, itself always shorter than the flat intermediate region of the profiles. The steep inner region is also much more prominent along the major-axis than in the vertically summed profiles. 
Similarly to other recent work but contrary to the standard `bulge + disc' model (where the bulge is both thick and steep), we thus propose that galaxies with a B/PS bulge are composed of a thin concentrated disc (a disc-like bulge) contained within a partially thick bar (the B/PS bulge), itself contained within a thin outer disc. The inner disc likely formed secularly through bar-driven processes and is responsible for the steep inner region of the surface brightness profiles, traditionally associated with a classic bulge, while the bar is responsible for the flat intermediate region of the surface brightness profiles and the thick complex morphological structures observed. Those components are strongly coupled dynamically and are formed mostly of the same (disc) material, shaped by the weak but relentless action of the bar resonances. Any competing formation scenario for galaxies with a B/PS bulge, which represent at least 45 per cent of the local disc galaxy population, must explain equally well and self-consistently the above morphological and photometric properties, the complex gas and stellar kinematics observed, and the correlations between them.
Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform
Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia
2017-01-01
To address the limitation of the existing UAV (unmanned aerial vehicle) photoelectric localization method used for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms, using the crossed-angle localization method of photoelectric theodolites for reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. The influence of the positional relationship between the UAVs and the target on localization accuracy is studied in detail to obtain an ideal measuring position and the optimal localization position, where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on localization results is then analyzed, and an appropriate filtering model is established to reduce the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out, yielding the optimal results: σB=1.63×10−4 (°), σL=1.35×10−4 (°), σH=15.8 (m), σsum=27.6 (m), where σB represents the longitude error, σL the latitude error, σH the altitude error, and σsum the error radius. PMID:28067814
Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform.
Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia
2017-01-06
To address the limitation of the existing UAV (unmanned aerial vehicle) photoelectric localization method used for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms, using the crossed-angle localization method of photoelectric theodolites for reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. The influence of the positional relationship between the UAVs and the target on localization accuracy is studied in detail to obtain an ideal measuring position and the optimal localization position, where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on localization results is then analyzed, and an appropriate filtering model is established to reduce the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out, yielding the optimal results: σB = 1.63 × 10−4 (°), σL = 1.35 × 10−4 (°), σH = 15.8 (m), σsum = 27.6 (m), where σB represents the longitude error, σL the latitude error, σH the altitude error, and σsum the error radius.
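The crossed-angle (intersection) localization idea can be sketched in two dimensions: each observer contributes a line of sight, and the target lies at their crossing. This is a planar illustration, not the paper's full 3D airborne-platform model with Kalman filtering; all names are illustrative.

```python
import math

def intersect_bearings(p1, b1, p2, b2):
    """Locate a target from two observer positions and bearing angles.

    p1, p2: (x, y) observer positions; b1, b2: bearings in radians from
    the +x axis. Solves p1 + t1*d1 = p2 + t2*d2 by Cramer's rule and
    returns the crossing point of the two lines of sight.
    """
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("lines of sight are parallel; no intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two "UAVs" observe a target at (3, 4) from opposite ends of a baseline:
target = (3.0, 4.0)
uav1, uav2 = (0.0, 0.0), (6.0, 0.0)
b1 = math.atan2(target[1] - uav1[1], target[0] - uav1[0])
b2 = math.atan2(target[1] - uav2[1], target[0] - uav2[0])
est = intersect_bearings(uav1, b1, uav2, b2)
```

With noise-free bearings the target is recovered exactly; the paper's accuracy analysis concerns how bearing noise maps to position error as a function of the intersection angle.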
Southwest Washington littoral drift restoration—Beach and nearshore morphological monitoring
Stevens, Andrew W.; Gelfenbaum, Guy; Ruggiero, Peter; Kaminsky, George M.
2012-01-01
A morphological monitoring program has documented the placement and initial dispersal of beach nourishment material (280,000 m3) placed between the Mouth of the Columbia River (MCR) North Jetty and North Head, at the southern end of the Long Beach Peninsula in southwestern Washington State. A total of 21 topographic surveys and 8 nearshore bathymetric surveys were performed between July 11, 2010, and November 4, 2011. During placement, southerly alongshore transport resulted in movement of nourishment material to the south towards the MCR North Jetty. Moderate wave conditions (significant wave height around 4 m) following the completion of the nourishment resulted in cross-shore sediment transport, with most of the nourishment material transported into the nearshore bars. The nourishment acted as a buffer to the more severe erosion, including dune overtopping and retreat, that was observed at the northern end of the study area throughout the winter. One year after placement of the nourishment, onshore transport and beach recovery were most pronounced within the permit area and to the south toward the MCR North Jetty. This suggests that there is some long-term benefit of the nourishment for reducing erosion rates locally, although the enhanced recovery also could be due to natural gradients in alongshore transport causing net movement of the sediment from north to south. Measurements made during the morphological monitoring program documented the seasonal movement and decay of nearshore sand bars. Low-energy conditions in late summer resulted in onshore bar migration early in the monitoring program. Moderate wave conditions in the autumn resulted in offshore movement of the middle bar and continued onshore migration of the outer bar. High-energy wave conditions early in the winter resulted in strong cross-shore transport and creation of a 3-bar system along portions of the coast. 
More southerly wave events occurred later in the winter and early spring and coincided with the complete loss of the outer bar and net loss of sediment from the study area. These data suggest that bar decay may be an important mechanism for exporting sediment from Benson Beach north to the Long Beach Peninsula. The measurements presented in this report represent one component of a broader monitoring program designed to track the movement of nourishment material on the beach and shoreface at this location, including continuous video monitoring (Argus), in situ measurements of hydrodynamics, and a physical tracer experiment. Field data from the monitoring program will be used to test numerical models of hydrodynamics and sediment transport and to improve the capability of numerical models to support regional sediment management.
Cramer, Bradley D.; Loydell, David K.; Samtleben, Christian; Munnecke, Axel; Kaljo, Dimitri; Mannik, Peep; Martma, Tonu; Jeppsson, Lennart; Kleffner, Mark A.; Barrick, James E.; Johnson, Craig A.; Emsbo, Poul; Joachimski, Michael M.; Bickert, Torsten; Saltzman, Matthew R.
2010-01-01
The resolution and fidelity of global chronostratigraphic correlation are direct functions of the time period under consideration. By virtue of deep-ocean cores and astrochronology, the Cenozoic and Mesozoic time scales carry error bars of a few thousand years (k.y.) to a few hundred k.y. In contrast, most of the Paleozoic time scale carries error bars of plus or minus a few million years (m.y.), and chronostratigraphic control better than ±1 m.y. is considered "high resolution." The general lack of Paleozoic abyssal sediments and paucity of orbitally tuned Paleozoic data series, combined with the relative incompleteness of the Paleozoic stratigraphic record, have proven historically to be such an obstacle to intercontinental chronostratigraphic correlation that resolving the Paleozoic time scale to the level achieved during the Mesozoic and Cenozoic was viewed as impractical, impossible, or both. Here, we utilize integrated graptolite, conodont, and carbonate carbon isotope (δ13Ccarb) data from three paleocontinents (Baltica, Avalonia, and Laurentia) to demonstrate chronostratigraphic control for upper Llandovery through middle Wenlock (Telychian-Sheinwoodian, ~436-426 Ma) strata with a resolution of a few hundred k.y. The interval surrounding the base of the Wenlock Series can now be correlated globally with precision approaching 100 k.y., but some intervals (e.g., uppermost Telychian and upper Sheinwoodian) are either yet to be studied in sufficient detail or do not show sufficient biologic speciation and/or extinction or carbon isotopic features to delineate such small time slices. Although producing such resolution during the Paleozoic presents an array of challenges unique to the era, we have begun to demonstrate that erecting a Paleozoic time scale comparable to that of younger eras is achievable. © 2010 Geological Society of America.
Astrostatistics in X-ray Astronomy: Systematics and Calibration
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Kashyap, Vinay; CHASC
2014-01-01
Astrostatistics has been emerging as a new field in X-ray and gamma-ray astronomy, driven by the analysis challenges arising from data collected by high performance missions since the beginning of this century. The development and implementation of new analysis methods and techniques requires a close collaboration between astronomers and statisticians, and requires support from a reliable and continuous funding source. The NASA AISR program was one such, and played a crucial part in our work. Our group (CHASC; http://heawww.harvard.edu/AstroStat/), composed of a mixture of high energy astrophysicists and statisticians, was formed ~15 years ago to address specific issues related to Chandra X-ray Observatory data (Siemiginowska et al. 1997) and was initially fully supported by Chandra. We have developed several statistical methods that have laid the foundation for extensive application of Bayesian methodologies to Poisson data in high-energy astrophysics. I will describe one such project, on dealing with systematic uncertainties (Lee et al. 2011, ApJ), and present the implementation of the method in Sherpa, the CIAO modeling and fitting application. This algorithm propagates systematic uncertainties in instrumental responses (e.g., ARFs) through the Sherpa spectral modeling chain to obtain realistic error bars on model parameters when the data quality is high. Recent developments include the ability to narrow the space of allowed calibration and obtain better parameter estimates as well as tighter error bars. Acknowledgements: This research is funded in part by NASA contract NAS8-03060. References: Lee, H., Kashyap, V.L., van Dyk, D.A., et al. 2011, ApJ, 731, 126; Siemiginowska, A., Elvis, M., Connors, A., et al. 1997, Statistical Challenges in Modern Astronomy II, 241
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung, C., E-mail: csung@physics.ucla.edu; White, A. E.; Greenwald, M.
2016-04-15
Long-wavelength turbulent electron temperature fluctuations (k_y ρ_s < 0.3) are measured in the outer core region (r/a > 0.8) of Ohmic L-mode plasmas at Alcator C-Mod [E. S. Marmar et al., Nucl. Fusion 49, 104014 (2009)] with a correlation electron cyclotron emission diagnostic. The relative amplitude and frequency spectrum of the fluctuations are compared quantitatively with nonlinear gyrokinetic simulations using the GYRO code [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] in two different confinement regimes: the linear Ohmic confinement (LOC) regime and the saturated Ohmic confinement (SOC) regime. When comparing experiment with nonlinear simulations, it is found that local, electrostatic ion-scale simulations (k_y ρ_s ≲ 1.7) performed at r/a ∼ 0.85 reproduce the experimental ion heat flux levels, electron temperature fluctuation levels, and frequency spectra within experimental error bars. In contrast, the electron heat flux is robustly under-predicted and cannot be recovered by using scans of the simulation inputs within error bars or by using global simulations. If both the ion heat flux and the measured temperature fluctuations are attributed predominantly to long-wavelength turbulence, then the under-prediction of electron heat flux strongly suggests that electron-scale turbulence is important for transport in C-Mod Ohmic L-mode discharges. In addition, no evidence is found from linear or nonlinear simulations for a clear transition from trapped electron mode to ion temperature gradient turbulence across the LOC/SOC transition, and there is also no evidence in these Ohmic L-mode plasmas of the "Transport Shortfall" [C. Holland et al., Phys. Plasmas 16, 052301 (2009)].
The statistical properties and possible causes of polar motion prediction errors
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria
2015-08-01
The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by looking at the time series of differences between them and the future IERS pole coordinates data. The mean absolute errors, standard deviations, skewness, and kurtosis of these differences were computed, together with their error bars, as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with the prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Non-zero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by the combination of the Fourier transform band-pass filter and the Hilbert transform from pole coordinates data, as well as from pole coordinates model data obtained from fluid excitations, are in good agreement.
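The moment statistics used in this kind of evaluation can be computed directly from a sample of prediction-minus-observation differences. A sketch with population (biased) moment estimators, not the EOPCPPP evaluation code; the sample values are illustrative.

```python
def error_stats(diffs):
    """Mean absolute error, standard deviation, skewness, and excess
    kurtosis of a sample of prediction-minus-truth differences.

    Uses population (biased) moment estimators. Excess kurtosis < 0 is
    platykurtic (flatter than normal), > 0 is leptokurtic.
    """
    n = len(diffs)
    mean = sum(diffs) / n
    mae = sum(abs(d) for d in diffs) / n
    m2 = sum((d - mean) ** 2 for d in diffs) / n
    m3 = sum((d - mean) ** 3 for d in diffs) / n
    m4 = sum((d - mean) ** 4 for d in diffs) / n
    std = m2 ** 0.5
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0
    return mae, std, skew, excess_kurt

# A symmetric, flat-tailed sample: zero skewness and negative excess
# kurtosis (platykurtic), as in the long-lead forecast differences.
mae, std, skew, kurt = error_stats([-2.0, -1.0, 0.0, 1.0, 2.0])
```

Computing these four statistics for each prediction length, as the study does, reveals how the distribution of forecast differences flattens with lead time.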
CARMA observations of Galactic cold cores: searching for spinning dust emission
NASA Astrophysics Data System (ADS)
Tibbs, C. T.; Paladini, R.; Cleary, K.; Muchovej, S. J. C.; Scaife, A. M. M.; Stevenson, M. A.; Laureijs, R. J.; Ysard, N.; Grainge, K. J. B.; Perrott, Y. C.; Rumsey, C.; Villadsen, J.
2015-11-01
We present the first search for spinning dust emission from a sample of 34 Galactic cold cores, performed using the CARMA interferometer. For each of our cores, we use photometric data from the Herschel Space Observatory to constrain N̄H, T̄d, n̄H, and Ḡ0. By computing the mass of the cores and comparing it to the Bonnor-Ebert mass, we determined that 29 of the 34 cores are gravitationally unstable and undergoing collapse. In fact, we found that six cores are associated with at least one young stellar object, suggestive of their protostellar nature. By investigating the physical conditions within each core, we can shed light on the cm emission revealed (or not) by our CARMA observations. Indeed, we find that only three of our cores have any significant detectable cm emission. Using a spinning dust model, we predict the expected level of spinning dust emission in each core and find that for all 34 cores, the predicted level of emission is larger than the observed cm emission constrained by the CARMA observations. Moreover, even in the cores for which we do detect cm emission, we cannot, at this stage, discriminate between free-free emission from young stellar objects and spinning dust emission. We emphasize that although the CARMA observations described in this analysis place important constraints on the presence of spinning dust in cold, dense environments, the source sample targeted by these observations is not statistically representative of the entire population of Galactic cores.
Prospects of discovering stable double-heavy tetraquarks at a Tera-Z factory
NASA Astrophysics Data System (ADS)
Ali, Ahmed; Parkhomenko, Alexander Ya.; Qin, Qin; Wang, Wei
2018-07-01
Motivated by a number of theoretical considerations predicting the deeply bound double-heavy tetraquarks T_bb^[ūd̄], T_bb^[ūs̄], and T_bb^[d̄s̄], we explore the potential of their discovery at Tera-Z factories. Using the process Z → bb̄bb̄, we calculate, employing the Monte Carlo generators MadGraph5_aMC@NLO and Pythia6, the phase-space configuration in which the bb pair is likely to fragment as a diquark. In a jet cone, defined by an invariant mass interval m_bb < M_T[q̄q̄′]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added 5% distal Gaussian noise to each beamlet in order to introduce the discrepancies that exist between the ranges predicted from prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and an underrange of 0.6 mm on average. When s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup-error corrections, we also performed computations in which u was corrected for but s and θ were not, which yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step toward adaptive proton radiotherapy.
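The optimization step, fitting a shift s, rotation θ, and calibration offset u by minimizing the distance between predicted and measured beamlet ranges, can be sketched as follows. The smooth range map, the beamlet grid, and all numerical values are hypothetical stand-ins, not the study's Monte Carlo data:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical smooth "range map": proton range (mm) as a function of
# lateral beamlet position (x, y). A stand-in for a patient-specific map.
def range_map(x, y):
    return 150.0 + 5.0 * np.sin(0.05 * x) + 3.0 * np.cos(0.04 * y)

# Square array of beamlet positions (mm)
gx, gy = np.meshgrid(np.linspace(-40, 40, 5), np.linspace(-40, 40, 5))
xs, ys = gx.ravel(), gy.ravel()

# Simulated "measured" ranges: the map sampled after a hidden setup error
# (shift s_true, rotation theta_true) plus a calibration-curve offset u_true.
s_true, theta_true, u_true = (3.0, -2.0), 0.02, 1.5
ct, st = np.cos(theta_true), np.sin(theta_true)
measured = range_map(ct * xs - st * ys + s_true[0],
                     st * xs + ct * ys + s_true[1]) + u_true

def residuals(p):
    sx, sy, theta, u = p
    c, s = np.cos(theta), np.sin(theta)
    # predicted minus measured range for every beamlet
    return range_map(c * xs - s * ys + sx, s * xs + c * ys + sy) + u - measured

fit = least_squares(residuals, x0=np.zeros(4))
sx, sy, theta, u = fit.x      # recovered shift, rotation, and CC offset
```

With noise-free synthetic data the solver recovers the hidden parameters; the study's 5% distal noise would leave residual over- and underranges of the kind quantified above.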
Giacomelli, Chiara; Daniele, Simona; Romei, Chiara; Tavanti, Laura; Neri, Tommaso; Piano, Ilaria; Celi, Alessandro; Martini, Claudia; Trincavelli, Maria L.
2018-01-01
The epithelial-mesenchymal transition (EMT) is a complex process in which the cell phenotype switches from epithelial to mesenchymal. Deregulation of this process has been related to the occurrence of different diseases, such as lung cancer and fibrosis. In the last decade, considerable effort has been devoted to understanding the mechanisms that trigger and sustain this transition process. Adenosine is a purinergic signaling molecule that has been implicated in the onset and progression of chronic lung diseases and cancer, in part through activation of the A2B adenosine receptor subtype (A2BAR). However, the relationship between A2BAR and EMT has not yet been investigated. Herein, A2BAR characterization was carried out in human epithelial lung cells, and the effects of receptor activation on EMT were investigated in the absence and presence of transforming growth factor-beta (TGF-β1), which is known to promote the transition. A2BAR activation alone decreased the expression of epithelial markers (E-cadherin) and increased that of mesenchymal ones (Vimentin, N-cadherin); nevertheless, a complete EMT was not observed. Surprisingly, receptor activation counteracted the EMT induced by TGF-β1. Several intracellular pathways regulate the EMT: high levels of cAMP and of ERK1/2 phosphorylation have been demonstrated to counteract and promote the transition, respectively. A2BAR stimulation was able to modulate these two pathways, cAMP/PKA and MAPK/ERK, shifting the fine balance toward activation or inhibition of EMT. In fact, using a selective PKA inhibitor, which blocks the cAMP pathway, the A2BAR-mediated EMT promotion was exacerbated; conversely, selective inhibition of MAPK/ERK counteracted the receptor-induced transition. These results highlight A2BAR as one of the receptors involved in the modulation of the EMT process. 
Although its activation is not enough to trigger a complete transition, its ability to affect different intracellular pathways could represent a mechanism underlying EMT maintenance or inhibition depending on the extracellular microenvironment. Although further investigations are needed, herein for the first time A2BAR has been related to the EMT process, and therefore to the different EMT-related pathologies. PMID:29445342
Adaptive control system for pulsed megawatt klystrons
Bolie, Victor W.
1992-01-01
The invention provides an arrangement for reducing waveform errors, such as errors in phase or amplitude, in output pulses produced by pulsed power output devices such as klystrons. It does so by generating an error voltage representing the extent of error still present in the trailing edge of the previous output pulse, using that error voltage to provide a stored control voltage, and applying the stored control voltage to the pulsed power output device to limit the extent of error in the leading edge of the next output pulse.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Paris (France). Div. of Marine Sciences.
Lagoons and their characteristic coastal bay-mouth bars represent 15 percent of the world coastal zone. They are among the most productive ecosystems in the biosphere, this productivity resulting from the interplay of ocean and continent. An International Symposium on Coastal Lagoons (ISCOL) was held to: assess the state of knowledge in the…
Nonlinear dynamic analysis of traveling wave-type ultrasonic motors.
Nakagawa, Yosuke; Saito, Akira; Maeno, Takashi
2008-03-01
In this paper, nonlinear dynamic response of a traveling wave-type ultrasonic motor was investigated. In particular, understanding the transient dynamics of a bar-type ultrasonic motor, such as starting up and stopping, is of primary interest. First, the transient response of the bar-type ultrasonic motor at starting up and stopping was measured using a laser Doppler velocimeter, and its driving characteristics are discussed in detail. The motor is shown to possess amplitude-dependent nonlinearity that greatly influences the transient dynamics of the motor. Second, a dynamical model of the motor was constructed as a second-order nonlinear oscillator, which represents the dynamics of the piezoelectric ceramic, stator, and rotor. The model features nonlinearities caused by the frictional interface between the stator and the rotor, and cubic nonlinearity in the dynamics of the stator. Coulomb's friction model was employed for the interface model, and a stick-slip phenomenon is considered. Lastly, it was shown that the model is capable of representing the transient dynamics of the motor accurately. The critical parameters in the model were identified from measured results, and numerical simulations were conducted using the model with the identified parameters. Good agreement between the results of measurements and numerical simulations is observed.
How to reduce the effect of framing on messages about health.
Garcia-Retamero, Rocio; Galesic, Mirta
2010-12-01
Patients must be informed about risks before any treatment can be implemented. Yet serious problems in communicating these risks occur because of framing effects. To investigate the effects of different information frames when communicating health risks to people with high and low numeracy and determine whether these effects can be countered or eliminated by using different types of visual displays (i.e., icon arrays, horizontal bars, vertical bars, or pies). Experiment on probabilistic, nationally representative US (n = 492) and German (n = 495) samples, conducted in summer 2008. Participants' risk perceptions of the medical risk expressed in positive (i.e., chances of surviving after surgery) and negative (i.e., chances of dying after surgery) terms. Although low-numeracy people are more susceptible to framing than those with high numeracy, use of visual aids is an effective method to eliminate its effects. However, not all visual aids were equally effective: pie charts and vertical and horizontal bars almost completely removed the effect of framing. Icon arrays, however, led to a smaller decrease in the framing effect. Difficulties with understanding numerical information often do not reside in the mind, but in the representation of the problem.
Population Control of Self-Replicating Systems: Option C
NASA Technical Reports Server (NTRS)
Mccord, R. L.
1983-01-01
From the conception and development of the theory of self-replicating automata by John von Neumann, others have expanded on his theories. In 1980, Georg von Tiesenhausen and Wesley A. Darbro developed a report which is a "first" in presenting the theories in a conceptualized engineering setting. In that report several options involving self-replicating systems are presented. One of the options allows each primary to generate n replicas, one in each sequential time frame after its own generation. Each replica is limited to a maximum of m ancestors. This study involves determining the state vector of the replicas in an efficient manner. The problem is cast in matrix notation, where F = (f_ij) is a non-diagonalizable matrix. Any element f_ij represents the number of elements of type j = (c,d) in time frame k+1 generated from type i = (a,b) in time frame k. It is then shown that the state vector is bar F(k) = bar F(0) × F^k = bar F(0) × M × J^k × M^(-1), where J is a matrix in Jordan form having the same eigenvalues as F, and M is a matrix composed of the eigenvectors and the generalized eigenvectors of F.
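The state-vector recursion can be illustrated with a small census computation. The indexing of types as pairs (ancestors, replicas already produced) and the parameter values below are an assumed reading of the scheme, not the report's own formulation:

```python
import numpy as np

# Types are pairs (g, a): g = number of ancestors, a = replicas already
# produced. A unit replicates once per time frame until it has produced n
# replicas, and units with m ancestors do not replicate at all (assumed
# reading of the option). F[i, j] = number of type-j units in frame k+1
# per type-i unit in frame k; the frame-k census is x0 @ F^k.
n, m = 3, 2                                 # illustrative parameter values
types = [(g, a) for g in range(m + 1) for a in range(n + 1)]
idx = {t: i for i, t in enumerate(types)}

F = np.zeros((len(types), len(types)))
for (g, a), i in idx.items():
    if g < m and a < n:                     # fertile: persist aged, spawn child
        F[i, idx[(g, a + 1)]] += 1
        F[i, idx[(g + 1, 0)]] += 1
    else:                                   # sterile: just persist
        F[i, idx[(g, a)]] += 1

x0 = np.zeros(len(types))
x0[idx[(0, 0)]] = 1                         # start from one primary
x = x0 @ np.linalg.matrix_power(F, 4)       # census after 4 time frames
total = x.sum()
```

Computing the census via a matrix power is exactly the point of the Jordan-form factorization above: F^k = M J^k M^(-1) lets J^k be evaluated in closed form even though F is not diagonalizable.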
Ching, Joan M; Williams, Barbara L; Idemoto, Lori M; Blackmore, C Craig
2014-08-01
Virginia Mason Medical Center (Seattle) employed the Lean concept of Jidoka (automation with a human touch) to plan for and deploy bar-code medication administration (BCMA) to hospitalized patients. Integrating BCMA technology into the nursing work flow with minimal disruption was accomplished using three steps of Jidoka: (1) assigning work to humans and machines on the basis of their differing abilities, (2) adapting machines to the human work flow, and (3) monitoring the human-machine interaction. The effectiveness of BCMA in both reinforcing safe administration practices and reducing medication errors was measured using the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study methodology. Trained nurses observed a total of 16,149 medication doses for 3,617 patients in a three-year period. Following BCMA implementation, the number of safe-practice violations decreased from 54.8 violations/100 doses (January 2010-September 2011) to 29.0 violations/100 doses (October 2011-December 2012), resulting in an absolute risk reduction of 25.8 violations/100 doses (95% confidence interval [CI]: 23.7, 27.9; p < .001). The number of medication errors decreased from 5.9 errors/100 doses at baseline to 3.0 errors/100 doses after BCMA implementation (absolute risk reduction: 2.9 errors/100 doses [95% CI: 2.2, 3.6; p < .001]). The number of unsafe administration practices (estimate: -5.481; standard error: 1.133; p < .001; 95% CI: -7.702, -3.260) also decreased. As more hospitals respond to health information technology meaningful-use incentives, thoughtful, methodical, and well-managed approaches to technology deployment are crucial. This work illustrates how Jidoka offers opportunities for a smooth transition to new technology.
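Absolute risk reductions with confidence intervals of the kind reported above follow the textbook two-proportion calculation, sketched below. The dose denominators are illustrative stand-ins chosen to be consistent with the reported 5.9 and 3.0 errors/100 doses; they are not the study's actual counts:

```python
import math

def arr_with_ci(e1, n1, e2, n2, z=1.96):
    """Absolute risk reduction per 100 doses with a Wald 95% CI.

    e1/n1: error count and doses observed before the intervention;
    e2/n2: after. A textbook two-proportion formula, not the study's code.
    """
    p1, p2 = e1 / n1, e2 / n2
    arr = p1 - p2                                   # risk difference
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return 100 * arr, 100 * (arr - z * se), 100 * (arr + z * se)

# Illustrative counts matching ~5.9 and ~3.0 errors per 100 doses
arr, lo, hi = arr_with_ci(472, 8000, 244, 8149)
```

With these assumed denominators the point estimate lands at roughly 2.9 errors/100 doses, in line with the reduction quoted in the abstract.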
Estimation of wave phase speed and nearshore bathymetry from video imagery
Stockdon, H.F.; Holman, R.A.
2000-01-01
A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from linear wave theory's dispersion relationship. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H ≤ 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
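The depth-inversion step, solving linear wave theory's dispersion relationship ω² = gk·tanh(kh) for the water depth h given the measured frequency and wavenumber, can be sketched as follows (the swell period, depth, and bracket values are illustrative assumptions):

```python
import math
from scipy.optimize import brentq

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k, h_max=50.0):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h) for h.

    omega: radian frequency of the incident band; k: cross-shore wavenumber
    estimated from the phase gradient. h_max is an assumed upper bracket on
    plausible nearshore depths; the left-hand side is monotonic in h, so a
    bracketing root finder suffices.
    """
    f = lambda h: G * k * math.tanh(k * h) - omega ** 2
    return brentq(f, 1e-6, h_max)

# Round trip: forward-solve for the wavenumber of a 10-s swell over 5 m of
# water, then invert the wavenumber back to depth.
h_true = 5.0
omega = 2.0 * math.pi / 10.0
k = brentq(lambda k: G * k * math.tanh(k * h_true) - omega ** 2, 1e-6, 10.0)
h = depth_from_dispersion(omega, k)
```

The inversion is exact for linear waves; the biases reported above arise where wave nonlinearity over the bar crest violates the linear dispersion assumption.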
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1996-01-01
We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
A new double right border binary vector for producing marker-free transgenic plants
2013-01-01
Background Once a transgenic plant is developed, the selectable marker gene (SMG) becomes unnecessary in the plant. In fact, the continued presence of the SMG in the transgenic plant may cause unexpected pleiotropic effects as well as environmental or biosafety issues. Several methods for removal of SMGs have been reported but remain inaccessible due to protection by patents, while development of new ones is expensive and cost-prohibitive. Here, we describe the development of a new vector for producing marker-free plants by simply adapting an ordinary binary vector to the double right border (DRB) vector design using conventional cloning procedures. Findings We developed the DRB vector pMarkfree5.0 by placing the bar gene (representing genes of interest) between two copies of T-DNA right border sequences. The β-glucuronidase (gus) and nptII genes (representing the selectable marker genes) were cloned next, followed by one copy of the left border sequence. When tested in a model species (tobacco), this vector system enabled the generation of 55.6% kanamycin-resistant plants by Agrobacterium-mediated transformation. The frequency of cotransformation of the nptII and bar transgenes using the vector was 66.7%. Using the leaf bleach and Basta assays, we confirmed that the nptII and bar transgenes were coexpressed and segregated independently in the transgenic plants. This enabled separation of the transgenes in plants cotransformed using pMarkfree5.0. Conclusions The results suggest that the DRB system developed here is a practical and effective approach for the separation of gene(s) of interest from an SMG and for the production of SMG-free plants. This system could therefore be instrumental in the production of “clean” plants containing genes of agronomic importance. PMID:24207020
Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot
Raman, Rajani; Sarkar, Sandip
2016-01-01
Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in the early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempt to present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit an elevated nonlinear response when the bar stimulates both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
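The core idea can be caricatured in one dimension: minimizing bottom-up prediction error on observed samples under a top-down smoothness prior drives the unobserved "blind spot" samples toward the surrounding bar value. This toy sketch uses a single smoothness prior in place of the paper's three-level network; all sizes and weights are assumed values:

```python
import numpy as np

# One-dimensional toy of filling-in: descend the summed prediction error on
# observed samples plus a smoothness prior, leaving a "blind spot" unobserved.
x = np.zeros(21)
x[5:16] = 1.0                                   # a bar crossing the field
observed = np.ones(21, dtype=bool)
observed[9:12] = False                          # the blind spot sees nothing

r = np.zeros(21)                                # internal estimate
lam, eta = 5.0, 0.05                            # prior weight, learning rate
for _ in range(5000):
    err = np.where(observed, x - r, 0.0)        # bottom-up prediction error
    smooth = np.zeros_like(r)                   # prior: neighbors should agree
    smooth[1:-1] = r[:-2] - 2.0 * r[1:-1] + r[2:]
    smooth[0], smooth[-1] = r[1] - r[0], r[-2] - r[-1]
    r += eta * (err + lam * smooth)

filled = r[9:12]                                # estimate inside the blind spot
```

Because the bar flanks the unobserved region on both sides, the prior pulls the blind-spot estimate up toward the bar value, a crude analogue of the elevated completion response described above.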
Kuiper Prize Lecture: Sources of solar-system carbon
NASA Technical Reports Server (NTRS)
Anders, Edward; Zinner, Ernst
1994-01-01
We have tried to deconvolve solar-system carbon into its sources on the basis of C-12/C-13 ratios (hereafter R). Interstellar SiC in meteorites, representing greater than 4.6-Ga-old stardust from carbon stars, is isotopically heavier (R̄ = 38 ± 2) than solar-system carbon (89), implying that the latter contains an additional, light component. A likely source is massive stars, mainly Type II supernovae and Wolf-Rayet stars, which, being O-rich, eject their C largely as CO rather than carbonaceous dust. The fraction of such light C in the Solar System depends on R(light) in the source. For R(light) = 180-1025 (as in 'Group 4' meteoritic graphite spherules, which apparently came from massive stars greater than 4.6 Ga ago), the fraction of light C is 0.79-0.61. Similar results are obtained for present-day data on red giants and interstellar gas. Although both have become enriched in C-13 due to galactic evolution (to R̄ = 20 and 57), the fraction of the light component in interstellar gas again is near 0.7. (Here R̄ represents the mean of a mixture calculated via atom fractions; it is not identical to the arithmetic mean R.) Interstellar graphite, unlike SiC, shows a large peak at R approximately equal to 90, near the solar value. Although some of the grains may be of local origin, others show anomalies in other elements and hence are exotic. Microdiamonds, with R = 93, also are exotic on the basis of their Xe and N. Apparently R of approximately 90 was a fairly common composition 4.6 Ga ago, of stars as well as of the ISM.
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. 
The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated step would be to analyze the entire classification error matrices using the methods of discrete multivariate analysis or of multivariate analysis of variance.
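The error-matrix bookkeeping described above (diagonal = correct classifications, row remainders = errors of commission, column remainders = errors of omission) can be sketched with illustrative counts:

```python
import numpy as np

# Rows = interpretation (map), columns = verification (ground truth),
# following the convention described above. Illustrative 3-class counts.
cm = np.array([[50,  3,  2],
               [ 5, 40,  5],
               [ 2,  4, 39]])

total = cm.sum()
correct = np.trace(cm)                       # diagonal: correct classifications
overall_accuracy = correct / total

# errors of commission: off-diagonal counts in each row (mapped as class i,
# verified as something else); errors of omission: off-diagonal per column
commission = cm.sum(axis=1) - np.diag(cm)
omission = cm.sum(axis=0) - np.diag(cm)
```

Dividing the commission and omission counts by the row and column totals, respectively, gives the per-category binomial proportions the text mentions as inputs to hypothesis tests.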
[Pressure control in medical gas distribution systems].
Bourgain, J L; Benayoun, L; Baguenard, P; Haré, G; Puizillout, J M; Billard, V
1997-01-01
To assess whether the pressure gauges at the downstream part of pressure regulators are accurate enough to ensure that pressure in the O2 pipeline is always higher than in the Air pipeline, and that pressure in the latter is higher than in the N2O pipeline. A pressure difference of at least 0.4 bar between two medical gas supply systems is recommended to avoid the reflow of either N2O or Air into the O2 pipeline through a faulty mixer or proportioning device. Prospective technical comparative study. Readings of 32 Bourdon gauges were compared with data obtained with a calibrated reference transducer. Two sets of measurements were performed at a one-month interval. Pressure differences between the Bourdon gauges and the reference transducer were 8% (0.28 bar) on average, for a theoretical maximal error of less than 2.5%. During the first set of measurements, Air pressure was higher than O2 pressure in one place, and N2O pressure was higher than Air pressure in another. After an increase in the O2 pipeline pressure and careful setting of the pressure regulators, this problem was not observed at the second set of measurements. The actual accuracy of the Bourdon gauges was not sufficient to ensure that O2 pressure was always above Air pressure. Regular controls of these pressure gauges are therefore essential. Replacement of the faulty Bourdon gauges by more accurate transducers should be considered. As an alternative, an increase in the pressure difference between the O2 and Air pipelines to at least 0.6 bar is recommended.
Crystallization of Yamato 980459 at 0.5 GPa: Are Residual Liquids Like QUE 94201?
NASA Technical Reports Server (NTRS)
Rapp, J. F.; Draper, D. S.; Mercer, C.
2012-01-01
The martian basaltic meteorites Y980459 and QUE94201 (henceforth referred to as Y98 and QUE, respectively) are thought to represent magmatic liquid compositions, rather than being products of protracted crystallization and accumulation like the majority of other martian meteorites. Both meteorite compositions have been experimentally crystallized at 1 bar, and liquidus phases were found to match corresponding mineral core compositions in the meteorites, consistent with the notion that these meteorites represent bona fide melts. They also represent the most primitive and most evolved basaltic martian samples, respectively. Y98 has Mg# (molar Mg/Mg+Fe) ≈ 65 and lacks plagioclase, whereas QUE has Mg# ≈ 40 and lacks olivine. However, they share important geochemical characteristics (e.g., superchondritic CaO/Al2O3, very high ε_Nd, and low Sr-87/Sr-86) that suggest they sample a similar highly depleted mantle reservoir. As such, they represent likely endmembers of martian magmatic liquid compositions, and it is natural to seek petrogenetic linkages between the two. We make no claim that the actual meteorites themselves share a genetic link (the respective ages rule that out); we are exploring only in general whether primitive martian liquids like Y98 could evolve to liquids resembling QUE. Both experimental and computational efforts have been made to determine if there is indeed such a link. Recent petrological models at 1 bar generated using MELTS suggest that a QUE-like melt can be derived from a parental melt with a Y98 composition. However, experimental studies at 1 bar have been less successful at replicating this progression. Previous experimental crystallization studies of Y98 by our group at 0.5 GPa have produced melt compositions approaching that of QUE, although these results were complicated by the presence of small, variable amounts of H2O in some of the runs owing to the use of talc/pyrex experimental assemblies. 
Therefore we have repeated the four experiments, augmented with additional runs, all using BaCO3 cell assemblies, which are devoid of water, and these new experiments supersede those reported earlier. Here we report results of experiments simulating equilibrium crystallization; fractional crystallization experiments are currently underway.
Structural analysis of lunar subsurface with Chang'E-3 lunar penetrating radar
NASA Astrophysics Data System (ADS)
Lai, Jialong; Xu, Yi; Zhang, Xiaoping; Tang, Zesheng
2016-01-01
Geological structure of the subsurface of the Moon provides valuable information on lunar evolution. Recently, Chang'E-3 utilized its lunar penetrating radar (LPR), equipped on the lunar rover named Yutu, to detect the lunar geological structure in northern Mare Imbrium (44.1260°N, 19.5014°W) for the first time. As an in situ detector, Chang'E-3 LPR has relatively higher horizontal and vertical resolution and less clutter impact compared to spaceborne and earth-based radars. In this work, we analyze the LPR data at 500 MHz transmission frequency to obtain the shallow subsurface structure of the landing area of Chang'E-3 in Mare Imbrium. Filtering and amplitude-recovery algorithms are utilized to alleviate the adverse effects of environment and system noise and to compensate for amplitude losses during signal propagation. Based on the processed radar image, we observe numerous diffraction hyperbolae, which may be caused by discrete reflectors beneath the lunar surface. A hyperbola-fitting method is utilized to retrieve the average dielectric constant down to a certain depth (ε̄). Overall, the estimated ε̄ increases with depth, and the values can be classified into three categories, with averages of 2.47, 3.40, and 6.16, respectively. Because of the large gap between the ε̄ values of neighboring categories, we infer a three-layered structure of the shallow subsurface in the LPR exploration region. One possible geological picture of this three-layered structure is as follows. The top layer is a weathered layer of ejecta blanket with an average thickness of 0.95 ± 0.02 m. The second layer is the ejecta blanket of the nearby impact crater, with a corresponding average thickness of about 2.30 ± 0.07 m, in good agreement with the two primary models of ejecta-blanket thickness as a function of distance from the crater center. The third layer is regarded as a mixture of stones and soil. 
The echoes below the third layer are of the same magnitude as the noise, which may indicate that the fourth layer, if it exists, is uniform (no clear reflector) and that its thickness is beyond the detection limit of LPR. Hence, we infer the fourth layer is a basalt layer.
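The hyperbola-fitting step can be sketched as follows: a point reflector at depth d produces a two-way travel-time hyperbola whose shape constrains the dielectric constant ε through the wave speed v = c/√ε. The reflector depth, permittivity, and trace geometry below are synthetic illustrations, not Chang'E-3 data:

```python
import numpy as np
from scipy.optimize import curve_fit

C = 0.2998  # speed of light in vacuum, m/ns

# Two-way travel time of a point reflector at depth d below antenna offset
# x0, in a medium of relative permittivity eps (wave speed v = C/sqrt(eps)):
# t(x) = (2/v) * sqrt(d^2 + (x - x0)^2)
def hyperbola(x, x0, d, eps):
    v = C / np.sqrt(eps)
    return 2.0 * np.sqrt(d ** 2 + (x - x0) ** 2) / v

# Synthetic diffraction hyperbola: eps = 3.4, reflector 2 m deep
x = np.linspace(-2.0, 2.0, 41)                  # rover positions, m
t_obs = hyperbola(x, 0.0, 2.0, 3.4)             # travel times, ns

(x0, d, eps), _ = curve_fit(hyperbola, x, t_obs, p0=[0.1, 1.0, 2.0])
```

Fitting many such hyperbolae at increasing depths yields the profile of ε̄ values that is then grouped into the three categories described above.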
Wiens, J. David; Anthony, Robert G.; Forsman, Eric D.
2014-01-01
The federally threatened northern spotted owl (Strix occidentalis caurina) is the focus of intensive conservation efforts that have led to much forested land being reserved as habitat for the owl and associated wildlife species throughout the Pacific Northwest of the United States. Recently, however, a relatively new threat to spotted owls has emerged in the form of an invasive competitor: the congeneric barred owl (S. varia). As barred owls have rapidly expanded their populations into the entire range of the northern spotted owl, mounting evidence indicates that they are displacing, hybridizing with, and even killing spotted owls. The range expansion by barred owls into western North America has made an already complex conservation issue even more contentious, and a lack of information on the ecological relationships between the 2 species has hampered recovery efforts for northern spotted owls. We investigated spatial relationships, habitat use, diets, survival, and reproduction of sympatric spotted owls and barred owls in western Oregon, USA, during 2007–2009. Our overall objective was to determine the potential for and possible consequences of competition for space, habitat, and food between these previously allopatric owl species. Our study included 29 spotted owls and 28 barred owls that were radio-marked in 36 neighboring territories and monitored over a 24-month period. Based on repeated surveys of both species, the number of territories occupied by pairs of barred owls in the 745-km2 study area (82) greatly outnumbered those occupied by pairs of spotted owls (15). Estimates of mean size of home ranges and core-use areas of spotted owls (1,843 ha and 305 ha, respectively) were 2–4 times larger than those of barred owls (581 ha and 188 ha, respectively). 
Individual spotted and barred owls in adjacent territories often had overlapping home ranges, but interspecific space sharing was largely restricted to broader foraging areas in the home range with minimal spatial overlap among core-use areas. We used an information-theoretic approach to rank discrete-choice models representing alternative hypotheses about the influence of forest conditions, topography, and interspecific interactions on species-specific patterns of nighttime resource selection. Spotted owls spent a disproportionate amount of time foraging on steep slopes in ravines dominated by old (>120 yr) conifer trees. Barred owls used available forest types more evenly than spotted owls, and were most strongly associated with patches of large hardwood and conifer trees that occupied relatively flat areas along streams. Spotted and barred owls differed in the relative use of old conifer forest (greater for spotted owls) and slope conditions (steeper slopes for spotted owls), but we found no evidence that the 2 species differed in their use of young, mature, and riparian-hardwood forest types. Mean overlap in proportional use of different forest types between individual spotted owls and barred owls in adjacent territories was 81% (range = 30–99%). The best model of habitat use for spotted owls indicated that the relative probability of a location being used was substantially reduced if the location was within or in close proximity to a core-use area of a barred owl. We used pellet analysis and measures of food-niche overlap to determine the potential for dietary competition between spatially associated pairs of spotted owls and barred owls. We identified 1,223 prey items from 15 territories occupied by spotted owls and 4,299 prey items from 24 territories occupied by barred owls. 
Diets of both species were dominated by nocturnal mammals, but diets of barred owls included many terrestrial, aquatic, and diurnal prey species that were rare or absent in diets of spotted owls. Northern flying squirrels (Glaucomys sabrinus), woodrats (Neotoma fuscipes, N. cinerea), and lagomorphs (Lepus americanus, Sylvilagus bachmani) were primary prey for both owl species, accounting for 81% and 49% of total dietary biomass for spotted owls and barred owls, respectively. Mean dietary overlap between pairs of spotted and barred owls in adjacent territories was moderate (42%; range = 28–70%). Barred owls displayed demographic superiority over spotted owls; annual survival probability of spotted owls from known-fate analyses (0.81, SE = 0.05) was lower than that of barred owls (0.92, SE = 0.04), and pairs of barred owls produced an average of 4.4 times more young than pairs of spotted owls over a 3-year period. We found a strong, positive relationship between seasonal (6-month) survival probabilities of both species and the proportion of old (>120 yr) conifer forest within individual home ranges, which suggested that availability of old forest was a potential limiting factor in the competitive relationship between these 2 species. The annual number of young produced by spotted owls increased linearly with increasing distance from a territory center of a pair of barred owls, and all spotted owls that attempted to nest within 1.5 km of a nest used by barred owls failed to successfully produce young. We identified strong associations between the presence of barred owls and the behavior and fitness potential of spotted owls, as shown by changes in movements, habitat use, and reproductive output of spotted owls exposed to different levels of spatial overlap with territorial barred owls. 
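The dietary-overlap figures reported above come from a proportional comparison of prey use between paired territories. The abstract does not name the overlap index used; the sketch below assumes Schoener's index, D = 1 − 0.5·Σ|pᵢ − qᵢ|, and the prey-biomass proportions are hypothetical, not the study's data.

```python
# Schoener's proportional overlap index: D = 1 - 0.5 * sum(|p_i - q_i|)
# D ranges from 0 (no shared prey use) to 1 (identical diets).
# Assumed index and hypothetical proportions, for illustration only.

def schoener_overlap(p, q):
    """Dietary overlap between two prey-proportion vectors summing to 1."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    return 1 - 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Hypothetical biomass proportions over 4 prey groups:
# [flying squirrels, woodrats, lagomorphs, other]
spotted = [0.40, 0.25, 0.16, 0.19]
barred  = [0.20, 0.15, 0.14, 0.51]

print(round(schoener_overlap(spotted, barred), 2))
```

A pair of territories with identical prey-use proportions would score 1.0; completely disjoint diets would score 0.0, bracketing the 28–70% range reported above.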
When viewed collectively, our results support the hypothesis that interference competition with barred owls for territorial space can constrain the availability of critical resources required for successful recruitment and reproduction of spotted owls. Availability of old forests and associated prey species appeared to be the most strongly limiting factors in the competitive relationship between these species, indicating that further loss of these conditions can lead to increases in competitive pressure. Our findings have broad implications for the conservation of spotted owls, as they suggest that spatial heterogeneity in vital rates may not arise solely because of differences among territories in the quality or abundance of forest habitat, but also because of the spatial distribution of a newly established competitor. Experimental removal of barred owls could be used to test this hypothesis and determine whether localized control of barred owl numbers is an ecologically practical and socio-politically acceptable management tool to consider in conservation strategies for spotted owls.
Global Precipitation Measurement Mission Launch and Commissioning
NASA Technical Reports Server (NTRS)
Davis, Nikesha; DeWeese, Keith; Vess, Melissa; O'Donnell, James R., Jr.; Welter, Gary
2015-01-01
During launch and early operation of the Global Precipitation Measurement (GPM) Mission, the Guidance, Navigation, and Control (GN&C) analysis team encountered four main on-orbit anomalies: (1) unexpected shock from Solar Array deployment, (2) momentum buildup from Magnetic Torquer Bar (MTB) phasing errors, (3) a transition into Safehold due to an albedo-induced Coarse Sun Sensor (CSS) anomaly, and (4) a flight software error that could cause a Safehold transition due to a Star Tracker occultation. This paper will discuss how GN&C engineers identified the anomalies and tracked down their root causes. Flight data and GN&C on-board models will be shown to illustrate how each of these anomalies was investigated and mitigated before causing any harm to the spacecraft. On May 29, 2014, GPM was handed over to the Mission Flight Operations Team after a successful commissioning period. Currently, GPM is operating nominally on orbit, collecting meaningful scientific data that will significantly improve our understanding of the Earth's climate and water cycle.
Tan, Qiulin; Li, Chen; Xiong, Jijun; Jia, Pinggang; Zhang, Wendong; Liu, Jun; Xue, Chenyang; Hong, Yingping; Ren, Zhong; Luo, Tao
2014-01-01
In response to the growing demand for in situ measurement of pressure in high-temperature environments, a high-temperature capacitive pressure sensor is presented in this paper. A high-temperature ceramic material, alumina, is used for the fabrication of the sensor, and the prototype sensor consists of an inductance, a variable capacitance, and a sealed cavity integrated in the alumina ceramic substrate using thick-film integration technology. The experimental results show that the proposed sensor is stable at 850 °C for more than 20 min. Characterization in high-temperature, pressurized environments successfully demonstrated sensing capabilities for pressures from 1 to 5 bar at temperatures up to 600 °C, limited by the sensor test setup. At 600 °C, the sensor achieves a linear characteristic response, and the repeatability error, hysteresis error, and zero-point drift of the sensor are 8.3%, 5.05%, and 1%, respectively. PMID:24487624
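A sensor built from an inductance and a pressure-dependent capacitance, as described above, is typically read out as a shift in the LC resonant frequency, f = 1/(2π√(LC)). A minimal sketch with illustrative component values (the actual L and C of the device are not given in the abstract):

```python
import math

def resonant_freq_hz(L_henry, C_farad):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# Illustrative values only: a 1 uH inductor with a ~10 pF sensing capacitor.
L = 1e-6                               # henries (assumed)
f_low  = resonant_freq_hz(L, 10e-12)   # capacitance at low pressure
f_high = resonant_freq_hz(L, 12e-12)   # capacitance rises as the membrane deflects

# Increasing capacitance lowers the resonant frequency.
print(f"{f_low / 1e6:.1f} MHz -> {f_high / 1e6:.1f} MHz")
```

Tracking this downward frequency shift against applied pressure gives the linear characteristic response the authors report at 600 °C.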
Development and validity of an instrumented handbike: initial results of propulsion kinetics.
van Drongelen, Stefan; van den Berg, Jos; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V
2011-11-01
To develop an instrumented handbike system to measure the forces applied to the handgrip during handbiking. A 6-degrees-of-freedom force sensor was built into the handgrip of an attach-unit handbike, together with two optical encoders to measure the orientation of the handgrip and crank in space. Linearity, precision, and percent error were determined for static and dynamic tests. High linearity was demonstrated for both the static and the dynamic condition (r=1.01). Precision was high under the static condition (standard deviation of 0.2 N); however, precision decreased with higher loads during the dynamic condition. Percent error values were between 0.3% and 5.1%. This is the first instrumented handbike system that can register 3-dimensional forces. It can be concluded that the instrumented handbike system allows for an accurate force analysis based on the forces registered at the handlebars. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
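The three validation metrics reported above can be computed from paired applied/measured loads: linearity as a least-squares slope, precision as the standard deviation of repeated readings, and percent error per load level. A minimal sketch with synthetic calibration data, not the study's measurements:

```python
import statistics

# Synthetic calibration data (N): known applied loads vs. sensor readings.
applied  = [10.0, 20.0, 30.0, 40.0, 50.0]
measured = [10.1, 20.3, 30.2, 40.5, 50.4]

# Linearity: slope of the least-squares fit of measured on applied
# (a slope near 1 means the sensor tracks the load proportionally).
n = len(applied)
mean_a = sum(applied) / n
mean_m = sum(measured) / n
slope = (sum((a - mean_a) * (m - mean_m) for a, m in zip(applied, measured))
         / sum((a - mean_a) ** 2 for a in applied))

# Precision: spread of repeated readings at one static load.
repeats = [19.9, 20.1, 20.0, 20.2, 19.8]
precision_sd = statistics.stdev(repeats)

# Percent error at each load level.
pct_err = [abs(m - a) / a * 100 for a, m in zip(applied, measured)]

print(round(slope, 3), round(precision_sd, 2), [round(e, 1) for e in pct_err])
```

With these synthetic numbers the slope comes out just above 1 and the percent errors stay in the low single digits, the same shape of result the authors report.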
NASA Astrophysics Data System (ADS)
Zhang, Shuqing; Wang, Yongquan; Zhi, Xiyang
2017-05-01
A method of diminishing the shape error of a membrane mirror is proposed in this paper. The internal inflating pressure is considerably decreased by adopting a pre-shaped membrane, and a small deformation of the membrane mirror with greatly reduced shape error is subsequently achieved. First, a finite element model of the pre-shaped membrane is built on the basis of its mechanical properties. Accurate shape data under different pressures are then acquired by iteratively calculating the node displacements of the model. These shape data are used to build deformed reflecting surfaces for simulative analysis in ZEMAX. Finally, ground-based imaging experiments with 4-bar targets and a natural scene are conducted. Experimental results indicate that the MTF of the infrared system can reach 0.3 at a spatial frequency of 10 lp/mm, and texture details of the natural scene are well presented. The method can provide a theoretical basis and technical support for applications in lightweight optical components with ultra-large apertures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovalev, Andrew N.
The authors describe a measurement of the top quark mass using events with two charged leptons collected by the CDF II Detector from p$\bar{p}$ collisions at √s = 1.96 TeV at the Fermilab Tevatron. The posterior probability distribution of the top quark pole mass is calculated using the differential cross-section for t$\bar{t}$ production and decay, expressed with respect to the observed lepton and jet momenta. The presence of background events in the collected sample is modeled using calculations of the differential cross-sections for the major background processes. This measurement represents the first application of this method to events with two charged leptons. In a data sample with an integrated luminosity of 340 pb⁻¹, they observe 33 candidate events and measure M_top = 165.2 ± 6.1 (stat) ± 3.4 (syst) GeV/c².
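A result quoted with separate statistical and systematic components is conventionally summarized with a single total uncertainty by adding the two in quadrature; that convention is standard practice, not something stated in the record itself, and the 6.1/3.4 GeV/c² components are taken as the two errors quoted above.

```python
import math

def total_uncertainty(stat, syst):
    """Combine independent statistical and systematic uncertainties in quadrature."""
    return math.sqrt(stat**2 + syst**2)

# Top-mass result: central value with statistical and systematic errors (GeV/c^2).
m_top, stat, syst = 165.2, 6.1, 3.4
print(f"M_top = {m_top} +/- {total_uncertainty(stat, syst):.1f} GeV/c^2")
```

The quadrature sum is appropriate only when the two components are independent, which is the usual assumption when they are quoted separately.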