Sample records for corrective measures implementation

  1. Modeling boundary measurements of scattered light using the corrected diffusion approximation

    PubMed Central

    Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.

    2012-01-01

We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation when benchmarked against numerical solutions of the radiative transport equation. PMID:22435102

  2. Measurement-free implementations of small-scale surface codes for quantum-dot qubits

    NASA Astrophysics Data System (ADS)

    Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.

    2018-01-01

The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that corrects only bit-flip errors and 10^-4 for a two-dimensional architecture that corrects both bit- and phase-flip errors.
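As a classical illustration of the kind of protection the one-dimensional bit-flip architecture provides (this is a textbook repetition-code sketch, not the Toffoli-plus-reset circuit itself), a three-bit repetition code with majority-vote decoding corrects any single bit flip:

```python
def decode(codeword):
    # Majority vote over three copies: any single flipped bit is outvoted.
    return int(sum(codeword) >= 2)

def logical_error_rate(p):
    # The vote fails only if 2 or 3 of the 3 bits flip independently
    # with probability p each.
    return 3 * p**2 * (1 - p) + p**3
```

For p = 0.1 per bit, the logical error rate drops to 0.028, illustrating why thresholds are quoted as the error rate below which encoding helps rather than hurts.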

  3. Hypergol Maintenance Facility Hazardous Waste South Staging Areas, SWMU 070 Corrective Measures Implementation

    NASA Technical Reports Server (NTRS)

    Miller, Ralinda R.

    2016-01-01

    This document presents the Corrective Measures Implementation (CMI) Year 10 Annual Report for implementation of corrective measures at the Hypergol Maintenance Facility (HMF) Hazardous Waste South Staging Areas at Kennedy Space Center, Florida. The work is being performed by Tetra Tech, Inc., for the National Aeronautics and Space Administration (NASA) under Indefinite Delivery Indefinite Quantity (IDIQ) NNK12CA15B, Task Order (TO) 07. Mr. Harry Plaza, P.E., of NASA's Environmental Assurance Branch is the Remediation Project Manager for John F. Kennedy Space Center. The Tetra Tech Program Manager is Mr. Mark Speranza, P.E., and the Tetra Tech Project Manager is Robert Simcik, P.E.

  4. 78 FR 73769 - Approval and Promulgation of Air Quality Implementation Plans; West Virginia; Approval of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-09

    ..., contingency measures, and other planning SIPs related to the attainment of either the 1997 annual or 2006 24... contingency measures, with a schedule for implementation, as EPA deems necessary to assure prompt correction... continued attainment; and (5) a contingency plan to prevent or correct future violations of the NAAQS. III...

  5. Contractors Road Heavy Equipment Area SWMU 055 Corrective Measures Implementation Progress Report

    NASA Technical Reports Server (NTRS)

    Dorman, Lane

    2015-01-01

    This Corrective Measures Implementation (CMI) Progress Report, Revision 1, for Contractor's Road Heavy Equipment (CRHE) Area Solid Waste Management Unit (SWMU) Number 055 was prepared by Geosyntec Consultants (Geosyntec) for the National Aeronautics and Space Administration (NASA) under contract number NNK09CA02B, Delivery Order NNK09CA62D and Project Number PCN ENV-2324. This CMI Progress Report documents: (i) activities conducted as part of supplemental assessment activities completed from June 2009 through November 2014; (ii) Engineering Evaluation (EE) Advanced Data Packages (ADPs); and (iii) recommendations for future activities related to corrective measures at the Site.

  6. Concurrent remote entanglement with quantum error correction against photon losses

    NASA Astrophysics Data System (ADS)

    Roy, Ananda; Stone, A. Douglas; Jiang, Liang

    2016-09-01

Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.

  7. Regulatory controls on the hydrogeological characterization of a mixed waste disposal site, Radioactive Waste Management Complex, Idaho National Engineering Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruebelmann, K.L.

    1990-01-01

Following the detection of chlorinated volatile organic compounds in the summer of 1987 in the groundwater beneath the Subsurface Disposal Area (SDA), hydrogeological characterization of the Radioactive Waste Management Complex (RWMC) at the Idaho National Engineering Laboratory (INEL) was required by the Resource Conservation and Recovery Act (RCRA). The waste site, the SDA, is the subject of a RCRA Corrective Action Program. Regulatory requirements for the Corrective Action Program dictate a phased approach to evaluation of the SDA. In the first phase of the program, the SDA is the subject of a RCRA Facility Investigation (RFI), which will obtain information to fully characterize the physical properties of the site, determine the nature and extent of contamination, and identify pathways for migration of contaminants. If the need for corrective measures is identified during the RFI, a Corrective Measures Study (CMS) will be performed as the second phase. Information generated during the RFI will be used to aid in the selection and implementation of appropriate corrective measures to correct the release. Following the CMS, the final phase is the implementation of the selected corrective measures. 4 refs., 1 fig.

  8. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    PubMed

Malyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for isotropic media. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. Observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
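The b-value rescaling that the abstract describes can be sketched in a few lines. This is a minimal illustration assuming a single known nonlinearity scalar c at a voxel (the paper builds full 3D correction maps); the signal values and 10% gradient error below are hypothetical:

```python
import math

def adc(s0, sb, b):
    # Apparent diffusion coefficient (mm^2/s) from a b=0 and a b>0 signal.
    return math.log(s0 / sb) / b

def adc_gnl_corrected(s0, sb, b_nominal, c):
    # c: gradient-nonlinearity scalar at this voxel (c = 1 at isocenter).
    # The effective diffusion weighting scales as c**2, so correct the b-value.
    return math.log(s0 / sb) / (b_nominal * c ** 2)

# Simulated voxel: true ADC 1.1e-3 mm^2/s, nominal b = 1000 s/mm^2, c = 1.1
true_adc, b, c = 1.1e-3, 1000.0, 1.1
sb = 100.0 * math.exp(-b * c ** 2 * true_adc)  # signal actually encoded with b*c^2
naive = adc(100.0, sb, b)                      # biased high by c**2 = 21%
fixed = adc_gnl_corrected(100.0, sb, b, c)     # recovers the true ADC
```

The ~21% bias at c = 1.1 is consistent with the roughly 20% ADC bias the abstract reports at clinically relevant offsets.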

  9. Corrective Action Glossary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-07-01

The glossary of technical terms was prepared to facilitate the use of the Corrective Action Plan (CAP) issued by OSWER on November 14, 1986. The CAP presents model scopes of work for all phases of a corrective action program, including the RCRA Facility Investigation (RFI), Corrective Measures Study (CMS), Corrective Measures Implementation (CMI), and interim measures. The Corrective Action Glossary includes brief definitions of the technical terms used in the CAP and explains how they are used. In addition, expected ranges (where applicable) are provided. Parameters or terms not discussed in the CAP, but commonly associated with site investigations or remediations, are also included.

  10. 40 CFR 258.56 - Assessment of corrective measures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Assessment of corrective measures. 258.56 Section 258.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... implementation, and potential impacts of appropriate potential remedies, including safety impacts, cross-media...

  11. Simple wavefront correction framework for two-photon microscopy of in-vivo brain

    PubMed Central

    Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.

    2015-01-01

    We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763
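The signal-based, guide-star-free optimization described above can be caricatured with a toy model: scan one basis-mode coefficient at a time (a Zernike or Hadamard mode in the real system) and keep whatever value maximizes the measured signal. Everything below is a hypothetical sketch with a made-up signal function, not the authors' implementation:

```python
import math

def signal(correction, aberration):
    # Toy two-photon signal: maximal when the applied correction exactly
    # cancels the (unknown) aberration, mode by mode.
    residual = sum((c + a) ** 2 for c, a in zip(correction, aberration))
    return math.exp(-residual)

def optimize(n_modes, measure):
    # Sequentially scan each basis-mode coefficient over a coarse grid and
    # keep the value that maximizes the measured signal.
    grid = [k * 0.25 for k in range(-4, 5)]
    correction = [0.0] * n_modes
    for i in range(n_modes):
        best_val, best_sig = correction[i], measure(correction)
        for v in grid:
            trial = correction[:]
            trial[i] = v
            s = measure(trial)
            if s > best_sig:
                best_val, best_sig = v, s
        correction[i] = best_val
    return correction

aberration = [0.5, -0.25, 0.75]   # hypothetical unknown aberration coefficients
found = optimize(3, lambda c: signal(c, aberration))
```

Because only the scalar signal is measured, no exogenous guide star or repeated image acquisition is needed, mirroring the design choice in the abstract.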

  12. Weighted divergence correction scheme and its fast implementation

    NASA Astrophysics Data System (ADS)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

Forcing experimental volumetric velocity fields to satisfy mass conservation has proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, termed the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS and WDCS is developed, making the correction significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves a better performance than DCS in improving some flow statistics.

  13. Measurement of dissolved organic matter fluorescence in aquatic environments: An interlaboratory comparison

    USGS Publications Warehouse

    Murphy, Kathleen R.; Butler, Kenna D.; Spencer, Robert G. M.; Stedmon, Colin A.; Boehme, Jennifer R.; Aiken, George R.

    2010-01-01

The fluorescent properties of dissolved organic matter (DOM) are often studied in order to infer DOM characteristics in aquatic environments, including source, quantity, composition, and behavior. While a potentially powerful technique, a single widely implemented standard method for correcting and presenting fluorescence measurements is lacking, leading to difficulties when comparing data collected by different research groups. This paper reports on a large-scale interlaboratory comparison in which natural samples and well-characterized fluorophores were analyzed in 20 laboratories in the U.S., Europe, and Australia. Shortcomings were evident in several areas, including data quality assurance, the accuracy of the spectral correction factors used to correct excitation-emission matrices (EEMs), and the treatment of optically dense samples. Data corrected by participants according to individual laboratory procedures were more variable than when corrected under a standard protocol. Wavelength dependency in measurement precision and accuracy was observed within and between instruments, even in corrected data. In an effort to reduce future occurrences of similar problems, algorithms for correcting and calibrating EEMs are described in detail, and MATLAB scripts for implementing the study's protocol are provided. Combined with the recent expansion of spectral fluorescence standards, this approach will serve to increase the intercomparability of DOM fluorescence studies.
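For the optically dense samples mentioned above, one widely used absorbance-based inner-filter correction (a standard textbook formula, not necessarily the exact protocol of this study) rescales each observed intensity by the attenuation at the excitation and emission wavelengths; the numbers below are illustrative:

```python
def inner_filter_correct(f_obs, a_ex, a_em):
    # a_ex, a_em: absorbance (for a 1 cm path) at the excitation and
    # emission wavelengths; assumes the beam is centered in a 1 cm cell.
    return f_obs * 10 ** ((a_ex + a_em) / 2.0)

# A hypothetical moderately absorbing sample: observed intensity 100,
# A_ex = 0.10, A_em = 0.05 -> corrected intensity ~118.85
corrected = inner_filter_correct(100.0, 0.10, 0.05)
```

As absorbance grows, the multiplicative correction grows exponentially, which is why very dense samples are usually diluted rather than corrected.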

  14. Ionospheric propagation correction modeling for satellite altimeters

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.

    1981-01-01

The theoretical basis and available accuracy verifications were reviewed and compared for ionospheric correction procedures based on a global ionospheric model driven by solar flux, and a technique in which measured electron content (using Faraday rotation measurements) for one path is mapped into corrections for a hemisphere. For these two techniques, RMS errors for correcting satellite altimeter data (at 14 GHz) are estimated to be 12 cm and 3 cm, respectively. On the basis of global accuracy and reliability after implementation, the solar flux model is recommended.
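The first-order dispersive range error underlying both correction techniques scales as TEC/f^2, which is why the 14 GHz altimeter error sits at the centimeter level. A sketch using the standard 40.3 constant (the TEC value below is illustrative, not taken from the report):

```python
def ionospheric_range_error(tec, freq_hz):
    # First-order group-delay range error (meters) for a signal traversing
    # total electron content `tec` (electrons/m^2) at frequency freq_hz.
    return 40.3 * tec / freq_hz ** 2

# 10 TECU (1 TECU = 1e16 electrons/m^2) at the 14 GHz altimeter frequency:
err = ionospheric_range_error(10 * 1e16, 14e9)  # roughly 2 cm
```

At L-band frequencies the same TEC would produce errors of meters, which is why the residual 3-12 cm RMS quoted above is already a small fraction of the uncorrected effect at lower frequencies.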

  15. Ionospheric Refraction Corrections in the GTDS for Satellite-To-Satellite Tracking Data

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.; Kozelsky, J. K.

    1976-01-01

In satellite-to-satellite tracking (SST), geographic as well as diurnal ionospheric effects must be contended with, for the line of sight between satellites can cross a day-night interface or lie within the equatorial ionosphere. These various effects were examined, and a method of computing ionospheric refraction corrections to range and range-rate measurements with sufficient accuracy for use in orbit determination was devised. The Bent Ionospheric Model is used for SST refraction corrections. Making use of this model, a method of computing corrections through large ionospheric gradients was devised and implemented in the Goddard Trajectory Determination System. The various considerations taken in designing and implementing this SST refraction correction algorithm are reported.

  16. Dead time corrections for in-beam γ-spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.

    2017-08-01

Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival-time interval between incoming events (Δt) obeys an exponential distribution with a single parameter, the mean of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which entirely circumvents the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and those obtained using standard (phenomenological) dead time models, and we show how these results were used to correct our experimental cross sections.
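For reference, one of the standard (phenomenological) dead-time models the abstract benchmarks against, the non-paralyzable model, can be written in two lines; the dead time τ and measured rate below are hypothetical examples:

```python
def true_rate(measured_rate, tau):
    # Non-paralyzable dead-time model: measured n = m / (1 + m*tau),
    # inverted here to recover the true rate m from the measured rate n.
    return measured_rate / (1.0 - measured_rate * tau)

def dead_time_fraction(measured_rate, tau):
    # Fraction of real events lost while the detector is busy (n * tau).
    return measured_rate * tau

# Hypothetical HPGe channel: 10 kcps measured with a 5 microsecond dead time
m = true_rate(10_000.0, 5e-6)   # true rate is about 5% higher than measured
```

The exponential inter-arrival argument in the abstract replaces τ in such formulas with quantities derived directly from the measured counting statistics.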

  17. Application of Pressure-Based Wall Correction Methods to Two NASA Langley Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Iyer, V.; Everhart, J. L.

    2001-01-01

    This paper is a description and status report on the implementation and application of the WICS wall interference method to the National Transonic Facility (NTF) and the 14 x 22-ft subsonic wind tunnel at the NASA Langley Research Center. The method calculates free-air corrections to the measured parameters and aerodynamic coefficients for full span and semispan models when the tunnels are in the solid-wall configuration. From a data quality point of view, these corrections remove predictable bias errors in the measurement due to the presence of the tunnel walls. At the NTF, the method is operational in the off-line and on-line modes, with three tests already computed for wall corrections. At the 14 x 22-ft tunnel, initial implementation has been done based on a test on a full span wing. This facility is currently scheduled for an upgrade to its wall pressure measurement system. With the addition of new wall orifices and other instrumentation upgrades, a significant improvement in the wall correction accuracy is expected.

  18. Measurement-based quantum communication with resource states generated by entanglement purification

    NASA Astrophysics Data System (ADS)

    Wallnöfer, J.; Dür, W.

    2017-01-01

    We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.

  19. Implementation of tuberculosis infection prevention and control in Mozambican health care facilities.

    PubMed

    Brouwer, M; Coelho, E; das Dores Mosse, C; van Leth, F

    2015-01-01

District and urban health care facilities in three provinces (Manica, Sofala, Tete) in central Mozambique. To assess the level of implementation of selected tuberculosis infection prevention and control (TB-IPC) measures. In a cross-sectional study of TB-IPC implementation in 29 health care facilities, we assessed TB clinics, laboratories, out-patient departments and medical and TB wards. Assessment included selected managerial, administrative and environmental measures and the availability and use of respiratory protective equipment (N95 respirators). Guidelines for diagnosis and treatment of (presumptive) TB patients were not present in all facilities. Staff instructed patients on sputum collection in 91% of facilities, but observed collection in only 4%. Using a pragmatic '20% rule', 52% of the rooms assessed had adequate ventilation; potentially, this could be increased to 76%. Three quarters of the health care workers had N95 respirators, but only 36% knew how to use them correctly. Implementation of TB-IPC measures showed wide variations within health care facilities. Relatively simple measures to improve TB-IPC include the availability of guidelines, opening doors and windows to improve ventilation, and training and support on correct N95 respirator use. However, even relatively simple measures are challenging to implement, and require careful attention in and evaluation of the implementation process.

  20. Timeliness and access to healthcare services via telemedicine for adolescents in state correctional facilities.

    PubMed

    Fox, Karen C; Somes, Grant W; Waters, Teresa M

    2007-08-01

The aim of this study was to examine the effectiveness of a telemedicine program in improving timeliness of and access to healthcare services in adolescent correctional facilities. This study used a pre/post quasi-experimental design comparing time to treatment and healthcare use in the year preceding and the 2 years after the implementation of a telemedicine program in four facilities housing adolescents aged 12 to 19 years. Timeliness of care is measured by time from referral to date of service (for behavioral healthcare only). Access to care is measured by use of outpatient care, emergency department (ED) visits, and inpatient visits. Two of the four state correctional facilities had a significant decrease (24%) in time from referral to treatment after the implementation of the telemedicine intervention. The facilities not showing significant improvements in timeliness experienced difficulty implementing the telemedicine program. The telemedicine program was also associated with significant improvements in access to care. Outpatient visits increased by 40% in the 2 years after implementation of telemedicine. For each 1% increase in telemedicine usage, outpatient visits increased by 1%, whereas ED visits decreased by 7%. Telemedicine can have a positive impact on timeliness of and access to care for youth in correctional facilities.

  1. Peeling Away Timing Error in NetFlow Data

    NASA Astrophysics Data System (ADS)

    Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin

In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70 ms, reinforcing the point that implementation matters when conducting research on network measurement data.

  2. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large; the regression calibration method; and the simulation extrapolation method. For the 2S design, the problem of 'missing' data was either ignored or addressed by multiple imputation. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration nonetheless improved substantially when multiple imputations were used. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
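The regression-calibration idea (divide the attenuated naive slope by a reliability ratio estimated from replicates) can be sketched with simulated data. This is a generic illustration with hypothetical numbers, not the EVROS IV grouping method or Stata's implementation:

```python
import random
import statistics

random.seed(7)
n = 20_000
beta_true = 2.0
x = [random.gauss(0, 1) for _ in range(n)]          # true (unobserved) exposure
y = [beta_true * xi + random.gauss(0, 1) for xi in x]
# Two independent replicate measurements of x, each with error sd = 1
w1 = [xi + random.gauss(0, 1) for xi in x]
w2 = [xi + random.gauss(0, 1) for xi in x]

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

beta_naive = cov(w1, y) / cov(w1, w1)   # attenuated toward zero (about 1.0 here)
lam = cov(w1, w2) / cov(w1, w1)         # reliability ratio, about 0.5 here
beta_rc = beta_naive / lam              # regression-calibration corrected slope
```

With measurement-error variance equal to the exposure variance, the naive slope is roughly halved, and dividing by the estimated reliability ratio recovers the true slope up to sampling noise.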

  3. Practical scheme for error control using feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene

    2004-05-01

    We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al. Phys. Rev. A 65, 042301 (2001)], is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.

  4. Effects of a strategy to improve offender assessment practices: Staff perceptions of implementation outcomes.

    PubMed

    Welsh, Wayne N; Lin, Hsiu-Ju; Peters, Roger H; Stahler, Gerald J; Lehman, Wayne E K; Stein, Lynda A R; Monico, Laura; Eggers, Michele; Abdel-Salam, Sami; Pierce, Joshua C; Hunt, Elizabeth; Gallagher, Colleen; Frisman, Linda K

    2015-07-01

    This implementation study examined the impact of an organizational process improvement intervention (OPII) on a continuum of evidence based practices related to assessment and community reentry of drug-involved offenders: Measurement/Instrumentation, Case Plan Integration, Conveyance/Utility, and Service Activation/Delivery. To assess implementation outcomes (staff perceptions of evidence-based assessment practices), a survey was administered to correctional and treatment staff (n=1509) at 21 sites randomly assigned to an Early- or Delayed-Start condition. Hierarchical linear models with repeated measures were used to examine changes in evidence-based assessment practices over time, and organizational characteristics were examined as covariates to control for differences across the 21 research sites. Results demonstrated significant intervention and sustainability effects for three of the four assessment domains examined, although stronger effects were obtained for intra- than inter-agency outcomes. No significant effects were found for Conveyance/Utility. Implementation interventions such as the OPII represent an important tool to enhance the use of evidence-based assessment practices in large and diverse correctional systems. Intra-agency assessment activities that were more directly under the control of correctional agencies were implemented most effectively. Activities in domains that required cross-systems collaboration were not as successfully implemented, although longer follow-up periods might afford detection of stronger effects. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Efficacy of a Process Improvement Intervention on Delivery of HIV Services to Offenders: A Multisite Trial

    PubMed Central

    Shafer, Michael S.; Dembo, Richard; del Mar Vega-Debién, Graciela; Pankow, Jennifer; Duvall, Jamieson L.; Belenko, Steven; Frisman, Linda K.; Visher, Christy A.; Pich, Michele; Patterson, Yvonne

    2014-01-01

    Objectives. We tested a modified Network for the Improvement of Addiction Treatment (NIATx) process improvement model to implement improved HIV services (prevention, testing, and linkage to treatment) for offenders under correctional supervision. Methods. As part of the Criminal Justice Drug Abuse Treatment Studies, Phase 2, the HIV Services and Treatment Implementation in Corrections study conducted 14 cluster-randomized trials in 2011 to 2013 at 9 US sites, where one correctional facility received training in HIV services and coaching in a modified NIATx model and the other received only HIV training. The outcome measure was the odds of successful delivery of an HIV service. Results. The results were significant at the .05 level, and the point estimate for the odds ratio was 2.14. Although overall the results were heterogeneous, the experiments that focused on implementing HIV prevention interventions had a 95% confidence interval that exceeded the no-difference point. Conclusions. Our results demonstrate that a modified NIATx process improvement model can effectively implement improved rates of delivery of some types of HIV services in correctional environments. PMID:25322311

  6. Efficacy of a process improvement intervention on delivery of HIV services to offenders: a multisite trial.

    PubMed

    Pearson, Frank S; Shafer, Michael S; Dembo, Richard; Del Mar Vega-Debién, Graciela; Pankow, Jennifer; Duvall, Jamieson L; Belenko, Steven; Frisman, Linda K; Visher, Christy A; Pich, Michele; Patterson, Yvonne

    2014-12-01

    We tested a modified Network for the Improvement of Addiction Treatment (NIATx) process improvement model to implement improved HIV services (prevention, testing, and linkage to treatment) for offenders under correctional supervision. As part of the Criminal Justice Drug Abuse Treatment Studies, Phase 2, the HIV Services and Treatment Implementation in Corrections study conducted 14 cluster-randomized trials in 2011 to 2013 at 9 US sites, where one correctional facility received training in HIV services and coaching in a modified NIATx model and the other received only HIV training. The outcome measure was the odds of successful delivery of an HIV service. The results were significant at the .05 level, and the point estimate for the odds ratio was 2.14. Although overall the results were heterogeneous, the experiments that focused on implementing HIV prevention interventions had a 95% confidence interval that exceeded the no-difference point. Our results demonstrate that a modified NIATx process improvement model can effectively implement improved rates of delivery of some types of HIV services in correctional environments.

  7. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  8. Implementation of real-time nonuniformity correction with multiple NUC tables using FPGA in an uncooled imaging system

    NASA Astrophysics Data System (ADS)

    Oh, Gyong Jin; Kim, Lyang-June; Sheen, Sue-Ho; Koo, Gyou-Phyo; Jin, Sang-Hun; Yeo, Bo-Yeon; Lee, Jong-Ho

    2009-05-01

    This paper presents a real-time implementation of Non-Uniformity Correction (NUC). Two-point correction and one-point correction with a shutter were carried out in an uncooled imaging system intended for a missile application. To design a small, lightweight and high-speed imaging system for a missile system, an SoPC (System on a Programmable Chip) comprising an FPGA and a soft core (MicroBlaze) was used. Real-time NUC and generation of control signals are implemented using the FPGA. In addition, three different NUC tables were created to shorten the operating time and to reduce power consumption over a large range of environmental temperatures. The imaging system consists of optics and four electronics boards: a detector interface board, an analog-to-digital converter board, a detector signal generation board and a power supply board. To evaluate the imaging system, the NETD was measured; it was less than 160 mK at three different environmental temperatures.
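    The two-point correction described above can be sketched in a few lines: per-pixel gain and offset coefficients are derived from two uniform blackbody reference frames and applied to every raw frame. This is a minimal illustration, not the paper's FPGA implementation; the reference temperatures and pixel responses below are invented:

    ```python
    import numpy as np

    def build_two_point_nuc(cold, hot, t_cold, t_hot):
        """Per-pixel gain/offset from two uniform blackbody frames."""
        gain = (t_hot - t_cold) / (hot - cold)
        offset = t_cold - gain * cold
        return gain, offset

    def apply_nuc(raw, gain, offset):
        """Map each pixel's raw response onto the common radiometric scale."""
        return gain * raw + offset

    # synthetic 2x2 detector: raw = a*scene + b, with per-pixel a, b
    a = np.array([[1.0, 1.2], [0.8, 1.1]])
    b = np.array([[5.0, -3.0], [2.0, 0.0]])
    cold, hot = a * 10.0 + b, a * 50.0 + b       # references at "10" and "50"
    gain, offset = build_two_point_nuc(cold, hot, 10.0, 50.0)
    corrected = apply_nuc(a * 30.0 + b, gain, offset)  # uniform scene of "30"
    ```

    The paper's three NUC tables would correspond to three such (gain, offset) pairs, one per ambient-temperature band, selected at run time.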

  9. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    PubMed Central

    Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija

    2018-01-01

    The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease position accuracy, so sophisticated error modelling and careful implementation of the integration algorithms are key to obtaining a viable result. Algorithms used for multi-sensor fusion have traditionally been variants of the Kalman filter, which assumes that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption holds for tactical applications, especially for dismounted soldiers or rescue personnel. Our approach is therefore to use particle filtering (PF), a sophisticated option for integrating measurements arising from pedestrian motion with non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and from vision-based heading and translation measurements, so that the correct error probability density functions (pdfs) are included in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models, a particle filtering method is developed to fuse all this information, with the weight of each particle computed from the specific models derived. The performance of the developed method is tested in two experiments, one at a university's premises and another in realistic tactical conditions. The results show significant improvement in horizontal localization when the measurement errors are carefully modelled and their inclusion in the particle filtering implementation is correctly realized. PMID:29443918
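    The weighting step described above can be illustrated with a toy particle filter that uses a heavy-tailed Laplace likelihood instead of a Gaussian one. The wall geometry, noise scales, and sonar model below are invented for the example; the paper's actual error pdfs come from its model-fitting step:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def laplace_pdf(x, mu, b):
        """Heavy-tailed (non-Gaussian) measurement-error model."""
        return np.exp(-np.abs(x - mu) / b) / (2.0 * b)

    def pf_update(particles, weights, range_meas, wall_x, b_range):
        """Reweight particles by the likelihood of a sonar range to a wall."""
        predicted = wall_x - particles[:, 0]       # range each particle implies
        w = weights * laplace_pdf(range_meas, predicted, b_range)
        return w / w.sum()

    def resample(particles, weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        return particles[idx], np.full(len(weights), 1.0 / len(weights))

    # invented scene: pedestrian truly at x = 4, wall at x = 10, so the
    # sonar range is 6 plus Laplace-distributed noise
    n = 2000
    particles = np.column_stack([rng.uniform(0.0, 10.0, n),
                                 rng.uniform(-1.0, 1.0, n)])
    weights = np.full(n, 1.0 / n)
    for _ in range(5):
        meas = 6.0 + rng.laplace(0.0, 0.3)
        weights = pf_update(particles, weights, meas, wall_x=10.0, b_range=0.3)
        particles, weights = resample(particles, weights)
    est_x = particles[:, 0].mean()                 # posterior position estimate
    ```

    Swapping `laplace_pdf` for any fitted pdf is the only change needed to use an empirically deduced error model, which is the point of the paper's approach.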

  10. Systematic Analysis Of Ocean Colour Uncertainties

    NASA Astrophysics Data System (ADS)

    Lavender, Samantha

    2013-12-01

    This paper reviews current research into the estimation of uncertainties as a pixel-based measure to aid non-specialist users of remote sensing products. An example MERIS image, captured on 28 March 2012, was processed with above-water atmospheric correction code based on both the Antoine & Morel standard atmospheric correction, with a bright-pixel correction component, and Doerffer's neural-network coastal-waters approach. Analysis of the atmospheric by-products showed that they yield important information about the separation of the atmospheric and in-water signals, helping to signpost possible uncertainties in the atmospheric correction results. Further analysis concentrated on implementing a 'simplistic' atmospheric correction so that the impact of changing the input auxiliary data can be analysed; the influence of changing surface pressure is demonstrated. Future work will focus on automating the analysis so that the methodology can be implemented within an operational system.

  11. An expert system shell for inferring vegetation characteristics: Atmospheric techniques (Task G)

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann; Harrison, Patrick R.

    1993-01-01

    The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. The VEG Subgoals have been reorganized into categories. A new subgoal category 'Atmospheric Techniques' containing two new subgoals has been implemented. The subgoal Atmospheric Passes allows the scientist to take reflectance data measured at ground level and predict what the reflectance values would be if the data were measured at a different atmospheric height. The subgoal Atmospheric Corrections allows atmospheric corrections to be made to data collected from an aircraft or by a satellite to determine what the equivalent reflectance values would be if the data were measured at ground level. The report describes the implementation and testing of the basic framework and interface for the Atmospheric Techniques Subgoals.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strait, R.S.; Pearson, P.K.; Sengupta, S.K.

    A password system comprises a set of codewords spaced apart from one another by a Hamming distance (HD) that exceeds twice the variability that can be projected for a series of biometric measurements for a particular individual and that is less than the HD that can be encountered between two individuals. To enroll an individual, a biometric measurement is taken and exclusive-ORed with a random codeword to produce a reference value. To verify the individual later, a biometric measurement is taken and exclusive-ORed with the reference value to reproduce the original random codeword or its approximation. If the reproduced value is not a codeword, the nearest codeword to it is found, and the bits that were corrected to produce that codeword are also toggled in the biometric measurement taken and in the codeword generated during enrollment. The correction scheme can be implemented by any conventional error correction code, such as a Reed-Muller code R(m,n); the implementation using a hand geometry device employs an R(2,5) code. Such a codeword and biometric measurement can then be used to see if the individual is an authorized user. Conventional Diffie-Hellman public key encryption schemes and hashing procedures can then be used to secure the communications lines carrying the biometric information and to secure the database of authorized users.

  13. WORLD AND NATIONAL EXPERIENCE IN ORGANIZATION OF PREVENTION OF CARDIOVASCULAR DISEASES.

    PubMed

    Biduchak, A; Chornenka, Zh

    2017-11-01

    The aim of the study was to examine the global, European and national experience in the implementation of preventive programs and to reveal their value for health, the economy and social health development. The research found that implementation of the national program, a correct methodological approach by the physician to evaluating risk factors, and implementation of preventive measures against diseases of the circulatory system bring positive results (reduction of the prevalence and incidence of cerebral stroke by 13.7% and 1.4%, respectively). The analysis of the health care industry pointed out possible directions for optimizing the prevention of behavioral risk factors in the practice of family medicine, the first point of contact with the patient, where preventive measures are essential and effective. Summing up, it should be noted that at the level of primary health care, particularly family medicine, with effectively coordinated work and correctly set motivation, preventive measures against risk factors for diseases of the circulatory system can be quite effective.

  14. The Seasat scanning multichannel microwave radiometer /SMMR/: Antenna pattern corrections - Development and implementation

    NASA Technical Reports Server (NTRS)

    Njoku, E. G.; Christensen, E. J.; Cofield, R. E.

    1980-01-01

    The antenna temperatures measured by the Seasat scanning multichannel microwave radiometer (SMMR) differ from the true brightness temperatures of the observed scene due to antenna pattern effects, principally from antenna sidelobe contributions and cross-polarization coupling. To provide accurate brightness temperatures convenient for geophysical parameter retrievals the antenna temperatures are processed through a series of stages, collectively known as the antenna pattern correction (APC) algorithm. A description of the development and implementation of the APC algorithm is given, along with an error analysis of the resulting brightness temperatures.

  15. Implementation of the WICS Wall Interference Correction System at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit; Everhart, Joel L.; Bir, Pamela J.; Ulbrich, Norbert

    2000-01-01

    The Wall Interference Correction System (WICS) is operational at the National Transonic Facility (NTF) of NASA Langley Research Center (NASA LaRC) for semispan and full span tests in the solid wall (slots covered) configuration. The method is based on the wall pressure signature method for computing corrections to the measured parameters. It is an adaptation of the WICS code operational at the 12 ft pressure wind tunnel (12ft PWT) of NASA Ames Research Center (NASA ARC). This paper discusses the details of implementation of WICS at the NTF including tunnel calibration, code modifications for tunnel and support geometry, changes made for the NTF wall orifices layout, details of interfacing with the tunnel data processing system, and post-processing of results. Example results of applying WICS to a semispan test and a full span test are presented. Comparison with classical correction results and an analysis of uncertainty in the corrections are also given. As a special application of the code, the Mach number calibration data from a centerline pipe test was computed by WICS. Finally, future work for expanding the applicability of the code including online implementation is discussed.

  16. Implementation of the WICS Wall Interference Correction System at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit; Everhart, Joel L.; Bir, Pamela J.; Ulbrich, Norbert

    2000-01-01

    The Wall Interference Correction System (WICS) is operational at the National Transonic Facility (NTF) of NASA Langley Research Center (NASA LaRC) for semispan and full span tests in the solid wall (slots covered) configuration. The method is based on the wall pressure signature method for computing corrections to the measured parameters. It is an adaptation of the WICS code operational at the 12 ft pressure wind tunnel (12ft PWT) of NASA Ames Research Center (NASA ARC). This paper discusses the details of implementation of WICS at the NTF including tunnel calibration, code modifications for tunnel and support geometry, changes made for the NTF wall orifices layout, details of interfacing with the tunnel data processing system, and post-processing of results. Example results of applying WICS to a semispan test and a full span test are presented. Comparison with classical correction results and an analysis of uncertainty in the corrections are also given. As a special application of the code, the Mach number calibration data from a centerline pipe test was computed by WICS. Finally, future work for expanding the applicability of the code including online implementation is discussed.

  17. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system and the propagation characteristics of the AIS signal that transmits at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of the actual environmental factors, the ASF correction system corrects original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  18. Economic evaluation of a mentorship and enhanced supervision program to improve quality of integrated management of childhood illness care in rural Rwanda.

    PubMed

    Manzi, Anatole; Mugunga, Jean Claude; Iyer, Hari S; Magge, Hema; Nkikabahizi, Fulgence; Hirschhorn, Lisa R

    2018-01-01

    Integrated management of childhood illness (IMCI) can reduce under-5 morbidity and mortality in low-income settings. A program to strengthen IMCI practices through Mentorship and Enhanced Supervision at Health centers (MESH) was implemented in two rural districts in eastern Rwanda in 2010. We estimated cost per improvement in quality of care, as measured by the difference in correct diagnosis and correct treatment at baseline and at 12 months of MESH. Costs of developing and implementing MESH were estimated in 2011 United States Dollars (USD) from the provider perspective using both top-down and bottom-up approaches, from programmatic financial records and site-level data. Improvement in quality of care attributed to MESH was measured through case management observations (n = 292 cases at baseline, 413 cases at 12 months), with outcomes from the intervention already published. Sensitivity analyses were conducted to assess uncertainty under different assumptions of quality of care and patient volume. The total annual cost of MESH was US$27,955.74 and the average cost added by MESH per IMCI patient was US$1.06. Salary and benefits accounted for the majority of total annual costs (US$22,400/year). Improvements in quality of care after 12 months of MESH implementation cost US$2.95 per additional child correctly diagnosed and US$5.30 per additional child correctly treated. The incremental costs per additional child correctly diagnosed and correctly treated suggest that MESH could be an affordable method for improving IMCI quality of care elsewhere in Rwanda and similar settings. Integrating MESH into existing supervision systems would further reduce costs, increasing the potential for spread.
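    The cost-per-improvement metric above is simple arithmetic: programme cost divided by the extra correctly managed cases implied by the quality gain. A sketch with hypothetical figures (not the study's actual volumes or proportions):

    ```python
    def cost_per_improvement(annual_cost, n_cases, p_before, p_after):
        """Programme cost divided by the extra correctly managed cases
        attributable to the observed quality gain."""
        additional_cases = (p_after - p_before) * n_cases
        return annual_cost / additional_cases

    # hypothetical: a US$28,000 programme lifting correct diagnosis
    # from 60% to 80% across 20,000 child visits
    cost_per_extra_diagnosis = cost_per_improvement(28_000, 20_000, 0.60, 0.80)
    ```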

  19. Proportional crosstalk correction for the segmented clover at iThemba LABS

    NASA Astrophysics Data System (ADS)

    Bucher, T. D.; Noncolela, S. P.; Lawrie, E. A.; Dinoko, T. R. S.; Easton, J. L.; Erasmus, N.; Lawrie, J. J.; Mthembu, S. H.; Mtshali, W. X.; Shirinda, O.; Orce, J. N.

    2017-11-01

    Reaching new depths in nuclear structure investigations requires new experimental equipment and new techniques of data analysis. Modern γ-ray spectrometers, like AGATA and GRETINA, are now built of new-generation segmented germanium detectors. These most advanced detectors are able to reconstruct the trajectory of a γ-ray inside the detector. They are powerful detectors, but they need careful characterization, since their output signals are more complex. For instance, for each γ-ray interaction that occurs in a segment of such a detector, additional output signals (called proportional crosstalk), falsely appearing as independent (often negative) energy depositions, are registered on the non-interacting segments. A failure to implement crosstalk correction results in incorrectly measured energies on the segments for two- and higher-fold events, affecting all experiments that rely on the recorded segment energies. Furthermore, incorrectly recorded segment energies cause a failure to reconstruct the γ-ray trajectories using Compton scattering analysis. The proportional crosstalk for the iThemba LABS segmented clover was measured and a crosstalk correction was successfully implemented. The measured crosstalk-corrected energies show good agreement with the true γ-ray energies independent of the number of hit segments, and an improved energy resolution for the segment sum energy was obtained.
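    At its core, proportional-crosstalk correction is a linear unmixing: if the measured segment energies are the true deposits multiplied by a near-identity crosstalk matrix, correction is a single matrix inversion applied per event. A schematic sketch; the coefficients below are invented, not the iThemba LABS measured values:

    ```python
    import numpy as np

    # invented 4-segment crosstalk matrix: unit diagonal plus small
    # proportional couplings (negative entries mimic the falsely
    # negative energies registered on non-interacting segments)
    C = np.array([
        [1.00, -0.02, -0.01, -0.02],
        [-0.02, 1.00, -0.02, -0.01],
        [-0.01, -0.02, 1.00, -0.02],
        [-0.02, -0.01, -0.02, 1.00],
    ])

    def crosstalk_correct(measured, C):
        """Recover true segment energies from measured = C @ true."""
        return np.linalg.solve(C, measured)

    true = np.array([300.0, 362.0, 0.0, 0.0])   # two-fold 662 keV event
    measured = C @ true                         # what the electronics record
    corrected = crosstalk_correct(measured, C)
    ```

    With the crosstalk undone, the segment sum again matches the full-energy peak, which is what enables the Compton-scattering trajectory reconstruction mentioned above.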

  20. Method and system for normalizing biometric variations to authenticate users from a public database and that ensures individual biometric data privacy

    DOEpatents

    Strait, Robert S.; Pearson, Peter K.; Sengupta, Sailes K.

    2000-01-01

    A password system comprises a set of codewords spaced apart from one another by a Hamming distance (HD) that exceeds twice the variability that can be projected for a series of biometric measurements for a particular individual and that is less than the HD that can be encountered between two individuals. To enroll an individual, a biometric measurement is taken and exclusive-ORed with a random codeword to produce a "reference value." To verify the individual later, a biometric measurement is taken and exclusive-ORed with the reference value to reproduce the original random codeword or its approximation. If the reproduced value is not a codeword, the nearest codeword to it is found, and the bits that were corrected to produce that codeword are also toggled in the biometric measurement taken and in the codeword generated during enrollment. The correction scheme can be implemented by any conventional error correction code, such as a Reed-Muller code R(m,n); the implementation using a hand geometry device employs an R(2,5) code. Such a codeword and biometric measurement can then be used to see if the individual is an authorized user. Conventional Diffie-Hellman public key encryption schemes and hashing procedures can then be used to secure the communications lines carrying the biometric information and to secure the database of authorized users.
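    The enroll/verify flow can be sketched with a toy repetition code standing in for the patent's Reed-Muller R(2,5) code (a deliberate simplification). The 20-bit "biometric" and the flipped bit positions are invented; majority voting corrects up to two flips per 5-bit block:

    ```python
    import secrets

    def repetition_encode(bits, r=5):
        """Toy stand-in for the patent's Reed-Muller encoder."""
        return [b for b in bits for _ in range(r)]

    def repetition_decode(code, r=5):
        """Majority vote: corrects up to r//2 bit flips per r-bit block."""
        return [1 if sum(code[i:i + r]) > r // 2 else 0
                for i in range(0, len(code), r)]

    def xor(a, b):
        return [x ^ y for x, y in zip(a, b)]

    def enroll(biometric, msg_len=4, r=5):
        """Store biometric XOR codeword; neither is recoverable alone."""
        msg = [secrets.randbelow(2) for _ in range(msg_len)]
        reference = xor(biometric, repetition_encode(msg, r))
        return reference, msg

    def verify(biometric, reference, r=5):
        """XOR a fresh reading with the reference; decode off the noise."""
        return repetition_decode(xor(biometric, reference), r)

    bio = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
    reference, msg = enroll(bio)
    noisy = bio.copy()
    noisy[0] ^= 1            # two measurement-noise bit flips,
    noisy[7] ^= 1            # in different 5-bit blocks
    recovered = verify(noisy, reference)
    ```

    As in the patent, the stored reference value reveals neither the codeword nor the biometric on its own, yet small measurement variability is absorbed by the error-correcting code.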

  1. Rapid Assessment Response (RAR) study: drug use, health and systemic risks--Emthonjeni Correctional Centre, Pretoria, South Africa.

    PubMed

    Dos Santos, Monika M L; Trautmann, Franz; Wolvaardt, Gustaaf; Palakatsela, Romeo

    2014-04-03

    Correctional centre populations are among the populations most at risk of contracting HIV infection for many reasons, such as unprotected sex, violence, rape and tattooing with contaminated equipment. Specific data on drug users in correctional centres are not available for the majority of countries, including South Africa. The study aimed to identify the attitudes and knowledge of key informant (KI) offenders and correctional centre staff regarding drug use, health and systemic-related problems, so as to facilitate the long-term planning of activities in the field of drug-use prevention and systems strengthening in correctional centres, including suggestions for the development of appropriate intervention and rehabilitation programmes. A Rapid Assessment Response (RAR) methodology was adopted, which included observation, mapping of service providers (SP), KI interviews (staff and offenders) and focus groups (FGs). The study was implemented in Emthonjeni Youth Correctional Centre, Pretoria, South Africa. Fifteen KI staff and 45 KI offenders were interviewed. Drug use is fairly prevalent in the centre, with tobacco most commonly smoked, followed by cannabis and heroin. The banning of tobacco has also led to black-market features such as transactional sex, violence, gangsterism and smuggling in order to obtain mainly prohibited tobacco products, as well as illicit substances. HIV, health and systemic-related risk reduction within the Correctional Service sector needs to focus on measures such as improvement of staff capacity and security measures, deregulation of tobacco products and the development and implementation of comprehensive health promotion programmes.

  2. Autonomous Quantum Error Correction with Application to Quantum Metrology

    NASA Astrophysics Data System (ADS)

    Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.

    2017-04-01

    We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  3. Analysis and correction of gradient nonlinearity bias in apparent diffusion coefficient measurements.

    PubMed

    Malyarenko, Dariya I; Ross, Brian D; Chenevert, Thomas L

    2014-03-01

    Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10-20%) in apparent diffusion coefficient (ADC) measurements over clinically relevant field-of-views. This work seeks a practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. All-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Spatial dependence of nonlinearity correction terms accounts for the bulk (75-95%) of ADC bias for FA = 0.3-0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic medium is achieved with non-lab-based diffusion gradients. Copyright © 2013 Wiley Periodicals, Inc.
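    The norm-based idea can be illustrated numerically: under a nonlinearity tensor L, a nominal diffusion direction u actually delivers an effective b-value b_nom·‖Lu‖², so the ADC computed with the nominal b is rescaled accordingly. A sketch of the principle with an invented L; the paper derives L from the scanner's gradient-coil characterization:

    ```python
    import numpy as np

    def corrected_adc(adc_meas, L, u):
        """Rescale an ADC computed with the nominal b-value: the distorted
        gradient L @ u actually delivered b_eff = b_nom * ||L u||**2."""
        return adc_meas / np.linalg.norm(L @ u) ** 2

    # invented voxel nonlinearity: 5% over-scaling on x plus a small cross-term
    L = np.array([[1.05, 0.02, 0.00],
                  [0.00, 1.00, 0.00],
                  [0.00, 0.00, 0.98]])
    u = np.array([1.0, 0.0, 0.0])             # nominal diffusion direction
    D_true, b_nom = 1.0e-3, 1000.0            # mm^2/s and s/mm^2

    b_eff = b_nom * np.linalg.norm(L @ u) ** 2
    signal = np.exp(-b_eff * D_true)          # S/S0 actually acquired
    adc_meas = np.log(1.0 / signal) / b_nom   # biased: assumes nominal b
    adc_corr = corrected_adc(adc_meas, L, u)
    ```

    Applying this per voxel and per diffusion direction is what lets three orthogonal measurements suffice, without a full tensor acquisition.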

  4. Analysis and correction of gradient nonlinearity bias in ADC measurements

    PubMed Central

    Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.

    2013-01-01

    Purpose Gradient nonlinearity of MRI systems leads to spatially-dependent b-values and consequently high non-uniformity errors (10–20%) in ADC measurements over clinically relevant field-of-views. This work seeks a practical correction procedure that effectively reduces observed ADC bias for media of arbitrary anisotropy in the fewest measurements. Methods All-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Results Spatial dependence of nonlinearity correction terms accounts for the bulk (75–95%) of ADC bias for FA = 0.3–0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. Conclusions The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic medium is achieved with non-lab-based diffusion gradients. PMID:23794533

  5. Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementation details. Two slightly different implementations of reconstruction-reprojection-based motion correction were optimised for effective, good-quality motion correction and then compared with each other. The first (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where motion correction is done in projection space, whereas the second (Method 2) performed motion correction in reconstruction space. The parameters optimised included the type of cost function (squared difference, normalised cross-correlation or mutual information) used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small; the execution time for Method 2 was much longer, which limits its clinical usefulness. The mutual information cost function clearly gave the best results for all three motion sets and both correction methods, and three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.

  6. DOD Acquisition Information Management

    DTIC Science & Technology

    1994-09-30

    instead of on a real-time management information flow. The process of identifying risks and implementing corrective actions is lengthened by using the current system; performance measurement and reporting are impeded.

  7. [Study on phase correction method of spatial heterodyne spectrometer].

    PubMed

    Wang, Xin-Qiang; Ye, Song; Zhang, Li-Juan; Xiong, Wei

    2013-05-01

    Phase distortion exists in collected interferograms for a variety of measurement reasons when spatial heterodyne spectrometers are used in practice, so an improved phase correction method is presented. The phase curve of the interferogram was obtained through an inverse Fourier transform of the extracted single-side transform spectrum; from this, the phase distortions were obtained by fitting the phase slope, yielding the phase correction functions, and a convolution was performed between the transform spectrum and the phase correction function to implement spectral phase correction. The method was applied to phase correction of an actually measured monochromatic spectrum and a simulated water vapor spectrum. Experimental results show that low-frequency false signals in the monochromatic spectrum fringes are effectively eliminated, increasing the periodicity and symmetry of the interferogram; in addition, when the continuous spectrum with imposed phase error was corrected, the standard deviation between it and the original spectrum was reduced from 0.47 to 0.20, and thus the accuracy of the spectrum could be improved.
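    The fit-and-remove step can be sketched as: transform the interferogram, fit a straight line to the unwrapped phase over the strong spectral bins, and multiply the ramp out. This is a schematic analogue of the method (using a multiplicative ramp removal rather than the paper's convolution formulation); the synthetic band-limited interferogram below is invented:

    ```python
    import numpy as np

    def phase_correct(ifgm):
        """Remove a linear phase distortion from an interferogram's spectrum."""
        spec = np.fft.rfft(ifgm)
        k = np.arange(len(spec))
        strong = np.abs(spec) > 0.1 * np.abs(spec).max()  # bins carrying signal
        slope, intercept = np.polyfit(k[strong],
                                      np.unwrap(np.angle(spec[strong])), 1)
        return spec * np.exp(-1j * (slope * k + intercept))

    # synthetic interferogram: a flat band (bins 20-40) whose centre burst is
    # shifted by 10 samples, i.e. a pure linear phase error
    n, shift = 256, 10
    k = np.arange(n // 2 + 1)
    band = np.zeros(n // 2 + 1, dtype=complex)
    band[20:41] = np.exp(-2j * np.pi * k[20:41] * shift / n)
    ifgm = np.fft.irfft(band, n)
    corrected = phase_correct(ifgm)
    ```

    After correction the in-band spectrum is real and positive, i.e. the ramp introduced by the shifted centre burst has been removed.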

  8. Real-Time Phase Correction Based on FPGA in the Beam Position and Phase Measurement System

    NASA Astrophysics Data System (ADS)

    Gao, Xingshun; Zhao, Lei; Liu, Jinxin; Jiang, Zouyi; Hu, Xiaofang; Liu, Shubin; An, Qi

    2016-12-01

    A fully digital beam position and phase measurement (BPPM) system was designed for the linear accelerator (LINAC) in Accelerator Driven Sub-critical System (ADS) in China. Phase information is obtained from the summed signals from four pick-ups of the Beam Position Monitor (BPM). Considering that the delay variations of different analog circuit channels would introduce phase measurement errors, we propose a new method to tune the digital waveforms of four channels before summation and achieve real-time error correction. The process is based on the vector rotation method and implemented within one single Field Programmable Gate Array (FPGA) device. Tests were conducted to evaluate this correction method and the results indicate that a phase correction precision better than ± 0.3° over the dynamic range from -60 dBm to 0 dBm is achieved.
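    The vector-rotation method amounts to rotating each channel's (I, Q) sample by its known phase offset before the four channels are summed. A small numerical sketch; the beam phase and per-channel offsets are made-up values, whereas in the real system they come from calibration and run inside the FPGA:

```python
import numpy as np

def rotate_iq(i, q, theta):
    """Rotate an (I, Q) sample by -theta, cancelling a known channel
    phase offset (the vector-rotation step)."""
    c, s = np.cos(theta), np.sin(theta)
    return i*c + q*s, q*c - i*s

# Four pick-up channels see the same beam phase through different analog
# delays, modelled here as per-channel phase offsets.
beam_phase = 0.6
offsets = np.array([0.00, 0.12, -0.08, 0.05])
iq = [(np.cos(beam_phase + o), np.sin(beam_phase + o)) for o in offsets]

# Rotate each channel back, then sum and read out the beam phase.
corrected = [rotate_iq(i, q, o) for (i, q), o in zip(iq, offsets)]
i_sum = sum(c[0] for c in corrected)
q_sum = sum(c[1] for c in corrected)
measured = np.arctan2(q_sum, i_sum)
```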

  9. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    NASA Astrophysics Data System (ADS)

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-01

    In this article we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We discuss the limitations of the methods presented in those papers, considering in particular multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and present the simplest and most straightforward methods to implement those corrections.
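    For the baseline case the earlier papers start from, a constant binomial efficiency eps, the measured factorial moments obey F_k(measured) = eps**k * F_k(true), so the correction is a division by eps**k. A sketch with a synthetic Poisson event sample; the efficiency value and sample size are arbitrary:

```python
import numpy as np

def factorial_moment(n, k):
    """Sample factorial moment F_k = <n(n-1)...(n-k+1)>."""
    prod = np.ones_like(n, dtype=float)
    for j in range(k):
        prod *= n - j
    return prod.mean()

# Toy event sample: true multiplicities are Poisson, detection is a
# constant binomial efficiency eps (the simplest case; the paper's point
# is what to do when eps depends on multiplicity or is nonbinomial).
rng = np.random.default_rng(1)
eps = 0.7
true_n = rng.poisson(10.0, 200_000)
meas_n = rng.binomial(true_n, eps)

# Under constant binomial efficiency, F_k(measured) = eps**k * F_k(true).
F1 = factorial_moment(meas_n, 1) / eps
F2 = factorial_moment(meas_n, 2) / eps**2
mean_corr = F1                    # corrected C1
var_corr = F2 + F1 - F1**2        # corrected C2 from factorial moments
```

    For a Poisson sample both corrected cumulants should recover the true value of 10, despite the measured distribution having mean 7.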

  10. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    DOE PAGES

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-19

    Here, we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We discuss the limitations of the methods presented in those papers, considering in particular multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and present the simplest and most straightforward methods to implement those corrections.

  11. Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory

    PubMed Central

    Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.

    2014-01-01

    Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
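    The gradient-entropy focusing metric used to select the best motion-compensated candidate can be sketched as follows. This is a global version for clarity; the paper evaluates the metric locally to build the pixel-by-pixel composite:

```python
import numpy as np

def gradient_entropy(img):
    """Gradient-entropy focusing metric: a sharp image concentrates its
    gradient magnitude in few pixels, giving low entropy."""
    gx, gy = np.gradient(img.astype(float))
    g = np.hypot(gx, gy).ravel()
    p = g / (g.sum() + 1e-12)
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz])))

# A crisp edge vs. a motion-blurred copy: blur raises the metric.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
kernel = np.ones(9) / 9.0
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
h_sharp = gradient_entropy(img)
h_blurred = gradient_entropy(blurred)
```

    Among a bank of candidate reconstructions, the one minimising this entropy at a given location would be selected.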

  12. Sensitivity Study of the Wall Interference Correction System (WICS) for Rectangular Tunnels

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Everhart, Joel L.; Iyer, Venkit

    2001-01-01

    An off-line version of the Wall Interference Correction System (WICS) has been implemented for the NASA Langley National Transonic Facility. The correction capability is currently restricted to corrections for solid wall interference in the model pitch plane for Mach numbers less than 0.45 due to a limitation in tunnel calibration data. A study to assess output sensitivity to measurement uncertainty was conducted to determine standard operational procedures and guidelines to ensure data quality during the testing process. Changes to the current facility setup and design recommendations for installing the WICS code into a new facility are reported.

  13. 78 FR 26708 - Pacific Halibut Fisheries; Catch Sharing Plan; Correcting Amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-08

    ... published on March 15, 2013, that implemented annual management measures governing the Pacific halibut... (78 FR 16423), included annual management measures for managing the harvest of Pacific halibut (Hippoglossus stenolepis) in the sport fishery in International Pacific Halibut Commission (IPHC) Regulatory...

  14. Eccentric correction for off-axis vision in central visual field loss.

    PubMed

    Gustafsson, Jörgen; Unsbo, Peter

    2003-07-01

    Subjects with absolute central visual field loss use eccentric fixation and magnifying devices to utilize their residual vision. This preliminary study investigated the importance of an accurate eccentric correction of off-axis refractive errors to optimize residual visual function for these subjects. Photorefraction with the PowerRefractor instrument was used to evaluate the ametropia at eccentric fixation angles. Methods were adapted for measuring visual acuity outside the macula using filtered optotypes from high-pass resolution perimetry. Optical corrections were implemented, and the visual function of subjects with central visual field loss was measured with and without eccentric correction. Of the seven cases reported, five experienced an improvement in visual function in their preferred retinal locus with eccentric refraction. The main result was that optical correction for better image quality on the peripheral retina is important for the vision of subjects with central visual field loss, objectively as well as subjectively.

  15. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    NASA Astrophysics Data System (ADS)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest is how far one can reduce the resource requirements needed to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is otherwise crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under assumptions similar to those for the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, so the output probability distributions are generated in constant time, independent of the system size; it could thus in principle be implemented without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our use of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so the two require the same physical resources, in contrast to the more demanding verification protocols seen elsewhere in the literature.

  16. Multi-Angle Implementation of Atmospheric Correction for MODIS (MAIAC). Part 3: Atmospheric Correction

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Hilker, T.; Hall, F.; Sellers, P.; Tucker, J.; Korkin, S.

    2012-01-01

    This paper describes the atmospheric correction (AC) component of the Multi-Angle Implementation of Atmospheric Correction algorithm (MAIAC), which introduces a new way to compute the parameters of the Ross-Thick Li-Sparse (RTLS) bidirectional reflectance distribution function (BRDF), spectral surface albedo and bidirectional reflectance factors (BRF) from satellite measurements obtained by the Moderate Resolution Imaging Spectroradiometer (MODIS). MAIAC uses time series and spatial analysis for cloud detection, aerosol retrievals and atmospheric correction. It implements a moving window of up to 16 days of MODIS data gridded to 1 km resolution in a selected projection. The RTLS parameters are computed directly by fitting the cloud-free MODIS top-of-atmosphere (TOA) reflectance data stored in the processing queue. The RTLS retrieval is applied when the land surface is stable or changes slowly. In case of rapid or large-magnitude change (as caused, for instance, by disturbance), MAIAC follows the MODIS operational BRDF/albedo algorithm and uses a scaling approach in which the BRDF shape is assumed stable but its magnitude is adjusted based on the latest single measurement. To assess the stability of the surface, MAIAC features a change detection algorithm which analyzes the relative change of reflectance in the Red and NIR bands during the accumulation period. To adjust for the reflectance variability with sun-observer geometry and allow comparison among different days (view geometries), the BRFs are normalized to a fixed view geometry using the RTLS model. An empirical analysis of MODIS data suggests that the RTLS inversion remains robust when the relative change of geometry-normalized reflectance stays below 15%. This first of two papers introduces the algorithm; a second, companion paper illustrates its potential by analyzing MODIS data over a tropical rainforest and assessing errors and uncertainties of MAIAC compared to conventional MODIS products.
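    Because the RTLS model is linear in its three kernel weights, the fit over the processing queue is ordinary least squares. A sketch with synthetic kernel values; real K_vol and K_geo come from the Ross-Thick and Li-Sparse formulas evaluated at each observation's sun-view geometry, and the weights and noise level here are invented:

```python
import numpy as np

# rho(geometry) = k_iso + k_vol*K_vol + k_geo*K_geo  (RTLS, linear in k).
rng = np.random.default_rng(2)
n_obs = 12                                   # cloud-free days in the queue
K_vol = rng.uniform(-0.2, 0.6, n_obs)        # placeholder kernel values
K_geo = rng.uniform(-1.5, 0.0, n_obs)
k_true = np.array([0.30, 0.05, 0.02])        # iso, vol, geo weights

A = np.column_stack([np.ones(n_obs), K_vol, K_geo])
rho = A @ k_true + rng.normal(0.0, 1e-3, n_obs)   # noisy TOA-derived BRFs

k_fit, *_ = np.linalg.lstsq(A, rho, rcond=None)
```

    With the weights in hand, the same linear model evaluated at a fixed geometry gives the normalized BRF used for day-to-day comparison.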

  17. Contractors Road Heavy Equipment Area SWMU 055 Corrective Measures Implementation Progress Report Kennedy Space Center, Florida

    NASA Technical Reports Server (NTRS)

    Johnson, Jill W. (Compiler)

    2015-01-01

    This Corrective Measures Implementation (CMI) Progress Report documents: (i) activities conducted as part of supplemental assessment activities completed from June 2009 through November 2014; (ii) Engineering Evaluation (EE) Advanced Data Packages (ADPs); and (iii) recommendations for future activities related to corrective measures at the Site. Applicable meeting minutes are provided as Appendix A. The following EE ADPs for CRHE are included with this CMI Progress Report: the Supplemental Site Characterization ADP (Step 1 EE) (Appendix B); the Site Characterization ADP (Step 1 EE) for Hot Spot 1 (HS1) (Appendix C); the Remedial Alternatives Evaluation (Step 2 EE) ADP for HS1 (Appendix D); the Interim Measures Work Plan (Step 3 EE) ADP for HS1 (Appendix E); and the Site Characterization ADP (Step 1 EE) for Hot Spot 2 (HS2), the High Concentration Plume (HCP), and the Low Concentration Plume (LCP) (Appendix F). Summaries of direct-push technology (DPT) and groundwater monitoring well sampling results are provided in Appendices G and H, respectively. The Interim Land Use Control Implementation Plan (LUCIP) is provided as Appendix I. Monitoring well completion reports, other applicable field forms, survey data, and analytical laboratory reports are provided as Appendices J through M, respectively, in the electronic copy of this document. Selected Site photographs are provided in Appendix N. The interim groundwater monitoring plan and document revision log are included as Appendices O and P, respectively. KSC Electronic Data Deliverable (KEDD) files are provided on the attached compact disk.

  18. Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping

    NASA Astrophysics Data System (ADS)

    Piedrafita, Álvaro; Renes, Joseph M.

    2017-12-01

    We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.

  19. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, computed on the normalized first derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra, with results comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME (Automated phase Correction based on Minimization of Entropy).

  20. FY 2016 Status Report: CIRFT Testing Data Analyses and Updated Curvature Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Wang, Hong

    This report provides a detailed description of FY15 test result corrections/analysis based on the FY16 Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) test program methodology update used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal transportation conditions. The CIRFT consists of a U-frame testing setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages to a universal testing machine. The curvature of rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are clamped to the side connecting plates of the U-frame to capture the deformation of the rod. This contact-based, three-LVDT curvature measurement system on SNF rods has proven quite reliable in CIRFT testing. However, how the LVDT head contacts the SNF rod may have a significant effect on the curvature measurement, depending on the magnitude and direction of rod curvature. It has been demonstrated that these contact/curvature issues can be corrected with a correction on the sensor spacing. The sensor spacing defines the separation of the three LVDT probes and is a critical quantity in calculating the rod curvature once the deflections are obtained. The sensor spacing correction can be determined by using chisel-type probes; however, this method was critically examined this year and shown to be difficult to implement in a hot cell environment, and thus cannot be applied effectively. A correction based on the proposed equivalent gauge length has the required flexibility and accuracy and can be appropriately used as a correction factor.
    The correction method based on the equivalent gauge length has been successfully demonstrated in CIRFT data analysis for the dynamic tests conducted on Limerick (LMK) (17 tests), North Anna (NA) (6 tests), and Catawba mixed oxide (MOX) (10 tests) SNF samples. These CIRFT tests were completed in FY14 and FY15. Specifically, the data sets obtained from measurement and monitoring were processed and analyzed, and the fatigue life of the rods has been characterized in terms of moment, curvature, and equivalent stress and strain.
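    The three-point curvature extraction, and its quadratic sensitivity to the assumed sensor spacing, can be sketched as follows. The probe spacing and rod radius are illustrative numbers; the exact chord-sagitta relation is used, which reduces to the familiar second-difference formula for small deflections:

```python
import numpy as np

def curvature_three_point(y1, y2, y3, h):
    """Curvature from three deflection probes spaced h apart: the middle
    reading's sagitta relative to the outer two, via the chord-sagitta
    relation (exact when y1 == y3, a small-tilt approximation otherwise)."""
    s = 0.5*(y1 + y3) - y2
    return 2.0*s / (s*s + h*h)

# Sanity check on a circular arc of radius R = 2.0 m (kappa = 0.5 1/m)
# sampled at probe positions x = -h, 0, +h with spacing h = 10 mm.
R, h = 2.0, 0.010
xs = np.array([-h, 0.0, h])
ys = R - np.sqrt(R**2 - xs**2)      # arc sagitta at each probe
kappa = curvature_three_point(ys[0], ys[1], ys[2], h)

# Why the spacing correction matters: kappa scales like 1/h**2 for small
# deflections, so a 2% error in the assumed spacing biases it by ~4%.
kappa_biased = curvature_three_point(ys[0], ys[1], ys[2], 1.02*h)
```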

  1. Comparative evaluation of performance measures for shading correction in time-lapse fluorescence microscopy.

    PubMed

    Liu, L; Kan, A; Leckie, C; Hodgkin, P D

    2017-04-01

    Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. This will lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion about which performance measure should be generally applied for evaluation on real data, where the ground truth is absent. In this paper, the state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure in time-lapse fluorescence images. Based on this measure, we have proposed a novel shading correction method that performs better compared to well-established methods for a range of real data tested. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
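    One common form of the CJV for two intensity classes A and B (for instance cell foreground and background pixels) is (sigma_A + sigma_B) / |mu_A - mu_B|, which residual shading inflates. A toy sketch; the class statistics and shading field are invented for illustration:

```python
import numpy as np

def cjv(a, b):
    """Coefficient of joint variation between two pixel classes
    (lower means tighter, better-separated classes)."""
    return (a.std() + b.std()) / abs(a.mean() - b.mean())

rng = np.random.default_rng(3)
fg = rng.normal(200.0, 5.0, 5000)            # cell foreground intensities
bg = rng.normal(50.0, 5.0, 5000)             # background intensities
shade = rng.uniform(0.6, 1.0, 5000)          # multiplicative shading field

cjv_clean = cjv(fg, bg)
cjv_shaded = cjv(fg * shade, bg * shade)     # shading widens both classes
```

    A successful shading correction should bring the CJV of the corrected frames back toward the clean value.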

  2. Mixed waste landfill corrective measures study final report Sandia National Laboratories, Albuquerque, New Mexico.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James

    2004-03-01

    The Mixed Waste Landfill occupies 2.6 acres in the north-central portion of Technical Area 3 at Sandia National Laboratories, Albuquerque, New Mexico. The landfill accepted low-level radioactive and mixed waste from March 1959 to December 1988. This report presents the Corrective Measures Study conducted for the Mixed Waste Landfill. The purpose of the study was to identify, develop, and evaluate corrective measures alternatives and recommend the corrective measure(s) to be taken at the site. Based upon detailed evaluation and risk assessment using guidance provided by the U.S. Environmental Protection Agency and the New Mexico Environment Department, the U.S. Department of Energy and Sandia National Laboratories recommend that a vegetative soil cover be deployed as the preferred corrective measure for the Mixed Waste Landfill. The cover would be of sufficient thickness to store precipitation, minimize infiltration and deep percolation, support a healthy vegetative community, and perform with minimal maintenance by emulating the natural analogue ecosystem. There would be no intrusive remedial activities at the site and therefore no potential for exposure to the waste. This alternative poses minimal risk to site workers implementing institutional controls associated with long-term environmental monitoring as well as routine maintenance and surveillance of the site.

  3. [Quality assessment in anesthesia].

    PubMed

    Kupperwasser, B

    1996-01-01

    Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from data gathered in the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods and are closely related to processes and the main targets of quality improvement. The three types of methods used to analyse the problems (indicators) are: peer review, quantitative methods and risk management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and its conclusions based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk management techniques include: a) critical incident analysis, which establishes an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis, which, based on the fact that accidents continue to occur despite safety systems and sophisticated technologies, examines all the process components leading to the unpredictable outcome and not just the human factors; c) cause-effect diagrams, which facilitate problem analysis by reducing the causes to four fundamental components (persons, regulations, equipment, process).
    Definition and implementation of corrective measures, based on the findings of the two previous stages, form the third step of the evaluation cycle. The Hawthorne effect is an improvement in outcomes that occurs before any corrective actions are implemented. Verification of the implemented actions is the final and mandatory step, closing the evaluation cycle.

  4. 40 CFR 257.28 - Implementation of the corrective action program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... interim measures necessary to ensure the protection of human health and the environment. Interim measures... supplies or sensitive ecosystems; (iv) Further degradation of the ground-water that may occur if remedial... situations that may pose threats to human health and the environment. (b) An owner or operator may determine...

  5. SU-E-T-87: A TG-100 Approach for Quality Improvement of Associated Dosimetry Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manger, R; Pawlicki, T; Kim, G

    2015-06-15

    Purpose: Dosimetry protocols devote so much time to the discussion of ionization chamber choice, use and performance that it is easy to forget the importance of the associated dosimetry equipment (ADE) in radiation dosimetry: barometer, thermometer, electrometer, phantoms, triaxial cables, etc. Improper use and inaccuracy of these devices may significantly affect the accuracy of radiation dosimetry. The purpose of this study is to evaluate the risk factors in the monthly output dosimetry procedure and recommend corrective actions using a TG-100 approach. Methods: A failure mode and effects analysis (FMEA) of the monthly linac output check procedure was performed to determine which steps and failure modes carried the greatest risk. In addition, a fault tree analysis (FTA) was performed to expand the initial list of failure modes, making sure that none were overlooked. After determining the failure modes with the highest risk priority numbers (RPNs), 11 physicists were asked to score corrective actions based on their ease of implementation and potential impact. The results were aggregated into an impact map to determine the implementable corrective actions. Results: Three of the top five failure modes were related to the thermometer and barometer. The two highest-RPN failure modes were related to barometric pressure inaccuracy due to their high lack-of-detectability scores. Six corrective actions were proposed to address barometric pressure inaccuracy, and the survey results found the following two corrective actions to be implementable: 1) send the barometer for recalibration at a calibration laboratory and 2) check the barometer accuracy against the local airport and correct for elevation. Conclusion: An FMEA on monthly output measurements displayed the importance of ADE for accurate radiation dosimetry. When brainstorming for corrective actions, an impact map is helpful for visualizing the overall impact versus the ease of implementation.
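    FMEA ranking reduces to computing RPN = severity x occurrence x lack-of-detectability and sorting. The failure modes and scores below are invented placeholders, not the study's actual ratings:

```python
# RPN = severity x occurrence x lack-of-detectability, each scored on a
# 1-10 scale. Modes and scores here are hypothetical examples.
failure_modes = {
    "barometer pressure reading drifted": (7, 4, 9),
    "thermometer out of calibration":     (6, 4, 7),
    "electrometer leakage current":       (8, 2, 5),
    "wrong phantom setup":                (5, 3, 2),
}
rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
ranked = sorted(rpn, key=rpn.get, reverse=True)
```

    The hard-to-detect barometer mode tops the list, mirroring the study's finding that high lack-of-detectability scores drive the barometric failure modes to the highest RPNs.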

  6. Implementation of Energy Code Controls Requirements in New Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike

    Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement, and verification is beyond the expertise of most building code officials, yet studies that measure the savings from energy codes assume they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high-impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings, and the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy-code-required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step involved compilation and analysis of the information gathered during the first two steps. The information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.

  7. Staircase-scene-based nonuniformity correction in aerial point target detection systems.

    PubMed

    Huo, Lijun; Zhou, Dabiao; Wang, Dejiang; Liu, Rang; He, Bin

    2016-09-01

    Focal-plane arrays (FPAs) often suffer from heavy fixed-pattern noise, which severely degrades the detection rate and increases false alarms in airborne point target detection systems. High-precision nonuniformity correction is therefore an essential preprocessing step. In this paper, a new nonuniformity correction method is proposed based on a staircase scene. This correction method can compensate for the nonlinear response of the detector and calibrate the entire optical system with computational efficiency and implementation simplicity. A proof-of-concept point target detection system is then established with a long-wave Sofradir FPA. Finally, the local standard deviation of the corrected image and the signal-to-clutter ratio of the Airy disk of a Boeing B738 are measured to evaluate the performance of the proposed nonuniformity correction method. Our experimental results demonstrate that the proposed correction method achieves high-quality corrections.
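    The underlying idea of a staircase-based correction can be sketched as a per-pixel fit: each pixel is exposed to several known radiance levels (the staircase steps), and a polynomial maps its raw counts back to the reference response, absorbing offset, gain, and nonlinearity. Array size, per-pixel parameter spreads, and the quadratic model are invented; the fit order is a modelling choice, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(5)
h, w = 8, 8
levels = np.array([0.1, 0.3, 0.5, 0.7, 0.9])     # staircase radiance steps

# Synthetic detector: per-pixel offset, gain, and mild nonlinearity.
gain = rng.normal(1.0, 0.10, (h, w))
offset = rng.normal(0.0, 0.05, (h, w))
quad = rng.normal(0.0, 0.05, (h, w))
raw = offset + gain*levels[:, None, None] + quad*levels[:, None, None]**2

# Per-pixel degree-2 fit mapping raw counts -> true level.
coeffs = np.array([np.polyfit(raw[:, i, j], levels, 2)
                   for i in range(h) for j in range(w)]).reshape(h, w, 3)

# Correct a frame at an intermediate level not used in the fit.
test_level = 0.6
raw_test = offset + gain*test_level + quad*test_level**2
corrected = coeffs[..., 0]*raw_test**2 + coeffs[..., 1]*raw_test + coeffs[..., 2]
```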

  8. Simulation of Ultra-Small MOSFETs Using a 2-D Quantum-Corrected Drift-Diffusion Model

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Yu, Zhiping; Dutton, Robert W.; Ancona, Mario G.; Saini, Subhash (Technical Monitor)

    1998-01-01

    We describe an electronic transport model and an implementation approach that respond to the challenges of device modeling for gigascale integration. We use the density-gradient (DG) transport model, which adds tunneling and quantum smoothing of carrier density profiles to the drift-diffusion model. We present the current implementation of the DG model in PROPHET, a partial differential equation solver developed by Lucent Technologies. This implementation approach permits rapid development and enhancement of models, as well as run-time modifications and model switching. We show that even in typical bulk transport devices such as P-N diodes and BJTs, DG quantum effects can significantly modify the I-V characteristics. Quantum effects are shown to be even more significant in small, surface transport devices, such as sub-0.1 micron MOSFETs. In thin-oxide MOS capacitors, we find that quantum effects may reduce gate capacitance by 25% or more. The inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements. Significant quantum corrections also occur in the I-V characteristics of short-channel MOSFETs due to the gate capacitance correction.

  9. Implementation of the Rauch-Tung-Striebel Smoother for Sensor Compatibility Correction of a Fixed-Wing Unmanned Air Vehicle

    PubMed Central

    Chan, Woei-Leong; Hsiao, Fei-Bin

    2011-01-01

    This paper presents a complete procedure for sensor compatibility correction of a fixed-wing Unmanned Air Vehicle (UAV). The sensors consist of a differential air pressure transducer for airspeed measurement, two airdata vanes installed on an airdata probe for angle of attack (AoA) and angle of sideslip (AoS) measurement, and an Attitude and Heading Reference System (AHRS) that provides attitude angles, angular rates, and acceleration. The procedure is mainly based on a two pass algorithm called the Rauch-Tung-Striebel (RTS) smoother, which consists of a forward pass Extended Kalman Filter (EKF) and a backward recursion smoother. On top of that, this paper proposes the implementation of the Wiener Type Filter prior to the RTS in order to avoid the complicated process noise covariance matrix estimation. Furthermore, an easy to implement airdata measurement noise variance estimation method is introduced. The method estimates the airdata and subsequently the noise variances using the ground speed and ascent rate provided by the Global Positioning System (GPS). It incorporates the idea of data regionality by assuming that some sort of statistical relation exists between nearby data points. Root mean square deviation (RMSD) is being employed to justify the sensor compatibility. The result shows that the presented procedure is easy to implement and it improves the UAV sensor data compatibility significantly. PMID:22163819
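    For a linear scalar model, the paper's two-pass structure reduces to a standard Kalman filter followed by the RTS smoothing recursion. A random-walk toy sketch; the model, noise levels, and data are invented for illustration, and the real procedure runs an EKF forward pass over the full flight state:

```python
import numpy as np

# Scalar random-walk model: x_k = F x_{k-1} + w, z_k = H x_k + v.
rng = np.random.default_rng(4)
F, H, Q, R = 1.0, 1.0, 0.01, 1.0
truth = np.cumsum(rng.normal(0.0, np.sqrt(Q), 200))
z = truth + rng.normal(0.0, np.sqrt(R), 200)

x, P = 0.0, 1.0
xp, Pp, xf, Pf = [], [], [], []        # predicted and filtered sequences
for zk in z:
    x_pred, P_pred = F*x, F*P*F + Q    # forward pass: predict
    K = P_pred*H / (H*P_pred*H + R)    # Kalman gain
    x = x_pred + K*(zk - H*x_pred)     # update
    P = (1.0 - K*H)*P_pred
    xp.append(x_pred); Pp.append(P_pred); xf.append(x); Pf.append(P)

xs = xf.copy()                         # backward pass: RTS recursion
for k in range(len(z) - 2, -1, -1):
    C = Pf[k]*F / Pp[k + 1]            # smoother gain
    xs[k] = xf[k] + C*(xs[k + 1] - xp[k + 1])

rmse_filter = float(np.sqrt(np.mean((np.array(xf) - truth)**2)))
rmse_smooth = float(np.sqrt(np.mean((np.array(xs) - truth)**2)))
```

    The smoother conditions each estimate on the whole record rather than only past data, which is why its error falls below the filter's.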

  11. Widefield fluorescence microscopy with sensor-based conjugate adaptive optics using oblique back illumination

    PubMed Central

    Li, Jiang; Bifano, Thomas G.; Mertz, Jerome

    2016-01-01

    Abstract. We describe a wavefront sensor strategy for the implementation of adaptive optics (AO) in microscope applications involving thick, scattering media. The strategy is based on the exploitation of multiple scattering to provide oblique back illumination of the wavefront-sensor focal plane, enabling a simple and direct measurement of the flux-density tilt angles caused by aberrations at this plane. Advantages of the sensor are that it provides a large measurement field of view (FOV) while requiring no guide star, making it particularly adapted to a type of AO called conjugate AO, which provides a large correction FOV in cases when sample-induced aberrations arise from a single dominant plane (e.g., the sample surface). We apply conjugate AO here to widefield (i.e., nonscanning) fluorescence microscopy for the first time and demonstrate dynamic wavefront correction in a closed-loop implementation. PMID:27653793

  12. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.

  13. The course correction implementation of the inertial navigation system based on the information from the aircraft satellite navigation system before take-off

    NASA Astrophysics Data System (ADS)

    Markelov, V.; Shukalov, A.; Zharinov, I.; Kostishin, M.; Kniga, I.

    2016-04-01

The paper considers a course correction performed before aircraft take-off, following inaccurate azimuth alignment of the inertial navigation system (INS) based on the platform attitude-and-heading reference system. The course correction is based on the track angle defined from information received from the satellite navigation system (SNS). It consists of calculating the track error during ground taxiing along straight sections before take-off and entering it into the onboard digital computational system as an amendment for use in the current flight. The track error is calculated by statistical evaluation, comparing the track angle defined from the SNS information with the current course measured by the INS over a given number of measurements on the available time interval. Course correction testing results and application recommendations are given in the paper. Course correction based on SNS information can improve the accuracy of aircraft path determination after accelerated INS preparation with inaccurate initial azimuth alignment.
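The statistical track-error step described above amounts to averaging the wrapped difference between the SNS track angle and the INS course over a straight taxi segment. A minimal sketch (the function name and degree-based interface are illustrative assumptions):

```python
def track_error_deg(sns_track_deg, ins_course_deg):
    """Estimate a constant course error as the mean of the wrapped
    differences between SNS track angles and INS course readings
    collected while taxiing along a straight section."""
    diffs = [
        (sns - ins + 180.0) % 360.0 - 180.0   # wrap to (-180, 180]
        for sns, ins in zip(sns_track_deg, ins_course_deg)
    ]
    return sum(diffs) / len(diffs)
```

Wrapping each difference before averaging keeps the estimate correct even when the course crosses the 0/360 degree boundary.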

  14. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the measurement error variability of the perfluorinated acids. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance under different levels of measurement error variability was also performed for this application. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences corrected for measurement error with those that ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed.
In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
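The attenuation that classical measurement error induces, and why a correction model matters, can be illustrated with a simple simulated regression-calibration example (a frequentist illustration of the bias, not the paper's Bayesian conditional-logistic method; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.standard_normal(n)               # true continuous exposure
w = x + 0.5 * rng.standard_normal(n)     # observed exposure, classical error
y = 2.0 * x + rng.standard_normal(n)     # outcome with true slope 2.0

naive_slope = np.polyfit(w, y, 1)[0]     # attenuated toward zero
reliability = 1.0 / (1.0 + 0.5 ** 2)     # var(x) / (var(x) + var(u))
corrected_slope = naive_slope / reliability
print(naive_slope, corrected_slope)      # naive ~1.6, corrected ~2.0
```

The naive slope shrinks by the reliability ratio; dividing it back out recovers the true association, which is the intuition behind more elaborate corrections such as the Bayesian model above.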

  15. Active vibration control with model correction on a flexible laboratory grid structure

    NASA Technical Reports Server (NTRS)

    Schamel, George C., II; Haftka, Raphael T.

    1991-01-01

    This paper presents experimental and computational comparisons of three active damping control laws applied to a complex laboratory structure. Two reduced structural models were used with one model being corrected on the basis of measured mode shapes and frequencies. Three control laws were investigated, a time-invariant linear quadratic regulator with state estimation and two direct rate feedback control laws. Experimental results for all designs were obtained with digital implementation. It was found that model correction improved the agreement between analytical and experimental results. The best agreement was obtained with the simplest direct rate feedback control.

  16. Signal processing of aircraft flyover noise

    NASA Technical Reports Server (NTRS)

    Kelly, Jeffrey J.

    1991-01-01

A detailed analysis of signal processing concerns for measuring aircraft flyover noise is presented. The development of a de-Dopplerization scheme for both corrected time histories and spectral data is discussed, along with an analysis of motion effects on measured spectra. A computer code was written to implement the de-Dopplerization scheme. Input to the code is the aircraft position data and the pressure time histories. To facilitate ensemble averaging, a uniform level flyover is considered, but the code can accept more general flight profiles. The effect of spectral smearing and its removal are discussed. Using data acquired from an XV-15 tilt rotor flyover test, comparisons are made between the measured and corrected spectra. Frequency shifts are accurately accounted for by the method. It is shown that correcting for spherical spreading, Doppler amplitude, and frequency can give some indication of source directivity. The analysis indicated that smearing increases with frequency and is more severe on approach than on recession.
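For a subsonic source, de-Dopplerizing a received tone reduces to rescaling frequency by the (1 - M cos θ) convection factor. A minimal sketch (the function name and the 340 m/s sound-speed default are illustrative assumptions):

```python
import math

def dedopplerize_frequency(f_observed_hz, speed_ms, theta_deg, c_ms=340.0):
    """Recover the emitted frequency of a tone measured on the ground,
    given aircraft speed and the angle theta between the flight path
    and the source-to-observer line (theta = 0 on direct approach)."""
    mach = speed_ms / c_ms
    return f_observed_hz * (1.0 - mach * math.cos(math.radians(theta_deg)))
```

On approach (theta near 0) the measured frequency is shifted up and must be scaled down; overhead (theta = 90 degrees) the shift vanishes.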

  17. Implementing a Reentry Framework at a Correctional Facility: Challenges to the Culture

    ERIC Educational Resources Information Center

    Rudes, Danielle S.; Lerch, Jennifer; Taxman, Faye S.

    2011-01-01

    Implementation research is emerging in the field of corrections, but few studies have examined the complexities associated with implementing change among frontline workers embedded in specific organizational cultures. Using a mixed methods approach, the authors examine the challenges faced by correctional workers in a work release correctional…

  18. Implementing a Batterer's Intervention Program in a Correctional Setting: A Tertiary Prevention Model

    ERIC Educational Resources Information Center

    Yorke, Nada J.; Friedman, Bruce D.; Hurt, Pat

    2010-01-01

    This study discusses the pretest and posttest results of a batterer's intervention program (BIP) implemented within a California state prison substance abuse program (SAP), with a recommendation for further programs to be implemented within correctional institutions. The efficacy of utilizing correctional facilities to reach offenders who…

  19. Clinical introduction of image lag correction for a cone beam CT system.

    PubMed

    Stankovic, Uros; Ploeger, Lennert S; Sonke, Jan-Jakob; van Herk, Marcel

    2016-03-01

Image lag in the flat-panel detector used for Linac-integrated cone beam computed tomography (CBCT) has a degrading effect on CBCT image quality. The most prominent visible artifact is a bright semicircular structure in the transverse view of the scans, also known as the radar artifact. Several correction strategies have been proposed, but until now the clinical introduction of such corrections has remained unreported. In November 2013, the authors clinically implemented a previously proposed image lag correction on all of their machines at their main site in Amsterdam. The purpose of this study was to retrospectively evaluate the effect of the correction on the quality of CBCT images and to evaluate the required calibration frequency. Image lag was measured in five clinical CBCT systems (Elekta Synergy 4.6) using an in-house developed beam-interrupting device that stops the x-ray beam midway through the data acquisition of an unattenuated beam for calibration. A triple-exponential falling-edge response was fitted to the measured data and used to correct image lag from projection images with an infinite impulse response filter. This filter, including an extrapolation for saturated pixels, was incorporated in the authors' in-house developed clinical CBCT reconstruction software. To investigate the short-term stability of the lag and associated parameters, a series of five image lag measurements over a period of three months was performed. For quantitative analysis, the authors retrospectively selected ten patients treated in the pelvic region. The apparent contrast was quantified in polar coordinates for scans reconstructed using parameters obtained from different dates, with and without saturation handling. Visually, the radar artifact was minimal in scans reconstructed using the image lag correction, especially when saturation handling was used.
In patient imaging, there was a significant reduction of the apparent contrast from 43 ± 16.7 HU to 15.5 ± 11.9 HU without saturation handling, and to 9.6 ± 12.1 HU with saturation handling, depending on the date of the calibration. The image lag correction parameters were stable over a period of 3 months. The computational load increased by approximately 10%, which does not endanger the fast in-line reconstruction. The lag correction was successfully implemented clinically and removed most image lag artifacts, thus improving image quality. Image lag correction parameters were stable for 3 months, indicating that infrequent calibration is sufficient.
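The recursive structure of such a correction can be sketched with a single-exponential lag model (the clinical filter fits a triple-exponential falling-edge response per detector; the decay and gain values here are arbitrary illustrations):

```python
def correct_image_lag(frames, decay=0.9, gain=0.05):
    """Remove carried-over signal frame by frame: each corrected frame
    feeds a decaying 'trap' state that is subtracted from the next
    measured frame."""
    corrected = []
    trap = 0.0
    for y in frames:
        x = y - trap                    # subtract the lag estimate
        trap = decay * trap + gain * x  # update the trapped-charge state
        corrected.append(x)
    return corrected
```

If the measured frames really follow this lag model, the recursion recovers the lag-free frames exactly; the clinical case differs only in using a sum of three such exponential states.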

  20. The effect of surgical titanium rods on proton therapy delivered for cervical bone tumors: experimental validation using an anthropomorphic phantom

    NASA Astrophysics Data System (ADS)

    Dietlicher, Isabelle; Casiraghi, Margherita; Ares, Carmen; Bolsi, Alessandra; Weber, Damien C.; Lomax, Antony J.; Albertini, Francesca

    2014-12-01

To investigate the effect of metal implants in proton radiotherapy, dose distributions of different, clinically relevant treatment plans were measured in an anthropomorphic phantom and compared to treatment planning predictions. The anthropomorphic phantom, which is sliced into four segments in the cranio-caudal direction, is composed of tissue-equivalent materials and contains a titanium implant in a vertebral body in the cervical region. GafChromic® films were laid between the different segments to measure the 2D delivered dose. Three different four-field plans were then applied: a Single-Field-Uniform-Dose (SFUD) plan, both with and without artifact correction implemented, and an Intensity-Modulated-Proton-Therapy (IMPT) plan with the artifacts corrected. For the corrections, the artifacts were manually outlined and their Hounsfield Units manually set to an average value for soft tissue. The results show a surprisingly good agreement between prescribed and delivered dose distributions when the artifacts have been corrected, with >97% and 98% of points fulfilling the gamma criterion of 3%/3 mm for the SFUD and IMPT plans, respectively. In contrast, without artifact corrections, up to 18% of measured points fail the gamma criterion of 3%/3 mm for the SFUD plan. These measurements indicate that manually correcting for the reconstruction artifacts resulting from metal implants substantially improves the accuracy of the calculated dose distribution.
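The 3%/3 mm gamma criterion quoted above combines a dose-difference term with a distance-to-agreement term; a minimal 1D version (global normalization, exhaustive search; an illustrative sketch, not the evaluation software used in the study):

```python
import math

def gamma_index_1d(positions_mm, ref_dose, eval_dose, dd=0.03, dta_mm=3.0):
    """For each reference point, take the minimum over evaluated points of
    sqrt((dose difference / (dd * Dmax))^2 + (distance / dta)^2).
    A point passes the criterion when gamma <= 1."""
    d_max = max(ref_dose)
    gammas = []
    for xr, dr in zip(positions_mm, ref_dose):
        best = min(
            math.hypot((de - dr) / (dd * d_max), (xe - xr) / dta_mm)
            for xe, de in zip(positions_mm, eval_dose)
        )
        gammas.append(best)
    return gammas
```

The quoted pass rates are then simply the fraction of points whose gamma value does not exceed 1.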

  1. Continuously monitoring the parity of superconducting qubits in a 2D cQED architecture

    NASA Astrophysics Data System (ADS)

    Blok, Machiel; Flurin, Emmanuel; Livingston, William; Colless, James; Dove, Allison; Siddiqi, Irfan

    Continuous measurements of joint qubit properties such as their parity can reveal insight into the collapse dynamics of entangled states and are a prerequisite for implementing continuous quantum error correction. Here it is crucial that the measurement collects no information other than the parity to avoid measurement induced dephasing. In a cQED architecture, a full-parity measurement can be implemented by strongly coupling two transmon qubits to a single high-Q planar resonator (χ >> κ). We will discuss the experimental implementation of this on-chip technique and the prospects to extend it to more qubits. This will allow us to monitor, in real-time, the projection into multi-partite entangled states and continuously detect errors on a logical qubit encoded in an entangled subspace. This work was supported by Army Research Office.

  2. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. 
The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  3. Accurate and fiducial-marker-free correction for three-dimensional chromatic shift in biological fluorescence microscopy.

    PubMed

    Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi

    2018-05-15

    Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
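The translation part of such a registration can be sketched with standard FFT-based phase correlation (an illustrative single-channel sketch; Chromagnon's quadrisection approach additionally recovers rotation and magnification, and applies the measurement per image element):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) such that img ~ np.roll(ref, (dy, dx)):
    the inverse FFT of the normalized cross-power spectrum peaks at the
    relative translation between the two images."""
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:              # fold wrap-around peaks
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Applying this between color channels yields the chromatic translation; subpixel accuracy requires interpolating around the correlation peak.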

  4. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn

Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to {sup 99m}Tc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 {sup 99m}Tc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical {sup 99m}Tc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.).
A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created including reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac {sup 99m}Tc activity are achievable using attenuation and scatter corrections, with the authors’ dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
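The dual-energy-window idea itself is simple: counts in a lower "scatter" window, scaled by a calibration factor, estimate the scatter contribution inside the photopeak window. A one-line sketch (k = 0.5 is the classic broad-beam value from the original DEW formulation; the modified method in the study re-derives the weighting to handle the CZT low-energy tail):

```python
def dew_primary_counts(peak_window_counts, scatter_window_counts, k=0.5):
    """Dual-energy-window scatter estimate: subtract k times the
    scatter-window counts from the photopeak-window counts,
    clamping the result at zero."""
    return max(0.0, peak_window_counts - k * scatter_window_counts)
```

In practice the subtraction is applied pixel by pixel to the projection data before (or during) reconstruction.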

  5. Achieving continuous improvement in laboratory organization through performance measurements: a seven-year experience.

    PubMed

    Salinas, Maria; López-Garrigós, Maite; Gutiérrez, Mercedes; Lugo, Javier; Sirvent, Jose Vicente; Uris, Joaquin

    2010-01-01

Laboratory performance can be measured using a set of model key performance indicators (KPIs). The design and implementation of KPIs are important issues. KPI results from 7 years are reported, and their implementation, monitoring, objectives, interventions, result reporting and delivery are analyzed. The KPIs of the entire laboratory process were obtained from Laboratory Information System (LIS) registers. These were collected automatically using a data warehouse application, spreadsheets and external quality program reports. Customer satisfaction was assessed using surveys. Nine model laboratory KPIs were proposed and measured, and the results of some examples of KPIs used in our laboratory are reported. The corresponding corrective measures or the implementation of objectives led to improvement in the associated KPI results. Measurement of laboratory performance using KPIs, together with a data warehouse application that continuously collects registers and calculates KPIs, confirmed the reliability of the indicators, their acceptability and usability for users, and continuous process improvement.

  6. Improvement of Nonlinearity Correction for BESIII ETOF Upgrade

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Cao, Ping; Ji, Xiaolu; Fan, Huanhuan; Dai, Hongliang; Zhang, Jie; Liu, Shubin; An, Qi

    2015-08-01

An improved scheme to implement integral nonlinearity (INL) correction of time measurements in the Beijing Spectrometer III Endcap Time-of-Flight (BESIII ETOF) upgrade system is presented in this paper. During the upgrade, multi-gap resistive plate chambers (MRPCs) are introduced as ETOF detectors, which increases the total number of time measurement channels to 1728. The INL correction method adopted in the BESIII TOF proved to be of limited use, because the sharply increased number of electronic channels required for reading out the detector strips severely degrades the system configuration efficiency. Furthermore, once installed into the spectrometer, the BESIII TOF electronics do not support online evaluation of the TDCs' nonlinearity. In the proposed method, INL data used for the correction algorithm are automatically imported from a non-volatile read-only memory (ROM) instead of from the data acquisition software. This guarantees the real-time performance and system efficiency of the INL correction, especially for ETOF upgrades with a massive number of channels. In addition, a signal that is not synchronized to the system's 41.65 MHz clock from BEPCII is sent to the front-end electronics (FEE) to generate pseudo-random test pulses for online nonlinearity evaluation. Test results show that the time-measurement INL errors in one module with 72 channels can be corrected online and in real time.
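A code-density calibration and table lookup of the kind described can be sketched as follows (the histogram-based calibration and the function names are illustrative assumptions; in the ETOF scheme the resulting table lives in on-board ROM rather than in software):

```python
def inl_from_code_density(hist):
    """Derive an INL table (in LSB units) from a code-density histogram
    taken with hits uncorrelated to the TDC clock: each bin's true width
    is proportional to its count, and INL is the cumulative deviation
    from the ideal uniform bin width."""
    ideal = sum(hist) / len(hist)
    inl, acc = [], 0.0
    for h in hist:
        acc += h - ideal
        inl.append(acc / ideal)
    return inl

def correct_code(raw_code, inl_table):
    """Apply the lookup-table correction to a raw TDC bin code."""
    return raw_code - inl_table[raw_code]
```

Loading this table once from non-volatile memory avoids re-downloading calibration data through the acquisition software for every channel, which is the configuration-efficiency point made above.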

  7. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
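The multiplicative update at the heart of such iterative deconvolution can be sketched in 1D with a Richardson-Lucy-type loop (an illustrative stand-in: the paper's method builds the blur from the measured motion and accelerates the iteration with ordered subsets):

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, iters=50):
    """Iteratively refine an estimate by back-projecting the ratio of
    the measured signal to the current estimate re-blurred with the PSF."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(iters):
        reblur = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(reblur, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est
```

Each iteration sharpens the estimate toward the unblurred signal while keeping it non-negative, which is why variants of this update are popular for emission tomography.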

  8. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  9. Reducing the risk of healthcare-associated infections through Lean Six Sigma: The case of the medicine areas at the Federico II University Hospital in Naples (Italy).

    PubMed

    Improta, Giovanni; Cesarelli, Mario; Montuori, Paolo; Santillo, Liberatina Carmela; Triassi, Maria

    2018-04-01

    Lean Six Sigma (LSS) has been recognized as an effective management tool for improving healthcare performance. Here, LSS was adopted to reduce the risk of healthcare-associated infections (HAIs), a critical quality parameter in the healthcare sector. Lean Six Sigma was applied to the areas of clinical medicine (including general medicine, pulmonology, oncology, nephrology, cardiology, neurology, gastroenterology, rheumatology, and diabetology), and data regarding HAIs were collected for 28,000 patients hospitalized between January 2011 and December 2016. Following the LSS define, measure, analyse, improve, and control cycle, the factors influencing the risk of HAI were identified by using typical LSS tools (statistical analyses, brainstorming sessions, and cause-effect diagrams). Finally, corrective measures to prevent HAIs were implemented and monitored for 1 year after implementation. Lean Six Sigma proved to be a useful tool for identifying variables affecting the risk of HAIs and implementing corrective actions to improve the performance of the care process. A reduction in the number of patients colonized by sentinel bacteria was achieved after the improvement phase. The implementation of an LSS approach could significantly decrease the percentage of patients with HAIs. © 2017 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons Ltd.

  10. Reducing the risk of healthcare‐associated infections through Lean Six Sigma: The case of the medicine areas at the Federico II University Hospital in Naples (Italy)

    PubMed Central

    Cesarelli, Mario; Montuori, Paolo; Santillo, Liberatina Carmela; Triassi, Maria

    2017-01-01

    Abstract Rationale, aims, and objectives Lean Six Sigma (LSS) has been recognized as an effective management tool for improving healthcare performance. Here, LSS was adopted to reduce the risk of healthcare‐associated infections (HAIs), a critical quality parameter in the healthcare sector. Methods Lean Six Sigma was applied to the areas of clinical medicine (including general medicine, pulmonology, oncology, nephrology, cardiology, neurology, gastroenterology, rheumatology, and diabetology), and data regarding HAIs were collected for 28,000 patients hospitalized between January 2011 and December 2016. Following the LSS define, measure, analyse, improve, and control cycle, the factors influencing the risk of HAI were identified by using typical LSS tools (statistical analyses, brainstorming sessions, and cause‐effect diagrams). Finally, corrective measures to prevent HAIs were implemented and monitored for 1 year after implementation. Results Lean Six Sigma proved to be a useful tool for identifying variables affecting the risk of HAIs and implementing corrective actions to improve the performance of the care process. A reduction in the number of patients colonized by sentinel bacteria was achieved after the improvement phase. Conclusions The implementation of an LSS approach could significantly decrease the percentage of patients with HAIs. PMID:29098756

  11. 7 CFR 205.505 - Statement of agreement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... qualities of products labeled as organically produced; (3) Conduct an annual performance evaluation of all... certification decisions and implement measures to correct any deficiencies in certification services; (4) Have...

  12. Binary phase lock loops for simplified OMEGA receivers

    NASA Technical Reports Server (NTRS)

    Burhans, R. W.

    1974-01-01

A sampled binary phase lock loop is proposed for periodically correcting OMEGA receiver internal clocks. The circuit is particularly simple to implement and provides a means of generating long-range 3.4 kHz difference-frequency lanes from simultaneous pair measurements.
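A minimal software sketch of a sampled binary ("bang-bang") loop, assuming only the sign of the phase error is available at each sample (all values illustrative):

```python
def binary_pll(phase_errors, step=0.01):
    """Sampled binary ("bang-bang") phase-lock loop sketch: at each
    sample the local clock phase is nudged by a fixed step in the
    direction of the measured phase error, so only the sign of the
    error is needed, which keeps the circuit simple."""
    local = 0.0
    history = []
    for err in phase_errors:
        residual = err - local        # received-vs-local phase offset
        local += step if residual > 0 else -step
        history.append(local)
    return history

# The loop ramps toward a constant 0.2-cycle offset, then dithers
# around it by one step per sample.
track = binary_pll([0.2] * 40, step=0.01)
```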

  13. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on the 210 sets of experimental data, we evaluated the correction factor of the sensitivity and noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the spatial-derivative inference was also demonstrated.
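The full Gaussian-process inference is beyond a short example, but a simplified stand-in conveys the idea of separating channel-dependent sensitivity from a smooth latent profile; all numbers and names below are illustrative, not from the paper:

```python
import numpy as np

def channel_sensitivity(data, window=5):
    """Simplified stand-in for the paper's Gaussian-process inference:
    estimate a per-channel sensitivity correction factor as the median
    ratio between each channel and a moving-average "latent" profile
    over neighbouring channels, pooled over all shots.
    `data` has shape (n_shots, n_channels)."""
    kernel = np.ones(window)
    # per-position counts so edge channels are averaged correctly
    norm = np.convolve(np.ones(data.shape[1]), kernel, mode="same")
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same") / norm, 1, data)
    ratios = data / smooth
    return np.median(ratios, axis=0)  # one correction factor per channel

rng = np.random.default_rng(0)
true_sens = 1 + 0.1 * rng.standard_normal(141)              # ~10% channel variation
shots = true_sens * (1 + 0.02 * rng.standard_normal((210, 141)))  # ~2% shot noise
factors = channel_sensitivity(shots)
```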

  14. Special electronic distance meter calibration for precise engineering surveying industrial applications

    NASA Astrophysics Data System (ADS)

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf

    2015-05-01

All surveying instruments and their measurements suffer from errors. To refine the results, it is necessary either to use procedures that restrict the influence of instrument errors on the measured values or to apply numerical corrections. In precise engineering surveying for industrial applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were carried out with the idea of suppressing the random error by averaging repeated measurements and reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory of the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e. 38.6 m), the error correction of the distance meter can now be determined in two ways: first, by interpolation on the raw data; second, using a correction function derived by an FFT transformation. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) against the Leica Absolute Tracker AT401; the correction procedure reduced the standard deviation of the measured distances to less than 0.6 mm. For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. For the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. In our opinion, the proposed calibration and correction procedure is very suitable for increasing the accuracy of electronic distance measurement and allows a common surveying instrument to achieve uncommonly high precision.
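The first correction option, interpolation on the baseline calibration data, might be sketched as follows; the pillar distances and error values here are invented for illustration:

```python
import numpy as np

def correct_distance(measured_m, baseline_m, errors_mm):
    """Apply a calibration correction to an EDM distance by linear
    interpolation on errors determined over an absolute baseline.
    `baseline_m` are pillar distances and `errors_mm` the
    instrument-minus-reference errors measured at those distances."""
    error_mm = np.interp(measured_m, baseline_m, errors_mm)
    return measured_m - error_mm * 1e-3

baseline = np.array([5.0, 10.0, 20.0, 38.6])   # baseline distances (m), invented
errors = np.array([0.8, -0.4, 0.6, -0.2])      # calibration errors (mm), invented
d = correct_distance(15.0, baseline, errors)   # interpolated error is +0.1 mm here
```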

  15. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    NASA Astrophysics Data System (ADS)

    Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
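For reference, the absorption Ångström exponent mentioned above follows from absorption coefficients at two wavelengths; a minimal sketch with invented values:

```python
import math

def angstrom_exponent(b1, l1, b2, l2):
    """Absorption Angstrom exponent from absorption coefficients b1, b2
    (e.g. in Mm^-1) at wavelengths l1, l2 (nm):
        aABS = -ln(b1 / b2) / ln(l1 / l2)"""
    return -math.log(b1 / b2) / math.log(l1 / l2)

# Invented values: absorption of 2.8 Mm^-1 at 470 nm and 1.8 Mm^-1 at
# 637 nm gives aABS ~ 1.45 (pure black carbon is expected near 1).
a = angstrom_exponent(2.8, 470.0, 1.8, 637.0)
```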

  16. Real-Time Visualization of Tissue Ischemia

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)

    2000-01-01

A real-time display of tissue ischemia is discussed, comprising three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength. The cameras simultaneously view an area of tissue suspected of having ischemic regions through beamsplitters. The output of each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display; current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a DSP board implementing the image algorithms, provides direct visualization of ischemic areas.
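A sketch of the three-channel combination step, assuming normalized frames; the gain values are illustrative, not calibration data:

```python
import numpy as np

def compose_rgb(img_r, img_g, img_b, gains=(1.0, 1.0, 1.0)):
    """Combine three narrow-band camera frames into one RGB display
    frame, applying per-channel gains so the signal intensities match
    before display."""
    channels = [g * im for g, im in zip(gains, (img_r, img_g, img_b))]
    return np.clip(np.stack(channels, axis=-1), 0.0, 1.0)  # keep values in [0, 1]

# Three identical normalized frames with different gains: the green
# channel is halved and the blue channel saturates at full scale.
frame = np.full((4, 4), 0.6)
rgb = compose_rgb(frame, frame, frame, gains=(1.0, 0.5, 2.0))
```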

  17. [Not Available].

    PubMed

    Abril, Encarnación; Gómez-Conesa, Antonia; Gutiérrez-Santos, Manuel

    2008-05-01

To assess the quality of physiotherapeutic care in patients treated for low back pain in a Primary Health Care physiotherapy unit, and to improve the quality of the care provided to these patients. The first assessment included all patients treated in 2002 (n=83). Five criteria corresponding to the initial physiotherapeutic assessment were chosen: C1, pain; C2, disability; C3, mobility; C4, muscle examination; and C5, palpation. In order to detect non-compliance, the Ishikawa fishbone diagram was used. Corrective measures were established in November 2003: publication of an Oswestry questionnaire model for assessing disability and reflecting on the results obtained. The second assessment covered the period from 1 November 2003 to 6 April 2004 (n=32). In the initial assessment, the observed compliance rates were: C1: 21.69%; C2: 0%; C3: 69.87%; C4: 78.31%; and C5: 84.33%. After implementing corrective measures, a significant improvement was observed in the compliance of C1, C2, and C3. In C4 there was an improvement, and in C5 there was a decrease in compliance, neither of which was statistically significant. Carrying out an improvement cycle enabled non-compliance in the application of this protocol to be detected and corrected. The corrective measures implemented have led to a reduction in the variability of the records. It is advisable to ensure that compliance does not decrease in the areas that initially showed a high level of compliance. Copyright © 2008 Sociedad Española de Calidad Asistencial. Published by Elsevier Espana. All rights reserved.

  18. 40 CFR 258.58 - Implementation of the corrective action program.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Ground-Water Monitoring and Corrective Action § 258... implement a corrective action ground-water monitoring program that: (i) At a minimum, meet the requirements of an assessment monitoring program under § 258.55; (ii) Indicate the effectiveness of the corrective...

  19. Status of the National Transonic Facility Characterization

    NASA Technical Reports Server (NTRS)

    Bobbitt, C., Jr.; Everhart, J.

    2001-01-01

    This paper describes the current activities at the National Transonic Facility to document the test-section flow and to support tunnel improvements. The paper is divided into sections on the tunnel calibration, flow quality measurements, data quality assurance, and implementation of wall interference corrections.

  20. Measurement of unsteady pressures in rotating systems

    NASA Technical Reports Server (NTRS)

    Kienappel, K.

    1978-01-01

The principles of the experimental determination of unsteady periodic pressure distributions in rotating systems are reported. An indirect method is discussed, and the effects of the centrifugal force and the transmission behavior of the pressure measurement circuit are outlined. The required correction procedures are described and experimentally implemented on a test bench. Results show that the indirect method is suited to the measurement of unsteady nonharmonic pressure distributions in rotating systems.

  1. Linear Collider Test Facility: Twiss Parameter Analysis at the IP/Post-IP Location of the ATF2 Beam Line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolzon, Benoit; /Annecy, LAPP; Jeremie, Andrea

    2012-07-02

At the first stage of the ATF2 beam tuning, the vertical beam size is usually bigger than 3 μm at the IP. Beam waist measurements using wire scanners and a laser wire are usually performed to check the initial matching of the beam through to the IP. These measurements are described in this paper for the optics currently used (β_x = 4 cm and β_y = 1 mm). Software implemented in the control room to automate these measurements with integrated analysis is also described. Measurements showed that β functions and emittances were within errors of measurements when no rematching and coupling corrections were done. However, it was observed that the waist in the horizontal (X) and vertical (Y) plane was abnormally shifted, and simulations were performed to try to understand these shifts. They also showed that multiknobs are needed in the current optics to correct α_x, α_y, and the horizontal dispersion (D_x) simultaneously. Such multiknobs were found, and their linearity and orthogonality were successfully checked using the MAD optics code. The software for these multiknobs was implemented in the control room, and waist scan measurements using the α_y knob were successfully performed.

  2. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  3. 75 FR 36023 - Approval and Promulgation of Implementation Plans; Designation of Areas for Air Quality Planning...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-24

    ... a Maintenance Plan, Including a Contingency Plan, for the Area Under Section 175a of the CAA 1. An.... Contingency Provisions That EPA Deems Necessary to Promptly Correct Any Violation of the NAAQS That Occurs... available control measures, and contingency measures) no longer applies for so long as the area continues to...

  4. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  5. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield

    PubMed Central

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-01-01

Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723

  6. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    PubMed

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
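A minimal empirical quantile-mapping sketch on synthetic data (not the study's implementation, which additionally restricts the mapping to each hydroclimatic area):

```python
import numpy as np

def quantile_map(spp, gauge_ref, spp_ref):
    """Empirical quantile-mapping bias correction: each satellite value
    is replaced by the gauge value occupying the same quantile.
    `gauge_ref` and `spp_ref` are historical reference series for the
    area; all data here are synthetic."""
    spp_sorted = np.sort(spp_ref)
    gauge_sorted = np.sort(gauge_ref)
    # quantile of each new satellite value within the satellite reference CDF
    q = np.searchsorted(spp_sorted, spp, side="right") / len(spp_sorted)
    # map that quantile onto the gauge distribution
    return np.quantile(gauge_sorted, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 3.0, 5000)   # "true" rainfall distribution
spp = gauge * 1.5 + 1.0             # satellite estimates with a known bias
corrected = quantile_map(spp[:100], gauge, spp)
```

Because the synthetic bias is a monotone transform, the mapping recovers the gauge values almost exactly; with real data the two reference series are only statistically, not pairwise, linked.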

  7. An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.

    PubMed

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-12-15

In this paper, a novel analog gamma correction scheme with a logarithmic image sensor, dedicated to minimizing the quantization noise in high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the same high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and the experimental results measured for our designed test structure, which is fabricated with a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.
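For orientation, the gamma transfer curve itself is shown here in the digital domain; the paper's contribution is realizing this curve in analog inside a VCO-based ADC, which this sketch does not model:

```python
def gamma_correct(v, gamma=2.2, v_max=1.0):
    """Gamma transfer curve applied to a normalized sensor value:
        out = v_max * (v / v_max) ** (1 / gamma)
    Digital-domain illustration only; gamma = 2.2 is a common display
    value, not a parameter from the paper."""
    return v_max * (v / v_max) ** (1.0 / gamma)

# Dark codes are expanded: 10% of full scale maps to ~35% of full scale,
# which is why applying the curve during conversion avoids wasting codes.
y = gamma_correct(0.1, gamma=2.2)
```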

  8. An Analog Gamma Correction Scheme for High Dynamic Range CMOS Logarithmic Image Sensors

    PubMed Central

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-01-01

    In this paper, a novel analog gamma correction scheme with a logarithmic image sensor dedicated to minimize the quantization noise of the high dynamic applications is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase while the same high dynamic range of logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and the experimental results measured for our designed test structure, which is fabricated with 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process. PMID:25517692

  9. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    NASA Astrophysics Data System (ADS)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for the ionospheric refraction correction. Traditionally, an ionospheric model or ionospheric detection instruments, such as ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction-accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized to calculate the electron density integral exactly along the propagation path of the radar wave, which can generate an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than the traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
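The underlying first-order relation, a range delay proportional to TEC/f², can be sketched as follows; the target and radar parameters are invented:

```python
def dual_freq_range(r1, f1, r2, f2):
    """First-order ionospheric correction from dual-frequency ranging.
    The measured range at frequency f is r = R + 40.3 * TEC / f**2, so
    two frequencies give the ionosphere-free range
        R = (f1**2 * r1 - f2**2 * r2) / (f1**2 - f2**2)
    and the slant TEC (electrons/m^2) along the path."""
    R = (f1**2 * r1 - f2**2 * r2) / (f1**2 - f2**2)
    tec = (r1 - R) * f1**2 / 40.3
    return R, tec

# Synthetic check: build ranges from a known TEC, then recover both.
true_R, true_tec = 1.0e6, 5.0e17          # 1000 km target, 50 TECU (invented)
f1, f2 = 440e6, 430e6                     # two adjacent P-band frequencies (invented)
r1 = true_R + 40.3 * true_tec / f1**2
r2 = true_R + 40.3 * true_tec / f2**2
R, tec = dual_freq_range(r1, f1, r2, f2)
```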

  10. Reconstructive correction of aberrations in nuclear particle spectrographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berz, M.; Joh, K.; Nolen, J.A.

A method is presented that allows the reconstruction of trajectories in particle spectrographs and the reconstructive correction of residual aberrations that otherwise limit the resolution. Using a computed or fitted high order transfer map that describes the uncorrected aberrations of the spectrograph, it is possible to calculate a map via an analytic recursion relation that allows the computation of the corrected data of interest such as reaction energy and scattering angle as well as the reconstructed trajectories in terms of position measurements in two planes near the focal plane. The technique is only limited by the accuracy of the position measurements, the incoherent spot sizes, and the accuracy of the transfer map. In practice the method can be expressed as an inversion of a nonlinear map and implemented in the differential algebraic framework. The method is applied to correct residual aberrations in the S800 spectrograph which is under construction at the National Superconducting Cyclotron Laboratory at Michigan State University and to two other high resolution spectrographs.

  11. Status of the National Transonic Facility Characterization (Invited)

    NASA Technical Reports Server (NTRS)

    Bobbitt, C., Jr.; Everhart, J.

    2001-01-01

    This paper describes the current activities at the National Transonic Facility to document the test-section flow and to support tunnel improvements. The paper is divided into sections on the tunnel calibration, flow quality measurements, data quality assurance, and implementation of wall interference corrections.

  12. [Rabies contingency plan in Japan].

    PubMed

    Inoue, Satoshi

    2005-12-01

In Japan, rabies has been eliminated since 1957, thanks to the strong implementation of measures against rabies, such as vaccination of dogs, quarantine, and control of wild dogs under the 'Rabies Prevention Law' enacted in 1950. Nevertheless, one cannot deny the possibility of the introduction of rabies into Japan in view of the recent increase in the international movement of people and animals. Should an outbreak of rabies be suspected now in Japan, society would probably overreact due to a decreased awareness of risks and a lack of correct knowledge about this disease. Officials of the government and the municipalities, veterinarians, and doctors should exchange correct information on rabies and on its prevention and control and raise their awareness, while also providing information to the public on a timely basis. Needless to say, it is also important to set up a crisis management system allowing a quick and adequate response in case of an outbreak of rabies and to continue to implement appropriate prevention measures in normal times.

  13. Combination of Heat Shock and Enhanced Thermal Regime to Control the Growth of a Persistent Legionella pneumophila Strain

    PubMed Central

    Bédard, Emilie; Boppe, Inès; Kouamé, Serge; Martin, Philippe; Pinsonneault, Linda; Valiquette, Louis; Racine, Jules; Prévost, Michèle

    2016-01-01

    Following nosocomial cases of Legionella pneumophila, the investigation of a hot water system revealed that 81.5% of sampled taps were positive for L. pneumophila, despite the presence of protective levels of copper in the water. A significant reduction of L. pneumophila counts was observed by culture after heat shock disinfection. The following corrective measures were implemented to control L. pneumophila: increasing the hot water temperature (55 to 60 °C), flushing taps weekly with hot water, removing excess lengths of piping and maintaining a water temperature of 55 °C throughout the system. A gradual reduction in L. pneumophila counts was observed using the culture method and qPCR in the 18 months after implementation of the corrective measures. However, low level contamination was retained in areas with hydraulic deficiencies, highlighting the importance of maintaining a good thermal regime at all points within the system to control the population of L. pneumophila. PMID:27092528

  14. Bunch mode specific rate corrections for PILATUS3 detectors

    DOE PAGES

    Trueb, P.; Dejoie, C.; Kobas, M.; ...

    2015-04-09

PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
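For orientation, the textbook paralyzable dead-time model that such corrections generalize can be inverted numerically; this sketch is not the article's bunch-mode-specific Monte Carlo simulation, and the parameter values are illustrative:

```python
import math

def true_rate_paralyzable(measured_rate, dead_time, iters=50):
    """Invert the paralyzable dead-time model m = n * exp(-n * tau) by
    fixed-point iteration to recover the true rate n from the measured
    rate m. Converges for m * tau < 1/e, i.e. below the model's
    turnover point."""
    n = measured_rate
    for _ in range(iters):
        n = measured_rate * math.exp(n * dead_time)
    return n

# 1 Mcps measured with tau = 120 ns implies a true rate of ~1.15 Mcps.
n_true = true_rate_paralyzable(1.0e6, 120e-9)
```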

  15. Clinical introduction of image lag correction for a cone beam CT system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stankovic, Uros; Ploeger, Lennert S.; Sonke, Jan-Jakob, E-mail: j.sonke@nki.nl

Purpose: Image lag in the flat-panel detector used for Linac integrated cone beam computed tomography (CBCT) has a degrading effect on CBCT image quality. The most prominent visible artifact is the presence of a bright semicircular structure in the transverse view of the scans, also known as the radar artifact. Several correction strategies have been proposed, but until now the clinical introduction of such corrections has remained unreported. In November 2013, the authors clinically implemented a previously proposed image lag correction on all of their machines at their main site in Amsterdam. The purpose of this study was to retrospectively evaluate the effect of the correction on the quality of CBCT images and evaluate the required calibration frequency. Methods: Image lag was measured in five clinical CBCT systems (Elekta Synergy 4.6) using an in-house developed beam interrupting device that stops the x-ray beam midway through the data acquisition of an unattenuated beam for calibration. A triple exponential falling edge response was fitted to the measured data and used to correct image lag from projection images with an infinite response. This filter, including an extrapolation for saturated pixels, was incorporated in the authors’ in-house developed clinical CBCT reconstruction software. To investigate the short-term stability of the lag and associated parameters, a series of five image lag measurements over a period of three months was performed. For quantitative analysis, the authors retrospectively selected ten patients treated in the pelvic region. The apparent contrast was quantified in polar coordinates for scans reconstructed using the parameters obtained from different dates with and without saturation handling. Results: Visually, the radar artifact was minimal in scans reconstructed using image lag correction especially when saturation handling was used.
In patient imaging, there was a significant reduction of the apparent contrast from 43 ± 16.7 to 15.5 ± 11.9 HU without saturation handling and to 9.6 ± 12.1 HU with saturation handling, depending on the date of the calibration. The image lag correction parameters were stable over a period of 3 months. The computational load was increased by approximately 10%, which does not endanger the fast in-line reconstruction. Conclusions: The lag correction was successfully implemented clinically and removed most image lag artifacts, thus improving image quality. Image lag correction parameters were stable for 3 months, indicating that infrequent recalibration suffices.
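The lag behaviour described in this record — a multi-exponential falling-edge response removed recursively from successive frames — can be sketched for a single pixel as follows. The amplitudes and time constants are hypothetical placeholders, not the clinical calibration values:

```python
import numpy as np

# Hypothetical triple-exponential lag parameters (trap fractions and
# time constants in units of frames); illustrative values only.
AMPS = np.array([0.04, 0.015, 0.005])
TAUS = np.array([1.0, 5.0, 30.0])

def add_lag(frames):
    # Forward model: each frame deposits a decaying residual signal
    # that contaminates later frames (simple recursive IIR form).
    decay = np.exp(-1.0 / TAUS)
    state = np.zeros(3)
    out = []
    for x in frames:
        out.append(x + np.sum(state))
        state = decay * (state + AMPS * x)
    return np.array(out)

def remove_lag(frames):
    # Inverse filter: subtract the predicted lag contribution, then
    # update the state with the recovered (lag-free) frame value.
    decay = np.exp(-1.0 / TAUS)
    state = np.zeros(3)
    out = []
    for y in frames:
        x = y - np.sum(state)
        out.append(x)
        state = decay * (state + AMPS * x)
    return np.array(out)
```

Because the inverse filter replays the same state recursion driven by the recovered signal, it exactly undoes the forward model; in practice the exponential parameters would come from a beam-interruption calibration as described above.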

  16. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    NASA Astrophysics Data System (ADS)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on the visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes a loss of brightness in the scene as recorded by the satellite sensor. To derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction that removes the effects of molecular and aerosol scattering. In the present study, we have implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and simplified use of the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false color composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover, i.e., red soil, chick pea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and a very good match was found between ground-measured surface reflectance and atmospherically corrected reflectance for all spectral bands. 
Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with the aggregated ground measurements, which showed a very good correlation of 0.96 in all four spectral bands (i.e., green, red, NIR and SWIR). To quantify the accuracy of the proposed method in estimating surface reflectance, the root mean square error (RMSE) associated with the method was evaluated; the analysis of ground-measured versus retrieved AWiFS reflectance yielded small RMSE values for all four spectral bands. EOS TERRA/AQUA MODIS-derived AOD exhibited a very good correlation of 0.92, and the data sets provide an effective means for carrying out atmospheric corrections in an operational way. Keywords: Atmospheric correction, 6S code, MODIS, Spectroradiometer, Sun-Photometer
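A LUT-based correction of the kind described can be sketched using 6S-style per-band coefficients (xa, xb, xc) interpolated at the scene aerosol optical depth; the grid and coefficient values below are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical LUT: for each AOD grid point, three 6S-style correction
# coefficients (xa, xb, xc) for one spectral band. Illustrative values.
aod_grid = np.array([0.1, 0.3, 0.5])
lut = np.array([[0.0026, 0.08, 0.12],
                [0.0028, 0.15, 0.18],
                [0.0031, 0.24, 0.25]])

def correct(radiance, aod):
    # Linearly interpolate the coefficients at the scene AOD, then apply
    # the standard 6S inversion: y = xa*L - xb ; rho = y / (1 + xc*y)
    xa, xb, xc = (np.interp(aod, aod_grid, lut[:, i]) for i in range(3))
    y = xa * radiance - xb
    return y / (1.0 + xc * y)   # surface reflectance
```

With this form the path-radiance term (xb) grows with AOD, so the same at-sensor radiance maps to a lower surface reflectance under a hazier atmosphere, matching the qualitative behaviour described in the abstract.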

  17. Corrective Action Decision Document/Closure Report for Corrective Action Unit 567: Miscellaneous Soil Sites - Nevada National Security Site, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Patrick

    2014-12-01

    This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 567: Miscellaneous Soil Sites, Nevada National Security Site, Nevada. The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 567 based on the implementation of the corrective actions. The corrective actions implemented at CAU 567 were developed based on an evaluation of analytical data from the CAI, the assumed presence of COCs at specific locations, and the detailed and comparative analysis of the CAAs. The CAAs were selected on technical merit focusing on performance, reliability, feasibility, safety, and cost. The implemented corrective actions meet all requirements for the technical components evaluated. The CAAs meet all applicable federal and state regulations for closure of the site. Based on the implementation of these corrective actions, the DOE, National Nuclear Security Administration Nevada Field Office provides the following recommendations: • No further corrective actions are necessary for CAU 567. • The Nevada Division of Environmental Protection issue a Notice of Completion to the DOE, National Nuclear Security Administration Nevada Field Office for closure of CAU 567. • CAU 567 be moved from Appendix III to Appendix IV of the FFACO.

  18. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials.

    PubMed

    Malyarenko, Dariya I; Wilmes, Lisa J; Arlinghaus, Lori R; Jacobs, Michael A; Huang, Wei; Helmer, Karl G; Taouli, Bachir; Yankeelov, Thomas E; Newitt, David; Chenevert, Thomas L

    2016-12-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, -35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI.
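The retrospective correction described — dividing each ADC value by a precomputed spatial bias factor from the system-specific corrector map — can be sketched in one dimension. The bias profile and ADC value below are synthetic stand-ins, not the characterized scanner maps:

```python
import numpy as np

def apply_gnl_corrector(adc_map, corrector):
    # Voxel-wise correction of ADC nonuniformity: the precomputed
    # corrector holds the spatial GNL bias factor (1.0 at isocenter).
    return adc_map / corrector

# Synthetic example: true ADC of 1.1e-3 mm^2/s, biased up to +10%
# away from isocenter (hypothetical quadratic bias profile).
true_adc = 1.1e-3
x = np.linspace(-0.15, 0.15, 64)          # position along the bore, m
corrector = 1.0 + 0.1 * (x / 0.15) ** 2   # illustrative GNL bias
biased = true_adc * corrector
corrected = apply_gnl_corrector(biased, corrector)
```

In the study the corrector is a full three-dimensional map derived from the ice-water phantom characterization; the division itself is the same voxel-wise operation.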

  19. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials

    PubMed Central

    Malyarenko, Dariya I.; Wilmes, Lisa J.; Arlinghaus, Lori R.; Jacobs, Michael A.; Huang, Wei; Helmer, Karl G.; Taouli, Bachir; Yankeelov, Thomas E.; Newitt, David; Chenevert, Thomas L.

    2017-01-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, −35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI. PMID:28105469

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique for improving the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. 
The range of background DE calcification signals with scatter-uncorrected data was reduced by 58% after correction with the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size was reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
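The two assumptions the algorithmic correction rests on — scatter varies slowly across the detector, and most pixels are noncalcification pixels — can be illustrated with a simple one-dimensional sketch. The scatter fraction and smoothing window below are assumed values, not those estimated in the paper:

```python
import numpy as np

def estimate_scatter(profile, scatter_fraction=0.4, window=51):
    # Sketch: because scatter has small spatial variation, approximate
    # the scatter field as a fraction of a heavily smoothed profile.
    # (scatter_fraction and window are illustrative assumptions.)
    kernel = np.ones(window) / window
    smooth = np.convolve(profile, kernel, mode="same")
    return scatter_fraction * smooth

def correct_scatter(profile, scatter_fraction=0.4, window=51):
    # Subtract the slowly varying scatter estimate from each pixel.
    return profile - estimate_scatter(profile, scatter_fraction, window)
```

The wide smoothing kernel is what encodes the "small spatial variation" assumption: sharp calcification signals pass through the correction nearly untouched while the broad scatter background is removed.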

  1. Reducing uncertainties associated with filter-based optical measurements of light absorbing carbon particles with chemical information

    NASA Astrophysics Data System (ADS)

    Engström, J. E.; Leck, C.

    2011-08-01

    The presented filter-based optical method for determination of soot (light-absorbing carbon or black carbon, BC) can be implemented in the field under primitive conditions and at low cost. This enables researchers with limited economic means to perform monitoring at remote locations, especially in Asia where it is much needed. One concern when applying filter-based optical measurements of BC is that they suffer from systematic errors due to the light scattering of non-absorbing particles co-deposited on the filter, such as inorganic salts and mineral dust. In addition to an optical correction for the non-absorbing material, this study provides a protocol for correcting light scattering based on chemical quantification of the material, which is a novelty. A newly designed photometer, which includes an additional sensor recording backscattered light, was implemented to measure light transmission on particle-accumulating filters. The choice of polycarbonate membrane filters avoided high chemical blank values and reduced errors associated with the length of the light path through the filter. Two correction protocols were applied to aerosol samples collected at the Maldives Climate Observatory Hanimaadhoo during episodes with either continentally influenced air from the Indian/Arabian subcontinents (winter season) or pristine air from the southern Indian Ocean (summer monsoon). The two ways of correction (optical and chemical) lowered the particle light absorption of BC by 63 and 61%, respectively, for data from the Arabian Sea sourced group, resulting in median BC absorption coefficients of 4.2 and 3.5 Mm-1. Corresponding values for the South Indian Ocean data were 69 and 97% (0.38 and 0.02 Mm-1). A comparison with other studies in the area indicated an overestimation of their BC levels, by up to two orders of magnitude. 
This underscores the need for chemical correction protocols in optical filter-based determinations of BC before even the sign of their radiative forcing can be assessed.

  2. Improved control of the betatron coupling in the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Persson, T.; Tomás, R.

    2014-05-01

    The control of the betatron coupling is of importance for safe beam operation in the LHC. In this article we show recent advancements in methods and algorithms to measure and correct coupling. The benefit of using a more precise formula relating the resonance driving term f1001 to the ΔQmin is presented. The quality of the coupling measurements is increased by about a factor of 3 by selecting beam position monitor (BPM) pairs with phase advances close to π/2 and through data cleaning using singular value decomposition with an optimal number of singular values. These improvements benefit the automatic coupling correction based on injection oscillations that is also presented in this article. Furthermore, a proposed coupling feedback for the LHC is presented. The system will rely on measurements from BPMs equipped with a new type of high-resolution electronics (diode orbit and oscillation), which will be operational when the LHC restarts in 2015. The feedback will combine the coupling measurements from the available BPMs in order to calculate the best correction.
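The SVD-based data cleaning mentioned above — keeping only the strongest singular values of the BPM-by-turn data matrix and discarding the rest as noise — can be sketched on synthetic data. The mode shapes, tunes, and noise level below are invented for illustration:

```python
import numpy as np

def svd_clean(bpm_data, n_keep):
    # Truncated-SVD cleaning: retain only the n_keep largest singular
    # values of the (BPMs x turns) matrix; the remainder is treated
    # as uncorrelated noise.
    U, s, Vt = np.linalg.svd(bpm_data, full_matrices=False)
    s[n_keep:] = 0.0
    return (U * s) @ Vt

# Synthetic turn-by-turn data: two coherent betatron-like modes across
# 100 BPMs and 500 turns, plus measurement noise (illustrative tunes).
rng = np.random.default_rng(1)
turns = np.arange(500)
bpms = np.arange(100)
signal = (np.sin(2 * np.pi * 0.28 * turns)[None, :] * np.cos(bpms)[:, None]
          + 0.1 * np.sin(2 * np.pi * 0.31 * turns)[None, :] * np.sin(bpms)[:, None])
noisy = signal + 0.05 * rng.normal(size=signal.shape)
cleaned = svd_clean(noisy, n_keep=2)
```

Because the coherent beam motion is low-rank while the noise is spread over all singular values, truncation suppresses noise far more than signal; choosing the optimal number of retained values is the tuning step the abstract refers to.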

  3. Experimental Assessment and Enhancement of Planar Laser-Induced Fluorescence Measurements of Nitric Oxide in an Inverse Diffusion Flame

    NASA Technical Reports Server (NTRS)

    Partridge, William P.; Laurendeau, Normand M.

    1997-01-01

    We have experimentally assessed the quantitative nature of planar laser-induced fluorescence (PLIF) measurements of NO concentration in a unique atmospheric pressure, laminar, axial inverse diffusion flame (IDF). The PLIF measurements were assessed relative to a two-dimensional array of separate laser saturated fluorescence (LSF) measurements. We demonstrated and evaluated several experimentally-based procedures for enhancing the quantitative nature of PLIF concentration images. Because these experimentally-based PLIF correction schemes require only the ability to make PLIF and LSF measurements, they produce a more broadly applicable PLIF diagnostic compared to numerically-based correction schemes. We experimentally assessed the influence of interferences on both narrow-band and broad-band fluorescence measurements at atmospheric and high pressures. Optimum excitation and detection schemes were determined for the LSF and PLIF measurements. Single-input and multiple-input, experimentally-based PLIF enhancement procedures were developed for application in test environments with both negligible and significant quench-dependent error gradients. Each experimentally-based procedure provides an enhancement of approximately 50% in the quantitative nature of the PLIF measurements, and results in concentration images nominally as quantitative as LSF point measurements. These correction procedures can be applied to other species, including radicals, for which no experimental data are available from which to implement numerically-based PLIF enhancement procedures.

  4. A quantitative reconstruction software suite for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Namías, Mauro; Jeraj, Robert

    2017-11-01

    Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
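The phantom calibration that makes such a reconstruction quantitative reduces to a system sensitivity factor converting reconstructed counts to activity concentration. The function and variable names below are illustrative, not the suite's API:

```python
def calibration_factor(total_counts, acq_time_s, activity_bq):
    # System sensitivity in counts per second per becquerel, measured
    # on a uniform cylindrical phantom of known activity.
    return total_counts / acq_time_s / activity_bq

def to_activity_conc(voxel_counts, acq_time_s, cf, voxel_vol_ml):
    # Convert reconstructed voxel counts to activity concentration
    # (Bq/ml) using the phantom-derived calibration factor.
    return voxel_counts / acq_time_s / cf / voxel_vol_ml
```

For example, a phantom scan collecting 1e6 counts in 100 s from a 50 kBq source gives a sensitivity of 0.2 cps/Bq, which then scales patient reconstructions voxel by voxel; attenuation, scatter, and collimator response corrections must already be inside the OSEM loop for this factor to hold across objects.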

  5. A wall interference assessment/correction system

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Ulbrich, N.; Sickles, W. L.; Qian, Cathy X.

    1992-01-01

    A Wall Signature method, the Hackett method, has been selected to be adapted for the 12-ft Wind Tunnel wall interference assessment/correction (WIAC) system in the present phase. This method uses limited measurements of the static pressure at the wall, in conjunction with the solid wall boundary condition, to determine the strength and distribution of singularities representing the test article. The singularities are used in turn for estimating wall interference at the model location. The Wall Signature method will be formulated for application to the unique geometry of the 12-ft Tunnel. The development and implementation of a working prototype will be completed, delivered and documented with a software manual. The WIAC code will be validated by conducting numerically simulated experiments rather than actual wind tunnel experiments. The simulations will be used to generate both free-air and confined wind-tunnel flow fields for each of the test articles over a range of test configurations. Specifically, the pressure signature at the test section wall will be computed for the tunnel case to provide the simulated 'measured' data. These data will serve as the input for the WIAC method, the Wall Signature method. The performance of the WIAC method may then be evaluated by comparing the corrected parameters with those for the free-air simulation. Each set of wind tunnel/test article numerical simulations provides data to validate the WIAC method. A numerical wind tunnel test simulation was initiated to validate the WIAC methods developed in the project. In the present reporting period, the blockage correction has been developed and implemented for a rectangular tunnel as well as the 12-ft Pressure Tunnel. An improved wall interference assessment and correction method for three-dimensional wind tunnel testing is presented in the appendix.

  6. Rocketdyne automated dynamics data analysis and management system

    NASA Technical Reports Server (NTRS)

    Tarn, Robert B.

    1988-01-01

    An automated dynamics data analysis and management system implemented on a DEC VAX minicomputer cluster is described. Multichannel acquisition, Fast Fourier Transform analysis, and an online database have significantly improved the analysis of wideband transducer responses from Space Shuttle Main Engine testing. Leakage error correction to recover sinusoid amplitudes and correct for frequency slewing is described. The phase errors caused by FM recorder/playback head misalignment are automatically measured and used to correct the data. Data compression methods are described and compared. The system hardware is described. Applications using the database are introduced, including software for power spectral density, instantaneous time history, amplitude histogram, fatigue analysis, and rotordynamics expert system analysis.
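Leakage error correction of the kind mentioned can be illustrated with a window-gain-corrected FFT: a Hann window, normalized by its coherent gain, largely recovers the amplitude of a sinusoid whose frequency falls between FFT bins. This is a generic sketch of the technique, not the Rocketdyne implementation:

```python
import numpy as np

# Off-bin test sinusoid: 100.3 Hz with 1 Hz bin spacing, so the tone
# does not land on a bin center and a plain FFT smears its energy.
fs, n = 1024.0, 1024
t = np.arange(n) / fs
x = 3.0 * np.sin(2 * np.pi * 100.3 * t)   # true amplitude 3

# Hann window; dividing by the window sum (its coherent gain times n)
# makes the peak magnitude an amplitude estimate despite leakage.
w = np.hanning(n)
spec = 2.0 * np.abs(np.fft.rfft(x * w)) / w.sum()
amp = spec.max()
```

Residual scalloping loss still biases the estimate low by a few percent; interpolating across the peak and its neighbors (as production analyzers do) reduces that further.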

  7. 40 CFR 258.58 - Implementation of the corrective action program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Take any interim measures necessary to ensure the protection of human health and the environment... drinking water supplies or sensitive ecosystems; (iv) Further degradation of the ground-water that may... situations that may pose threats to human health and the environment. (b) An owner or operator may determine...

  8. 22 CFR 1006.860 - What factors may influence the debarring official's decision?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... official may consider when the cooperation began and whether you disclosed all pertinent information known... corrective action or remedial measures, such as establishing ethics training and implementing programs to... you had effective standards of conduct and internal control systems in place at the time the...

  9. 34 CFR 85.860 - What factors may influence the debarring official's decision?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... official may consider when the cooperation began and whether you disclosed all pertinent information known... corrective action or remedial measures, such as establishing ethics training and implementing programs to... you had effective standards of conduct and internal control systems in place at the time the...

  10. 22 CFR 1508.860 - What factors may influence the debarring official's decision?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... official may consider when the cooperation began and whether you disclosed all pertinent information known... corrective action or remedial measures, such as establishing ethics training and implementing programs to... you had effective standards of conduct and internal control systems in place at the time the...

  11. 22 CFR 208.860 - What factors may influence the debarring official's decision?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... official may consider when the cooperation began and whether you disclosed all pertinent information known... corrective action or remedial measures, such as establishing ethics training and implementing programs to... you had effective standards of conduct and internal control systems in place at the time the...

  12. Electromagnetic bone segment tracking to control femoral derotation osteotomy-A saw bone study.

    PubMed

    Geisbüsch, Andreas; Auer, Christoph; Dickhaus, Hartmut; Niklasch, Mirjam; Dreher, Thomas

    2017-05-01

    Correction of rotational gait abnormalities is common practice in pediatric orthopaedics, such as in children with cerebral palsy. Femoral derotation osteotomy is established as a standard treatment; however, different authors have reported substantial variability in outcomes following surgery, with patients showing over- or under-correction. Only 60% of the applied correction is observed postoperatively, which strongly suggests intraoperative measurement error or loss of correction during surgery. This study was conducted to verify the impact of error sources in the derotation procedure and to assess the utility of a newly developed, instrumented measurement system based on electromagnetic tracking aiming to improve the accuracy of rotational correction. A supracondylar derotation osteotomy was performed in 21 artificial femur sawbones and the amount of derotation was quantified during the procedure by the tracking system and by nine raters using a conventional goniometer. Accuracy of both measurement devices was determined by repeated computed tomography scans. Average derotation measured by the tracking system differed by 0.1° ± 1.6° from the defined reference measurement. In contrast, a high inter-rater variability was found in goniometric measurements (range: 10.8° ± 6.9°, mean interquartile distance: 6.6°). During fixation of the osteosynthesis, the tracking system reliably detected unintentional manipulation of the correction angle with a mean absolute change of 4.0° ± 3.2°. Our findings show that conventional control of femoral derotation is subject to relevant observer bias, whereas instrumental tracking yields accuracy better than ±2°. The tracking system is a step towards more reliable and safe implementation of femoral correction, promising substantial improvements in patient safety in the future. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:1106-1112, 2017.

  13. Implementing patient-reported outcome measures in palliative care clinical practice: a systematic review of facilitators and barriers.

    PubMed

    Antunes, Bárbara; Harding, Richard; Higginson, Irene J

    2014-02-01

    Many patient-reported outcome measures have been developed in the past two decades, playing an increasingly important role in palliative care. However, their routine use in practice has been slow and difficult to implement. To systematically identify facilitators and barriers to the implementation of patient-reported outcome measures in different palliative care settings for routine practice, and to generate evidence-based recommendations, to inform the implementation process in clinical practice. Systematic literature review and narrative synthesis. Medline, PsycInfo, Cumulative Index to Nursing and Allied Health Literature, Embase and British Nursing Index were systematically searched from 1985. Hand searching of reference lists for all included articles and relevant review articles was performed. A total of 3863 articles were screened. Of these, 31 articles met the inclusion criteria. First, data were integrated in the main themes: facilitators, barriers and lessons learned. Second, each main theme was grouped into either five or six categories. Finally, recommendations for implementation on outcome measures at management, health-care professional and patient levels were generated for three different points in time: preparation, implementation and assessment/improvement. Successful implementation of patient-reported outcome measures should be tailored by identifying and addressing potential barriers according to setting. Having a coordinator throughout the implementation process seems to be key. Ongoing cognitive and emotional processes of each individual should be taken into consideration during changes. The educational component prior to the implementation is crucial. This could promote ownership and correct use of the measure by clinicians, potentially improving practice and the quality of care provided through patient-reported outcome measure data use in clinical decision-making.

  14. Policies and Practices in the Delivery of HIV Services in Correctional Agencies and Facilities: Results from a Multi-Site Survey

    PubMed Central

    Belenko, Steven; Hiller, Matthew; Visher, Christy; Copenhaver, Michael; O’Connell, Daniel; Burdon, William; Pankow, Jennifer; Clarke, Jennifer; Oser, Carrie

    2013-01-01

    HIV risk is disproportionately high among incarcerated individuals. Corrections agencies have been slow to implement evidence-based guidelines and interventions for HIV prevention, testing, and treatment. The emerging field of implementation science focuses on organizational interventions to facilitate adoption and implementation of evidence-based practices. A survey among CJ-DATS correctional agency partners revealed that HIV policies and practices in prevention, detection and medical care varied widely, with some corrections agencies and facilities closely matching national guidelines and/or implementing evidence-based interventions. Others, principally because of limited resources, had numerous gaps in the delivery of best HIV service practices. A brief overview is provided of a new CJ-DATS cooperative research protocol, informed by the survey findings, to test an organization-level intervention to reduce HIV service delivery gaps in corrections. PMID:24078624

  15. Closed-Loop Analysis of Soft Decisions for Serial Links

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlensinger, Adam M.

    2012-01-01

    Modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more overhead through noisier channels, and software-defined radios use error-correction techniques that approach Shannon's theoretical limit of performance. The authors describe the benefit of closed-loop measurements for a receiver when paired with a counterpart transmitter and representative channel conditions. We also describe a real-time Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios.

  16. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.
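The line-by-line alignment idea can be sketched in simplified form: shift each slow-scan line to best match a reference profile. The actual method fits contrast variation along lines using an orthogonal scan pair; this stand-in uses a brute-force integer-shift search against a known reference:

```python
import numpy as np

def align_lines(image, reference_row, max_shift=5):
    # Align each slow-scan line by the integer shift that minimizes the
    # squared difference to a reference profile (simplified stand-in
    # for the orthogonal-scan-pair contrast fitting).
    out = np.empty_like(image)
    shifts = range(-max_shift, max_shift + 1)
    for i, row in enumerate(image):
        errs = [np.sum((np.roll(row, s) - reference_row) ** 2) for s in shifts]
        out[i] = np.roll(row, shifts[int(np.argmin(errs))])
    return out
```

In the published procedure there is no external reference: the second image, scanned with the fast and slow axes swapped, plays that role, and the fitted shifts are sub-pixel and smoothly varying rather than integer.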

  17. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE PAGES

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    2015-12-10

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.
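
    The line-by-line alignment step can be illustrated with a toy sketch. Cross-correlation is used here in place of the authors' contrast fitting, and the function name and synthetic images are assumptions, not the published implementation:

    ```python
    import numpy as np

    def estimate_line_offsets(img_a, img_b):
        """Estimate a per-line shift along the slow-scan axis by
        cross-correlating each line of one scan with the corresponding
        line of the (re-gridded) orthogonal scan."""
        offsets = []
        for i in range(img_a.shape[0]):
            line_a = img_a[i] - img_a[i].mean()
            line_b = img_b[i] - img_b[i].mean()
            corr = np.correlate(line_a, line_b, mode="full")
            # lag of the correlation peak relative to zero shift
            offsets.append(np.argmax(corr) - (len(line_a) - 1))
        return np.array(offsets)
    ```

    In a full implementation, the recovered offsets would be used to resample each line before merging the orthogonal scan pair into one drift-free image.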

  18. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
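
    The fit-and-evaluate step can be sketched as follows; ordinary least squares on an invented one-parameter model stands in for the paper's genetic-algorithm fit, and every value is synthetic:

    ```python
    import numpy as np

    # Hypothetical CFD results: temperature error (deg C) vs. solar irradiance
    # S (W/m^2) and wind speed v (m/s). All numbers are illustrative.
    S = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
    v = np.array([1.0, 2.0, 3.0, 4.0, 2.0])
    err = 0.0004 * S / v  # synthetic "CFD" temperature errors

    # Invented model form: err ~ a * S / v, fitted by least squares
    X = (S / v)[:, None]
    a, *_ = np.linalg.lstsq(X, err, rcond=None)

    # Evaluate the fitted correction with MAE and RMSE, the metrics
    # the paper reports for corrected vs. measured temperatures.
    diff = X @ a - err
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    ```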

  19. Teacher perspectives after implementing a human sexuality education program.

    PubMed

    Gingiss, P L; Hamilton, R

    1989-12-01

    To help teachers enhance the effectiveness of their classroom instruction in human sexuality education, it is necessary to understand their attitudes and concerns about their teaching experiences. Forty-seven sixth grade teachers were surveyed one year after curriculum implementation to examine perceptions of themselves, their students, colleagues, and community. Teachers answered 70% of the knowledge items correctly and indicated slightly liberal orientations. Overall levels of teachers' views generally were positive on scales designed to measure: importance of the items studied, responsibility for student outcomes, three measures of comfort, adequacy of preparation, required changes, ease of use, social supports, and student responses. However, patterns of teacher responses within scales indicated numerous concerns related to curriculum implementation. The concerns and teacher-identified benefits and barriers to teaching the course indicate a focus for continuing education.

  20. Quality of care of treatment for uncomplicated severe acute malnutrition provided by lady health workers in Pakistan.

    PubMed

    Rogers, Eleanor; Ali, Muhammad; Fazal, Shahid; Kumar, Deepak; Guerrero, Saul; Hussain, Imtiaz; Soofi, Sajid; Alvarez Morán, Jose Luis

    2018-02-01

    To assess the quality of care provided by lady health workers (LHW) managing cases of uncomplicated severe acute malnutrition (SAM) in the community. Cross-sectional quality-of-care study. The feasibility of the implementation of screening and treatment for uncomplicated SAM in the community by LHW was tested in Sindh Province, Pakistan. An observational, clinical prospective multicentre cohort study compared the LHW-delivered care with the existing outpatient health facility model. LHW implementing treatment for uncomplicated SAM in the community. Oedema was diagnosed correctly for 87·5 % of children; weight and mid upper-arm circumference were measured correctly for 60·0 % and 57·4 % of children, respectively. The appetite test was conducted correctly for 42·0 % of cases. Of all cases of SAM without complications assessed during the study, 68·0 % received the correct medical and nutrition treatment. The proportion of cases that received the correct medical and nutrition treatment and key counselling messages was 4·0 %. This quality-of-care study supports existing evidence that LHW are able to identify uncomplicated SAM, and a majority can provide appropriate nutrition and medical treatment in the community. However, the findings also show that their ability to provide the complete package with an acceptable level of care is not assured. Additional evidence on the impact of supervision and training on the quality of SAM treatment and counselling provided by LHW to children with SAM is required. The study has also shown that, as in other sectors, it is essential that operational challenges are addressed in a timely manner and that implementers receive appropriate levels of support, if SAM is to be treated successfully in the community.

  1. 76 FR 61946 - Implementation of Form 990; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-06

    ... implement the redesigned Form 990, ``Return of Organization Exempt From Income Tax''. These regulations were... corrected by making the following correcting amendment: PART 1--INCOME TAXES 0 Paragraph 1. The authority... may prove to be misleading and is in need of clarification. List of Subjects in 26 CFR Part 1 Income...

  2. 40 CFR 258.58 - Implementation of the corrective action program.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Ground-Water Monitoring and Corrective...) Establish and implement a corrective action ground-water monitoring program that: (i) At a minimum, meet the requirements of an assessment monitoring program under § 258.55; (ii) Indicate the effectiveness of the...

  3. 40 CFR 258.58 - Implementation of the corrective action program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Ground-Water Monitoring and Corrective...) Establish and implement a corrective action ground-water monitoring program that: (i) At a minimum, meet the requirements of an assessment monitoring program under § 258.55; (ii) Indicate the effectiveness of the...

  4. 40 CFR 258.58 - Implementation of the corrective action program.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Ground-Water Monitoring and Corrective...) Establish and implement a corrective action ground-water monitoring program that: (i) At a minimum, meet the requirements of an assessment monitoring program under § 258.55; (ii) Indicate the effectiveness of the...

  5. 40 CFR 257.28 - Implementation of the corrective action program.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-Hazardous Waste Disposal Units Ground-Water Monitoring and Corrective Action § 257.28 Implementation of the... ground-water monitoring program that: (i) At a minimum, meets the requirements of an assessment monitoring program under § 257.25; (ii) Indicates the effectiveness of the corrective action remedy; and (iii...

  6. 78 FR 65621 - Implementation of Title I/II Program Initiatives; Extension of Public Comment Period; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-01

    ... DEPARTMENT OF EDUCATION Implementation of Title I/II Program Initiatives; Extension of Public Comment Period; Correction AGENCY: Department of Education. ACTION: Correction notice. SUMMARY: On October... Title I/II Program Initiatives,'' Docket ID ED- 2013-ICCD-0090. The comment period for this information...

  7. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

    Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because volume flow rates are of special interest for assessing the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and the flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The various factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experiment and simulation results for laminar flow show that, to obtain volume flow rate measurements accurate to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect. The correction method is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation. The correction method is applied to the simulation and experiment results.
The results show that the correction significantly reduces the errors due to the partial volume effect. We apply the correction method to data from in vivo studies. Because the true blood flow is not known, the corrected results are tested against common knowledge (such as cardiac output) and conservation of flow; for example, the volume of blood flowing to the brain should equal the volume of blood flowing from it. The corrected measurement results are consistent with these checks.
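
The basic phase-to-flow computation described above can be sketched as a toy function. The linear phase-velocity relationship is from the abstract; the function name, units, and VENC convention are illustrative assumptions:

```python
import numpy as np

def volume_flow_rate(phase, venc, pixel_area, lumen_mask):
    """Volume flow rate from a phase-contrast image.

    phase:      phase shift per voxel (radians), assumed linear in velocity
    venc:       velocity producing a phase shift of pi (cm/s), an assumed convention
    pixel_area: in-plane voxel area (cm^2)
    lumen_mask: boolean mask of voxels inside the vessel lumen
    """
    velocity = phase / np.pi * venc  # linear phase-to-velocity map, cm/s
    # integrate velocity across the lumen: sum of (velocity * voxel area)
    return np.sum(velocity[lumen_mask]) * pixel_area  # cm^3/s
```

The partial volume effect enters through `lumen_mask`: voxels straddling the vessel wall mix stationary and moving spins, which is why many voxels per lumen are needed for accuracy.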

  8. Correction of misclassification bias induced by the residential mobility in studies examining the link between socioeconomic environment and cancer incidence.

    PubMed

    Bryere, Josephine; Pornet, Carole; Dejardin, Olivier; Launay, Ludivine; Guittet, Lydia; Launoy, Guy

    2015-04-01

    Many international ecological studies that examine the link between social environment and cancer incidence use a deprivation index based on the subjects' address at the time of diagnosis to evaluate socioeconomic status. Thus, social past details are ignored, which leads to misclassification bias in the estimations. The objectives of this study were to include the latency delay in such estimations and to observe the effects. We adapted a previous methodology to correct estimates of the influence of socioeconomic environment on cancer incidence considering the latency delay in measuring socioeconomic status. We implemented this method using French data. We evaluated the misclassification due to social mobility with census data and corrected the relative risks. Inclusion of misclassification affected the values of relative risks, and the corrected values showed a greater departure from the value 1 than the uncorrected ones. For cancer of lung, colon-rectum, lips-mouth-pharynx, kidney and esophagus in men, the over incidence in the deprived categories was augmented by the correction. By not taking into account the latency period in measuring socioeconomic status, the burden of cancer associated with social inequality may be underestimated. Copyright © 2014 Elsevier Ltd. All rights reserved.
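
    The correction idea (reallocating observed cases across deprivation categories using a residential-mobility matrix estimated from census data) can be sketched as follows; the matrix and counts are invented, and solving a single linear system stands in for the authors' full method:

    ```python
    import numpy as np

    # Hypothetical 3-category example. M[i, j] is the probability that a subject
    # truly in deprivation category j at the latency-lagged exposure time is
    # observed in category i at diagnosis (columns sum to 1).
    M = np.array([[0.80, 0.10, 0.05],
                  [0.15, 0.80, 0.15],
                  [0.05, 0.10, 0.80]])

    observed = np.array([120.0, 150.0, 130.0])  # case counts by category at diagnosis
    corrected = np.linalg.solve(M, observed)    # counts re-attributed to exposure-time categories
    ```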

  9. Measuring aberrations in the rat brain by coherence-gated wavefront sensing using a Linnik interferometer

    PubMed Central

    Wang, Jinyu; Léger, Jean-François; Binding, Jonas; Boccara, A. Claude; Gigan, Sylvain; Bourdieu, Laurent

    2012-01-01

    Aberrations limit the resolution, signal intensity and achievable imaging depth in microscopy. Coherence-gated wavefront sensing (CGWS) allows the fast measurement of aberrations in scattering samples and therefore the implementation of adaptive corrections. However, CGWS has been demonstrated so far only in weakly scattering samples. We designed a new CGWS scheme based on a Linnik interferometer and a SLED light source, which is able to compensate dispersion automatically and can be implemented on any microscope. In the highly scattering rat brain tissue, where multiply scattered photons falling within the temporal gate of the CGWS can no longer be neglected, we have measured known defocus and spherical aberrations up to a depth of 400 µm. PMID:23082292

  10. Measuring aberrations in the rat brain by coherence-gated wavefront sensing using a Linnik interferometer.

    PubMed

    Wang, Jinyu; Léger, Jean-François; Binding, Jonas; Boccara, A Claude; Gigan, Sylvain; Bourdieu, Laurent

    2012-10-01

    Aberrations limit the resolution, signal intensity and achievable imaging depth in microscopy. Coherence-gated wavefront sensing (CGWS) allows the fast measurement of aberrations in scattering samples and therefore the implementation of adaptive corrections. However, CGWS has been demonstrated so far only in weakly scattering samples. We designed a new CGWS scheme based on a Linnik interferometer and a SLED light source, which is able to compensate dispersion automatically and can be implemented on any microscope. In the highly scattering rat brain tissue, where multiply scattered photons falling within the temporal gate of the CGWS can no longer be neglected, we have measured known defocus and spherical aberrations up to a depth of 400 µm.

  11. Low dose scatter correction for digital chest tomosynthesis

    NASA Astrophysics Data System (ADS)

    Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2015-03-01

    Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used; however, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study of applying the same technique to chest tomosynthesis. The investigation was performed using phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object. A second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
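
    The scatter estimation pipeline described above can be sketched in one dimension (a simplification; the paper operates on 2-D projections, and the names and values here are illustrative):

    ```python
    import numpy as np

    def scatter_correct_1d(full_field, hole_positions, primary_at_holes):
        """Subtract a scatter estimate from a 1-D projection profile.

        The PSA gives scatter-free primary samples at the hole positions;
        scatter there is full-field minus primary, and the sparse samples are
        interpolated to every detector pixel before subtraction.
        """
        x = np.arange(full_field.size)
        scatter_samples = full_field[hole_positions] - primary_at_holes
        scatter_map = np.interp(x, hole_positions, scatter_samples)
        return full_field - scatter_map
    ```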

  12. Multipass Steering: A Reference Implementation

    NASA Astrophysics Data System (ADS)

    Hennessey, Michael; Tiefenback, Michael

    2015-10-01

    We introduce a reference implementation of a protocol to compute corrections that bring all beams in one of the CEBAF linear accelerators (linacs) to axis using measured beam trajectory data, including, with a larger tolerance, the lowest-energy pass. This method relies on linear optics as a representation of the system; we treat beamline perturbations as magnetic field errors localized to regions between cryomodules, providing the same transverse momentum kick to each beam. We produce a vector of measured beam position data, which we left-multiply by the pseudo-inverse of a coefficient array, A, that describes the transport of the beam through the linac using parameters that include the magnetic offsets of the quadrupole magnets, the instrumental offsets of the BPMs, and the beam initial conditions. This process is repeated using a reduced array to produce values that can be applied to the available correcting magnets and beam initial conditions. We show that this method is effective in steering the beam to a straight axis along the linac by applying our values in elegant, the accelerator simulation program, to a model of the linac in question. The algorithms in this reference implementation provide a tool for systematic diagnosis and cataloging of perturbations in the beam line. Supported by Jefferson Lab, Old Dominion University, NSF, DOE.
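
    The least-squares core of the protocol, left-multiplying the measured-position vector by the pseudo-inverse of the coefficient array A, can be sketched with synthetic numbers (the dimensions and parameter values are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Response of 40 BPM readings to 6 perturbation parameters
    # (kick strengths, BPM offsets, initial conditions) -- toy values.
    A = rng.standard_normal((40, 6))
    p_true = np.array([0.3, -0.1, 0.05, 0.2, -0.25, 0.0])
    b = A @ p_true  # measured beam position data (noise-free here)

    # Left-multiply by the Moore-Penrose pseudo-inverse to fit the parameters
    p_fit = np.linalg.pinv(A) @ b
    ```

    With noisy measurements the same expression gives the least-squares estimate, which is why an overdetermined system (many BPM readings per parameter) is used.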

  13. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance, it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.

  14. Stray light correction on array spectroradiometers for optical radiation risk assessment in the workplace.

    PubMed

    Barlier-Salsi, A

    2014-12-01

    The European directive 2006/25/EC requires the employer to assess and, if necessary, measure the levels of exposure to optical radiation in the workplace. Array spectroradiometers can measure optical radiation from various types of sources; however, poor stray light rejection affects their accuracy. A stray light correction matrix, using a tunable laser, was developed at the National Institute of Standards and Technology (NIST). As tunable lasers are very expensive, the purpose of this study was to implement this method using only nine low power lasers, with the other elements of the correction matrix completed by interpolation and extrapolation. The correction efficiency was evaluated by comparing CCD spectroradiometers with and without correction against a scanning double monochromator device as reference. Similar to findings recorded by NIST, these experiments show that it is possible to reduce the spectral stray light by one or two orders of magnitude. In terms of workplace risk assessment, this spectral stray light correction method helps determine exposure levels, with an acceptable degree of uncertainty, for the majority of workplace situations. The level of uncertainty depends upon the model of spectroradiometer used; the best results are obtained with CCD detectors having an enhanced spectral sensitivity in the UV range. Corrected spectroradiometers thus require validation against a scanning double monochromator spectroradiometer before being used for risk assessment in the workplace.
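
    The matrix form of this kind of stray-light correction can be sketched with invented numbers: the measured spectrum is modeled as the true spectrum plus a stray-light redistribution, and the correction applies the inverse of that linear model (the matrix values and spectrum below are illustrative, not NIST's or the author's data):

    ```python
    import numpy as np

    n = 5
    # D holds the (laser-measured) stray-light distribution: a small fraction of
    # each band's signal leaking into every other band -- toy values here.
    D = np.full((n, n), 0.002)
    np.fill_diagonal(D, 0.0)       # in-band signal is carried by the identity
    A = np.eye(n) + D              # measurement model: y_meas = A @ y_true

    y_true = np.array([0.0, 1.0, 4.0, 1.0, 0.0])  # hypothetical line spectrum
    y_meas = A @ y_true            # spectrum contaminated by stray light
    y_corr = np.linalg.solve(A, y_meas)  # apply the correction matrix
    ```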

  15. Differential correction capability of the GTDS using TDRSS data

    NASA Technical Reports Server (NTRS)

    Liu, S. Y.; Soskey, D. G.; Jacintho, J.

    1980-01-01

    A differential correction (DC) capability was implemented in the Goddard Trajectory Determination System (GTDS) to process satellite tracking data acquired via the Tracking and Data Relay Satellite System (TDRSS). The configuration of the TDRSS is reviewed, observation modeling is presented, and major features of the capability are discussed. The following types of TDRSS data can be processed by GTDS: two-way relay range and Doppler measurements, hybrid relay range and Doppler measurements, one-way relay Doppler measurements, and differenced one-way relay Doppler measurements. These data may be combined with conventional ground-based direct tracking data. By using Bayesian weighted least squares techniques, the software allows the simultaneous determination of the trajectories of up to four different satellites - one user satellite and three relay satellites. In addition to satellite trajectories, the following parameters can optionally be solved for: drag coefficient, reflectivity of a satellite for solar radiation pressure, transponder delay, station position, and biases.

  16. Harmonic source wavefront aberration correction for ultrasound imaging

    PubMed Central

    Dianis, Scott W.; von Ramm, Olaf T.

    2011-01-01

    A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction without a priori assumptions about the target or requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators [0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz] and with a physical aberrator (0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration of between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step that leads to improved clinical images. PMID:21303031

  17. 78 FR 61743 - Revisions to the Export Administration Regulations: Initial Implementation of Export Control...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-03

    ... ``attachments'' that are ``specially designed'' for a commodity subject to control in this ECCN or a defense... Implementation of Export Control Reform; Correction; Final Rule #0;#0;Federal Register / Vol. 78 , No. 192... Administration Regulations: Initial Implementation of Export Control Reform; Correction AGENCY: Bureau of...

  18. Music: Highly Engaged Students Connect Music to Math

    ERIC Educational Resources Information Center

    Jones, Shelly M.; Pearson, Dunn, Jr.

    2013-01-01

    A musician and a mathematics educator create and implement a set of elementary school lessons integrating music and math. Students learn the basics of music theory including identifying notes and learning their fractional values. They learn about time signatures and how to determine correct note values per measure. Students are motivated by…

  19. Spectral line polarimetry with a channeled polarimeter.

    PubMed

    van Harten, Gerard; Snik, Frans; Rietjens, Jeroen H H; Martijn Smit, J; Keller, Christoph U

    2014-07-01

    Channeled spectropolarimetry or spectral polarization modulation is an accurate technique for measuring the continuum polarization in one shot with no moving parts. We show how a dual-beam implementation also enables spectral line polarimetry at the intrinsic resolution, as in a classic beam-splitting polarimeter. Recording redundant polarization information in the two spectrally modulated beams of a polarizing beam-splitter even provides the possibility to perform a postfacto differential transmission correction that improves the accuracy of the spectral line polarimetry. We perform an error analysis to compare the accuracy of spectral line polarimetry to continuum polarimetry, degraded by a residual dark signal and differential transmission, as well as to quantify the impact of the transmission correction. We demonstrate the new techniques with a blue sky polarization measurement around the oxygen A absorption band using the groundSPEX instrument, yielding a polarization in the deepest part of the band of 0.160±0.010, significantly different from the polarization in the continuum of 0.2284±0.0004. The presented methods are applicable to any dual-beam channeled polarimeter, including implementations for snapshot imaging polarimetry.

  20. Kennedy Space Center Press Site (SWMU 074) Interim Measure Report

    NASA Technical Reports Server (NTRS)

    Applegate, Joseph L.

    2015-01-01

    This report summarizes the Interim Measure (IM) activities conducted at the Kennedy Space Center (KSC) Press Site ("the Press Site"). This facility has been designated as Solid Waste Management Unit 074 under KSC's Resource Conservation and Recovery Act Corrective Action program. The activities were completed as part of the Vehicle Assembly Building (VAB) Area Land Use Controls Implementation Plan (LUCIP) Elimination Project. The purpose of the VAB Area LUCIP Elimination Project was to delineate and remove soil affected with constituents of concern (COCs) that historically resulted in Land Use Controls (LUCs). The goal of the project was to eliminate the LUCs on soil. LUCs for groundwater were not addressed as part of the project and are not discussed in this report. This report is intended to meet the Florida Department of Environmental Protection (FDEP) Corrective Action Management Plan requirement as part of the KSC Hazardous and Solid Waste Amendments permit and the U.S. Environmental Protection Agency's (USEPA's) Toxic Substance Control Act (TSCA) self-implementing polychlorinated biphenyl (PCB) cleanup requirements of 40 Code of Federal Regulations (CFR) 761.61(a).

  1. Kalman Filter for Mass Property and Thrust Identification (MMS)

    NASA Technical Reports Server (NTRS)

    Queen, Steven

    2015-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four identically instrumented, spin-stabilized observatories, elliptically orbiting the Earth in a tetrahedron formation. For the operational success of the mission, on-board systems must be able to deliver high-precision orbital adjustment maneuvers. On MMS, this is accomplished using feedback from on-board star sensors in tandem with accelerometers whose measurements are dynamically corrected for errors associated with a spinning platform. In order to determine the required corrections to the measured acceleration, precise estimates of attitude, rate, and mass properties are necessary. To this end, both an on-board and a ground-based Multiplicative Extended Kalman Filter (MEKF) were formulated and implemented to estimate the dynamic and quasi-static properties of the spacecraft.

  2. SU-F-T-281: Monte Carlo Investigation of Sources of Dosimetric Discrepancies with 2D Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afifi, M; Deiab, N; El-Farrash, A

    2016-06-15

    Purpose: Intensity modulated radiation therapy (IMRT) poses a number of challenges for properly measuring commissioning data and quality assurance (QA). Understanding the limitations and use of dosimeters to measure these dose distributions is critical to safe IMRT implementation. In this work, we used Monte Carlo simulations to investigate the possible sources of discrepancy between our measurements with a 2D array system and our dose calculations using our treatment planning system (TPS). Material and Methods: The MCBEAM and MCSIM Monte Carlo codes were used for treatment head simulation and phantom dose calculation. Accurate modeling of a 6 MV beam from a Varian Trilogy machine was verified by comparing simulated and measured percentage depth doses and profiles. The dose distribution inside the 2D array was calculated using Monte Carlo simulations and our TPS. Cross profiles for different field sizes were then compared with actual measurements for 0° and 90° gantry angle setups. Through this analysis and comparison, we tried to determine the differences and quantify a possible angular calibration factor. Results: Minimal discrepancies were seen in the comparison between the simulated and the measured profiles at the 0° gantry angle for all studied field sizes (4×4 cm², 10×10 cm², 15×15 cm², and 20×20 cm²). Discrepancies between our measurements and calculations increased dramatically for the cross-beam profiles at the 90° gantry angle. This could be ascribed mainly to the different attenuation caused by the layer of electronics at the base behind the ion chambers in the 2D array. The degree of attenuation will vary depending on the angle of beam incidence. Correction factors were implemented to correct the errors. Conclusion: Monte Carlo modeling of the 2D arrays and the derivation of angular dependence correction factors will allow for improved accuracy of the device for IMRT QA.

  3. Atmospheric correction of ocean color sensors: analysis of the effects of residual instrument polarization sensitivity.

    PubMed

    Gordon, H R; Du, T; Zhang, T

    1997-09-20

    We provide an analysis of the influence of instrument polarization sensitivity on the radiance measured by spaceborne ocean color sensors. Simulated examples demonstrate the influence of polarization sensitivity on the retrieval of the water-leaving reflectance rho(w). A simple method for partially correcting for polarization sensitivity--replacing the linear polarization properties of the top-of-atmosphere reflectance with those from a Rayleigh-scattering atmosphere--is provided and its efficacy is evaluated. It is shown that this scheme improves rho(w) retrievals as long as the polarization sensitivity of the instrument does not vary strongly from band to band. Of course, a complete polarization-sensitivity characterization of the ocean color sensor is required to implement the correction.

  4. A 20 MHz CMOS reorder buffer for a superscalar microprocessor

    NASA Technical Reports Server (NTRS)

    Lenell, John; Wallace, Steve; Bagherzadeh, Nader

    1992-01-01

    Superscalar processors can achieve increased performance by issuing instructions out of order from the original sequential instruction stream. Implementing an out-of-order instruction issue policy requires a hardware mechanism to prevent incorrectly executed instructions from updating register values. A reorder buffer can be used to allow a superscalar processor to issue instructions out of order while maintaining program correctness. This paper describes the design and implementation of a 20 MHz CMOS reorder buffer for superscalar processors. The reorder buffer is designed to accept and retire two instructions per cycle. A full-custom layout in a 1.2 micron process has been implemented, measuring 1.1058 mm by 1.3542 mm.
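    The mechanism described, out-of-order completion with in-order retirement of up to two instructions per cycle, can be sketched as a behavioral toy model. The class and method names are illustrative assumptions; the paper's contribution is a full-custom CMOS circuit, not software.

```python
from collections import deque

class ReorderBuffer:
    """Toy reorder buffer: instructions may complete out of order, but
    results commit to the register file strictly in program order, at
    most two per cycle (as in the paper's design)."""

    def __init__(self, size=8):
        self.size = size
        self.entries = deque()  # entries kept in program order

    def issue(self, tag, dest_reg):
        # Allocate an entry at issue time; stall if the buffer is full.
        if len(self.entries) == self.size:
            raise RuntimeError("ROB full")
        self.entries.append({"tag": tag, "dest": dest_reg,
                             "value": None, "done": False})

    def complete(self, tag, value):
        # Out-of-order completion: any entry may finish first.
        for e in self.entries:
            if e["tag"] == tag:
                e["value"], e["done"] = value, True
                return

    def retire(self, regfile, per_cycle=2):
        # In-order retirement: only the oldest completed entries commit.
        retired = 0
        while self.entries and self.entries[0]["done"] and retired < per_cycle:
            e = self.entries.popleft()
            regfile[e["dest"]] = e["value"]
            retired += 1
        return retired
```

If a younger instruction finishes first, its result waits in the buffer until all older instructions retire, which is what prevents incorrectly executed instructions from updating register state.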

  5. On evaluating clustering procedures for use in classification

    NASA Technical Reports Server (NTRS)

    Pore, M. D.; Moritz, T. E.; Register, D. T.; Yao, S. S.; Eppler, W. G. (Principal Investigator)

    1979-01-01

    The problem of evaluating clustering algorithms and their respective computer programs for use in a preprocessing step for classification is addressed. In clustering for classification the probability of correct classification is suggested as the ultimate measure of accuracy on training data. A means of implementing this criterion and a measure of cluster purity are discussed. Examples are given. A procedure for cluster labeling that is based on cluster purity and sample size is presented.
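    The two quantities mentioned, cluster purity and the probability of correct classification when each cluster is labeled with its majority class, can be sketched as follows. The function and its exact weighting are illustrative assumptions, not the report's precise definitions.

```python
from collections import Counter

def cluster_purity(cluster_labels, true_classes):
    """For each cluster: purity = fraction of its samples belonging to
    the majority class. Overall: probability of correct classification
    on the training data if each cluster is labeled by majority vote."""
    clusters = {}
    for c, y in zip(cluster_labels, true_classes):
        clusters.setdefault(c, []).append(y)
    purities = {c: Counter(ys).most_common(1)[0][1] / len(ys)
                for c, ys in clusters.items()}
    n_correct = sum(Counter(ys).most_common(1)[0][1]
                    for ys in clusters.values())
    return purities, n_correct / len(true_classes)
```

Small, impure clusters drag the overall probability down, which is why the report ties its labeling procedure to both purity and sample size.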

  6. Implementation of in vivo Dosimetry with Isorad™ Semiconductor Diodes in Radiotherapy Treatments of the Pelvis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Miguel L.; Abrego, Eladio; Pineda, Amalia

    2008-04-01

    This report describes the results obtained with the Isorad™ (Red) semiconductor detectors for implementing an in vivo dosimetry program in patients undergoing radiotherapy treatment of the pelvis. Four n-type semiconductor diodes were characterized for the application. The diode calibration consisted of establishing reading-to-dose conversion factors in reference conditions and a set of correction factors accounting for deviations of the diode response in comparison to that of an ion chamber. Treatments of the pelvis were performed using an isocentric 'box' technique employing an 18 MV beam, with the shape of the fields defined by a multileaf collimator. The method of Rizzotti-Leunen was used to assess the dose at the isocenter based on measurements of the in vivo dose at the entrance and at the exit of each radiation field. The in vivo dose was evaluated for a population of 80 patients. The diodes exhibit good characteristics for use in in vivo dosimetry; however, the high beam attenuation they produce (~12% at 5.0 cm depth) and some important correction factors must be taken into account. The correction factors determined, including the source-to-surface factor, were within a range of ±4%. The frequency histograms of the relative difference between the expected and measured doses at the entrance, the exit, and the isocenter have mean values and standard deviations of -0.09% (2.18%), 0.77% (2.73%), and -0.11% (1.76%), respectively. The method implemented has proven to be very useful in the assessment of the in vivo dose in this kind of treatment.
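    The reading-to-dose conversion described, a reference calibration factor multiplied by a set of correction factors, can be sketched generically. The factor names are placeholders; the report's actual factors were measured against an ion chamber.

```python
def diode_dose(reading, f_cal, correction_factors):
    """Convert a diode reading to dose: dose = reading × calibration
    factor × product of correction factors (e.g. SSD, field size,
    incidence angle). A generic sketch, not the report's exact table."""
    dose = reading * f_cal
    for factor in correction_factors.values():
        dose *= factor
    return dose
```

In the report each correction factor stayed within ±4% of unity, so their product perturbs the reference calibration only modestly.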

  7. 78 FR 75251 - Changes To Implement the Patent Law Treaty; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ...-2013-0007] RIN 0651-AC85 Changes To Implement the Patent Law Treaty; Correction AGENCY: United States... Law Treaty (PLT) and provisions of the Patent Law Treaties Implementation Act of 2012 (PLTIA) that... practice in patent cases for consistency with the changes in the Patent Law Treaty (PLT) and provisions of...

  8. Training Parent Implementation of Discrete-Trial Teaching: Effects on Generalization of Parent Teaching and Child Correct Responding

    ERIC Educational Resources Information Center

    Lafasakis, Michael; Sturmey, Peter

    2007-01-01

    Behavioral skills training was used to teach 3 parents to implement discrete-trial teaching with their children with developmental disabilities. Parents learned to implement discrete-trial training, their skills generalized to novel programs, and the children's correct responding increased, suggesting that behavioral skills training is an…

  9. Dispersion durations of P-wave and QT interval in children treated with a ketogenic diet.

    PubMed

    Doksöz, Önder; Güzel, Orkide; Yılmaz, Ünsal; Işgüder, Rana; Çeleğen, Kübra; Meşe, Timur

    2014-04-01

    Limited data are available on the effects of a ketogenic diet on dispersion duration of P-wave and QT-interval measures in children. We searched for the changes in these measures with serial electrocardiograms in patients treated with a ketogenic diet. Twenty-five drug-resistant patients with epilepsy treated with a ketogenic diet were enrolled in this study. Electrocardiography was performed in all patients before the beginning and at the sixth month after implementation of the ketogenic diet. Heart rate, maximum and minimum P-wave duration, P-wave dispersion, and maximum and minimum corrected QT interval and QT dispersion were manually measured from the 12-lead surface electrocardiogram. Minimum and maximum corrected QT and QT dispersion measurements showed nonsignificant increase at month 6 compared with baseline values. Other previously mentioned electrocardiogram parameters also showed no significant changes. A ketogenic diet of 6 months' duration has no significant effect on electrocardiogram parameters in children. Further studies with larger samples and longer duration of follow-up are needed to clarify the effects of ketogenic diet on P-wave dispersion and corrected QT and QT dispersion. Copyright © 2014 Elsevier Inc. All rights reserved.
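    The corrected QT and dispersion measures mentioned can be illustrated with the standard definitions: a heart-rate correction of QT and the max-minus-min spread across the 12 leads. The abstract does not state which QT correction formula was used, so Bazett's formula is an assumption here; in the study the values were measured manually from the ECG.

```python
def qtc_bazett(qt_ms, rr_ms):
    """Heart-rate-corrected QT interval by Bazett's formula:
    QTc = QT / sqrt(RR), with RR expressed in seconds.
    (Assumed formula; the paper does not specify its correction.)"""
    return qt_ms / (rr_ms / 1000.0) ** 0.5

def dispersion(lead_values_ms):
    """Dispersion of a measure (P-wave or QT) across the 12 leads:
    maximum minus minimum value."""
    return max(lead_values_ms) - min(lead_values_ms)
```

At 60 bpm (RR = 1000 ms) the correction is neutral, so QTc equals the measured QT.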

  10. Detecting selection effects in community implementations of family-based substance abuse prevention programs.

    PubMed

    Hill, Laura G; Goates, Scott G; Rosenman, Robert

    2010-04-01

    To calculate valid estimates of the costs and benefits of substance abuse prevention programs, selection effects must be identified and corrected. A supplemental comparison sample is typically used for this purpose, but in community-based program implementations, such a sample is often not available. We present an evaluation design and analytic approach that can be used in program evaluations of real-world implementations to identify selection effects, which in turn can help inform recruitment strategies, pinpoint possible selection influences on measured program outcomes, and refine estimates of program costs and benefits. We illustrate our approach with data from a multisite implementation of a popular substance abuse prevention program. Our results indicate that the program's participants differed significantly from the population at large.

  11. SeaWiFS technical report series. Volume 31: Stray light in the SeaWiFS radiometer

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Acker, James G. (Editor); Barnes, Robert A.; Holmes, Alan W.; Esaias, Wayne E.

    1995-01-01

    Some of the measurements from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) will not be useful as ocean measurements. For the ocean data set, there are procedures in place to mask the SeaWiFS measurements of clouds and ice. Land measurements will also be masked using a geographic technique based on each measurement's latitude and longitude. Each of these masks involves a source of light much brighter than the ocean. Because of stray light in the SeaWiFS radiometer, light from these bright sources can contaminate ocean measurements located a variable number of pixels away from a bright source. In this document, the sources of stray light in the sensor are examined, and a method is developed for masking measurements near bright targets for stray light effects. In addition, a procedure is proposed for reducing the effects of stray light in the flight data from SeaWiFS. This correction can also reduce the number of pixels masked for stray light. Without these corrections, local area scenes must be masked 10 pixels before and after bright targets in the along-scan direction. The addition of these corrections reduces the along-scan masks to four pixels before and after bright sources. In the along-track direction, the flight data are not corrected and are masked two pixels before and after. Laboratory measurements have shown that stray light within the instrument changes in direct ratio to the intensity of the bright source. The measurements have also shown that none of the bands show peculiarities in their stray light response; in other words, the instrument's response is uniform from band to band. The along-scan correction is based on each band's response to a 1-pixel-wide bright source. Since these results are based solely on preflight laboratory measurements, their successful implementation requires compliance with two additional criteria. First, since SeaWiFS has a large data volume, the correction and masking procedures must be convertible into computationally fast algorithms. Second, they must be shown to operate properly on flight data. The laboratory results, and the corrections and masking procedures that derive from them, should be considered zeroth-order estimates of the effects that will be found on orbit.
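    The along-scan masking step can be sketched as flagging every pixel within a fixed halo of a bright source. The threshold-based bright-source detection and the function name are illustrative assumptions; SeaWiFS actually combines cloud/ice/land masks with a 4-pixel along-scan halo after correction.

```python
import numpy as np

def mask_near_bright(scan_line, bright_thresh, halo=4):
    """Flag pixels within `halo` pixels before and after any bright
    source along the scan direction. With the stray-light correction
    applied, the report uses halo=4; without it, halo=10."""
    bright = scan_line > bright_thresh
    mask = bright.copy()
    for shift in range(1, halo + 1):
        mask[shift:] |= bright[:-shift]   # pixels after the source
        mask[:-shift] |= bright[shift:]   # pixels before the source
    return mask
```

The same idea with halo=2 would give the along-track mask, where the flight data are not corrected.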

  12. Reliability Correction for Functional Connectivity: Theory and Implementation

    PubMed Central

    Mueller, Sophia; Wang, Danhong; Fox, Michael D.; Pan, Ruiqi; Lu, Jie; Li, Kuncheng; Sun, Wei; Buckner, Randy L.; Liu, Hesheng

    2016-01-01

    Network properties can be estimated using functional connectivity MRI (fcMRI). However, regional variation of the fMRI signal causes systematic biases in network estimates including correlation attenuation in regions of low measurement reliability. Here we computed the spatial distribution of fcMRI reliability using longitudinal fcMRI datasets and demonstrated how pre-estimated reliability maps can correct for correlation attenuation. As a test case of reliability-based attenuation correction we estimated properties of the default network, where reliability was significantly lower than average in the medial temporal lobe and higher in the posterior medial cortex, heterogeneity that impacts estimation of the network. Accounting for this bias using attenuation correction revealed that the medial temporal lobe’s contribution to the default network is typically underestimated. To render this approach useful to a greater number of datasets, we demonstrate that test-retest reliability maps derived from repeated runs within a single scanning session can be used as a surrogate for multi-session reliability mapping. Using data segments with different scan lengths between 1 and 30 min, we found that test-retest reliability of connectivity estimates increases with scan length while the spatial distribution of reliability is relatively stable even at short scan lengths. Finally, analyses of tertiary data revealed that reliability distribution is influenced by age, neuropsychiatric status and scanner type, suggesting that reliability correction may be especially important when studying between-group differences. Collectively, these results illustrate that reliability-based attenuation correction is an easily implemented strategy that mitigates certain features of fMRI signal nonuniformity. PMID:26493163
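    The attenuation correction underlying this approach is Spearman's classic formula: divide the observed correlation by the geometric mean of the two measurements' reliabilities. This is a textbook sketch of the core idea; the paper applies it with pre-estimated voxelwise reliability maps rather than scalar reliabilities.

```python
def attenuation_corrected_r(r_obs, rel_i, rel_j):
    """Spearman's correction for attenuation: an observed correlation
    between two noisy measures underestimates the true correlation by
    the factor sqrt(rel_i * rel_j), where rel_* are test-retest
    reliabilities in [0, 1]. Dividing by that factor disattenuates it."""
    return r_obs / (rel_i * rel_j) ** 0.5
```

A region of low reliability, like the medial temporal lobe in the study, has its connectivity with the default network revised upward by exactly this mechanism.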

  13. An orthogonal return method for linearly polarized beam based on the Faraday effect and its application in interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Benyong, E-mail: chenby@zstu.edu.cn; Zhang, Enzheng; Yan, Liping

    2014-10-15

    Correct return of the measuring beam is essential for laser interferometers to carry out measurement. In practice, because the measured object inevitably rotates or moves laterally, not only does the measurement accuracy decrease, but the measurement may even become impossible. To solve this problem, a novel orthogonal return method for linearly polarized beams based on the Faraday effect is presented. The orthogonal return of the incident linearly polarized beam is realized by using a Faraday rotator with a rotation angle of 45°. The optical configuration of the method is designed and analyzed in detail. To verify its practicability in polarization interferometry, a laser heterodyne interferometer based on this method was constructed and precision displacement measurement experiments were performed. The results show that the method's advantage is that the correct return of the incident measuring beam is ensured even when large lateral displacements or angular rotations of the measured object occur, so that the interferometric measurement can still be carried out.

  14. Motion-Corrected 3D Sonic Anemometer for Tethersondes and Other Moving Platforms

    NASA Technical Reports Server (NTRS)

    Bognar, John

    2012-01-01

    To date, it has not been possible to use 3D sonic anemometers on tethersondes or similar atmospheric research platforms because of the motion of the supporting platform. A tethersonde module including both a 3D sonic anemometer and associated motion-correction sensors has been developed, enabling motion-corrected 3D winds to be measured from a moving platform such as a tethersonde. Blimps and other similar lifting systems are used to support tethersondes, meteorological devices that fly on the tether of a blimp or similar platform. To date, tethersondes have been limited to making basic meteorological measurements (pressure, temperature, humidity, and wind speed and direction). The motion of the tethersonde has precluded the addition of 3D sonic anemometers, which can be used for high-speed flux measurements, thereby limiting what has been achieved with tethersondes. The tethersonde modules fly on a tether that can be constantly moving and swaying, which would introduce enormous error into the output of an uncorrected 3D sonic anemometer. The motion correction must be implemented in a low-weight, low-cost manner to be suitable for this application. Until now, flux measurements using 3D sonic anemometers could only be made if the anemometer was located on a rigid, fixed platform such as a tower, limiting the areas in which they could be set up and used. The purpose of the innovation was to enable precise 3D wind and flux measurements to be made using tethersondes. In brief, a 3D accelerometer and a 3D gyroscope were added to a tethersonde module along with a 3D sonic anemometer. This combination allowed the necessary package motions to be measured, which were then mathematically combined with the measured winds to yield motion-corrected 3D winds. At the time of this reporting, no tethersonde had been able to make any wind measurement other than basic wind speed and direction. The addition of a 3D sonic anemometer is unique, as is the addition of the motion-correction sensors.
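    The mathematical combination of package motion with measured winds can be sketched kinematically: rotate the anemometer-frame wind into the earth frame using the gyro-derived attitude, then add the platform velocity derived from the accelerometer. This is a one-line sketch under those assumptions; sensor fusion details (filtering, integration drift, lever arms) are omitted.

```python
import numpy as np

def motion_corrected_wind(u_measured, attitude_R, v_platform):
    """Recover the true earth-frame wind from a moving platform:
    the anemometer measures wind relative to itself in its own frame,
    so rotate by the attitude matrix and add the platform's velocity.
    attitude_R: 3x3 rotation matrix (sensor frame -> earth frame),
    v_platform: platform velocity in the earth frame (m/s)."""
    return attitude_R @ u_measured + v_platform
```

For a level, non-rotating platform drifting at 1 m/s along-wind, a measured 2 m/s relative wind corresponds to a true 3 m/s wind.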

  15. [Complex of psycho-hygienic correction measures of personality features of hiv-infected men and evaluation of their efficiency].

    PubMed

    Serheta, Ihor V; Dudarenko, Oksana B; Mostova, Olha P; Lobastova, Tetiana V; Andriichuk, Vitalii M; Vakolyuk, Larysa M; Yakubovska, Olha M

    2018-01-01

    Introduction: In addition to adequate diagnosis and treatment of HIV-infected individuals, development, scientific substantiation and implementation of psycho-hygienic measures aimed at correcting the processes of forming personality traits and improving the psycho-emotional state of HIV-infected individuals are of particular importance. The aim: The purpose of the scientific research was to determine the most significant changes of situational and personal anxiety indicators, the degree of gravity of the asthenic state and depressive manifestations that were recorded in the context of the introduction of a number of measures for psycho-hygienic correction. Materials and methods: To determine the peculiarities of the impact of the proposed measures of psycho-hygienic correction and the study of the consequences of their implementation, two groups of comparison were created: a control group and an intervention group. 30 HIV-infected men who used a complex of measures for psycho-hygienic correction of personality traits and improvement of psycho-emotional state in their daily activities were included in the intervention group; 30 HIV-infected men who did not use this complex in their daily activities were included in the control group. Diagnosis and assessment of the anxiety of HIV-infected persons were carried out on the basis of The State-Trait Anxiety Inventory (STAI). The absence or presence of manifestations of an asthenic personality disorder in the subjects was determined by means of a test method created by L. Malkova for assessing asthenia. In order to determine the degree of manifestation of this characteristic, the psychic state of a person, as a level of expression of a depressive state, the psychometric Zung Depression Rating Scale was used to assess depression. 
    Results: The studies found a statistically significant decrease in situational anxiety among the representatives of the intervention group, from 51.56 ± 1.69 to 43.36 ± 1.05 (p<0.001). The degree of expression of asthenic manifestations decreased significantly, from 87.23 ± 3.00 points at the beginning of the observation period to 77.76 ± 1.54 points towards its end (p<0.01), and the level of indicators of depression declined from 59.13 ± 1.09 to 55.13 ± 0.79 points (p<0.01). Conclusions: The use of a complex of measures of psycho-hygienic correction produces highly favorable changes in personality characteristics such as situational anxiety (p<0.001) and the severity of asthenic (p<0.01) and depressive (p<0.01) states.

  16. Development and validation of a rebinner with rigid motion correction for the Siemens PET-MR scanner: Application to a large cohort of [11C]-PIB scans.

    PubMed

    Reilhac, Anthonin; Merida, Ines; Irace, Zacharie; Stephenson, Mary; Weekes, Ashley; Chen, Christopher; Totman, John; Townsend, David W; Fayad, Hadi; Costes, Nicolas

    2018-04-13

    Objective: Head motion occurring during brain PET studies leads to image blurring and to bias in measured local quantities. Our first objective was to implement an accurate list-mode-based rigid motion correction method for PET data acquired with the mMR synchronous Positron Emission Tomography/Magnetic Resonance (PET/MR) scanner. Our second objective was to optimize the correction for [11C]-PIB scans using simulated and actual data with well-controlled motions. Results: An efficient list-mode-based motion correction approach has been implemented, fully optimized, and validated using simulated as well as actual PET data. The average spatial resolution loss induced by inaccuracies in motion parameter estimates and by the rebinning process was estimated to correspond to a 1 mm increase in full width at half maximum (FWHM), with motion parameters estimated directly from the PET data at a temporal frequency of 20 s. The results show that the correction can be safely applied to [11C]-PIB scans, allowing almost complete removal of motion-induced artifacts. The application of the correction method to a large cohort of [11C]-PIB scans led to the following observations: i) more than 21% of the scans were affected by a motion greater than 10 mm (39% for subjects with Mini-Mental State Examination (MMSE) scores below 20); and ii) the correction led to quantitative changes in Alzheimer-specific cortical regions of up to 30%. Conclusion: The rebinner allows accurate motion correction at the cost of minimal resolution reduction. The application of the correction to a large cohort of [11C]-PIB scans confirmed the necessity of systematically correcting for motion to obtain quantitative results. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  17. The Polychromatic Laser Guide Star: the ELP-OA demonstrator at Observatoire de Haute Provence

    NASA Astrophysics Data System (ADS)

    Foy, R.; Chatagnat, M.; Dubet, D.; Éric, P.; Eysseric, J.; Foy, F.-C.; Fusco, T.; Girard, J.; Laloge, A.; Le van Suu, A.; Messaoudi, B.; Perruchot, S.; Richaud, P.; Richaud, Y.; Rondeau, X.; Tallon, M.; Thiébaut, É.; Boër, M.

    2007-07-01

    Correcting the tilt for adaptive optics devices from the laser guide star alone can be done with the polychromatic laser guide star. We report the progress of the first demonstrator implementing this concept, at Observatoire de Haute-Provence. We review the last steps of the feasibility studies, the optimization of the laser parameters, and the studies of the implementation at the OHP 1.52 m telescope, including the beam propagation from the laser room to the mesosphere and the algorithms for tip-tilt measurements.

  18. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function.

    PubMed

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2016-06-01

    MRI-guided interventions demand high frame rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real time to interactively deblur spiral images. Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 min of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. This real-time distortion correction framework will enable the use of these high frame rate imaging methods for MRI-guided interventions. Magn Reson Med 75:2278-2285, 2016. © 2015 Wiley Periodicals, Inc.
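    The trajectory prediction step can be sketched as follows: convolve the nominal gradient waveform with the measured impulse response to estimate the actual gradient, then integrate to obtain the k-space trajectory (k(t) = γ ∫ g dt). This is a simplified time-domain sketch that treats the GIRF as a discrete, dimensionless FIR kernel, an assumption; in practice the GIRF is typically characterized and applied per gradient axis, often in the frequency domain.

```python
import numpy as np

def girf_predicted_kspace(g_nominal, girf, dt, gamma=42.577e6):
    """Predict the k-space trajectory from a nominal gradient waveform.
    g_nominal: nominal gradient samples (T/m), girf: discrete impulse
    response kernel (assumed dimensionless, unit-sum), dt: sample time
    (s), gamma: gyromagnetic ratio (Hz/T). Returns k in cycles/m."""
    # Actual gradient = nominal waveform filtered by the impulse response.
    g_actual = np.convolve(g_nominal, girf)[:len(g_nominal)]
    # k-space position = gamma times the running integral of the gradient.
    return gamma * np.cumsum(g_actual) * dt
```

With a delta-function GIRF the predicted trajectory reduces to the nominal one; a low-pass GIRF reproduces the rounded trajectories caused by gradient system imperfections.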

  19. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function

    PubMed Central

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2015-01-01

    Purpose MRI-guided interventions demand high frame-rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Methods Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real-time to interactively de-blur spiral images. Results Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 minutes of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. Conclusions This real-time distortion correction framework will enable the use of these high frame-rate imaging methods for MRI-guided interventions. PMID:26114951

  20. Radiometric and spectral stray light correction for the portable remote imaging spectrometer (PRISM) coastal ocean sensor

    NASA Astrophysics Data System (ADS)

    Haag, Justin M.; Van Gorp, Byron E.; Mouroulis, Pantazis; Thompson, David R.

    2017-09-01

    The airborne Portable Remote Imaging Spectrometer (PRISM) instrument is based on a fast (F/1.8) Dyson spectrometer operating at 350-1050 nm and a two-mirror telescope combined with a Teledyne HyViSI 6604A detector array. Raw PRISM data contain electronic and optical artifacts that must be removed prior to radiometric calibration. We provide an overview of the process transforming raw digital numbers to calibrated radiance values. Electronic panel artifacts are first corrected using empirical relationships developed from laboratory data. The instrument spectral response functions (SRF) are reconstructed using a measurement-based optimization technique. Removal of SRF effects from the data improves retrieval of true spectra, particularly in the typically low-signal near-ultraviolet and near-infrared regions. As a final step, radiometric calibration is performed using corrected measurements of an object of known radiance. Implementation of the complete calibration procedure maximizes data quality in preparation for subsequent processing steps, such as atmospheric removal and spectral signature classification.

  1. Tailoring the implementation of new biomarkers based on their added predictive value in subgroups of individuals.

    PubMed

    van Giessen, A; Moons, K G M; de Wit, G A; Verschuren, W M M; Boer, J M A; Koffijberg, H

    2015-01-01

    The value of new biomarkers or imaging tests, when added to a prediction model, is currently evaluated using reclassification measures, such as the net reclassification improvement (NRI). However, these measures only provide an estimate of improved reclassification at population level. We present a straightforward approach to characterize subgroups of reclassified individuals in order to tailor implementation of a new prediction model to individuals expected to benefit from it. In a large Dutch population cohort (n = 21,992) we classified individuals to low (< 5%) and high (≥ 5%) fatal cardiovascular disease risk by the Framingham risk score (FRS) and reclassified them based on the systematic coronary risk evaluation (SCORE). Subsequently, we characterized the reclassified individuals and, in case of heterogeneity, applied cluster analysis to identify and characterize subgroups. These characterizations were used to select individuals expected to benefit from implementation of SCORE. Reclassification after applying SCORE in all individuals resulted in an NRI of 5.00% (95% CI [-0.53%; 11.50%]) within the events, 0.06% (95% CI [-0.08%; 0.22%]) within the nonevents, and a total NRI of 0.051 (95% CI [-0.004; 0.116]). Among the correctly downward reclassified individuals cluster analysis identified three subgroups. Using the characterizations of the typically correctly reclassified individuals, implementing SCORE only in individuals expected to benefit (n = 2,707; 12.3%) improved the NRI to 5.32% (95% CI [-0.13%; 12.06%]) within the events, 0.24% (95% CI [0.10%; 0.36%]) within the nonevents, and a total NRI of 0.055 (95% CI [0.001; 0.123]). Overall, the risk levels for individuals reclassified by tailored implementation of SCORE were more accurate. In our empirical example the presented approach successfully characterized subgroups of reclassified individuals that could be used to improve reclassification and reduce implementation burden. In particular, when newly added biomarkers or imaging tests are costly or burdensome, such a tailored implementation strategy may save resources and improve (cost-)effectiveness.
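    The two-category NRI reported above has a simple standard definition: among events, upward reclassification (low to high risk) counts as improvement; among nonevents, downward reclassification does. The sketch below implements that standard measure from parallel boolean lists; the paper additionally reports bootstrap confidence intervals, which are omitted here.

```python
def net_reclassification_improvement(old_high, new_high, event):
    """Two-category NRI. Inputs are parallel boolean sequences:
    old_high/new_high = classified as high risk by the old/new model,
    event = whether the outcome occurred. Returns (NRI_events,
    NRI_nonevents, total NRI)."""
    up_e = down_e = up_n = down_n = n_e = n_n = 0
    for old, new, ev in zip(old_high, new_high, event):
        if ev:
            n_e += 1
            up_e += (not old) and new      # event moved up: improvement
            down_e += old and (not new)    # event moved down: worsening
        else:
            n_n += 1
            up_n += (not old) and new      # nonevent moved up: worsening
            down_n += old and (not new)    # nonevent moved down: improvement
    nri_events = (up_e - down_e) / n_e
    nri_nonevents = (down_n - up_n) / n_n
    return nri_events, nri_nonevents, nri_events + nri_nonevents
```

The paper's point is that this population-level number hides which subgroups drive it, hence the cluster-based characterization of reclassified individuals.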

  2. 77 FR 61027 - Notice of Lodging of Proposed Consent Decree Under the Clean Water Act and Safe Drinking Water Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... Decree resolves alleged violations of the Clean Water Act and Safe Drinking Water Act at mobile home... provide drinking water at a number of its mobile home parks and illegally discharged sewage, failed to... environmental audits at each mobile home park, implementing corrective measures, conducting regular inspections...

  3. Office of Student Financial Aid Quality Improvement Program: Design and Implementation Plan.

    ERIC Educational Resources Information Center

    Advanced Technology, Inc., Reston, VA.

    The purpose and direction of the quality improvement program of the U.S. Department of Education's Office of Student Financial Aid (OSFA) are described. The improvement program was designed to develop a systematic approach to identify, measure, and correct errors in the student aid delivery system. Information is provided on the general approach…

  4. 78 FR 50394 - Fisheries of the Caribbean, Gulf of Mexico, and South Atlantic; Snapper-Grouper Fishery off the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... published in the Federal Register on August 2, 2013, regarding the implementation of management measures described in Amendment 22 to the Fishery Management Plan for the Snapper-Grouper Fishery in the South... to ensure there is adequate time to submit complete responses. DATES: This correction is effective...

  5. An Accurate Scatter Measurement and Correction Technique for Cone Beam Breast CT Imaging Using Scanning Sampled Measurement (SSM) Technique.

    PubMed

    Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu

    2006-02-28

    We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, with the beads set about ~1 cm apart from each other and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images were acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars were also placed at the phantom edge to produce better scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment has demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and the restored projection image data. The resultant scatter-corrected projection image data yielded elevated CT numbers and largely reduced cupping effects.
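    The interpolate-and-subtract core of the SSM approach can be sketched in one dimension per detector row: interpolate the scatter values sampled behind the beam blockers across the full projection width, then subtract the result. The function and its 1-D form are illustrative assumptions; the experiment used a 2-D bead array with ~1 cm spacing and restored the blocked pixels from adjacent views.

```python
import numpy as np

def ssm_scatter_correct(projection, sample_cols, scatter_samples):
    """Subtract an interpolated scatter estimate from a projection row.
    sample_cols: detector columns behind lead beads/bars,
    scatter_samples: signal measured in those shadows (pure scatter)."""
    cols = np.arange(projection.shape[-1])
    scatter = np.interp(cols, sample_cols, scatter_samples)
    return projection - scatter
```

Because scatter varies smoothly across the detector, sparse ~1 cm sampling suffices for the interpolation, with the edge bars anchoring the estimate where it changes fastest.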

  6. Adjustment of spatio-temporal precipitation patterns in a high Alpine environment

    NASA Astrophysics Data System (ADS)

    Herrnegger, Mathew; Senoner, Tobias; Nachtnebel, Hans-Peter

    2018-01-01

This contribution presents a method for correcting the spatial and temporal distribution of precipitation fields in a mountainous environment. The approach is applied within a flood forecasting model in the Upper Enns catchment in the Central Austrian Alps. Precipitation exhibits a large spatio-temporal variability in Alpine areas. Additionally, the density of the monitoring network is low and measurements are subject to major errors. This can lead to significant deficits in water balance estimation and stream flow simulations, e.g. for flood forecasting models. Therefore, precipitation correction factors are frequently applied. For the presented study, a multiplicative, stepwise linear correction model is implemented in the rainfall-runoff model COSERO to adjust the precipitation pattern as a function of elevation. To account for the local meteorological conditions, the correction model is derived for two elevation zones: (1) valley floors up to 2000 m a.s.l. and (2) from 2000 m a.s.l. to the mountain peaks. Measurement errors also depend on the precipitation type, with higher magnitudes in winter months during snowfall. Therefore, separate correction factors for winter and summer months are additionally estimated. Significant improvements in the runoff simulations could be achieved, not only in the long-term water balance simulation and the overall model performance, but also in the simulation of flood peaks.
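
    A multiplicative, stepwise correction of this kind reduces to looking up a factor by elevation zone and season. The sketch below uses invented factor values and an assumed winter-month definition purely for illustration; the study estimates its factors by calibration against runoff:

```python
# Illustrative multiplicative correction factors, split by elevation zone
# and season (values invented, not from the study).
FACTORS = {
    ("below_2000m", "summer"): 1.05,
    ("below_2000m", "winter"): 1.15,
    ("above_2000m", "summer"): 1.10,
    ("above_2000m", "winter"): 1.30,  # snowfall undercatch is largest here
}

def correct_precip(p_mm, elevation_m, month):
    """Apply the zone/season correction factor to a measured depth."""
    zone = "above_2000m" if elevation_m > 2000 else "below_2000m"
    season = "winter" if month in (11, 12, 1, 2, 3, 4) else "summer"
    return p_mm * FACTORS[(zone, season)]
```

    In the real model the corrected fields feed COSERO; here the function just illustrates that higher elevations and winter months receive the larger multipliers.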

  7. Adaptable gene-specific dye bias correction for two-channel DNA microarrays.

    PubMed

    Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank C P

    2009-01-01

DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available.
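
    The heart of the idea — one probe-specific bias profile, scaled up or down by a per-hybridization strength — can be illustrated as a rank-1 decomposition on synthetic self-self data. This is a simplified stand-in (an SVD fit on invented data, not GASSCO's sequence-based estimation), meant only to show how a shared bias, modulated per slide, is removed:

```python
import numpy as np

rng = np.random.default_rng(0)
n_probes, n_slides = 500, 6
probe_bias = rng.normal(0, 1, n_probes)                     # gene-specific dye bias
slide_strength = np.array([0.2, 0.5, 1.0, 1.5, 0.8, 1.2])   # per-hybridization modulation

# Self-self comparison: true log-ratios are zero, so the observed matrix is
# bias (rank-1: probe_bias x slide_strength) plus measurement noise.
M = np.outer(probe_bias, slide_strength) + rng.normal(0, 0.05, (n_probes, n_slides))

# Rank-1 fit via SVD, then subtract the fitted bias from each slide.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
bias_fit = S[0] * np.outer(U[:, 0], Vt[0])
M_corr = M - bias_fit
```

    After removing the dominant rank-1 component, the residual log-ratios shrink to the noise level, mirroring how a slide-modulated probe bias is cancelled.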

  8. Adaptable gene-specific dye bias correction for two-channel DNA microarrays

    PubMed Central

    Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank CP

    2009-01-01

DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available. PMID:19401678

  9. Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs

    PubMed Central

    Esmonde-White, Francis W. L.; Esmonde-White, Karen A.; Morris, Michael D.

    2010-01-01

    Projective transformation is a mathematical correction (implemented in software) used in the remote imaging field to produce distortion-free images. We present the application of projective transformation to correct minor alignment and astigmatism distortions that are inherent in dispersive spectrographs. Patterned white-light images and neon emission spectra were used to produce registration points for the transformation. Raman transects collected on microscopy and fiber-optic systems were corrected using established methods and compared with the same transects corrected using the projective transformation. Even minor distortions have a significant effect on reproducibility and apparent fluorescence background complexity. Simulated Raman spectra were used to optimize the projective transformation algorithm. We demonstrate that the projective transformation reduced the apparent fluorescent background complexity and improved reproducibility of measured parameters of Raman spectra. Distortion correction using a projective transformation provides a major advantage in reducing the background fluorescence complexity even in instrumentation where slit-image distortions and camera rotation were minimized using manual or mechanical means. We expect these advantages should be readily applicable to other spectroscopic modalities using dispersive imaging spectrographs. PMID:21211158
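
    A projective transformation is the 3×3 homography familiar from remote imaging; given registration point pairs (e.g. from a patterned white-light image and neon emission lines), it can be fitted with the direct linear transform. A minimal sketch on synthetic points — the distortion matrix below is invented, not a measured spectrograph distortion:

```python
import numpy as np

def fit_homography(src, dst):
    # Direct linear transform: solve for the 3x3 matrix H mapping src -> dst
    # from >= 4 registration point pairs (homogeneous least squares via SVD).
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Ideal registration grid and its (synthetically) distorted image positions.
ideal = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
H_true = np.array([[1.0, 0.02, 3.0], [0.01, 0.98, -2.0], [1e-4, 2e-5, 1.0]])
distorted = apply_homography(H_true, ideal)

# Fit the correcting transform and undistort.
H = fit_homography(distorted, ideal)
undistorted = apply_homography(H, distorted)
```

    In practice the fitted H would be used to resample the whole CCD frame, mapping slit-image curvature and camera rotation back to a rectilinear wavelength/position grid.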

  10. Laser Measurements Based for Volumetric Accuracy Improvement of Multi-axis Systems

    NASA Astrophysics Data System (ADS)

    Vladimir, Sokolov; Konstantin, Basalaev

The paper describes a newly developed approach to compensating the geometric errors of CNC-controlled multi-axis systems, based on an optimal error-correction strategy. Multi-axis CNC-controlled systems - machine tools and CMMs - are the basis of the modern engineering industry. The similar design principles of technological and measurement equipment allow similar approaches to precision management. Approaches based on geometric error compensation are widely used at present. The paper describes a system for compensating the geometric errors of multi-axis equipment based on the new approach. The hardware basis of the developed system is a multi-function laser interferometer. The principles of the system's implementation, measurement results, and a simulation of the system's functioning are described. The effectiveness of applying the described principles to multi-axis equipment of different sizes and purposes, for different machining directions and zones within the workspace, is presented. The concept of an optimal correction strategy is introduced and dynamic accuracy control is proposed.

  11. SWSA 6 interim corrective measures environmental monitoring: FY 1991 results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clapp, R.B.; Marshall, D.S.

    1992-06-01

In 1988, interim corrective measures (ICMs) were implemented at Solid Waste Storage Area (SWSA) 6 at Oak Ridge National Laboratory. The SWSA 6 site was regulated under the Resource Conservation and Recovery Act (RCRA). The ICMs consist of eight large high-density polyethylene sheets placed as temporary caps to cover trenches known to contain RCRA-regulated materials. Environmental monitoring for FY 1991 consisted of collecting water levels at 13 groundwater wells outside the capped areas and 44 wells in or near the capped areas in order to identify any significant loss of hydrologic isolation of the wastes. Past annual reports show that the caps are only partially effective in keeping the waste trenches dry and that many trenches consistently or intermittently contain water.

  12. SWSA 6 interim corrective measures environmental monitoring: FY 1991 results. Environmental Restoration Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clapp, R.B.; Marshall, D.S.

    1992-06-01

In 1988, interim corrective measures (ICMs) were implemented at Solid Waste Storage Area (SWSA) 6 at Oak Ridge National Laboratory. The SWSA 6 site was regulated under the Resource Conservation and Recovery Act (RCRA). The ICMs consist of eight large high-density polyethylene sheets placed as temporary caps to cover trenches known to contain RCRA-regulated materials. Environmental monitoring for FY 1991 consisted of collecting water levels at 13 groundwater wells outside the capped areas and 44 wells in or near the capped areas in order to identify any significant loss of hydrologic isolation of the wastes. Past annual reports show that the caps are only partially effective in keeping the waste trenches dry and that many trenches consistently or intermittently contain water.

  13. A Kalman Filter for Mass Property and Thrust Identification of the Spin-Stabilized Magnetospheric Multiscale Formation

    NASA Technical Reports Server (NTRS)

    Queen, Steven Z.

    2015-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four identically instrumented, spin-stabilized observatories, elliptically orbiting the Earth in a tetrahedron formation. For the operational success of the mission, on-board systems must be able to deliver high-precision orbital adjustment maneuvers. On MMS, this is accomplished using feedback from on-board star sensors in tandem with accelerometers whose measurements are dynamically corrected for errors associated with a spinning platform. In order to determine the required corrections to the measured acceleration, precise estimates of attitude, rate, and mass-properties are necessary. To this end, both an on-board and ground-based Multiplicative Extended Kalman Filter (MEKF) were formulated and implemented in order to estimate the dynamic and quasi-static properties of the spacecraft.

  14. Optical coherence tomography with a 2.8-mm beam diameter and sensorless defocus and astigmatism correction

    NASA Astrophysics Data System (ADS)

    Reddikumar, Maddipatla; Tanabe, Ayano; Hashimoto, Nobuyuki; Cense, Barry

    2017-02-01

    An optical coherence tomography (OCT) system with a 2.8-mm beam diameter is presented. Sensorless defocus correction can be performed with a Badal optometer and astigmatism correction with a liquid crystal device. OCT B-scans were used in an image-based optimization algorithm for aberration correction. Defocus can be corrected from -4.3 D to +4.3 D and vertical and oblique astigmatism from -2.5 D to +2.5 D. A contrast gain of 6.9 times was measured after aberration correction. In comparison with a 1.3-mm beam diameter OCT system, this concept achieved a 3.7-dB gain in dynamic range on a model retina. Both systems were used to image the retina of a human subject. As the correction of the liquid crystal device can take more than 60 s, the subject's spectacle prescription was adopted instead. This resulted in a 2.5 times smaller speckle size compared with the standard OCT system. The liquid crystal device for astigmatism correction does not need a high-voltage amplifier and can be operated at 5 V. The correction device is small (9 mm×30 mm×38 mm) and can easily be implemented in existing designs for OCT.

  15. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    PubMed

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
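
    The model-based bootstrap idea — re-simulate surveillance data from the fitted model, re-estimate, then use the replicates both for a percentile interval and a bias correction — can be sketched on a toy model. Everything below (the simulation model, the deliberately biased estimator, the numbers) is invented for illustration; it is not the Karon et al. incidence model:

```python
import random
import statistics

random.seed(1)

# Toy surveillance model: each new diagnosis tests "recent" with prob p;
# the incidence-like estimate is a nonlinear (hence finite-sample biased)
# transform of the recent fraction. All of this is illustrative only.
def simulate(n, p):
    return sum(random.random() < p for _ in range(n))

def estimate(recent, n):
    f = recent / n
    return f / (1 - f)            # odds-like transform: biased for finite n

n, p_true = 200, 0.2
recent_obs = simulate(n, p_true)
theta_hat = estimate(recent_obs, n)

# Model-based bootstrap: re-simulate from the *fitted* model, re-estimate.
p_fit = recent_obs / n
boots = [estimate(simulate(n, p_fit), n) for _ in range(2000)]

bias = statistics.mean(boots) - theta_hat
theta_corrected = theta_hat - bias     # bias-corrected point estimate
boots.sort()
ci = (boots[49], boots[1949])          # ~95% percentile interval
```

    The bootstrap replicates capture both the sampling spread (the interval) and the systematic offset of the estimator (the bias term), which is exactly the pairing the paper exploits.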

  16. A geometric model of a V-slit Sun sensor correcting for spacecraft wobble

    NASA Technical Reports Server (NTRS)

    Mcmartin, W. P.; Gambhir, S. S.

    1994-01-01

    A V-Slit sun sensor is body-mounted on a spin-stabilized spacecraft. During injection from a parking or transfer orbit to some final orbit, the spacecraft may not be dynamically balanced. This may result in wobble about the spacecraft spin axis as the spin axis may not be aligned with the spacecraft's axis of symmetry. While the widely used models in Spacecraft Attitude Determination and Control, edited by Wertz, correct for separation, elevation, and azimuthal mounting biases, spacecraft wobble is not taken into consideration. A geometric approach is used to develop a method for measurement of the sun angle which corrects for the magnitude and phase of spacecraft wobble. The algorithm was implemented using a set of standard mathematical routines for spherical geometry on a unit sphere.

  17. A Q-Band Free-Space Characterization of Carbon Nanotube Composites

    PubMed Central

    Hassan, Ahmed M.; Garboczi, Edward J.

    2016-01-01

    We present a free-space measurement technique for non-destructive non-contact electrical and dielectric characterization of nano-carbon composites in the Q-band frequency range of 30 GHz to 50 GHz. The experimental system and error correction model accurately reconstruct the conductivity of composite materials that are either thicker than the wave penetration depth, and therefore exhibit negligible microwave transmission (less than −40 dB), or thinner than the wave penetration depth and, therefore, exhibit significant microwave transmission. This error correction model implements a fixed wave propagation distance between antennas and corrects the complex scattering parameters of the specimen from two references, an air slab having geometrical propagation length equal to that of the specimen under test, and a metallic conductor, such as an aluminum plate. Experimental results were validated by reconstructing the relative dielectric permittivity of known dielectric materials and then used to determine the conductivity of nano-carbon composite laminates. This error correction model can simplify routine characterization of thin conducting laminates to just one measurement of scattering parameters, making the method attractive for research, development, and for quality control in the manufacturing environment. PMID:28057959
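
    The two-reference idea — an air slab as a through standard and a metal plate as a reflect standard — amounts to a response calibration in which the fixed free-space path term divides out. A minimal complex-arithmetic sketch; the path responses and specimen values are invented, and the paper's full error model additionally fixes the propagation distance between the antennas:

```python
import cmath

# Unknown fixed path responses for transmission and reflection (invented
# values standing in for antennas, cables, and the air gap).
PATH_T = 0.7 * cmath.exp(-1j * 2.4)
PATH_R = 0.6 * cmath.exp(+1j * 1.1)

def raw_s21(sample_s21):            # raw VNA transmission reading
    return PATH_T * sample_s21

def raw_s11(sample_s11):            # raw VNA reflection reading
    return PATH_R * sample_s11

s21_true = 0.3 * cmath.exp(-1j * 0.8)   # specimen under test (invented)
s11_true = 0.5 * cmath.exp(+1j * 0.4)

# Reference 1: air slab of equal propagation length -> ideal transmission 1.
s21_corrected = raw_s21(s21_true) / raw_s21(1.0)
# Reference 2: metal plate (e.g. aluminum) -> ideal reflection -1.
s11_corrected = raw_s11(s11_true) / raw_s11(-1.0) * -1.0
```

    Dividing by the reference reading cancels the path factor exactly, which is why one specimen measurement plus the two stored references suffices for routine characterization.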

  18. [Measurement of customer satisfaction and participation of citizens in improving the quality of healthcare services.].

    PubMed

    Degrassi, Flori; Sopranzi, Cristina; Leto, Antonella; Amato, Simona; D'Urso, Antonio

    2009-01-01

    Managing quality in health care whilst ensuring equity is a fundamental aspect of the provision of services by healthcare organizations. Measuring perceived quality of care is an important tool for evaluating the quality of healthcare delivery in that it allows the implementation of corrective actions to meet the healthcare needs of patients. The Rome B (ASL RMB) local health authority adopted the UNI EN 10006:2006 norms as a management tool, therefore introducing the evaluation of customer satisfaction as an opportunity to involve users in the creation of quality healthcare services with and for the citizens. This paper presents the activities implemented and the results achieved with regards to shared and integrated continuous improvement of services.

  19. The digital compensation technology system for automotive pressure sensor

    NASA Astrophysics Data System (ADS)

    Guo, Bin; Li, Quanling; Lu, Yi; Luo, Zai

    2011-05-01

A piezoresistive pressure sensor is made of semiconductor silicon and exploits the piezoresistive effect, which gives it many useful characteristics. Because of the temperature sensitivity of semiconductors, however, the performance of a silicon sensor also changes with temperature, and a pressure sensor free of temperature drift cannot be produced at present. This paper briefly describes the principles and function of the pressure sensor and the various types of compensation methods, and presents a detailed digital compensation scheme for an automotive pressure sensor. A mixed analog-digital signal conditioning approach is adopted, based on the signal conditioning chip MAX1452, the AVR microcontroller ATMEGA128 and other devices. The hardware circuits of the digital pressure sensor and of the microcontroller were designed, together with the microcontroller software: the sensor hardware circuit implements the correction and compensation of the sensor, the microcontroller hardware circuit controls that correction and compensation, and the microcontroller software implements the compensation arithmetic. Finally, the sensor output was measured and compared with uncompensated data. The results indicate that the accuracy of the compensated sensor output is clearly better than that of the uncompensated sensor, improving the compensation precision as well as the stability of the pressure sensor.
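
    Digital compensation of offset and span drift reduces, in essence, to fitting a raw-reading-to-pressure map over a calibration grid of pressures and temperatures — arithmetic a microcontroller can evaluate at run time. The bridge model and coefficients below are invented for illustration; this is not MAX1452 firmware:

```python
import numpy as np

# Invented bridge model: offset and span both drift with temperature.
def raw_output(p_kpa, t_c):
    return (0.5 + 0.002 * t_c) + (0.01 - 1e-5 * t_c) * p_kpa

# Calibration grid: raw readings at known pressures and temperatures.
cal = [(p, t, raw_output(p, t)) for p in (0, 50, 100, 150, 200)
       for t in (-20, 0, 25, 50, 85)]

# Fit pressure ~ a0 + a1*t + a2*v + a3*t*v (per-temperature offset/gain
# correction, the essence of digital compensation).
A = np.array([[1.0, t, v, t * v] for _, t, v in cal])
b = np.array([float(p) for p, _, _ in cal])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

def compensated_pressure(v, t_c):
    return coef @ [1.0, t_c, v, t_c * v]
```

    On real hardware the four coefficients would be burned into non-volatile memory at calibration time; the residual error here comes from the bilinear model only approximating the 1/gain(t) term.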

  20. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
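
    The residual-tuning idea runs in parallel with the filter: a running average of the squared innovations estimates the innovation variance S = P + R, from which the measurement noise R can be backed out on-line. A scalar toy filter (all noise levels and the tuning window invented) shows the mechanics:

```python
import random

random.seed(3)

# True system: a constant scalar state observed with noise variance R_true.
x_true, R_true, Q = 10.0, 4.0, 0.01

x, P = 0.0, 100.0      # filter state and covariance
R_est = 1e-6           # deliberately wrong initial measurement-noise guess
msq = None             # running average of squared innovations

for _ in range(2000):
    z = x_true + random.gauss(0.0, R_true ** 0.5)
    P += Q                          # predict (state modeled as near-constant)
    nu = z - x                      # innovation (measurement residual)
    # Residual tuning, sequentially alongside the filter: E[nu^2] = P + R,
    # so a running average of nu^2 minus P estimates R.
    msq = nu * nu if msq is None else 0.99 * msq + 0.01 * nu * nu
    R_est = max(msq - P, 1e-6)
    K = P / (P + R_est)             # update using the tuned R
    x += K * nu
    P *= 1.0 - K
```

    A mismodeled R makes the innovations non-white and wrongly scaled; the running estimate pulls R_est toward the value that makes the residual statistics consistent, which is the whiteness argument the abstract describes.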

  1. [Tracking study to improve basic academic ability in chemistry for freshmen].

    PubMed

    Sato, Atsuko; Morone, Mieko; Azuma, Yutaka

    2010-08-01

The aims of this study were to assess the basic academic ability of freshmen with regard to chemistry and implement suitable educational guidance measures. At Tohoku Pharmaceutical University, basic academic ability examinations are conducted in chemistry for freshmen immediately after entrance into the college. From 2003 to 2009, the examination was conducted using the same questions, and the secular changes in the mean percentage of correct responses were statistically analyzed. An experience survey was also conducted on 2007 and 2009 freshmen regarding chemical experiments at senior high school. Analysis of the basic academic ability examinations revealed a significant decrease in the mean percentage of correct responses after 2007. With regard to the answers for each question, there was a significant decrease in the percentage of correct answers for approximately 80% of questions. In particular, a marked decrease was observed for calculation questions involving percentages. A significant decrease was also observed in the number of students who had experience with chemical experiments in high school. However, notable results have been achieved through the implementation of practice incorporating calculation problems in order to improve calculation ability. Learning of chemistry and a lack of experimental experience in high school may be contributory factors in the decrease in chemistry academic ability. In consideration of the professional ability demanded of pharmacists, the decrease in calculation ability should be regarded as a serious issue and suitable measures for improving calculation ability are urgently required.

  2. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

Iterative processing solutions, comprising multiple cycles of material removal and measurement, can achieve higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. The remaining error sources are the measurement uncertainty and the repeatability of the material-removal process, including clamping errors. Because it involves no processing forces, process fluids or tool wear, pulsed-laser ablation has proven highly repeatable and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This enables efficient iterative processing that is precise, applicable to all tool materials including diamond, and eliminates clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine with a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within a 2 μm diameter tolerance.
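
    The measure-ablate loop converges geometrically: each cycle removes a fraction of the measured excess until the deviation lies inside the tolerance band. The sketch below is a toy simulation with invented removal gain and measurement noise, not the machine's control software:

```python
import random

random.seed(7)

TOL_UM = 2.0          # diameter tolerance reported in the paper
REMOVAL_GAIN = 0.7    # fraction of commanded depth actually removed (assumed)
MEAS_NOISE_UM = 0.2   # measurement repeatability (assumed)

deviation_um = 25.0   # initial oversize of the micro-tool profile (invented)
cycles = 0
while True:
    # Measure on the optical machine (deviation plus repeatability noise).
    measured = deviation_um + random.uniform(-MEAS_NOISE_UM, MEAS_NOISE_UM)
    if abs(measured) <= TOL_UM:
        break
    # Ablate the measured excess; imperfect removal leaves a residual.
    deviation_um -= REMOVAL_GAIN * measured
    cycles += 1
```

    With a 0.7 removal gain the deviation shrinks by roughly 70% per cycle, so even a 25 μm initial error reaches the 2 μm band in a handful of iterations — and because measurement and ablation happen in one clamping, no re-fixturing error is re-introduced between cycles.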

  3. 77 FR 17334 - Approval and Promulgation of Air Quality Implementation Plans; State of Nevada; Regional Haze...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-26

    ... Progress Goal B. Long-Term Strategy C. BART for SO 2 and PM 10 at Reid Gardner D. Corrections to EPA's... a long- term strategy with enforceable measures to ensure reasonable progress toward achieving the... corresponding emission limits and schedules of compliance for NO X at RGGS in the SIP's long-term strategy...

  4. Pentacam Scheimpflug quantitative imaging of the crystalline lens and intraocular lens.

    PubMed

    Rosales, Patricia; Marcos, Susana

    2009-05-01

To implement geometrical and optical distortion correction methods for anterior segment Scheimpflug images obtained with a commercially available system (Pentacam, Oculus Optikgeräte GmbH). Ray tracing algorithms were implemented to obtain corrected ocular surface geometry from the original images captured by the Pentacam's CCD camera. As details of the optical layout were not fully provided by the manufacturer, an iterative procedure (based on imaging of calibrated spheres) was developed to estimate the camera lens specifications. The correction procedure was tested on Scheimpflug images of a physical water cell model eye (with polymethylmethacrylate cornea and a commercial IOL of known dimensions) and of a normal human eye previously measured with a Scheimpflug camera corrected for optical and geometrical distortion (Topcon SL-45 [Topcon Medical Systems Inc] from the Vrije University, Amsterdam, Holland). Uncorrected Scheimpflug images show flatter surfaces and thinner lenses than in reality. The application of geometrical and optical distortion correction algorithms improves the accuracy of the estimated anterior lens radii of curvature by 30% to 40% and of the estimated posterior lens by 50% to 100%. The average error in the retrieved radii was 0.37 and 0.46 mm for the anterior and posterior lens radii of curvature, respectively, and 0.048 mm for lens thickness. The Pentacam Scheimpflug system can be used to obtain quantitative information on the geometry of the crystalline lens, provided that geometrical and optical distortion correction algorithms are applied, within the accuracy of state-of-the-art phakometry and biometry. The techniques could improve with exact knowledge of the technical specifications of the instrument, improved edge detection algorithms, consideration of aspheric and non-rotationally symmetrical surfaces, and introduction of a crystalline gradient index.

  5. First Industrial Tests of a Drum Monitor Matrix Correction for the Fissile Mass Measurement in Large Volume Historic Metallic Residues with the Differential Die-away Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, R.; Passard, C.; Perot, B.

    2015-07-01

The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the next years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix, compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (NML) of CEA Cadarache has studied a matrix-effect correction method based on a drum monitor (a ³He proportional counter inside the measurement cavity). A previous study performed with the NML R&D measurement cell PROMETHEE 6 showed the feasibility of the method and the capability of MCNP simulations to correctly reproduce experimental data and to assess the performance of the proposed correction. The next step of the study focused on assessing the performance of the method on the industrial station using numerical simulation. A correlation between the prompt calibration coefficient of the ²³⁹Pu signal and the drum monitor signal was established using the MCNPX computer code and a fractional factorial experimental design composed of matrix parameters representative of the variation range of historical waste. Calculations showed that the method allows the assay of the fissile mass with an uncertainty within a factor of 2, whereas the uncorrected matrix effect spans two decades. In this paper, we present and discuss the first experimental tests on the industrial ACC measurement system. A calculation-versus-experiment benchmark was carried out by performing dedicated calibration measurements with a representative drum and ²³⁵U samples. The preliminary comparison between calculation and experiment shows satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)
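
    The monitor-based correction boils down to calibrating the prompt calibration coefficient as a function of the monitor signal over a design of matrix configurations, then evaluating that fit at each assayed drum's monitor reading. The sketch below uses an invented power-law relation and invented numbers; it is not the CEA calibration data:

```python
import math

# Hypothetical (monitor signal M, calibration coefficient CC) pairs from a
# factorial design of matrix configurations; here CC = 100 * M**2 exactly.
design = [(1.0, 100.0), (0.8, 64.0), (0.6, 36.0), (0.5, 25.0), (0.4, 16.0)]

# Power-law fit: log CC = a + b * log M (ordinary least squares).
n = len(design)
lx = [math.log(m) for m, _ in design]
ly = [math.log(c) for _, c in design]
b = (n * sum(x * y for x, y in zip(lx, ly)) - sum(lx) * sum(ly)) \
    / (n * sum(x * x for x in lx) - sum(lx) ** 2)
a = (sum(ly) - b * sum(lx)) / n

def cc_of_monitor(m):
    return math.exp(a + b * math.log(m))

# Correct a (hypothetical) measurement: prompt signal S, monitor reading M.
S, M = 420.0, 0.7
mass_g = S / cc_of_monitor(M)
```

    Without the monitor, a single fixed CC would have to be assumed and the moderating matrix would bias the assay; interpolating CC from the monitor reading is what compresses the two-decade matrix effect down to a factor-of-2 uncertainty.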

  6. Particle identification using the time-over-threshold measurements in straw tube detectors

    NASA Astrophysics Data System (ADS)

    Jowzaee, S.; Fioravanti, E.; Gianotti, P.; Idzik, M.; Korcyl, G.; Palka, M.; Przyborowski, D.; Pysz, K.; Ritman, J.; Salabura, P.; Savrie, M.; Smyrski, J.; Strzempek, P.; Wintz, P.

    2013-08-01

    The identification of charged particles based on energy losses in straw tube detectors has been simulated. The response of a new front-end chip developed for the PANDA straw tube tracker was implemented in the simulations and corrections for track distance to sense wire were included. Separation power for p - K, p - π and K - π pairs obtained using the time-over-threshold technique was compared with the one based on the measurement of collected charge.

  7. A TDM link with channel coding and digital voice.

    NASA Technical Reports Server (NTRS)

    Jones, M. W.; Tu, K.; Harton, P. L.

    1972-01-01

    The features of a TDM (time-division multiplexed) link model are described. A PCM telemetry sequence was coded for error correction and multiplexed with a digitized voice channel. An all-digital implementation of a variable-slope delta modulation algorithm was used to digitize the voice channel. The results of extensive testing are reported. The measured coding gain and the system performance over a Gaussian channel are compared with theoretical predictions and computer simulations. Word intelligibility scores are reported as a measure of voice channel performance.
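
    Variable-slope delta modulation transmits one bit per sample; both ends adapt the step size from the bit stream itself, so the decoder needs no side information. A compact all-digital sketch using a simplified two-bit slope rule (all parameter values invented, not the paper's design):

```python
import math

def cvsd_encode(samples, min_step=0.01, max_step=0.5, gain=1.2):
    """Variable-slope delta modulation: the step grows while output bits
    repeat (slope overload) and shrinks when they alternate."""
    bits, est, step, last = [], 0.0, min_step, None
    for s in samples:
        bit = 1 if s >= est else 0
        step = min(step * gain, max_step) if bit == last else max(step / gain, min_step)
        est += step if bit else -step
        bits.append(bit)
        last = bit
    return bits

def cvsd_decode(bits, min_step=0.01, max_step=0.5, gain=1.2):
    """Mirror the encoder's adaptation to rebuild the waveform from bits."""
    out, est, step, last = [], 0.0, min_step, None
    for bit in bits:
        step = min(step * gain, max_step) if bit == last else max(step / gain, min_step)
        est += step if bit else -step
        out.append(est)
        last = bit
    return out

voice = [math.sin(2 * math.pi * 3 * t / 200) for t in range(200)]
rec = cvsd_decode(cvsd_encode(voice))
```

    Because the decoder replays exactly the step adaptation the encoder used, the reconstruction error is limited to granular noise on flat segments and a brief slope-overload transient on steep ones.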

  8. dropEst: pipeline for accurate estimation of molecular counts in droplet-based single-cell RNA-seq experiments.

    PubMed

    Petukhov, Viktor; Guo, Jimin; Baryawno, Ninib; Severe, Nicolas; Scadden, David T; Samsonova, Maria G; Kharchenko, Peter V

    2018-06-19

    Recent single-cell RNA-seq protocols based on droplet microfluidics use massively multiplexed barcoding to enable simultaneous measurements of transcriptomes for thousands of individual cells. The increasing complexity of such data creates challenges for subsequent computational processing and troubleshooting of these experiments, with few software options currently available. Here, we describe a flexible pipeline for processing droplet-based transcriptome data that implements barcode corrections, classification of cell quality, and diagnostic information about the droplet libraries. We introduce advanced methods for correcting composition bias and sequencing errors affecting cellular and molecular barcodes to provide more accurate estimates of molecular counts in individual cells.
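
    One ingredient of barcode error correction can be shown in isolation: merge a rare observed barcode into a much more abundant barcode one substitution away, on the grounds that it is a likely sequencing error. This is a toy sketch on invented counts, not the dropEst pipeline (which also models composition bias and UMI collisions):

```python
from collections import Counter

def hamming1(a, b):
    """True if the strings differ at exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

# Invented observed cell-barcode counts.
counts = Counter({"ACGT": 1000, "ACGA": 3, "TTTT": 800, "TTAT": 2, "GGGG": 500})

ABUNDANT = 100          # threshold separating real cells from error barcodes
majors = [bc for bc, c in counts.items() if c >= ABUNDANT]

corrected = Counter()
for bc, c in counts.items():
    target = None
    if c < ABUNDANT:
        # reassign a rare barcode to the first abundant barcode 1 edit away
        target = next((m for m in majors if hamming1(bc, m)), None)
    corrected[target or bc] += c
```

    After correction the error barcodes' reads are folded back into their parent cells, which is what rescues molecular counts that would otherwise be scattered across phantom cells.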

  9. Improvements in dose calculation accuracy for small off-axis targets in high dose per fraction tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding

    2012-08-15

Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling', involved tripling the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis, receiving up to 21 Gy.
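
    Why tripling the angular sampling helps can be seen with a toy quadrature argument: the dose at a point is an integral over gantry angle, and for a small target far off-axis the integrand is sharply peaked, so a coarse angular grid misestimates it. All geometry and numbers below are invented for illustration; this is not the TomoTherapy dose engine:

```python
import math

def dose(point_r_cm, n_angles, beam_sigma_cm=0.5):
    """Average over gantry angles of a narrow-Gaussian beamlet aimed at the
    isocenter; the point sits point_r_cm off-axis (toy geometry)."""
    total = 0.0
    for k in range(n_angles):
        a = 2 * math.pi * k / n_angles
        d = point_r_cm * abs(math.sin(a))   # distance from point to beam line
        total += math.exp(-d * d / (2 * beam_sigma_cm ** 2))
    return total / n_angles

ref = dose(16.5, 5001)                      # fine angular grid as reference
err_coarse = abs(dose(16.5, 51) - ref) / ref
err_super = abs(dose(16.5, 3 * 51) - ref) / ref   # "super sampled": 3x angles
```

    At 16.5 cm off-axis the beamlet sweeps past the point in a fraction of the coarse angular spacing, so the coarse sum misses part of the peak; tripling the angle count resolves it and the relative error collapses.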

  10. Better compliance and better tolerance in relation to a well-conducted introduction to rub-in hand disinfection.

    PubMed

    Girard, R; Amazian, K; Fabry, J

    2001-02-01

    The aim of the study was to demonstrate that the introduction of rub-in hand disinfection (RHD) in hospital units, with the implementation of suitable equipment, the drafting of specific protocols, and the training of users, improved compliance with hand disinfection and the tolerance of users' hands. In four hospital units not previously using RHD, an external investigator conducted two identical studies in order to measure the rate of compliance with, and the quality of, disinfection practices [rate of adapted (i.e., appropriate) procedures, rate of correct (i.e., properly performed) procedures, rate of adapted and correct procedures carried out] and to assess the state of hands (clinical scores of dryness and irritation, measurement of hydration with a corneometer). Between the two studies, the units were equipped with dispensers for RHD products and staff were trained. Compliance improved from 62.2 to 66.5%, and quality improved (rate of adapted procedures from 66.8% to 84.3%, P ≤ 10⁻⁶; rate of correct procedures from 11.1% to 28.9%, P ≤ 10⁻⁸; rate of adapted and correct procedures from 6.0 to 17.8%, P ≤ 10⁻⁸). Tolerance improved significantly (P ≤ 10⁻²) for clinical dryness and irritation scores, although not significantly for measurements using a corneometer. This study shows the benefit of introducing RHD with technical and educational support. Copyright 2001 The Hospital Infection Society.

  11. An Analysis of Offset, Gain, and Phase Corrections in Analog to Digital Converters

    NASA Astrophysics Data System (ADS)

    Cody, Devin; Ford, John

    2015-01-01

    Many high-speed analog to digital converters (ADCs) use interleaved low-speed ADCs to greatly boost their sample rate. This interleaved architecture can introduce problems if the low-speed ADCs do not have identical outputs. These errors manifest as phantom frequencies that appear in the digitized signal although they never existed in the analog domain. Applying offset, gain, and phase (OGP) corrections to the ADC can reduce this problem. Here we report on an implementation of such a correction in a high-speed ADC chip used for radio astronomy. While the corrections could not be implemented in the ADCs themselves, a partial solution was devised and implemented digitally inside a signal processing field programmable gate array (FPGA). Positive results are shown for contrived test cases, and null results are presented for implementation in an ADC083000 card with minimal inherent error. Lastly, we discuss the implications of this method as well as its mathematical basis.
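The phantom-frequency mechanism and the benefit of offset/gain correction can be sketched numerically (a two-way interleaved toy model, not the authors' FPGA implementation): a gain and offset mismatch between sub-ADCs puts an image tone at fs/2 − f, and undoing the per-channel mismatch removes it. Mismatch values are made up.

```python
import cmath
import math

def dft_mag(x, k):
    """Magnitude of DFT bin k of the sequence x."""
    n = len(x)
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / n)
                   for i, v in enumerate(x)))

N, k_sig = 64, 5
ideal = [math.sin(2 * math.pi * k_sig * i / N) for i in range(N)]

# Two interleaved sub-ADCs: odd samples see a gain and offset mismatch.
gain, offset = 1.10, 0.05
raw = [v if i % 2 == 0 else gain * v + offset for i, v in enumerate(ideal)]

# OGP-style correction: undo each sub-ADC's calibrated offset and gain.
fixed = [v if i % 2 == 0 else (v - offset) / gain for i, v in enumerate(raw)]

k_spur = N // 2 - k_sig                 # image ("phantom") tone at fs/2 - f
spur_before = dft_mag(raw, k_spur)      # large: visible phantom frequency
spur_after = dft_mag(fixed, k_spur)     # ~0: removed by the correction
```

Phase mismatch adds a timing-skew term with the same image-tone signature; the offset/gain part shown here is the easiest to model in a few lines.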

  12. Breast tissue decomposition with spectral distortion correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Zhao, Bo; Baturin, Pavlo; Behroozi, Farnaz; Molloi, Sabee

    2014-01-01

    Purpose: To investigate the feasibility of an accurate measurement of water, lipid, and protein composition of breast tissue using a photon-counting spectral computed tomography (CT) with spectral distortion corrections. Methods: Thirty-eight postmortem breasts were imaged with a cadmium-zinc-telluride-based photon-counting spectral CT system at 100 kV. The energy-resolving capability of the photon-counting detector was used to separate photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose for each breast ranged from 1.8 to 2.2 mGy. Two spectral distortion correction techniques were implemented, respectively, on the raw images to correct the nonlinear detector response due to pulse pileup and charge-sharing artifacts. Dual energy decomposition was then used to characterize each breast in terms of water, lipid, and protein content. In the meantime, the breasts were chemically decomposed into their respective water, lipid, and protein components to provide a gold standard for comparison with dual energy decomposition results. Results: The accuracy of the tissue compositional measurement with spectral CT was determined by comparing to the reference standard from chemical analysis. The averaged root-mean-square error in percentage composition was reduced from 15.5% to 2.8% after spectral distortion corrections. Conclusions: The results indicate that spectral CT can be used to quantify the water, lipid, and protein content in breast tissue. The accuracy of the compositional analysis depends on the applied spectral distortion correction technique. PMID:25281953
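The dual-energy three-material decomposition above can be sketched as a small linear solve: the two energy bins give two attenuation equations, and a volume-conservation constraint (the fractions sum to one) supplies the third. The attenuation coefficients and measurements below are made-up numbers, not values from the study.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Rows: low-bin mu, high-bin mu, volume conservation;
# columns: water, lipid, protein. Coefficients are hypothetical.
A = [[0.25, 0.20, 0.28],   # low-energy attenuation coefficients (1/cm)
     [0.18, 0.16, 0.21],   # high-energy attenuation coefficients (1/cm)
     [1.00, 1.00, 1.00]]   # volume fractions sum to one
measured = [0.241, 0.18, 1.0]        # [mu_low, mu_high, 1] for one voxel
water, lipid, protein = solve3(A, measured)   # -> 0.5, 0.3, 0.2
```

Note that the 3x3 system is poorly conditioned when the basis materials have similar energy dependence, which is why the spectral distortion corrections matter so much for the final accuracy.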

  13. Closure Report for Corrective Action Unit 539: Areas 25 and 26 Railroad Tracks Nevada National Security Site, Nevada with ROTC-1, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mark Kauss

    2011-06-01

    This Closure Report (CR) presents information supporting the closure of Corrective Action Unit (CAU) 539: Areas 25 and 26 Railroad Tracks, Nevada National Security Site, Nevada. This CR complies with the requirements of the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the State of Nevada; U.S. Department of Energy (DOE), Environmental Management; U.S. Department of Defense; and DOE, Legacy Management. The corrective action sites (CASs) within CAU 539 are located within Areas 25 and 26 of the Nevada National Security Site. Corrective Action Unit 539 comprises the following CASs: • 25-99-21, Area 25 Railroad Tracks • 26-99-05, Area 26 Railroad Tracks The purpose of this CR is to provide documentation supporting the completed corrective actions and provide data confirming that the closure objectives for CASs within CAU 539 were met. To achieve this, the following actions were performed: • Reviewed documentation on historical and current site conditions, including the concentration and extent of contamination. • Conducted radiological walkover surveys of railroad tracks in both Areas 25 and 26. • Collected ballast and soil samples and calculated internal dose estimates for radiological releases. • Collected in situ thermoluminescent dosimeter measurements and calculated external dose estimates for radiological releases. • Removed lead bricks as potential source material (PSM) and collected verification samples. • Implemented corrective actions as necessary to protect human health and the environment. • Properly disposed of corrective action and investigation wastes. • Implemented an FFACO use restriction (UR) for radiological contamination at CAS 25-99-21. The approved UR form and map are provided in Appendix F and will be filed in the DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), Facility Information Management System; the FFACO database; and the NNSA/NSO CAU/CAS files.
From November 29, 2010, through May 2, 2011, closure activities were performed as set forth in the Streamlined Approach for Environmental Restoration (SAFER) Plan for Corrective Action Unit 539: Areas 25 and 26 Railroad Tracks, Nevada Test Site, Nevada. The purposes of the activities as defined during the data quality objectives process were as follows: • Determine whether contaminants of concern (COCs) are present. • If COCs are present, determine their nature and extent, implement appropriate corrective actions, and properly dispose of wastes. Analytes detected during the closure activities were evaluated against final action levels (FALs) to determine COCs for CAU 539. Assessment of the data generated from closure activities revealed the following: • At CAS 26-99-05, the total effective dose for radiological releases did not exceed the FAL of 25 millirem per Industrial Area year. Potential source material in the form of lead bricks was found at three locations. A corrective action of clean closure was implemented at these locations, and verification samples indicated that no further action is necessary. • At CAS 25-99-21, the total effective dose for radiological releases exceeds the FAL of 25 millirem per Industrial Area year. Potential source material in the form of lead bricks was found at eight locations. A corrective action was implemented by removing the lead bricks and soil above FALs at these locations, and verification samples indicated that no further action is necessary. Pieces of debris with high radioactivity were identified as PSM and remain within the CAS boundary. A corrective action of closure in place with a UR was implemented at this CAS because closure activities showed evidence of remaining soil contamination and radioactive PSM. Future land use will be restricted from surface and intrusive activities. 
Closure activities generated waste streams consisting of industrial solid waste, recyclable materials, low-level radioactive waste, and mixed low-level radioactive waste. Wastes were disposed of in the appropriate onsite landfills. The NNSA/NSO provides the following recommendations: • Clean closure is required at CAS 26-99-05. • Closure in place is required at CAS 25-99-21. • A UR is required at CAS 25-99-21. • A Notice of Completion to the NNSA/NSO is requested from the Nevada Division of Environmental Protection for closure of CAU 539. • Corrective Action Unit 539 should be moved from Appendix III to Appendix IV of the FFACO.

  14. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in the ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating accuracy of frames along the length of a volume. This would aid in assessing the accuracy of the volume itself and the approach to image acquisition (probe positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit by monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
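The per-frame overlap metrics described above reduce to simple counting over a binary ground-truth mask (1 = bone, 0 = boneless). This is an illustrative sketch of the idea, not the 3D Slicer module's API.

```python
def frame_metrics(truth, pred):
    """True-positive and false-negative rates over the bone pixels of one frame."""
    bone = [(t, p) for t, p in zip(truth, pred) if t == 1]
    tp = sum(1 for t, p in bone if p == 1)   # bone correctly segmented
    fn = sum(1 for t, p in bone if p == 0)   # bone missed by the algorithm
    return tp / len(bone), fn / len(bone)

# Toy 8-pixel frame: manual ground truth vs. automatic segmentation.
truth = [0, 1, 1, 1, 0, 0, 1, 0]
pred  = [0, 1, 1, 0, 0, 1, 1, 0]
tpr, fnr = frame_metrics(truth, pred)   # -> 0.75, 0.25
```

Averaging these rates across all frames of a swept volume, and reporting their standard deviation, gives the per-volume summary the abstract describes.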

  15. Evaluating the effectiveness of flood damage mitigation measures by the application of Propensity Score Matching

    NASA Astrophysics Data System (ADS)

    Hudson, P.; Botzen, W. J. W.; Kreibich, H.; Bubeck, P.; Aerts, J. C. J. H.

    2014-01-01

    The employment of damage mitigation measures by individuals is an important component of integrated flood risk management. In order to promote efficient damage mitigation measures, accurate estimates of their damage mitigation potential are required. That is, for correctly assessing the damage mitigation measures' effectiveness from survey data, one needs to control for sources of bias. A biased estimate can occur if risk characteristics differ between individuals who have, or have not, implemented mitigation measures. This study removed this bias by applying an econometric evaluation technique called Propensity Score Matching to a survey of German households along two major rivers that were flooded in 2002, 2005 and 2006. The application of this method detected substantial overestimates of mitigation measures' effectiveness if bias is not controlled for, ranging from nearly € 1700 to € 15 000 per measure. Bias-corrected effectiveness estimates of several mitigation measures show that these measures are still very effective since they prevent between € 6700 and € 14 000 of flood damage. This study concludes with four main recommendations regarding how to better apply Propensity Score Matching in future studies, and makes several policy recommendations.

  16. Evaluating the effectiveness of flood damage mitigation measures by the application of propensity score matching

    NASA Astrophysics Data System (ADS)

    Hudson, P.; Botzen, W. J. W.; Kreibich, H.; Bubeck, P.; Aerts, J. C. J. H.

    2014-07-01

    The employment of damage mitigation measures (DMMs) by individuals is an important component of integrated flood risk management. In order to promote efficient damage mitigation measures, accurate estimates of their damage mitigation potential are required. That is, for correctly assessing the damage mitigation measures' effectiveness from survey data, one needs to control for sources of bias. A biased estimate can occur if risk characteristics differ between individuals who have, or have not, implemented mitigation measures. This study removed this bias by applying an econometric evaluation technique called propensity score matching (PSM) to a survey of German households along three major rivers that were flooded in 2002, 2005, and 2006. The application of this method detected substantial overestimates of mitigation measures' effectiveness if bias is not controlled for, ranging from nearly EUR 1700 to 15 000 per measure. Bias-corrected effectiveness estimates of several mitigation measures show that these measures are still very effective since they prevent between EUR 6700 and 14 000 of flood damage per flood event. This study concludes with four main recommendations regarding how to better apply propensity score matching in future studies, and makes several policy recommendations.
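The matching step at the core of propensity score matching can be sketched in a few lines: each household that implemented a measure ("treated") is paired with the untreated household whose propensity score is closest, and the effectiveness estimate is the mean damage difference over the matched pairs. The scores and damages below are illustrative; a real analysis first estimates the scores, e.g. by logistic regression on risk characteristics, and uses more careful matching (calipers, replacement rules).

```python
def att_nearest_neighbour(treated, control):
    """treated/control: lists of (propensity_score, flood_damage) pairs.
    Returns the average treatment effect on the treated (ATT)."""
    diffs = []
    for score, damage in treated:
        # Nearest-neighbour match on the propensity score.
        _, c_damage = min(control, key=lambda c: abs(c[0] - score))
        diffs.append(damage - c_damage)
    return sum(diffs) / len(diffs)   # negative => the measure reduced damage

treated = [(0.62, 4000.0), (0.48, 2500.0)]
control = [(0.60, 11000.0), (0.50, 9500.0), (0.10, 3000.0)]
effect = att_nearest_neighbour(treated, control)   # -> -7000.0 (EUR per flood)
```

Comparing treated households only with controls of similar propensity is exactly what removes the bias the abstract describes: a naive mean comparison would also absorb the difference in underlying flood risk.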

  17. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography.

    PubMed

    Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A

    2009-11-07

    Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original raw data using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the raw-data domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat-detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction and 46.2 s for the final corrected image, compared to 114.1 s and 355.1 s on central processing units (CPUs)).
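The first correction step, interpolating across metal-corrupted raw-data values, can be sketched in one dimension (the method interpolates in 3D per detector element; this toy version only shows the in-painting idea, with made-up attenuation values).

```python
def inpaint_row(values, is_metal):
    """Replace metal-flagged entries by linear interpolation between the
    nearest unaffected neighbours (edges fall back to the inner neighbour)."""
    out = values[:]
    n = len(out)
    i = 0
    while i < n:
        if not is_metal[i]:
            i += 1
            continue
        j = i
        while j < n and is_metal[j]:
            j += 1                       # j = end of the metal run
        left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
        right = out[j] if j < n else left
        for k in range(i, j):
            t = (k - i + 1) / (j - i + 1)
            out[k] = left + t * (right - left)
        i = j
    return out

row   = [1.0, 1.2, 9.0, 9.5, 1.6, 1.8]   # two samples corrupted by metal
metal = [False, False, True, True, False, False]
smoothed = inpaint_row(row, metal)
```

The paper's refinement replaces this generic interpolation with a forward projection of a segmented tissue-class model, which restores anatomically plausible values instead of straight-line estimates.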

  18. Implementation of treatment guidelines for specialist mental health care.

    PubMed

    Barbui, Corrado; Girlanda, Francesca; Ay, Esra; Cipriani, Andrea; Becker, Thomas; Koesters, Markus

    2014-01-17

    A huge gap exists between the production of evidence and its take-up in clinical practice settings. To fill this gap, treatment guidelines, based on explicit assessments of the evidence base, are commonly employed in several fields of medicine, including schizophrenia and related psychotic disorders. It remains unclear, however, whether treatment guidelines have any impact on provider performance and patient outcomes, and how implementation should be conducted to maximise benefit. The primary objective of this review was to examine the efficacy of guideline implementation strategies in improving process outcomes (performance of healthcare providers) and patient outcomes. We additionally explored which components of different guideline implementation strategies can influence process and patient outcomes. We searched the Cochrane Schizophrenia Group Register (March 2012), as well as references of included studies. We included studies of schizophrenia-spectrum disorders that compared guideline implementation strategies with usual care or assessed the comparative efficacy of different guideline implementation strategies. Review authors worked independently and in duplicate to critically appraise records from 882 studies; five individual studies met the inclusion criteria and were considered. As critical appraisal of the five included studies revealed substantial heterogeneity in terms of focus of the guideline, target of the intervention, implementation strategy and outcome measures, meta-analysis was carried out for antipsychotic co-prescribing only. Of the five included studies, practitioner impact was assessed in three. The five studies were generally at unclear risk of bias, and all evidence in the 'Summary of findings' table was graded by review authors as of very low quality.
Meta-analysis of two studies revealed that a combination of several guideline dissemination and implementation strategies targeting healthcare professionals did not reduce antipsychotic co-prescribing in schizophrenia outpatients (two studies, n = 1,082, risk ratio (RR) 1.10, 95% confidence interval (CI) 0.99 to 1.23; corrected for cluster design: n = 310, RR 0.97, CI 0.75 to 1.25). One trial, which studied a nurse-led intervention aimed at promoting cardiovascular disease screening, found a significant effect in terms of the proportion of people receiving screening (blood pressure: n = 96, RR 0.07, 95% CI 0.02 to 0.28; cholesterol: n = 103, RR 0.46, 95% CI 0.30 to 0.70; glucose: n = 103, RR 0.53, 95% CI 0.34 to 0.82; BMI: n = 99, RR 0.22, 95% CI 0.08 to 0.60; smoking status: n = 96, RR 0.28, 95% CI 0.12 to 0.64; Framingham score: n = 110, RR 0.69, 95% CI 0.55 to 0.87), although in the analysis corrected for cluster design, the effect was statistically significant for blood pressure and cholesterol only (blood pressure, corrected for cluster design: n = 33, RR 0.10, 95% CI 0.01 to 0.74; cholesterol, corrected for cluster design: n = 35, RR 0.49, 95% CI 0.24 to 0.99; glucose, corrected for cluster design: n = 35, RR 0.58, 95% CI 0.28 to 1.21; BMI, corrected for cluster design: n = 34, RR 0.18, 95% CI 0.02 to 1.37; smoking status, corrected for cluster design: n = 32, RR 0.25, 95% CI 0.06 to 1.03; Framingham score, corrected for cluster design: n = 38, RR 0.71, 95% CI 0.48 to 1.03; very low quality). Regarding participant outcomes, one trial assessed the efficacy of a shared decision-making implementation strategy and found no impact in terms of psychopathology, satisfaction with care and drug attitude. Another single trial studied a multifaceted intervention to promote medication adherence and found no impact in terms of adherence rates. 
With only five studies meeting inclusion criteria, and with limited low or very low quality usable information, it is not possible to arrive at definitive conclusions. The preliminary pattern of evidence suggests that, although small changes in psychiatric practice have been demonstrated, uncertainty remains in terms of clinically meaningful and sustainable effects of treatment guidelines on patient outcomes and how best to implement such guidelines for maximal benefit.

  19. Environmental liability protection and other advantages of voluntary cleanup programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bost, R.C.; Linton, K.E.

    Historically, regulatory agencies have required that contaminated sites be returned to pristine conditions, often at very high costs. Fear of these enormous environmental liabilities has resulted in abandonment of many industrial and commercial properties, referred to as brownfields. The development of Risk-Based Corrective Action programs has provided a means for regulatory agencies to evaluate contaminated sites based on risk to human health and the environment, resulting in more reasonable remedial measures and costs. Governmental bodies have created a more flexible means of addressing contaminated sites using Risk-Based Corrective Action and other incentives to encourage the redevelopment of sites through Voluntary Cleanup Programs. This study describes the development of Voluntary Cleanup Programs, and the successful implementation of Risk-Based Corrective Action with a focus on the states of Texas, Louisiana, and Oklahoma.

  20. 78 FR 16783 - Approval and Promulgation of Implementation Plans; Georgia; Control Techniques Guidelines and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-19

    ... Promulgation of Implementation Plans; Georgia; Control Techniques Guidelines and Reasonably Available Control...), related to reasonably available control technology (RACT) requirements. This correcting amendment corrects... October 21, 2009, SIP submittal for certain source categories for which EPA has issued control technique...

  1. Bias-correction of PERSIANN-CDR Extreme Precipitation Estimates Over the United States

    NASA Astrophysics Data System (ADS)

    Faridzad, M.; Yang, T.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Ground-based precipitation measurements can be sparse or even nonexistent over remote regions, which makes extreme event analysis difficult. PERSIANN-CDR (CDR), with 30+ years of daily rainfall information, provides an opportunity to study precipitation for regions where ground measurements are limited. In this study, the use of CDR annual extreme precipitation for frequency analysis of extreme events over sparsely gauged or ungauged basins is explored. The adjustment of CDR is implemented in two steps: (1) calculate a CDR bias-correction factor at the available gauge locations based on linear regression of gauge and CDR annual maximum precipitation; and (2) extend the bias-correction factor to locations where gauges are not available. The correction factors are estimated at gauge sites over various catchments, elevation zones, and climate regions, and the results are generalized to ungauged sites based on regional and climatic similarity. Case studies were conducted on 20 basins with diverse climates and altitudes in the Eastern and Western US. Cross-validation reveals that bias-correction factors estimated on limited calibration data can be extended to regions with similar characteristics. The adjusted CDR estimates also consistently outperform gauge interpolation at validation sites. This suggests that bias-adjusted CDR has potential for frequency analysis of extreme events, especially in regions with limited gauge observations.
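Step (1) of the adjustment reduces to an ordinary least-squares regression of gauge annual maxima on collocated CDR annual maxima, with the fitted slope acting as a multiplicative bias-correction factor. The values below are illustrative, not the study's data.

```python
def regression_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

cdr   = [40.0, 55.0, 62.0, 80.0]    # satellite annual maxima (mm/day)
gauge = [52.0, 71.5, 80.6, 104.0]   # collocated gauge annual maxima (mm/day)
factor = regression_slope(cdr, gauge)            # -> 1.3 for this toy data
corrected = [factor * v for v in cdr]            # bias-adjusted CDR maxima
```

Step (2) then transfers `factor` to ungauged basins judged climatically similar, which is what the cross-validation in the abstract tests.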

  2. 7 CFR 275.16 - Corrective action planning.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Corrective action planning. 275.16 Section 275.16... Corrective action planning. (a) Corrective action planning is the process by which State agencies shall...)/management unit(s) in the planning, development, and implementation of corrective action are those which: (1...

  3. Error correcting circuit design with carbon nanotube field effect transistors

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong

    2018-03-01

    In this work, a parallel error correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors, and its function is validated by simulation in HSpice with the Stanford model. A grouping method that is able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% decrement in layout area and a 56.9% decrement in power consumption.
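The (7, 4) Hamming encode/decode logic that the circuit realizes in hardware can be modeled in a few lines of software. This uses the standard bit layout with parity bits at positions 1, 2 and 4, where the XOR of the set bit positions of a corrupted codeword directly names the error position.

```python
def encode(d):
    """Encode data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def correct(c):
    """Return the codeword with any single-bit error fixed."""
    s = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            s ^= pos                       # syndrome = XOR of set positions
    if s:                                  # nonzero syndrome = error position
        c = c[:]
        c[s - 1] ^= 1
    return c

code = encode([1, 0, 1, 1])
noisy = code[:]
noisy[4] ^= 1                  # flip the bit at position 5
assert correct(noisy) == code  # single-bit error located and repaired
```

The grouping method in the paper applies this same single-error-correcting block to slices of 16- and 32-bit words so that one error per slice, and hence multiple errors per word, can be repaired.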

  4. Adaptation of the University of Wisconsin High Spectral Resolution Lidar for Polarization and Multiple Scattering Measurements

    NASA Technical Reports Server (NTRS)

    Eloranta, E. W.; Piironen, P. K.

    1996-01-01

    Quantitative lidar measurements of aerosol scattering are hampered by the need for calibrations and the problem of correcting observed backscatter profiles for the effects of attenuation. The University of Wisconsin High Spectral Resolution Lidar (HSRL) addresses these problems by separating molecular scattering contributions from the aerosol scattering; the molecular scattering is then used as a calibration target that is available at each point in the observed profiles. While the HSRL approach has intrinsic advantages over competing techniques, realization of these advantages requires implementation of a technically demanding system which is potentially very sensitive to changes in temperature and mechanical alignments. This paper describes a new implementation of the HSRL in an instrumented van which allows measurements during field experiments. The HSRL was modified to measure depolarization. In addition, both the signal amplitude and depolarization variations with receiver field of view are simultaneously measured. This allows for discrimination of ice clouds from water clouds and observation of multiple scattering contributions to the lidar return.

  5. 78 FR 59798 - Small Business Subcontracting: Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... SMALL BUSINESS ADMINISTRATION 13 CFR Part 125 RIN 3245-AG22 Small Business Subcontracting: Correction AGENCY: U.S. Small Business Administration. ACTION: Correcting amendments. SUMMARY: This document... business subcontracting to implement provisions of the Small Business Jobs Act of 2010. This correction...

  6. Assimilation of SMOS Retrievals in the Land Information System

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.

    2016-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm³ cm⁻³. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve.
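CDF matching is a quantile-mapping operation: each retrieval is mapped through the empirical CDF of the observations and then through the inverse empirical CDF of the model climatology, so corrected retrievals share the model's distribution. This is a minimal sketch with made-up soil-moisture samples, not the Land Information System implementation.

```python
import math
from bisect import bisect_right

def cdf_match(value, obs_sorted, model_sorted):
    """Map value from the observed distribution onto the model's."""
    # Empirical CDF of the observations at this value.
    q = bisect_right(obs_sorted, value) / len(obs_sorted)
    # Empirical inverse CDF of the model at quantile q.
    i = max(math.ceil(q * len(model_sorted)) - 1, 0)
    return model_sorted[i]

obs   = sorted([0.10, 0.15, 0.20, 0.25, 0.30])   # retrieval climatology
model = sorted([0.18, 0.22, 0.26, 0.30, 0.34])   # model climatology
matched = cdf_match(0.20, obs, model)            # -> 0.26 (same quantile)
```

Aggregating the samples by land cover or soil type, as the abstract describes, trades some locality for larger sample sizes when building `obs` and `model`.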

  7. Assimilation of SMOS Retrievals in the Land Information System

    PubMed Central

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.; Crosson, William L.

    2018-01-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite provides retrievals of soil moisture in the upper 5 cm with a 30-50 km resolution and a mission accuracy requirement of 0.04 cm3 cm−3. These observations can be used to improve land surface model soil moisture states through data assimilation. In this paper, SMOS soil moisture retrievals are assimilated into the Noah land surface model via an Ensemble Kalman Filter within the NASA Land Information System. Bias correction is implemented using Cumulative Distribution Function (CDF) matching, with points aggregated by either land cover or soil type to reduce sampling error in generating the CDFs. An experiment was run for the warm season of 2011 to test SMOS data assimilation and to compare assimilation methods. Verification of soil moisture analyses in the 0-10 cm upper layer and root zone (0-1 m) was conducted using in situ measurements from several observing networks in the central and southeastern United States. This experiment showed that SMOS data assimilation significantly increased the anomaly correlation of Noah soil moisture with station measurements from 0.45 to 0.57 in the 0-10 cm layer. Time series at specific stations demonstrate the ability of SMOS DA to increase the dynamic range of soil moisture in a manner consistent with station measurements. Among the bias correction methods, the correction based on soil type performed best at bias reduction but also reduced correlations. The vegetation-based correction did not produce any significant differences compared to using a simple uniform correction curve. PMID:29367795

  8. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    PubMed

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias across the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms other alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
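The regression idea behind RELIC can be sketched as follows: paired internal control probes measure the same target in both channels, so regressing the log red intensities on the log green intensities of those controls yields a channel transfer function that can be inverted to put red-channel measurements on the green-channel scale. This is a toy version with made-up intensities, not ENmix's API.

```python
import math

def fit_log_line(green, red):
    """OLS fit of log(red) = a + b*log(green) over control probe pairs."""
    xs = [math.log(g) for g in green]
    ys = [math.log(r) for r in red]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Paired internal controls: red runs 20% hot in this illustration.
ctrl_green = [500.0, 1000.0, 4000.0]
ctrl_red   = [600.0, 1200.0, 4800.0]
a, b = fit_log_line(ctrl_green, ctrl_red)

def to_green_scale(red_intensity):
    """Invert the fitted transfer function for a red-channel measurement."""
    return math.exp((math.log(red_intensity) - a) / b)
```

With these controls the fit recovers a pure 1.2x gain (slope 1, intercept log 1.2), so `to_green_scale(2400.0)` returns approximately 2000; working in log space lets the same fit absorb multiplicative and power-law channel differences.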

  9. Accounting for Chromatic Atmospheric Effects on Barycentric Corrections

    NASA Astrophysics Data System (ADS)

    Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.; Jurgenson, Colby A.

    2017-03-01

    Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s-1 can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380-680 nm) are required to account for this effect at the 10 cm s-1 level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
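
The polynomial-fit step can be sketched as follows. The channel wavelengths and correction values are invented purely to show the mechanics; a real pipeline would compute the photon-weighted barycentric correction per channel from the exposure-meter data.

```python
import numpy as np

# Hypothetical photon-weighted barycentric corrections (m/s) at the centres
# of four wavelength channels spanning the 380-680 nm band.
channel_wl = np.array([417.5, 492.5, 567.5, 642.5])              # nm
channel_bc = np.array([12345.10, 12345.16, 12345.21, 12345.25])  # m/s

# Centre the wavelengths before fitting for numerical conditioning;
# four points determine the cubic exactly.
wl0 = channel_wl.mean()
coeffs = np.polyfit(channel_wl - wl0, channel_bc, deg=3)

def bc_at(wl_nm):
    """Barycentric correction interpolated/extrapolated to any wavelength."""
    return np.polyval(coeffs, wl_nm - wl0)

print(bc_at(380.0), bc_at(680.0))
```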

  10. Performance of various branch-point tolerant phase reconstructors with finite time delays and measurement noise

    NASA Astrophysics Data System (ADS)

    Zetterlind, Virgil E., III; Magee, Eric P.

    2002-06-01

    This study extends branch point tolerant phase reconstructor research to examine the effect of finite time delays and measurement error on system performance. Branch point tolerant phase reconstruction is particularly applicable to atmospheric laser weapon and communication systems, which operate in extended turbulence. We examine the relative performance of a least squares reconstructor, least squares plus hidden phase reconstructor, and a Goldstein branch point reconstructor for various correction time delays and measurement noise scenarios. Performance is evaluated using a wave-optics simulation that models a 100 km atmospheric propagation of a point source beacon to a transmit/receive aperture. Phase-only corrections are then calculated using the various reconstructor algorithms and applied to an outgoing uniform field. Point Strehl is used as the performance metric. Results indicate that while time delays and measurement noise reduce the performance of branch point tolerant reconstructors, these reconstructors can still outperform least squares implementations in many cases. We also show that branch point detection becomes the limiting factor in measurement noise corrupted scenarios.

  11. 75 FR 1704 - Federal Civil Penalties Inflation Adjustment Act-2009 Implementation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-13

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 27 [Docket No. USCG-2009-0891] RIN 1625-AB40 Federal Civil Penalties Inflation Adjustment Act--2009 Implementation AGENCY: Coast Guard, DHS. ACTION: Final rule; correction. SUMMARY: The Coast Guard is correcting a final rule that appeared in the...

  12. 77 FR 66543 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Requirements for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 52 [EPA-R03-OAR-2012-0381; FRL-9747-9] Approval and Promulgation of Air Quality Implementation Plans; Delaware; Requirements for Prevention of Significant...: Environmental Protection Agency (EPA). ACTION: Final rule; correcting amendment. SUMMARY: This document corrects...

  13. 75 FR 5514 - Approval and Promulgation of Air Quality Implementation Plans; Indiana; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 52 [EPA-R05-OAR-2009-0771; FRL-9108-7] Approval and Promulgation of Air Quality Implementation Plans; Indiana; Correction AGENCY: Environmental Protection Agency..., Environmental Engineer, Criteria Pollutant Section, Air Programs Branch (AR-18J), Environmental Protection...

  14. Managed access technology to combat contraband cell phones in prison: Findings from a process evaluation.

    PubMed

    Grommon, Eric

    2018-02-01

    Cell phones in correctional facilities have emerged as one of the most pervasive forms of modern contraband. This issue has been identified as a top priority for many correctional administrators in the United States. Managed access, a technology that utilizes cellular signals to capture transmissions from contraband phones, has received notable attention as a promising tool to combat this problem. However, this technology has received little evaluative attention. The present study offers a foundational process evaluation and draws upon output measures and stakeholder interviews to identify salient operational challenges and subsequent lessons learned about implementing and maintaining a managed access system. Findings suggest that while managed access captures large volumes of contraband cellular transmissions, the technology requires significant implementation planning, personnel support, and complex partnerships with commercial cellular carriers. Lessons learned provide guidance for practitioners to navigate these challenges and for scholars to improve future evaluations of managed access. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Developing a quality assurance program for online services.

    PubMed Central

    Humphries, A W; Naisawald, G V

    1991-01-01

    A quality assurance (QA) program provides not only a mechanism for establishing training and competency standards, but also a method for continuously monitoring current service practices to correct shortcomings. The typical QA cycle includes these basic steps: select subject for review, establish measurable standards, evaluate existing services using the standards, identify problems, implement solutions, and reevaluate services. The Claude Moore Health Sciences Library (CMHSL) developed a quality assurance program for online services designed to evaluate services against specific criteria identified by research studies as being important to customer satisfaction. These criteria include reliability, responsiveness, approachability, communication, and physical factors. The application of these criteria to the library's existing online services in the quality review process is discussed with specific examples of the problems identified in each service area, as well as the solutions implemented to correct deficiencies. The application of the QA cycle to an online services program serves as a model of possible interventions. The use of QA principles to enhance online service quality can be extended to other library service areas. PMID:1909197

  17. Development and characterisation of FPGA modems using forward error correction for FSOC

    NASA Astrophysics Data System (ADS)

    Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried

    2016-05-01

    In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in an FPGA, with data rates variable up to 60 Mbps. To combat the effects of atmospheric scintillation, a 7/8 rate low density parity check (LDPC) forward error correction is implemented along with custom bit and frame synchronisation and a variable length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems, using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acousto-optic modulator. The scintillation index, transmitted optical power and the scintillation bandwidth can all be independently varied, allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5 km.

  18. Fluctuating ideal-gas lattice Boltzmann method with fluctuation dissipation theorem for nonvanishing velocities.

    PubMed

    Kaehler, G; Wagner, A J

    2013-06-01

    Current implementations of fluctuating ideal-gas descriptions with the lattice Boltzmann method are based on a fluctuation dissipation theorem which, while greatly simplifying the implementation, strictly holds only for zero mean velocity and small fluctuations. We show how to derive the fluctuation dissipation theorem for all k, which previous derivations obtained only for k=0. The consistent derivation requires, in principle, locally velocity-dependent multirelaxation-time transforms. Such an implementation is computationally prohibitively expensive but, with a small computational trick, it is feasible to reproduce the correct FDT without overhead in computation time. It is then shown that the previous standard implementations perform poorly for nonvanishing mean velocities, as indicated by violations of Galilean invariance in measured structure factors. Results obtained with the method introduced here show a significant reduction of the Galilean invariance violations.

  19. Drift-corrected Odin-OSIRIS ozone product: algorithm and updated stratospheric ozone trends

    NASA Astrophysics Data System (ADS)

    Bourassa, Adam E.; Roth, Chris Z.; Zawada, Daniel J.; Rieger, Landon A.; McLinden, Chris A.; Degenstein, Douglas A.

    2018-01-01

    A small long-term drift in the Optical Spectrograph and Infrared Imager System (OSIRIS) stratospheric ozone product, manifested mostly since 2012, is quantified and attributed to a changing bias in the limb pointing knowledge of the instrument. A correction to this pointing drift using a predictable shape in the measured limb radiance profile is implemented and applied within the OSIRIS retrieval algorithm. Thanks to the pointing correction, this new data product, version 5.10, displays substantially better long- and short-term agreement with Microwave Limb Sounder (MLS) ozone throughout the stratosphere. Previously reported stratospheric ozone trends over the time period 1984-2013, which were derived by merging the altitude-number density ozone profile measurements from the Stratospheric Aerosol and Gas Experiment (SAGE) II satellite instrument (1984-2005) and from OSIRIS (2002-2013), are recalculated using the new OSIRIS version 5.10 product and extended to 2017. These results still show statistically significant positive trends throughout the upper stratosphere since 1997, but at weaker levels that are more closely in line with estimates from other data records.

  20. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.

  1. The first cases of Candida auris candidaemia in Oman.

    PubMed

    Mohsin, Jalila; Hagen, Ferry; Al-Balushi, Zainab A M; de Hoog, G Sybren; Chowdhary, Anuradha; Meis, Jacques F; Al-Hatmi, Abdullah M S

    2017-09-01

    Candida auris has been recognised as a problematic healthcare-associated emerging yeast which is often misidentified as Candida haemulonii by commercial systems. Correct early identification of C. auris is important for appropriate antifungal treatment and implementing effective infection control measures. Here we report emergence of the first C. auris cases in Oman, initially misidentified as C. haemulonii. © 2017 Blackwell Verlag GmbH.

  2. Computer-implemented method and apparatus for autonomous position determination using magnetic field data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor A. (Inventor)

    2000-01-01

    A computer-implemented method and apparatus for determining the position of a vehicle to within 100 km autonomously from magnetic field measurements and attitude data, without a priori knowledge of position. For each measurement of magnetic field data, an inverted dipole solution comprising two possible position solutions is deterministically calculated by a program-controlled processor, which solves the inverted first-order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and for the vehicle's distance from the center of the earth. Correction schemes such as successive substitution and a Newton-Raphson method are applied to each dipole. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths can be computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.

  3. An Optical Lever For The Metrology Of Grazing Incidence Optics

    NASA Astrophysics Data System (ADS)

    DeCew, Alan E.; Wagner, Robert W.

    1986-11-01

    Research Optics & Development, Inc. is using a slope tracing profilometer to measure the figure of optical surfaces which cannot be measured conveniently by interferometric means. As a metrological tool, the technique has its greatest advantage as an in-process measurement system. An optician can easily convert from polishing to measurement in less than a minute. This rapid feedback allows figure correction with minimal wasted effort and setup time. The present configuration of the slope scanner provides resolutions to 1 micro-radian. By implementing minor modifications, the resolution could be improved by an order of magnitude.

  4. Medicare Program; Prospective Payment System and Consolidated Billing for Skilled Nursing Facilities for FY 2017, SNF Value-Based Purchasing Program, SNF Quality Reporting Program, and SNF Payment Models Research. Final rule.

    PubMed

    2016-08-05

    This final rule updates the payment rates used under the prospective payment system (PPS) for skilled nursing facilities (SNFs) for fiscal year (FY) 2017. In addition, it specifies a potentially preventable readmission measure for the Skilled Nursing Facility Value-Based Purchasing Program (SNF VBP), and implements requirements for that program, including performance standards, a scoring methodology, and a review and correction process for performance information to be made public, aimed at implementing value-based purchasing for SNFs. Additionally, this final rule includes additional policies and measures in the Skilled Nursing Facility Quality Reporting Program (SNF QRP). This final rule also responds to comments on the SNF Payment Models Research (PMR) project.

  5. Practical Framework: Implementing OEE Method in Manufacturing Process Environment

    NASA Astrophysics Data System (ADS)

    Maideen, N. C.; Sahudin, S.; Mohd Yahya, N. H.; Norliawati, A. O.

    2016-02-01

    Manufacturing process environments require reliable machinery in order to satisfy market demand. Ideally, a reliable machine is expected to operate and produce a quality product at its maximum designed capability. However, for various reasons, a machine is often unable to achieve the desired performance. Since this performance affects the productivity of the system, a measurement technique should be applied. Overall Equipment Effectiveness (OEE) is a good method to measure the performance of a machine. The reliable result produced by OEE can then be used to propose a suitable corrective action. Many published papers discuss the purpose and benefits of OEE, covering the what and why factors; however, the how factor, i.e. the implementation of OEE in a manufacturing process environment, has received little attention. Thus, this paper presents a practical framework for implementing OEE, and a case study is discussed to explain each proposed step in detail. The proposed framework is beneficial to engineers, especially beginners, who want to start measuring machine performance and later improve it.
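
As a concrete illustration of the OEE calculation itself (the standard availability x performance x quality decomposition, not the paper's specific framework), with invented shift data:

```python
def oee(planned_time, run_time, total_count, good_count, ideal_cycle_time):
    """OEE = Availability x Performance x Quality (standard definition)."""
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality, (availability, performance, quality)

# Invented shift data: 480 min planned, 420 min actually running,
# 1000 parts produced (950 good) at an ideal cycle time of 0.4 min/part.
score, (a, p, q) = oee(480, 420, 1000, 950, 0.4)
print(f"A={a:.3f} P={p:.3f} Q={q:.3f} OEE={score:.3f}")  # prints A=0.875 P=0.952 Q=0.950 OEE=0.792
```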

  6. Finite element simulation of crack depth measurements in concrete using diffuse ultrasound

    NASA Astrophysics Data System (ADS)

    Seher, Matthias; Kim, Jin-Yeon; Jacobs, Laurence J.

    2012-05-01

    This research simulates the measurements of crack depth in concrete using diffuse ultrasound. The finite element method is employed to simulate the ultrasonic diffusion process around cracks with different geometrical shapes, with the goal of gaining physical insight into the data obtained from experimental measurements. The commercial finite element software Ansys is used to implement the two-dimensional concrete model. The model is validated with an analytical solution and experimental results. It is found from the simulation results that preliminary knowledge of the crack geometry is required to interpret the energy evolution curves from measurements and to correctly determine the crack depth.

  7. Development, implementation and evaluation of a dedicated metal artefact reduction method for interventional flat-detector CT.

    PubMed

    Prell, D; Kalender, W A; Kyriakou, Y

    2010-12-01

    The purpose of this study was to develop, implement and evaluate a dedicated metal artefact reduction (MAR) method for flat-detector CT (FDCT). The algorithm uses the multidimensional raw data space to calculate surrogate attenuation values for the original metal traces in the raw data domain. The metal traces are detected automatically by a three-dimensional, threshold-based segmentation algorithm in an initial reconstructed image volume, based on twofold histogram information for calculating appropriate metal thresholds. These thresholds are combined with constrained morphological operations in the projection domain. A subsequent reconstruction of the modified raw data yields an artefact-reduced image volume that is further processed by a combining procedure that reinserts the missing metal information. For image quality assessment, measurements on semi-anthropomorphic phantoms containing metallic inserts were evaluated in terms of CT value accuracy, image noise and spatial resolution before and after correction. Measurements of the same phantoms without prostheses were used as ground truth for comparison. Cadaver measurements were performed on complex and realistic cases and to determine the influences of our correction method on the tissue surrounding the prostheses. The results showed a significant reduction of metal-induced streak artefacts (CT value differences were reduced to below 22 HU, with image noise reductions of up to 200%). The cadaver measurements showed excellent results for imaging areas close to the implant and exceptional artefact suppression in these areas. Furthermore, measurements in the knee and spine regions confirmed the superiority of our method to standard one-dimensional, linear interpolation.

  8. Multi-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data

    NASA Astrophysics Data System (ADS)

    Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing

    2017-11-01

    Increasing the accuracy of wind speed prediction lays a solid foundation for reliable wind power forecasting. Most traditional correction methods for wind speed prediction establish the mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, which ignores the time-dependent structure of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that considers the passing effects from wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is carried out to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is utilized as an example to implement the proposed mapping strategy and to establish the correction model for all the wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method. Three benchmark methods of wind speed prediction are used to compare the performance. The results show that the proposed model has the best performance under different time horizons.
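
The mapping strategy, using both the NWP value and the previous slot's measurement as inputs, can be sketched with a plain least-squares model standing in for the paper's SVM; all data below are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic demo: correct the NWP forecast using both the NWP value and the
# previous slot's measured speed. Ordinary least squares stands in for the
# paper's SVM; all numbers are fabricated.
rng = np.random.default_rng(1)
n = 500
prev_meas = rng.uniform(2.0, 12.0, n)                # HMD at the previous slot (m/s)
actual = 0.7 * prev_meas + rng.normal(3.0, 0.8, n)   # "true" speed at the current slot
nwp = actual + 1.2 + rng.normal(0.0, 1.0, n)         # biased, noisy NWP forecast

X = np.column_stack([nwp, prev_meas, np.ones(n)])    # inputs: NWP + HMD + bias term
coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
corrected = X @ coef

def rmse(err):
    return float(np.sqrt(np.mean(err ** 2)))

print("raw NWP RMSE:", rmse(nwp - actual), "corrected RMSE:", rmse(corrected - actual))
```

A per-bin version, as in the paper, would simply fit one such model for each wind speed bin.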

  9. Dose-to-water conversion for the backscatter-shielded EPID: A frame-based method to correct for EPID energy response to MLC transmitted radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwan, Benjamin J., E-mail: benjamin.zwan@uon.edu.au; O’Connor, Daryl J.; King, Brian W.

    2014-08-15

    Purpose: To develop a frame-by-frame correction for the energy response of amorphous silicon electronic portal imaging devices (a-Si EPIDs) to radiation that has been transmitted through the multileaf collimator (MLC), and to integrate this correction into the backscatter-shielded EPID (BSS-EPID) dose-to-water conversion model. Methods: Individual EPID frames were acquired using a Varian frame grabber and iTools acquisition software, then processed using in-house software developed in MATLAB. For each EPID image frame, the region below the MLC leaves was identified and all pixels in this region were multiplied by a factor of 1.3 to correct for the under-response of the imager to MLC-transmitted radiation. The corrected frames were then summed to form a corrected integrated EPID image. This correction was implemented as an initial step in the BSS-EPID dose-to-water conversion model, which was then used to compute dose planes in a water phantom for 35 IMRT fields. The calculated dose planes, with and without the proposed MLC transmission correction, were compared to measurements in solid water using a two-dimensional diode array. Results: It was observed that the integration of the MLC transmission correction into the BSS-EPID dose model improved agreement between modeled and measured dose planes. In particular, the MLC correction produced higher pass rates for almost all head-and-neck fields tested, yielding an average pass rate of 99.8% for 2%/2 mm criteria. A two-sample independent t-test and a Fisher F-test were used to show that the MLC transmission correction resulted in a statistically significant reduction in the mean and the standard deviation of the gamma values, respectively, giving a more accurate and consistent dose-to-water conversion. Conclusions: The frame-by-frame MLC transmission response correction was shown to improve the accuracy and reduce the variability of the BSS-EPID dose-to-water conversion model. The correction may be applied as a preprocessing step in any pretreatment portal dosimetry calculation and has been shown to be beneficial for highly modulated IMRT fields.
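
A minimal numpy sketch of the frame-by-frame scaling step follows. The 4x4 frame and the threshold-based mask are invented stand-ins; the actual algorithm identifies the MLC-shielded region from the delivery itself.

```python
import numpy as np

def correct_frame(frame, mlc_mask, factor=1.3):
    """Scale pixels under the MLC leaves by the under-response factor
    (1.3, as reported in the abstract) before frame summation."""
    corrected = frame.astype(float).copy()
    corrected[mlc_mask] *= factor
    return corrected

# Invented 4x4 frame: the right half is shielded by MLC leaves, and the
# mask here is a simple threshold stand-in for the real leaf detection.
frame = np.full((4, 4), 100.0)
frame[:, 2:] = 10.0
mlc_mask = frame < 50.0
integrated = sum(correct_frame(frame, mlc_mask) for _ in range(3))
print(integrated)
```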

  10. 9 CFR 416.15 - Corrective Actions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Corrective Actions. 416.15 Section 416... SANITATION § 416.15 Corrective Actions. (a) Each official establishment shall take appropriate corrective... the procedures specified therein, or the implementation or maintenance of the Sanitation SOP's, may...

  11. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
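
For two equiprobable pure states, the unrestricted (joint) optimum against which sequential strategies are compared is the Helstrom bound, which is straightforward to evaluate. A minimal example with two qubit states whose overlap is cos(pi/6):

```python
import numpy as np

# Helstrom bound: for two equiprobable pure states the optimal (global)
# success probability is (1 + sqrt(1 - |<psi|phi>|^2)) / 2.
psi = np.array([1.0, 0.0])
theta = np.pi / 6
phi = np.array([np.cos(theta), np.sin(theta)])   # overlap cos(pi/6)

overlap_sq = abs(np.vdot(psi, phi)) ** 2
p_opt = 0.5 * (1.0 + np.sqrt(1.0 - overlap_sq))
print(f"optimal success probability: {p_opt:.4f}")  # prints 0.7500
```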

  12. Adaptive optics self-calibration using differential OTF (dOTF)

    NASA Astrophysics Data System (ADS)

    Rodack, Alexander T.; Knight, Justin M.; Codona, Johanan L.; Miller, Kelsey L.; Guyon, Olivier

    2015-09-01

    We demonstrate self-calibration of an adaptive optical system using the differential OTF (dOTF) [Codona, J. L., Opt. Eng. 52(9), 097105 (2013). doi:10.1117/1.OE.52.9.097105]. We use a deformable mirror (DM) along with science camera focal plane images to implement a closed-loop servo that both flattens the DM and corrects for non-common-path aberrations within the telescope. The pupil field modification required for dOTF measurement is introduced by displacing actuators near the edge of the illuminated pupil. Simulations were used to develop methods to retrieve the phase from the complex-amplitude dOTF measurements for both segmented and continuous-sheet MEMS DMs, and tests were performed using a Boston Micromachines continuous-sheet DM for verification. We compute the actuator correction updates directly from the phase of the dOTF measurements, reading out displacements and/or slopes at segment and actuator positions. Through simulation, we also explore the effectiveness of these techniques for a range of photon counts collected in each dOTF exposure pair.

  13. A simple method to determine evaporation and compensate for liquid losses in small-scale cell culture systems.

    PubMed

    Wiegmann, Vincent; Martinez, Cristina Bernal; Baganz, Frank

    2018-04-24

    To establish a method to indirectly measure evaporation in microwell-based cell culture systems, and to show that the proposed method allows compensating for liquid losses in fed-batch processes. A correlation between evaporation and the concentration of Na+ was found (R2 = 0.95) when using the 24-well-based miniature bioreactor system (micro-Matrix) for a batch culture with GS-CHO. Based on these results, a method was developed to counteract evaporation with periodic water additions based on measurements of the Na+ concentration. Implementation of this method reduced the relative liquid loss after 15 days of a fed-batch cultivation from 36.7 ± 6.7% without volume corrections to 6.9 ± 6.5% with volume corrections. A procedure was thus established to indirectly measure evaporation through a correlation with the level of Na+ ions in solution, deriving a simple formula to account for liquid losses.
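
The underlying mass balance is simple: the amount of Na+ in the well is fixed while only water evaporates, so the remaining volume follows from the concentration ratio and the deficit is the water to add back. A sketch with invented numbers:

```python
def water_to_add(v0_ml, na0_mM, na_t_mM):
    """Sodium is conserved while only water evaporates, so the current
    volume is v0 * na0 / na_t; the difference is the water to replace."""
    v_t = v0_ml * na0_mM / na_t_mM
    return v0_ml - v_t

# Invented example: a 2.0 mL well whose Na+ reading rose from 140 mM to 160 mM
print(water_to_add(2.0, 140.0, 160.0))  # → 0.25 (mL evaporated)
```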

  14. Wavefront Measurement in Ophthalmology

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl

    Wavefront sensing or aberration measurement in the eye is a key problem in refractive surgery and vision correction with laser. The accuracy of these measurements is critical for the outcome of the surgery. Practically all clinical methods use laser as a source of light. To better understand the background, we analyze the pre-laser techniques developed over centuries. They allowed new discoveries of the nature of the optical system of the eye, and many served as prototypes for laser-based wavefront sensing technologies. Hartmann's test was strengthened by Platt's lenslet matrix and the CCD two-dimensional photodetector acquired a new life as a Hartmann-Shack sensor in Heidelberg. Tscherning's aberroscope, invented in France, was transformed into a laser device known as a Dresden aberrometer, having seen its reincarnation in Germany with Seiler's help. The clinical ray tracing technique was brought to life by Molebny in Ukraine, and skiascopy was created by Fujieda in Japan. With the maturation of these technologies, new demands now arise for their wider implementation in optometry and vision correction with customized contact and intraocular lenses.

  15. A technique for phase correction in Fourier transform spectroscopy

    NASA Astrophysics Data System (ADS)

    Artsang, P.; Pongchalee, P.; Palawong, K.; Buisset, C.; Meemon, P.

    2018-03-01

    Fourier transform spectroscopy (FTS) is a type of spectroscopy that can be used to analyze the components in a sample. The basic setup commonly used in this technique is the Michelson interferometer. The interference signal obtained from the interferometer can be Fourier transformed into the spectral pattern of the illuminating light source. To experimentally study the concept of Fourier transform spectroscopy, the project started by setting up a Michelson interferometer in the laboratory. The implemented system used a broadband light source in the near-infrared region (0.81-0.89 μm) and moved the reference mirror with a computer-controlled motorized translation stage. In this early study, no sample was placed in the interference path; therefore, the theoretical spectral result after the Fourier transformation of the captured interferogram must be the spectral shape of the light source itself. One main challenge of FTS is to retrieve the correct phase information of the interferogram, which relates to the correct spectral shape of the light source. The main source of phase distortion that we observed in our system is the non-linear movement of the movable reference mirror of the Michelson interferometer. Therefore, to improve the result, we coupled a monochromatic light source into the implemented interferometer and simultaneously measured the interferograms of the monochromatic and broadband light sources. The interferogram of the monochromatic light source was used to correct the phase of the interferogram of the broadband light source. The result shows a significant improvement in the computed spectral shape.
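
The reference-based correction can be sketched numerically: a nonlinear mirror scan distorts the time sampling, so the broadband interferogram is resampled onto a uniform optical path difference (OPD) grid recovered from the monochromatic reference before the FFT. A quadrature (complex) reference is assumed for simplicity, and all numbers are invented.

```python
import numpy as np

n = 4096
t = np.linspace(0.0, 1.0, n)
x = 200.0 * t + 30.0 * t ** 2            # true OPD in um, nonlinear in time

sigma_src = 1.0 / 0.85                   # broadband line centre (1/um, ~850 nm)
sigma_ref = 1.0 / 0.6328                 # HeNe-like reference (1/um)
envelope = np.exp(-((x - 115.0) / 60.0) ** 2)
broadband = envelope * np.cos(2.0 * np.pi * sigma_src * x)
ref = np.exp(2j * np.pi * sigma_ref * x)

# Recover the OPD axis from the reference phase, then resample uniformly
x_est = np.unwrap(np.angle(ref)) / (2.0 * np.pi * sigma_ref)
x_uni = np.linspace(x_est[0], x_est[-1], n)
interf_uni = np.interp(x_uni, x_est, broadband)

spectrum = np.abs(np.fft.rfft(interf_uni))
freqs = np.fft.rfftfreq(n, d=x_uni[1] - x_uni[0])
peak = freqs[np.argmax(spectrum)]
print("recovered line centre (1/um):", peak)
```

Despite the nonlinear scan, the peak lands at the source wavenumber because the FFT is taken over the corrected OPD axis.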

  16. 76 FR 4569 - Implementing the Whistleblower Provisions of Section 23 of the Commodity Exchange Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ... COMMODITY FUTURES TRADING COMMISSION 17 CFR Part 165 RIN Number 3038-AD04 Implementing the Whistleblower Provisions of Section 23 of the Commodity Exchange Act Correction In proposed rule document 2010-29022, beginning on page 75728 in the issue of Monday, December 6, 2010, make the following correction...

  17. 76 FR 53831 - Fisheries of the Northeastern United States; Summer Flounder, Scup, and Black Sea Bass Fisheries...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-30

    ... Black Sea Bass Fisheries; 2011 Summer Flounder, Scup, and Black Sea Bass Specifications; Correction... Federal Register the final rule to implement the 2011 summer flounder, scup, and black sea bass....100. Need for Correction The final rule implementing 2011 summer flounder, scup, and black sea bass...

  18. 78 FR 12269 - Wireline Competition Bureau Seeks Updates and Corrections to TelcoMaster Table for Connect...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    ... Competition Bureau Seeks Updates and Corrections to TelcoMaster Table for Connect America Cost Model AGENCY... centers to particular holding companies for purposes of Connect America Phase II implementation. DATES... companies for purposes of Connect America Phase II implementation. 2. The USF/ICC Transformation Order, 76...

  19. Cost reduction with successful implementation of an antibiotic prophylaxis program in a private hospital in Ribeirão Preto, Brazil.

    PubMed

    Fonseca, S N; Melon Kunzle, S R; Barbosa Silva, S A; Schmidt, J G; Mele, R R

    1999-01-01

    To describe the implementation and results of a perioperative antibiotic prophylaxis (PAP) program. A protocol for correct use of PAP was implemented in December 1994. For selected months we measured PAP protocol compliance in a random sample of clean and clean-contaminated procedures and calculated the cost of incorrect use of PAP. SETTING: A 180-bed general hospital in Ribeirão Preto, Brazil. The cost of unnecessary PAP in the obstetric and gynecologic, cardiothoracic, and orthopedic services dropped from $4,224.54 ($23.47/procedure) in November 1994 to $1,147.24 ($6.17/procedure, January 1995), $544.42 ($3.58/procedure, May 1995), $99.06 ($0.50/procedure, August 1995), and $30 ($0.12/procedure, March 1996). In November 1994, only 13.6% of all surgical procedures were done with correct use of PAP, compared to 59% in January 1995, 73% in August 1995, 78% in March 1996, 92% in November 1996, and 98% in May 1997. Incorrect PAP use wastes resources, which is a particular problem in developing countries. Our program is simple and can be implemented without the use of computers, and it is now being adopted in other hospitals in our region. We credit the success of our program to the commitment of all participants and to the strong support of the hospital directors.

  20. An FPGA-based High Speed Parallel Signal Processing System for Adaptive Optics Testbed

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, Y.; Yang, Y.

    In this paper a state-of-the-art FPGA (Field Programmable Gate Array) based high speed parallel signal processing system (SPS) for an adaptive optics (AO) testbed with 1 kHz wavefront error (WFE) correction frequency is reported. The AO system consists of a Shack-Hartmann sensor (SHS), a deformable mirror (DM), a tip-tilt sensor (TTS), a tip-tilt mirror (TTM), and an FPGA-based high performance SPS to correct wavefront aberrations. The SHS is composed of 400 subapertures and the DM of 277 actuators in Fried geometry, requiring an SPS with high-speed parallel computing capability. In this study, the target WFE correction speed is 1 kHz; therefore, massive parallel computing capability is required, along with strict hard real-time constraints on measurements from sensors, matrix computation latency for correction algorithms, and output of control signals to actuators. To meet these requirements, an FPGA-based real-time SPS with parallel computing capabilities is proposed. In particular, the SPS is made up of a National Instruments (NI) real-time computer and five FPGA boards based on the state-of-the-art Xilinx Kintex-7 FPGA. Programming is done in NI's LabVIEW environment, providing flexibility when applying different algorithms for WFE correction. It also offers a faster programming and debugging environment compared to conventional ones. One of the five FPGAs is assigned to read the TTS and calculate control signals for the TTM, while the remaining four are used to receive the SHS signal, calculate slopes for each subaperture, and compute correction signals for the DM. With these parallel processing capabilities of the SPS, the overall closed-loop WFE correction speed of 1 kHz has been achieved. System requirements, architecture, and implementation issues are described; furthermore, experimental results are also given.
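
    The real-time core of such an AO loop reduces to a matrix-vector multiply per frame. The sketch below uses the paper's 400-subaperture / 277-actuator dimensions with a random stand-in for the measured interaction matrix; it illustrates the standard integrator structure of a wavefront-correction loop, not the reported FPGA code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sub, n_act = 400, 277            # subapertures and actuators, per the paper
n_slopes = 2 * n_sub               # x and y slope per subaperture

# Interaction (poke) matrix: slope response to a unit poke of each actuator.
# Random here, standing in for a measured calibration.
interaction = rng.standard_normal((n_slopes, n_act))

# Command (reconstruction) matrix via regularized pseudo-inverse,
# precomputed once so the real-time loop is a single matrix multiply.
reconstructor = np.linalg.pinv(interaction, rcond=1e-3)

def ao_step(slopes, dm_command, gain=0.5):
    """One closed-loop iteration: integrate the reconstructed error."""
    return dm_command - gain * (reconstructor @ slopes)

# Simulate a static aberration expressed in actuator space.
true_aberration = rng.standard_normal(n_act)
dm = np.zeros(n_act)
for _ in range(20):
    residual = true_aberration + dm      # what the DM fails to cancel
    slopes = interaction @ residual      # what the SHS would measure
    dm = ao_step(slopes, dm)

rms_residual = np.sqrt(np.mean((true_aberration + dm) ** 2))
```

    With loop gain 0.5 the controllable residual shrinks by half each frame, which is why the hard constraint in the paper is the per-frame latency of the slope computation and the matrix multiply.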

  1. Method for correction of measured polarization angles from motional Stark effect spectroscopy for the effects of electric fields

    DOE PAGES

    Luce, T. C.; Petty, C. C.; Meyer, W. H.; ...

    2016-11-02

    An approximate method to correct the motional Stark effect (MSE) spectroscopy for the effects of intrinsic plasma electric fields has been developed. The motivation for using an approximate method is to incorporate electric field effects for between-pulse or real-time analysis of the current density or safety factor profile. The toroidal velocity term in the momentum balance equation is normally the dominant contribution to the electric field orthogonal to the flux surface over most of the plasma. When this approximation is valid, the correction to the MSE data can be included in a form like that used when electric field effects are neglected. This allows measurements of the toroidal velocity to be integrated into the interpretation of the MSE polarization angles without changing how the data is treated in existing codes. In some cases, such as the DIII-D system, the correction is especially simple, due to the details of the neutral beam and MSE viewing geometry. The correction method is compared using DIII-D data in a variety of plasma conditions to analysis that assumes no radial electric field is present and to analysis that uses the standard correction method, which involves significant human intervention for profile fitting. The comparison shows that the new correction method is close to the standard one, and in all cases appears to offer a better result than use of the uncorrected data. Lastly, the method has been integrated into the standard DIII-D equilibrium reconstruction code in use for analysis between plasma pulses and is sufficiently fast that it will be implemented in real-time equilibrium analysis for control applications.

  2. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Burkett, George, Jr.; Boone, John M.

    2014-11-01

    The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using a BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without the BPA. To quantitatively evaluate the improved accuracy of HU values, different breast-tissue-equivalent phantoms were scanned, and radially averaged HU profiles through reconstructed planes were evaluated. The dependence of the correction method on object size and number of projections was studied. A simplified application of the proposed method to five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast-equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
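
    Once the SPR is known from the BPA measurements, the correction itself is a simple per-pixel identity: measured = primary · (1 + SPR), so dividing by (1 + SPR) recovers the primary. A minimal sketch with a synthetic SPR profile (not the paper's measured model):

```python
import numpy as np

def correct_scatter(measured, spr):
    """Remove scatter from a projection given its scatter-to-primary ratio.

    measured = primary + scatter = primary * (1 + SPR), so the primary
    signal is recovered by dividing by (1 + SPR).
    """
    return measured / (1.0 + spr)

# Demo: a flat primary plus a broad, low-frequency scatter field, the
# kind that produces a cupping artifact after log conversion.
x = np.linspace(-1.0, 1.0, 256)
primary = np.full_like(x, 100.0)
spr = 0.6 * np.exp(-x**2)              # scatter peaks at the center
measured = primary * (1.0 + spr)

recovered = correct_scatter(measured, spr)
max_err = np.max(np.abs(recovered - primary))
```

    The practical content of the paper is in estimating the SPR map from only six BPA projections; the division step itself is this cheap.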

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, A.; Edstrom, D.; Emanov, F. A.

    Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of kicked beam. All of the described features were implemented in the Sixdsimulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.

  4. Generic distortion model for metrology under optical microscopes

    NASA Astrophysics Data System (ADS)

    Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng

    2018-04-01

    For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on parametric distortion models, which require a priori knowledge of the microscope's lens system. However, because of the numerous optical elements in a microscope, distortion can hardly be represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. The model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of the symmetric distortion (image coordinates and distortion rays for the asymmetric distortion). An accurate and easy-to-implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, for example in digital image correlation for deformation measurement under an optical microscope. The proposed technique is verified by experiments on both synthetic and real data.
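
    The RBF interpolation idea for the symmetric (radial) part can be sketched as follows, assuming Gaussian basis functions and a synthetic cubic distortion profile; the shape parameter and node count are illustrative, not taken from the paper.

```python
import numpy as np

def fit_rbf(r_nodes, values, eps=8.0):
    """Solve for Gaussian RBF weights that interpolate the sampled values."""
    phi = np.exp(-(eps * (r_nodes[:, None] - r_nodes[None, :])) ** 2)
    return np.linalg.solve(phi, values)

def eval_rbf(r, r_nodes, weights, eps=8.0):
    phi = np.exp(-(eps * (r[:, None] - r_nodes[None, :])) ** 2)
    return phi @ weights

# Calibration: symmetric distortion sampled at a few normalized radii
# (here a synthetic cubic radial distortion, the classic symmetric term).
r_cal = np.linspace(0.0, 1.0, 12)
d_cal = 0.05 * r_cal**3
w = fit_rbf(r_cal, d_cal)

# The interpolant reproduces the calibration data exactly...
node_err = np.max(np.abs(eval_rbf(r_cal, r_cal, w) - d_cal))
# ...and predicts the distortion at arbitrary radii, which is then
# subtracted from each image point along its radial direction.
r_test = np.linspace(0.1, 0.9, 41)
max_err = np.max(np.abs(eval_rbf(r_test, r_cal, w) - 0.05 * r_test**3))
```

    No functional form is assumed: the same fit-and-evaluate machinery handles any smooth distortion profile the calibration samples, which is the point of the generic model.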

  5. Development of a network RTK positioning and gravity-surveying application with gravity correction using a smartphone.

    PubMed

    Kim, Jinsoo; Lee, Youngcheol; Cha, Sungyeoul; Choi, Chuluong; Lee, Seongkyu

    2013-07-12

    This paper proposes a smartphone-based network real-time kinematic (RTK) positioning and gravity-surveying application (app) that allows semi-real-time measurements using the built-in Bluetooth features of the smartphone and a third-generation or long-term-evolution wireless device. The app was implemented on a single smartphone by integrating a global navigation satellite system (GNSS) controller, a laptop, and a field-note writing tool. The observation devices (i.e., a GNSS receiver and a relative gravimeter) functioned independently of this system. The app included a gravity module, which converted the measured relative gravity reading into an absolute gravity value, applying corrections for tides, meter height, instrument drift, and network adjustments. The semi-real-time features of this app allowed data to be shared easily with other researchers. Moreover, the proposed smartphone-based gravity-survey app was easily adaptable to various locations and rough terrain due to its compact size.
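
    The relative-to-absolute conversion amounts to tying each reading to a base station with a known absolute value and subtracting drift and tide terms. A minimal sketch with hypothetical values in mGal (the app's actual correction formulas are not given in the abstract):

```python
from datetime import datetime, timedelta

def to_absolute(reading_mgal, t, base_reading_mgal, base_absolute_mgal,
                base_time, drift_mgal_per_day, tide_mgal):
    """Convert a relative gravimeter reading to an absolute value by tying
    it to a base station and removing instrument drift and the tide."""
    days = (t - base_time).total_seconds() / 86400.0
    drift = drift_mgal_per_day * days
    return base_absolute_mgal + (reading_mgal - base_reading_mgal) - drift - tide_mgal

# Hypothetical survey point observed 6 h after the base-station tie.
base_t = datetime(2013, 7, 12, 8, 0)
g = to_absolute(4321.80, base_t + timedelta(hours=6), 4321.50,
                979948.00, base_t, drift_mgal_per_day=0.02, tide_mgal=0.05)
```

    Meter-height reduction and network adjustment would add further terms of the same per-reading form; they are omitted here.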

  6. A neural network for the identification of measured helicopter noise

    NASA Technical Reports Server (NTRS)

    Cabell, R. H.; Fuller, C. R.; O'Brien, W. F.

    1991-01-01

    The results of a preliminary study of the components of a novel acoustic helicopter identification system are described. The identification system uses the relationship between the amplitudes of the first eight harmonics in the main rotor noise spectrum to distinguish between helicopter types. Two classification algorithms are tested: a statistically optimal Bayes classifier and a neural network adaptive classifier. The performance of these classifiers is tested using measured noise from three helicopters. The statistical classifier correctly identifies the helicopter an average of 67 percent of the time, while the neural network is correct an average of 65 percent of the time. These results indicate the need for additional study of the envelope of harmonic amplitudes as a component of a helicopter identification system. Issues concerning the implementation of the neural network classifier, such as training time and structure of the network, are discussed.
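
    A Bayes classifier over harmonic-amplitude envelopes can be sketched as below. The data are synthetic stand-ins for three helicopter types, not the paper's measurements; under equal-covariance Gaussian classes and equal priors, the optimal rule reduces to nearest class mean.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each type has a characteristic mean envelope over the first eight
# main-rotor harmonics (in dB), observed with measurement scatter.
n_harm, n_train, n_test = 8, 200, 100
type_means = np.array([
    np.linspace(0.0, -21.0, n_harm),      # steep harmonic roll-off
    np.linspace(0.0, -10.5, n_harm),      # shallow roll-off
    [0.0, -2, -9, -4, -12, -8, -16, -12]  # uneven envelope
])
sigma = 3.0  # per-harmonic scatter, dB

def sample(mean, n):
    return mean + rng.normal(0.0, sigma, (n, n_harm))

train = [sample(m, n_train) for m in type_means]
test_x = np.vstack([sample(m, n_test) for m in type_means])
test_y = np.repeat(np.arange(3), n_test)

# Bayes rule for isotropic Gaussian classes with equal priors:
# assign to the class whose training mean is nearest in Euclidean
# distance (Mahalanobis distance reduces to Euclidean here).
means = np.array([t.mean(axis=0) for t in train])
dists = ((test_x[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == test_y).mean()
```

    The paper's ~67% accuracy on real data suggests the real class envelopes overlap far more than this toy separation; the code only shows the decision rule's structure.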

  7. A Synchronous Multi-Body Sensor Platform in a Wireless Body Sensor Network: Design and Implementation

    PubMed Central

    Gil, Yeongjoon; Wu, Wanqing; Lee, Jungtae

    2012-01-01

    Background: Human life can be further improved if diseases and disorders can be predicted before they become dangerous by correctly recognizing signals from the human body; to make disease detection more precise, various body signals need to be measured simultaneously in a synchronized manner. Objective: This research aims at developing an integrated system for measuring four signals (EEG, ECG, respiration, and PPG) and simultaneously producing synchronous signals on a Wireless Body Sensor Network. Design: We designed and implemented a platform for multiple bio-signals using Bluetooth communication. Results: First, we developed a prototype board and verified the signals from the sensor platform using frequency responses and quantities. Next, we designed and implemented a lightweight, ultra-compact, low-cost, low-power-consumption Printed Circuit Board. Conclusion: A synchronous multi-body sensor platform is expected to be very useful in telemedicine and emergency rescue scenarios. Furthermore, this system is expected to be able to analyze the mutual effects among body signals. PMID:23112605

  8. Dead time corrections using the backward extrapolation method

    NASA Astrophysics Data System (ADS)

    Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.

    2017-05-01

    Dead time losses in neutron detection, caused by both detector and electronics dead time, are a highly nonlinear effect, known to create high biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled counts per second (CPS), based on backward extrapolation of the losses, created by increasingly large artificially imposed dead times on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero-power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
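
    The backward-extrapolation idea can be illustrated on simulated Poisson events: impose a series of artificial non-paralyzing dead times, record the surviving count rate, and extrapolate the rate back to zero imposed dead time. This sketch omits the intrinsic system dead time treated in the paper; rates, durations, and the polynomial order are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_dead_time(times, tau):
    """Non-paralyzing dead time: drop events within tau of the last kept one."""
    kept_last = -np.inf
    kept = 0
    for t in times:
        if t - kept_last >= tau:
            kept += 1
            kept_last = t
    return kept

# Simulated Poisson events: true rate 1000 cps over 100 s.
true_rate, duration = 1000.0, 100.0
n_events = rng.poisson(true_rate * duration)
times = np.sort(rng.uniform(0.0, duration, n_events))

# Impose increasingly large artificial dead times and record the rate.
taus = np.array([0.5e-4, 1.0e-4, 1.5e-4, 2.0e-4, 2.5e-4])
rates = np.array([apply_dead_time(times, tau) / duration for tau in taus])

# Backward extrapolation: fit the measured rate as a polynomial in tau
# and evaluate at tau = 0 to estimate the dead-time-free count rate.
coeffs = np.polyfit(taus, rates, 2)
corrected_rate = np.polyval(coeffs, 0.0)
```

    Because all the thinned rates come from the same event record, their fluctuations are strongly correlated, which keeps the extrapolation stable.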

  9. Rejuvenation of a ten-year old AO curvature sensor: combining obsolescence correction and performance upgrade of MACAO

    NASA Astrophysics Data System (ADS)

    Haguenauer, P.; Fedrigo, E.; Pettazzi, L.; Reinero, C.; Gonte, F.; Pallanca, L.; Frahm, R.; Woillez, J.; Lilley, P.

    2016-07-01

    The MACAO curvature wavefront sensors were designed as a generic adaptive optics sensor for the Very Large Telescope. Six systems have been manufactured and implemented on sky: four installed in the UTs' Coudé trains as an AO facility for the VLTI, and two in UT instruments, SINFONI and CRIRES. The MACAO-VLTI systems have now been in scientific operation for more than a decade and are planned to be operated for at least ten more years. As second-generation instruments for the VLTI were planned to begin installation at the end of 2015, accompanied by a major upgrade of the VLTI infrastructure, we saw it as a good time for a rejuvenation project correcting the obsolete components of these systems. This obsolescence correction also gave us the opportunity to implement improved capabilities: the correction frequency was pushed from 420 Hz to 1050 Hz, and an automatic vibration compensation algorithm was added. The implementation on the first MACAO was done in October 2014, and the first phase of obsolescence correction was completed on all four MACAO-VLTI systems in October 2015, with the systems delivered back to operation. The resumption of scientific operation of the VLTI on the UTs in November 2015 allowed us to gather statistics to evaluate the performance improvement from this upgrade. A second phase of obsolescence correction has now been started, together with a global reflection on possible further improvements to secure observations with the VLTI.

  10. Whole Building Efficiency for Whole Foods: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deru, M.; Doebber, I.; Hirsch, A.

    2013-02-01

    The National Renewable Energy Laboratory partnered with Whole Foods Market under the Commercial Building Partnership (CBP) program to design and implement a new store in Raleigh, North Carolina. The result was a design with a predicted energy savings of 40% over ASHRAE Standard 90.1-2004, and 25% energy savings over their standard design. Measured performance of the as-built building showed that the building did not achieve the predicted performance. A detailed review of the project several months after opening revealed a series of construction and controls items that were not implemented properly and were not fully corrected in the commissioning process.

  11. Lean Manufacturing and the Infantry: Retaining Quality during Total Mobilization

    DTIC Science & Technology

    The United States' last decisive war against a peer threat, World War II, required significant manpower resources. Prioritizing quality manpower ...distribution to technical jobs resulted in the degradation of the infantry, which then required the Army to implement corrective measures to reverse the...deficiency in quality. Should another decisive war occur in the future, the lethality and speed of modern warfare will increase the demand for manpower, which

  12. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time use. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters results in a non-white sequence of filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of the process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyros.
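
    A scalar version of residual tuning runs alongside the filter: the sample variance of recent innovations estimates C = H P⁻ H' + R, so subtracting the filter's predicted part yields a running estimate of the measurement noise. This is a generic innovation-based sketch, not the specific Jazwinski formulation; the random-walk model and window length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar random walk observed directly: x_k = x_{k-1} + w,  z_k = x_k + v.
q_true, r_true = 0.01, 4.0
n = 2000
x = np.cumsum(rng.normal(0.0, np.sqrt(q_true), n))
z = x + rng.normal(0.0, np.sqrt(r_true), n)

q, r = 0.01, 1.0          # q assumed known; r deliberately wrong
xhat, p = 0.0, 10.0
window = []               # recent squared innovations
r_history = []

for zk in z:
    p_pred = p + q                      # prediction (H = 1)
    nu = zk - xhat                      # innovation
    window.append(nu * nu)
    if len(window) > 200:
        window.pop(0)
    # Residual tuning: E[nu^2] = P- + R, so an estimate of R is the
    # sample innovation variance minus the predicted state variance.
    if len(window) == 200:
        r = max(np.mean(window) - p_pred, 1e-6)
    r_history.append(r)
    k = p_pred / (p_pred + r)           # measurement update
    xhat = xhat + k * nu
    p = (1.0 - k) * p_pred

r_final = np.mean(r_history[-500:])     # converges toward r_true = 4
```

    The update runs in parallel with the filter, as the abstract describes, and costs only a running variance per step.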

  13. Performance of the NIST goniocolorimeter with a broad-band source and multichannel charged coupled device based spectrometer.

    PubMed

    Podobedov, V B; Miller, C C; Nadal, M E

    2012-09-01

    The authors describe the NIST high-efficiency instrument for measurements of bidirectional reflectance distribution function of colored materials, including gonioapparent materials such as metallic and pearlescent coatings. The five-axis goniospectrometer measures the spectral reflectance of samples over a wide range of illumination and viewing angles. The implementation of a broad-band source and a multichannel CCD spectrometer corrected for stray light significantly increased the efficiency of the goniometer. In the extended range of 380 nm to 1050 nm, a reduction of measurement time from a few hours to a few minutes was obtained. Shorter measurement time reduces the load on the precise mechanical assembly ensuring high angular accuracy over time. We describe the application of matrix-based correction of stray light and the extension of effective dynamic range of measured fluxes to the values of 10^6 to 10^7 needed for the absolute characterization of samples. The measurement uncertainty was determined to be 0.7% (k = 2), which is comparable with similar instruments operating in a single channel configuration. Several examples of reflectance data obtained with the improved instrument indicate a 0.3% agreement compared to data collected with the single channel configuration.
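
    Matrix-based stray-light correction treats the measured spectrum as the true spectrum contaminated by a known redistribution operator, measured = (I + D) · true, and inverts that operator once. A small synthetic sketch (the matrix D here is illustrative, not NIST's characterized one):

```python
import numpy as np

n = 64                                  # spectrometer pixels (illustrative)
# Stray-light distribution matrix D: small off-diagonal contamination,
# here a smooth broadband redistribution at the 1e-3 level.
i, j = np.indices((n, n))
D = 1e-3 * np.exp(-np.abs(i - j) / 20.0)
np.fill_diagonal(D, 0.0)

A = np.eye(n) + D                       # measured = A @ true

true_spectrum = np.exp(-((np.arange(n) - 40.0) / 5.0) ** 2)  # narrow line
measured = A @ true_spectrum

# Correction: characterize A once, then solve for the true spectrum
# for every subsequent measurement.
corrected = np.linalg.solve(A, measured)
max_err = np.max(np.abs(corrected - true_spectrum))
```

    In practice D is built from measured line-spread functions at many excitation wavelengths; the per-measurement cost is then a single solve (or a precomputed inverse applied as a matrix multiply).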

  14. 40 CFR 257.28 - Implementation of the corrective action program.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-Municipal Non-Hazardous Waste Disposal Units Ground-Water Monitoring and Corrective Action § 257.28... corrective action ground-water monitoring program that: (i) At a minimum, meets the requirements of an assessment monitoring program under § 257.25; (ii) Indicates the effectiveness of the corrective action...

  15. 40 CFR 257.28 - Implementation of the corrective action program.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-Municipal Non-Hazardous Waste Disposal Units Ground-Water Monitoring and Corrective Action § 257.28... corrective action ground-water monitoring program that: (i) At a minimum, meets the requirements of an assessment monitoring program under § 257.25; (ii) Indicates the effectiveness of the corrective action...

  16. 40 CFR 257.28 - Implementation of the corrective action program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-Municipal Non-Hazardous Waste Disposal Units Ground-Water Monitoring and Corrective Action § 257.28... corrective action ground-water monitoring program that: (i) At a minimum, meets the requirements of an assessment monitoring program under § 257.25; (ii) Indicates the effectiveness of the corrective action...

  17. Model Development for MODIS Thermal Band Electronic Crosstalk

    NASA Technical Reports Server (NTRS)

    Chang, Tiejun; Wu, Aisheng; Geng, Xu; Li, Yonghonh; Brinkman, Jake; Keller, Graziela; Xiong, Xiaoxiong

    2016-01-01

    MODerate-resolution Imaging Spectroradiometer (MODIS) has 36 bands, among them 16 thermal emissive bands covering a wavelength range from 3.8 to 14.4 μm. After 16 years of on-orbit operation, the electronic crosstalk of a few Terra MODIS thermal emissive bands developed substantial issues that cause biases in the EV brightness temperature measurements and surface feature contamination. The crosstalk effects on band 27, with center wavelength at 6.7 μm, and band 29, at 8.5 μm, increased significantly in recent years, affecting downstream products such as water vapor and cloud mask. The crosstalk effect is evident in the near-monthly scheduled lunar measurements, from which the crosstalk coefficients can be derived. The development of an alternative approach is very helpful for independent verification. In this work, a physical model was developed to assess the crosstalk impact on calibration as well as on Earth-view brightness temperature retrieval. This model was applied to Terra MODIS band 29 empirically to correct the Earth brightness temperature measurements. In the model development, the detector's nonlinear response is considered. The impact of the electronic crosstalk is assessed in two steps. The first step determines the impact on calibration using the on-board blackbody (BB). Due to the detector's nonlinear response and large background signal, both linear and nonlinear coefficients are affected by the crosstalk from sending bands. The second step calculates the effects on the Earth-view brightness temperature retrieval. The effects include those from affected calibration coefficients and the contamination of Earth-view measurements. This model links the measurement bias with crosstalk coefficients, detector nonlinearity, and the ratio of Earth measurements between the sending and receiving bands. The correction of the electronic crosstalk can be implemented empirically from the processed bias at different brightness temperatures.
The implementation can be done through two approaches. As part of routine calibration assessment for the thermal infrared bands, trending over selected Earth scenes is processed for all detectors in a band, and the band-averaged bias is derived at a given time. In this case, the correction of an affected band can be made using the regression of the model with the band-averaged bias, and corrections for detector differences are then applied. The second approach requires trending for individual detectors, and the bias for each detector is used for regression with the model. A test using the first approach was made for Terra MODIS band 29 with biases derived from long-term trending of brightness temperature over ocean and Dome-C.
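
    The sending-band subtraction at the heart of such a crosstalk correction can be sketched as follows. The crosstalk coefficient, the quadratic response, and the count ranges are all illustrative stand-ins, not MODIS calibration values; the point is that with a nonlinear response the bias depends on the scene, which is why the paper regresses bias against brightness temperature.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative quadratic radiometric response: L = b1*dn + b2*dn^2
b1, b2 = 0.01, 1e-7
def radiance_from_dn(dn):
    return b1 * dn + b2 * dn**2

# A scan line of true digital counts for the sending and receiving bands.
dn_send = rng.uniform(2000.0, 8000.0, 100)
dn_recv = rng.uniform(1000.0, 5000.0, 100)

c = 0.004                              # electronic crosstalk coefficient
dn_recv_meas = dn_recv + c * dn_send   # contaminated measurement

# Bias in retrieved radiance before correction (scene dependent).
bias = radiance_from_dn(dn_recv_meas) - radiance_from_dn(dn_recv)
max_bias = np.max(np.abs(bias))

# Correction: subtract the sending-band contribution before calibration.
dn_corrected = dn_recv_meas - c * dn_send
residual = radiance_from_dn(dn_corrected) - radiance_from_dn(dn_recv)
max_residual = np.max(np.abs(residual))
```

    In the real system the same contamination also enters the blackbody calibration, so both the calibration coefficients and the Earth-view counts need the correction, as the two-step assessment in the abstract describes.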

  18. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    PubMed Central

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model, two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor fit by up to 23%, in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599
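
    Structurally, the extended tensor model augments the ordinary log-linear tensor fit with peripheral regressors. A synthetic one-voxel sketch (a sinusoid stands in for the cardiac regressor; the b-value, tensor, and noise levels are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Acquisition: 6 non-diffusion-weighted (b=0) volumes plus 60 directions.
n_b0, n_dir = 6, 60
g = rng.standard_normal((n_dir, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)
b = 1000.0  # s/mm^2

def tensor_cols(g):
    """Design columns for Dxx, Dyy, Dzz, Dxy, Dxz, Dyz."""
    x, y, z = g.T
    return np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z])

quad = np.vstack([np.zeros((n_b0, 6)), tensor_cols(g)])
n_vol = n_b0 + n_dir

# Ground truth: anisotropic tensor, S0 = 1000.
d_true = np.array([1.7e-3, 0.3e-3, 0.3e-3, 0.0, 0.0, 0.0])
log_signal = np.log(1000.0) - b * quad @ d_true

# Physiological confound recorded by a peripheral pulse unit
# (a sinusoid here, purely illustrative), plus thermal noise.
cardiac = np.sin(np.linspace(0.0, 9 * np.pi, n_vol))
y = log_signal + 0.05 * cardiac + rng.normal(0.0, 0.005, n_vol)

X_plain = np.column_stack([np.ones(n_vol), -b * quad])   # ordinary tensor fit
X_ext = np.column_stack([X_plain, cardiac])              # extended tensor model

def fit_rss(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ coef) ** 2), coef

rss_plain, _ = fit_rss(X_plain, y)
rss_ext, coef_ext = fit_rss(X_ext, y)
dxx_ext = coef_ext[1]        # recovered Dxx from the extended model
```

    The extra column absorbs the cardiac-locked variance that the plain model leaves in its residuals, which is the mechanism behind the reported noise reduction in the tensor fit.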

  19. The Assessment of Atmospheric Correction Processors for MERIS Based on In-Situ Measurements-Updates in OC-CCI Round Robin

    NASA Astrophysics Data System (ADS)

    Müller, Dagmar; Krasemann, Hajo; Zühlke, Marco; Doerffer, Roland; Brockmann, Carsten; Steinmetz, Francois; Valente, Andre; Brotas, Vanda; Grant, Michael G.; Sathyendranath, Shubha; Mélin, Frederic; Franz, Bryan A.; Mazeran, Constant; Regner, Peter

    2016-08-01

    The Ocean Colour Climate Change Initiative (OC-CCI) provides a long-term time series of ocean colour data and investigates the detectable climate impact. A reliable and stable atmospheric correction (AC) procedure is the basis for ocean colour products of the necessary high quality. The selection of atmospheric correction processors is repeated regularly, based on a round-robin exercise, at the latest when a revised production and release of the OC-CCI merged product is scheduled. Most of the AC processors are under constant development, and changes are implemented to improve the quality of satellite-derived retrievals of remote sensing reflectances. The changes between versions of the inter-comparison are not restricted to the implementation of AC processors: there are activities to improve the quality flagging for some processors, and the sensor-specific behaviour of system vicarious calibration for the AC algorithms is widely studied. Each inter-comparison starts with an updated in-situ database, as more spectra are included in order to broaden the temporal and spatial range of satellite match-ups. While the OC-CCI's focus has lain on case-1 waters in the past, it has now expanded to the retrieval of case-2 products. In light of this goal, new bidirectional correction procedures (normalisation) for the remote sensing spectra have been introduced. As in-situ measurements are not always available at the satellite sensors' specific central wavelengths, a band-shift algorithm has to be applied to the dataset. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparing in-situ and satellite-derived water-leaving reflectance spectra is aided by a ranking system. In principle, the statistical parameters are transformed into relative scores, which rank the quality of the algorithms under study.
The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results.A comparison of round robin results for the OC-CCI version 2 and the current version 3 is presented and some major changes are highlighted.

  20. [Evaluation of the implementation of the FAPACAN programme to prevent cancer behavioral risk in primary care users in the North of Spain].

    PubMed

    López González, M Luisa; Fernández Carreira, Jose Manuel; López González, Santiago; del Olivo del Valle Gómez, M; García Casas, Juan Bautista; Cueto Espinar, Antonio

    2003-01-01

    The evaluation of the process is an essential condition to correctly measure the impact of educational interventions on behaviour, its psychosocial determinants, and the stage of change, in the context of health promotion. The aim was to evaluate the quality of the implementation of the FAPACAN Programme, designed to prevent behavioural risk of cancer in Primary Care and to improve its psychosocial determinants in the A.S.E. Model and the stage of change according to the theory of Prochaska and DiClemente. The quality of implementation was measured by means of a visit to the health centre, filling in a checklist 'in situ', and a phone survey with the patient. Measures of central tendency and association were computed (Pearson and Spearman coefficients). A multiple regression model was obtained with the score given by the patient (range 0 to 8) and the covariables gender, age, level of education, locality, and family history of cancer. The quality scores obtained oscillate between 72% and 81% of optimum quality. Significant differences were found owing to the administrator (better with fewer years of practice) and the patient (better with a higher level of education). In general, the quality of implementation was more than sufficient, in spite of the poor provision by the health system.

  1. Closed-Loop Analysis of Soft Decisions for Serial Links

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.
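The contrast between the single hard-decision BER statistic and the richer soft-decision view can be illustrated on a synthetic BPSK link. This is a toy sketch, not NASA's Soft Decision Analyzer; the noise level and the soft statistics reported are invented for illustration.

```python
# Toy sketch: on the same received samples, compare the hard-decision BER
# statistic with soft-decision statistics (magnitude and spread of the
# decision variable), which carry information BER alone discards.
import numpy as np

rng = np.random.default_rng(1)

def simulate_bpsk(n_bits, noise_std):
    """Synthetic BPSK link: bit 0 -> +1, bit 1 -> -1, plus Gaussian noise."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits
    return bits, symbols + rng.normal(0, noise_std, n_bits)

def hard_ber(bits, soft):
    """Traditional hard-decision bit error rate."""
    decisions = (soft < 0).astype(int)
    return float(np.mean(decisions != bits))

def soft_stats(soft):
    """Soft-decision view: mean magnitude and spread of the decision variable."""
    return {"mean_abs": float(np.mean(np.abs(soft))),
            "std": float(np.std(soft))}

bits, soft = simulate_bpsk(100_000, noise_std=0.5)
ber = hard_ber(bits, soft)
stats = soft_stats(soft)
```

Two links with identical BER can show very different soft-decision distributions, which is why real-time soft-decision analysis can localize implementation loss that a BER test cannot.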

  2. Soft Decision Analyzer

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Steele, Glen; Zucha, Joan; Schlesinger, Adam

    2013-01-01

    We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.

  3. Verification of the Usefulness of the Trimble Rtx Extended Satellite Technology with the Xfill Function in the Local Network Implementing Rtk Measurements

    NASA Astrophysics Data System (ADS)

    Siejka, Zbigniew

    2014-12-01

    The paper presents a method of satellite measurement that gives users the ability of continuous precise GNSS positioning in real time, even in the case of short interruptions in receiving the corrections of the local ground measurement support system. The proposed method is a combination of two satellite positioning technologies: RTN GNSS and RTX Extended. In RTX Extended technology, the xFill function was used for precise positioning in real time and in the local reference system. This function provides the ability to perform measurement without the need for constant communication with the ground satellite support system. Test measurements were performed on a test network located in Krakow, and RTN GNSS positioning was done based on the national network of reference stations of ASG-EUPOS. The solution allows for short (up to 5 minutes) interruptions in radio or internet communication. When the primary stream of RTN corrections is not available, the global Trimble xFill corrections broadcast by satellite are used. The new technology uses real-time data from the global network of tracking stations and contributes significantly to improving the quality and efficiency of surveying works. At present, according to the authors, Trimble CenterPoint RTX technology can guarantee repeatability of measurements no worse than 3.8 cm (Trimble Survey Division, 2012). In the paper, a comparative analysis of measurement results was performed between the two technologies: RTN carried out in the classic way, based on corrections from the terrestrial local network of the Polish active geodetic network system (ASG-EUPOS), and RTK xFill technology. The results were related to the data of the test network, established as error free. The research gave satisfactory results and confirmed the great potential of the new technology in geodetic work. 
By combining these two GNSS surveying technologies, the user can greatly improve the overall performance of real-time positioning.

  4. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina

    PubMed Central

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S.; Bouma, Brett E.; Vakoc, Benjamin J.

    2018-01-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye-tracking scanning laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented. PMID:29552388

  5. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina.

    PubMed

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S; Bouma, Brett E; Vakoc, Benjamin J

    2018-02-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye-tracking scanning laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented.
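The normalization idea can be sketched on synthetic OCT frame data. The estimator below is a simplified frame-difference variant of CDV, and the bounds passed to the normalizer are illustrative placeholders, not the analytically derived limits of the paper.

```python
# Hedged sketch: a simplified complex-differential-variance (CDV) estimator on
# repeated complex A-scans, plus normalisation to [0, 1] by lower/upper limits
# so static tissue maps near 0 and decorrelated flow near 1. The limits here
# are supplied by hand for illustration, not derived analytically.
import numpy as np

def cdv(a_frames):
    """Simplified CDV across repeated complex frames (axis 0 = repeats)."""
    d = np.diff(a_frames, axis=0)                              # frame differences
    num = np.sum(np.abs(d) ** 2, axis=0)
    den = np.sum(np.abs(a_frames[:-1]) ** 2 + np.abs(a_frames[1:]) ** 2, axis=0)
    return num / (den + 1e-12)

def normalize_cdv(raw, lower, upper):
    """Map raw CDV onto [0, 1] given lower (static) and upper (noise) limits."""
    return np.clip((raw - lower) / (upper - lower + 1e-12), 0.0, 1.0)

rng = np.random.default_rng(2)
# Static tissue: a stable complex reflectivity plus a little measurement noise.
base = rng.normal(5, 0.1, (1, 64)) + 1j * rng.normal(5, 0.1, (1, 64))
static = np.repeat(base, 8, axis=0)
static = static + 0.05 * (rng.normal(size=(8, 64)) + 1j * rng.normal(size=(8, 64)))
# Flow: fully decorrelated complex samples between repeats.
flow = rng.normal(0, 3, (8, 64)) + 1j * rng.normal(0, 3, (8, 64))

cdv_static, cdv_flow = cdv(static), cdv(flow)
normalized_flow = normalize_cdv(cdv_flow, 0.0, 1.0)
```

The static region yields CDV near zero while decorrelated flow yields values near one, which is the contrast the normalization is designed to preserve while suppressing the noise floor.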

  6. A robust in-situ warp-correction algorithm for VISAR streak camera data at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-02-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
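The TPS warp-correction step can be sketched with a thin-plate-spline mapping fitted from detected fiducial positions (distorted) to their known ideal positions, then applied to arbitrary image coordinates. The comb detection and the synthetic distortion below are illustrative assumptions, not the NIF pipeline.

```python
# Sketch of thin-plate-spline (TPS) warp correction: fit a smooth mapping from
# distorted fiducial coordinates back to the ideal grid, then evaluate it at
# any coordinate. The distortion model here is synthetic, for illustration.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Ideal comb/fiducial grid and a synthetic nonlinear distortion of it.
gx, gy = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
ideal = np.column_stack([gx.ravel(), gy.ravel()])
distorted = ideal + 0.02 * np.sin(2 * np.pi * ideal[:, ::-1])

# TPS warp: distorted coordinates -> ideal coordinates (exact at the nodes).
warp = RBFInterpolator(distorted, ideal, kernel="thin_plate_spline")

corrected = warp(distorted)
residual = np.abs(corrected - ideal).max()
```

With the default smoothing of zero the spline interpolates the fiducials exactly; in production a small smoothing term would typically be added to tolerate noisy comb detections.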

  7. Hybrid architecture for encoded measurement-based quantum computation

    PubMed Central

    Zwerger, M.; Briegel, H. J.; Dür, W.

    2014-01-01

    We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states, where within the considered error model we find a threshold of the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906

  8. Comparing the Effectiveness of Error-Correction Strategies in Discrete Trial Training

    ERIC Educational Resources Information Center

    Turan, Michelle K.; Moroz, Lianne; Croteau, Natalie Paquet

    2012-01-01

    Error-correction strategies are essential considerations for behavior analysts implementing discrete trial training with children with autism. The research literature, however, still lacks studies that compare and evaluate error-correction procedures. The purpose of this study was to compare two error-correction strategies:…

  9. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method: layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching, and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels whose measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
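The core idea, fitting a depth-decay model while down-weighting pixels that deviate from it, can be sketched with a robust least-squares fit. This is a hedged illustration using a soft-L1 loss on a mono-exponential model; the paper's incremental, motion-estimation-style algorithm is not reproduced, and all numbers are synthetic.

```python
# Sketch of robust attenuation compensation: fit I(z) = I0 * exp(-mu * z) with
# a robust (soft-L1) loss so that outlying bright pixels do not bias the decay
# estimate, then rescale intensities by the fitted decay.
import numpy as np
from scipy.optimize import least_squares

def fit_attenuation(depths, intensities):
    """Robustly fit I(z) = I0 * exp(-mu * z); returns (I0, mu)."""
    def residuals(p):
        return p[0] * np.exp(-p[1] * depths) - intensities
    fit = least_squares(residuals, x0=[intensities.max(), 0.01], loss="soft_l1")
    return fit.x

rng = np.random.default_rng(5)
z = np.arange(100.0)                       # section depth index
clean = 200.0 * np.exp(-0.02 * z)          # true decaying mean intensity
noisy = clean + rng.normal(0, 1.0, z.size)
noisy[::17] += 50.0                        # bright outliers a plain fit would chase

i0, mu = fit_attenuation(z, noisy)
compensated = noisy * np.exp(mu * z)       # undo the estimated decay
```

A plain least-squares fit would be pulled upward by the outlier sections; the robust loss bounds their influence, which mirrors the paper's motivation for robust estimation.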

  10. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.
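The ghost-node idea can be reduced to a small sketch: the head at an off-node boundary feature is interpolated from neighbouring cell heads and used to compute the boundary flux. This is an illustrative simplification, not the paper's full formulation; the interpolation weight and all numbers are hypothetical.

```python
# Illustrative sketch of the ghost-node correction (GNC) idea: instead of using
# the cell-centre head at node n, interpolate a "ghost" head at the boundary
# feature's actual location between nodes n and m, and use it in the flux term.
def ghost_node_head(h_n, h_m, alpha):
    """Linear ghost-node interpolation; alpha = fractional offset toward node m."""
    return h_n + alpha * (h_m - h_n)

def boundary_flux(h_n, h_m, alpha, conductance, h_boundary):
    """Flux into the boundary feature using the ghost-node head."""
    return conductance * (ghost_node_head(h_n, h_m, alpha) - h_boundary)

# Head falls linearly from 10 to 9 between the two nodes; a river stage of 9.2
# sits 30% of the way from node n to node m (hypothetical numbers).
q = boundary_flux(10.0, 9.0, 0.3, conductance=5.0, h_boundary=9.2)
```

In a symmetric (Picard-style) implementation the interpolated term would be moved to the right-hand side vector, as the abstract describes; in an implicit implementation it would enter the left-hand side matrix asymmetrically.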

  11. Integrated Community Case Management of Childhood Illness in Ethiopia: Implementation Strength and Quality of Care

    PubMed Central

    Miller, Nathan P.; Amouzou, Agbessi; Tafesse, Mengistu; Hazel, Elizabeth; Legesse, Hailemariam; Degefie, Tedbabe; Victora, Cesar G.; Black, Robert E.; Bryce, Jennifer

    2014-01-01

    Ethiopia has scaled up integrated community case management of childhood illness (iCCM) in most regions. We assessed the strength of iCCM implementation and the quality of care provided by health extension workers (HEWs). Data collectors observed HEWs' consultations with sick children and carried out gold standard re-examinations. Nearly all HEWs received training and supervision, and essential commodities were available. HEWs provided correct case management for 64% of children. The proportions of children correctly managed for pneumonia, diarrhea, and malnutrition were 72%, 79%, and 59%, respectively. Only 34% of children with severe illness were correctly managed. Health posts saw an average of 16 sick children in the previous month. These results show that iCCM can be implemented at scale and that community-based HEWs can correctly manage multiple illnesses. However, to increase the chances of impact on child mortality, management of severe illness and use of iCCM services must be improved. PMID:24799369

  12. Accounting for Chromatic Atmospheric Effects on Barycentric Corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.

    2017-03-01

    Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s⁻¹ can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380–680 nm) are required to account for this effect at the 10 cm s⁻¹ level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
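The channel-plus-polynomial scheme described above can be sketched as follows: barycentric corrections are computed in a few wavelength channels and extended to all wavelengths with a polynomial fit. The synthetic chromatic dependence and the cubic degree (exact through four channels) are assumptions for illustration, not the paper's model.

```python
# Sketch: fit a polynomial to per-channel barycentric corrections (BC) so the
# correction can be evaluated at every wavelength. The wavelength dependence
# used to generate the channel values is invented for illustration.
import numpy as np

def fit_chromatic_bc(channel_wl, channel_bc, degree=3):
    """Polynomial model of barycentric correction vs wavelength (nm -> m/s)."""
    return np.polynomial.Polynomial.fit(channel_wl, channel_bc, degree)

channels = np.array([400.0, 480.0, 570.0, 660.0])        # four optical channels
bc = 15000.0 + 1e-4 * (channels - 380.0) ** 1.5          # synthetic chromatic BC (m/s)

model = fit_chromatic_bc(channels, bc)
wl = np.linspace(380.0, 680.0, 301)
bc_all = model(wl)                                       # BC at all wavelengths
```

With four channels a cubic passes exactly through the channel values; more channels (as the abstract suggests for poor conditions or telluric regions) would turn this into a genuine least-squares smoothing fit.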

  13. A color-corrected strategy for information multiplexed Fourier ptychographic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Mingqun; Zhang, Yuzhen; Chen, Qian; Sun, Jiasong; Fan, Yao; Zuo, Chao

    2017-12-01

    Fourier ptychography (FP) is a novel computational imaging technique that provides both wide field of view (FoV) and high-resolution (HR) imaging capacity for biomedical imaging. Combined with information multiplexing technology, wavelength multiplexed (or color multiplexed) FP imaging can be implemented by lighting up R/G/B LED units simultaneously. Furthermore, a HR image can be recovered at each wavelength from the multiplexed dataset. This enhances the efficiency of data acquisition. However, since the same dataset of intensity measurement is used to recover the HR image at each wavelength, the mean value in each channel would converge to the same value. In this paper, a color correction strategy embedded in the multiplexing FP scheme is demonstrated, which is termed as color corrected wavelength multiplexed Fourier ptychography (CWMFP). Three images captured by turning on a LED array in R/G/B are required as priori knowledge to improve the accuracy of reconstruction in the recovery process. Using the reported technique, the redundancy requirement of information multiplexed FP is reduced. Moreover, the accuracy of reconstruction at each channel is improved with correct color reproduction of the specimen.

  14. Blind retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, Gradient-based motion correction, which significantly reduces ghosting and blurring artifacts due to subject motion was proposed. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are in the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
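The sharpness metric at the heart of the search, the entropy of spatial gradients, can be sketched on its own; the full optimization over motion trajectories is not reproduced, and the test images below are synthetic.

```python
# Minimal sketch of the image-sharpness metric used for autofocusing-style
# motion correction: the entropy of normalised spatial-gradient magnitudes.
# Sharper images concentrate gradient energy at edges and score lower.
import numpy as np

def gradient_entropy(image):
    """Entropy of normalised spatial-gradient magnitudes; lower = sharper."""
    gx, gy = np.gradient(image.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2).ravel()
    p = mag / (mag.sum() + 1e-12)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

sharp = np.zeros((64, 64))
sharp[24:40, 24:40] = 1.0                  # crisp square
blurred = sharp.copy()
for _ in range(10):                        # crude cross-shaped smoothing
    blurred = (blurred + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
               + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5.0

e_sharp, e_blur = gradient_entropy(sharp), gradient_entropy(blurred)
```

In the motion-correction setting this metric is evaluated after re-gridding the raw data under candidate motion trajectories, and gradient-based optimization searches for the trajectory minimizing it.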

  15. GIFTS SM EDU Level 1B Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for the detector nonlinearity distortion, followed by the complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can then estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. The correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we develop an efficient method of generating pixel performance assessments. 
In addition, a random pixel selection scheme is designed based on the pixel performance evaluation. Finally, in the fourth block, the single pixel algorithms are applied to the entire FPA.

  16. The network impact of hijacking a quantum repeater

    NASA Astrophysics Data System (ADS)

    Satoh, Takahiko; Nagayama, Shota; Oka, Takafumi; Van Meter, Rodney

    2018-07-01

    In quantum networking, repeater hijacking threatens the security and utility of quantum applications. To deal with this problem, it is important to quantify the impact of quantum repeater hijacking. First, we quantify the work done by each quantum repeater for each quantum communication. Based on this, we show the costs of repeater hijacking detection using distributed quantum state tomography, and the amount of work loss and rerouting penalties caused by hijacking. This quantitative evaluation covers both purification-entanglement-swapping and quantum-error-correction repeater networks. A naive implementation of the checks necessary for correct network operation can be subverted by a single hijacker to bring down an entire network. Fortunately, the simple fix of randomly assigned testing can prevent such an attack.

  17. Lidar investigations of ozone in the upper troposphere - lower stratosphere: technique and results of measurements

    NASA Astrophysics Data System (ADS)

    Romanovskii, O. A.; Burlakov, V. D.; Dolgii, S. I.; Nevzorov, A. A.; Nevzorov, A. V.; Kharchenko, O. V.

    2016-12-01

    Prediction of the atmospheric ozone layer, a valuable and irreplaceable geophysical asset, is currently an important scientific and engineering problem. The relevance of the research stems from the need to develop laser remote sensing methods for ozone to solve problems of environmental monitoring and climatology. The main aim of the research is to develop a technique for laser remote sensing of ozone in the upper troposphere - lower stratosphere by the differential absorption method, with temperature and aerosol correction, and to analyse the measurement results. The report introduces the technique for retrieving profiles of the vertical ozone distribution, with temperature and aerosol correction, in lidar sounding of the atmosphere by the differential absorption method. Temperature correction of the ozone absorption coefficients is implemented in the software to reduce retrieval errors. The authors have determined wavelengths promising for measuring ozone profiles in the upper troposphere - lower stratosphere. We present the results of DIAL measurements of the vertical ozone distribution at the Siberian lidar station in Tomsk. Sensing is performed by the differential absorption method at the wavelength pair 299/341 nm, which are, respectively, the first and second Stokes components of SRS conversion of the 4th harmonic of an Nd:YAG laser (266 nm) in hydrogen. A lidar with a receiving mirror 0.5 m in diameter is used to sense the vertical ozone distribution in the altitude range of 6-18 km. The retrieved ozone profiles were compared with IASI satellite data and the Kruger model. The results of applying the developed technique in the 6-18 km altitude range confirm the prospects of using the selected sensing wavelengths of 341 and 299 nm in the ozone lidar.
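The differential absorption (DIAL) retrieval underlying these measurements can be sketched with the standard two-wavelength formula applied to synthetic on/off return profiles. The temperature and aerosol corrections that the abstract emphasizes are omitted here, and the cross-sections and profiles are invented for illustration.

```python
# DIAL retrieval sketch: ozone number density from the range derivative of the
# log-ratio of "on" (strongly absorbed) and "off" (weakly absorbed) returns.
# Geometry (1/z^2) cancels in the ratio; aerosol/temperature terms are omitted.
import numpy as np

def dial_number_density(p_on, p_off, dz_cm, dsigma_cm2):
    """Ozone number density (cm^-3) on the mid-points of a uniform range grid."""
    ratio = (p_on[:-1] * p_off[1:]) / (p_on[1:] * p_off[:-1])
    return np.log(ratio) / (2.0 * dsigma_cm2 * dz_cm)

# Synthetic uniform ozone layer and idealised lidar returns, 6-18 km in cm.
z = np.arange(6e5, 1.9e6, 1e5)
sigma_on, sigma_off = 3e-19, 1e-19        # absorption cross-sections (cm^2)
n0 = 1e12                                 # true ozone density (cm^-3)
returns = lambda sigma: np.exp(-2.0 * sigma * n0 * z) / z ** 2

n_ret = dial_number_density(returns(sigma_on), returns(sigma_off),
                            dz_cm=1e5, dsigma_cm2=sigma_on - sigma_off)
```

With noiseless, purely absorbing profiles the formula recovers the input density exactly; in practice the aerosol backscatter and temperature-dependent cross-section corrections described in the abstract are what make real retrievals accurate.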

  18. Clinical implementation of MOSFET detectors for dosimetry in electron beams.

    PubMed

    Bloemen-van Gurp, Esther J; Minken, Andre W H; Mijnheer, Ben J; Dehing-Oberye, Cary J G; Lambin, Philippe

    2006-09-01

    To determine the factors converting the reading of a MOSFET detector placed on the patient's skin without additional build-up to the dose at the depth of dose maximum (D(max)), and to investigate their feasibility for in vivo dose measurements in electron beams. Factors were determined to relate the reading of a MOSFET detector to D(max) for 4-15 MeV electron beams in reference conditions. The influence of variations in field size, SSD, angle and field shape on the MOSFET reading, obtained without additional build-up, was evaluated using 4, 8 and 15 MeV beams and compared to ionisation chamber data at the depth of dose maximum (z(max)). Patient entrance in vivo measurements included 40 patients, mostly treated for breast tumours. The MOSFET reading, converted to D(max), was compared to the dose prescribed at this depth. The factors to convert the MOSFET reading to D(max) vary between 1.33 and 1.20 for the 4 and 15 MeV beams, respectively. The SSD correction factor is approximately 8% for a change in SSD from 95 to 100 cm, and 2% for each 5-cm increment above 100 cm SSD. A correction for fields with sides smaller than 6 cm and for irregular field shapes is also recommended. For fields up to 20 x 20 cm(2) and for oblique incidence up to 45 degrees, a correction is not necessary. Patient measurements demonstrated deviations from the prescribed dose with a mean difference of -0.7% and a standard deviation of 2.9%. Performing dose measurements with MOSFET detectors placed on the patient's skin without additional build-up is a well-suited technique for routine dose verification in electron beams, when applying the appropriate conversion and correction factors.
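Applying the conversion and correction factors reported above amounts to a product of factors. The sketch below mirrors that structure, but the calibration value and the particular combination shown are hypothetical illustrations, not the published clinical protocol.

```python
# Illustrative sketch of combining a MOSFET surface reading with conversion and
# correction factors to obtain dose at depth of dose maximum, D(max). The
# calibration coefficient and the example numbers are hypothetical.
def mosfet_dose_at_dmax(reading_mV, calib_cGy_per_mV, cf_energy,
                        cf_ssd=1.0, cf_field=1.0):
    """Dose at D(max) = reading x calibration x energy/SSD/field-shape factors."""
    return reading_mV * calib_cGy_per_mV * cf_energy * cf_ssd * cf_field

# 150 mV reading, hypothetical 1.0 cGy/mV calibration, a 1.33 energy conversion
# factor (the abstract's 4 MeV value) and an 8% SSD correction (95 -> 100 cm).
dose = mosfet_dose_at_dmax(150.0, 1.0, cf_energy=1.33, cf_ssd=1.08)
```

Per the abstract, field-size and field-shape factors (cf_field) only deviate from 1.0 for sides under 6 cm or irregular shapes, and no correction is needed for oblique incidence up to 45 degrees.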

  19. Identification and Correction of Additive and Multiplicative Spatial Biases in Experimental High-Throughput Screening.

    PubMed

    Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir

    2018-06-01

    Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
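A classical baseline for the simple additive model mentioned above is Tukey's median polish, sketched below on a synthetic plate; the paper's interaction-aware additive and multiplicative models and its statistical detection procedure are not reproduced.

```python
# Sketch of the simple ADDITIVE spatial-bias model via Tukey median polish:
# alternately remove row and column medians from a multiwell-plate matrix so
# additive row/column effects are stripped from the measurements.
import numpy as np

def median_polish(plate, n_iter=10):
    """Remove additive row and column effects; returns the residual matrix."""
    residual = plate.astype(float).copy()
    for _ in range(n_iter):
        residual -= np.median(residual, axis=1, keepdims=True)  # row effects
        residual -= np.median(residual, axis=0, keepdims=True)  # column effects
    return residual

rng = np.random.default_rng(4)
true_signal = rng.normal(100, 1, (8, 12))          # 96-well plate measurements
row_bias = np.arange(8)[:, None] * 2.0             # additive row-wise bias
biased = true_signal + row_bias
corrected = median_polish(biased)
```

Note that median polish centres the residuals around zero, so the plate-wide mean is removed along with the bias; bias-correction software typically adds the grand effect back before downstream analysis.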

  20. Intensity-Value Corrections for Integrating Sphere Measurements of Solid Samples Measured Behind Glass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Timothy J.; Bernacki, Bruce E.; Redding, Rebecca L.

    2014-11-01

    Accurate and calibrated directional-hemispherical reflectance spectra of solids are important for both in situ and remote sensing. Many solids are in the form of powders or granules, and to measure their diffuse reflectance spectra in the laboratory it is often necessary to place the samples behind a transparent medium such as glass for the ultraviolet (UV), visible, or near-infrared spectral regions. Using both experimental methods and a simple optical model, we demonstrate that glass (fused quartz in our case) leads to artifacts in the reflectance values. We report our observations that the measured reflectance values, for both hemispherical and diffuse reflectance, are distorted by the additional reflections arising at the air–quartz and sample–quartz interfaces. The values are dependent on the sample reflectance and are offset in intensity in the hemispherical case, leading to measured values up to ~6% too high for a 2% reflectance surface, ~3.8% too high for 10% reflecting surfaces, approximately correct for 40–60% diffuse-reflecting surfaces, and ~1.5% too low for 99% reflecting Spectralon® surfaces. For the case of diffuse-only reflectance, the measured values are uniformly too low due to the polished glass, with differences of nearly 6% for a 99% reflecting matte surface. The deviations arise from the added reflections from the quartz surfaces, as verified by both theory and experiment, and depend on sphere design. Finally, empirical correction factors were implemented into post-processing software to redress the artifact for hemispherical and diffuse reflectance data across the 300–2300 nm range.
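An empirical correction of the kind described can be sketched as a piecewise-linear curve built from the offsets quoted in the abstract for hemispherical measurements behind quartz. The real post-processing software would use a full, sphere-specific calibration; the tabulated points below are taken only from the abstract's illustrative examples.

```python
# Sketch: empirical correction of hemispherical reflectance measured behind
# quartz. The (true, offset) pairs come from the abstract's quoted examples;
# intermediate values are handled by piecewise linear interpolation.
import numpy as np

# (true reflectance %, measured-minus-true offset %) from the abstract
ref_true = np.array([2.0, 10.0, 40.0, 60.0, 99.0])
offset   = np.array([6.0,  3.8,  0.0,  0.0, -1.5])
ref_measured = ref_true + offset           # what the sphere actually reports

def correct_hemispherical(measured_pct):
    """Subtract the interpolated quartz-induced offset from a measured value."""
    return measured_pct - np.interp(measured_pct, ref_measured, offset)

corrected = correct_hemispherical(ref_measured)
```

At the tabulated points the correction recovers the true reflectance exactly; between them the linear interpolation is an assumption, since the abstract notes the deviations also depend on sphere design.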

  1. The Organization of Correctional Education Services

    ERIC Educational Resources Information Center

    Gehring, Thom

    2007-01-01

    There have been five major types of correctional education organizations over the centuries: Sabbath school, traditional or decentralized, bureau, correctional school district (CSD), and integral education. The middle three are modern organizational patterns that can be implemented throughout a system: Decentralized, bureau, and CSD. The…

  2. Implementation of the pyramid wavefront sensor as a direct phase detector for large amplitude aberrations

    NASA Astrophysics Data System (ADS)

    Kupke, Renate; Gavel, Don; Johnson, Jess; Reinig, Marc

    2008-07-01

    We investigate the non-modulating pyramid wave-front sensor's (P-WFS) implementation in the context of Lick Observatory's Villages visible light AO system on the Nickel 1-meter telescope. A complete adaptive optics correction, using a non-modulated P-WFS in slope-sensing mode as a bootstrap to a regime in which the P-WFS can act as a direct phase sensor, is explored. An iterative approach to reconstructing the wave-front phase, given the pyramid wave-front sensor's nonlinear signal, is developed. Using Monte Carlo simulations, the iterative reconstruction method's photon noise propagation behavior is compared to both the pyramid sensor used in slope-sensing mode and the traditional Shack-Hartmann sensor's theoretical performance limits. We determine that bootstrapping using the P-WFS as a slope sensor does not offer enough correction to bring the phase residuals into a regime in which the iterative algorithm can provide much improvement in phase measurement. It is found that both the iterative phase reconstructor and the slope reconstruction methods offer an advantage in noise propagation over Shack-Hartmann sensors.

  3. Hospital protocols for targeted glycemic control: Development, implementation, and models for cost justification.

    PubMed

    Magee, Michelle F

    2007-05-15

    Evolving elements of best practices for providing targeted glycemic control in the hospital setting, clinical performance measurement, basal-bolus plus correction-dose insulin regimens, components of standardized subcutaneous (s.c.) insulin order sets, and strategies for implementation and cost justification of glycemic control initiatives are discussed. Best practices for targeted glycemic control should address accurate documentation of hyperglycemia, initial patient assessment, management plan, target blood glucose range, blood glucose monitoring frequency, maintenance of glycemic control, criteria for glucose management consultations, and standardized insulin order sets and protocols. Establishing clinical performance measures, including desirable processes and outcomes, can help ensure the success of targeted hospital glycemic control initiatives. The basal-bolus plus correction-dose regimen for insulin administration will be used to mimic the normal physiologic pattern of endogenous insulin secretion. Standardized insulin order sets and protocols are being used to minimize the risk of error in insulin therapy. Components of standardized s.c. insulin order sets include specification of the hyperglycemia diagnosis, finger stick blood glucose monitoring frequency and timing, target blood glucose concentration range, cutoff values for excessively high or low blood glucose concentrations that warrant alerting the physician, basal and prandial or nutritional (i.e., bolus) insulin, correction doses, hypoglycemia treatment, and perioperative or procedural dosage adjustments. The endorsement of hospital administrators and key physician and nursing leaders is needed for glycemic control initiatives. 
    Initiatives may be cost justified on the basis of the billings for clinical diabetes management services and/or the return on investment accrued from reductions in hospital length of stay and readmissions, and from accurate documentation and coding of unrecognized or uncontrolled diabetes and diabetes complications. Standardized insulin order sets and protocols may minimize the risk of insulin errors. The endorsement of these protocols by administrators, physicians, nurses, and pharmacists is also needed for success.

  4. Homogenized total ozone data records from the European sensors GOME/ERS-2, SCIAMACHY/Envisat, and GOME-2/MetOp-A

    NASA Astrophysics Data System (ADS)

    Lerot, C.; Van Roozendael, M.; Spurr, R.; Loyola, D.; Coldewey-Egbers, M.; Kochenova, S.; van Gent, J.; Koukouli, M.; Balis, D.; Lambert, J.-C.; Granville, J.; Zehner, C.

    2014-02-01

    Within the European Space Agency's Climate Change Initiative, total ozone column records from GOME (Global Ozone Monitoring Experiment), SCIAMACHY (SCanning Imaging Absorption SpectroMeter for Atmospheric CartograpHY), and GOME-2 have been reprocessed with GODFIT version 3 (GOME-type Direct FITting). This algorithm is based on the direct fitting of reflectances simulated in the Huggins bands to the observations. We report on new developments in the algorithm from the version implemented in the operational GOME Data Processor v5. The a priori ozone profile database TOMSv8 is now combined with a recently compiled OMI/MLS tropospheric ozone climatology to improve the representativeness of a priori information. The Ring procedure that corrects simulated radiances for the rotational Raman inelastic scattering signature has been improved using a revised semi-empirical expression. Correction factors are also applied to the simulated spectra to account for atmospheric polarization. In addition, the computational performance has been significantly enhanced through the implementation of new radiative transfer tools based on principal component analysis of the optical properties. Furthermore, a soft-calibration scheme for measured reflectances and based on selected Brewer measurements has been developed in order to reduce the impact of level-1 errors. This soft-calibration corrects not only for possible biases in backscattered reflectances, but also for artificial spectral features interfering with the ozone signature. Intersensor comparisons and ground-based validation indicate that these ozone data sets are of unprecedented quality, with stability better than 1% per decade, a precision of 1.7%, and systematic uncertainties less than 3.6% over a wide range of atmospheric states.

  5. Addressing excess risk of overdose among recently incarcerated people in the USA: harm reduction interventions in correctional settings.

    PubMed

    Brinkley-Rubinstein, Lauren; Cloud, David H; Davis, Chelsea; Zaller, Nickolas; Delany-Brumsey, Ayesha; Pope, Leah; Martino, Sarah; Bouvier, Benjamin; Rich, Josiah

    2017-03-13

    Purpose: The purpose of this paper is to discuss overdose among those with criminal justice experience and recommend harm reduction strategies to lessen overdose risk among this vulnerable population. Design/methodology/approach: Strategies are needed to reduce overdose deaths among those with recent incarceration. Jails and prisons are at the epicenter of the opioid epidemic but are a largely untapped setting for implementing overdose education, risk assessment, medication assisted treatment, and naloxone distribution programs. Federal, state, and local plans commonly lack corrections as an ingredient in combating overdose. Harm reduction strategies are vital for reducing the risk of overdose in the post-release community. Findings: Therefore, the authors recommend that the following be implemented in correctional settings: expansion of overdose education and naloxone programs; establishment of comprehensive medication assisted treatment programs as standard of care; development of corrections-specific overdose risk assessment tools; and increased collaboration between corrections entities and community-based organizations. Originality/value: In this policy brief the authors provide recommendations for implementing harm reduction approaches in criminal justice settings. Adoption of these strategies could reduce the number of overdoses among those with recent criminal justice involvement.

  6. Evaluating motion processing algorithms for use with functional near-infrared spectroscopy data from young children.

    PubMed

    Delgado Reyes, Lourdes M; Bohache, Kevin; Wijeakumar, Sobanawartiny; Spencer, John P

    2018-04-01

    Motion artifacts are often a significant component of the measured signal in functional near-infrared spectroscopy (fNIRS) experiments. A variety of methods have been proposed to address this issue, including principal components analysis (PCA), correlation-based signal improvement (CBSI), wavelet filtering, and spline interpolation. The efficacy of these techniques has been compared using simulated data; however, our understanding of how these techniques fare when dealing with task-based cognitive data is limited. Brigadoi et al. compared motion correction techniques in a sample of adult data measured during a simple cognitive task. Wavelet filtering showed the most promise as an optimal technique for motion correction. Given that fNIRS is often used with infants and young children, it is critical to evaluate the effectiveness of motion correction techniques directly with data from these age groups. This study addresses that problem by evaluating motion correction algorithms implemented in HomER2. The efficacy of each technique was compared quantitatively using objective metrics related to the physiological properties of the hemodynamic response. Results showed that targeted PCA (tPCA), spline, and CBSI retained a higher number of trials. These techniques also performed well in direct head-to-head comparisons with the other approaches using quantitative metrics. The CBSI method corrected many of the artifacts present in our data; however, this approach sometimes produced unstable HRFs. The targeted PCA and spline methods proved to be the most robust, performing well across all comparison metrics. When compared head to head, tPCA consistently outperformed spline. We conclude, therefore, that tPCA is an effective technique for correcting motion artifacts in fNIRS data from young children.
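    The CBSI approach named above is commonly described in the fNIRS literature (following Cui et al.) as assuming that true HbO and HbR are negatively correlated while motion artifacts affect both with the same sign; the correlated component is then removed. A minimal sketch under that assumption (not the HomER2 implementation):

```python
from statistics import stdev

def cbsi(hbo, hbr):
    """Correlation-based signal improvement: assumes true HbO and HbR
    are negatively correlated while motion artifacts hit both with the
    same sign, and removes the common-mode component."""
    alpha = stdev(hbo) / stdev(hbr)                     # amplitude ratio
    hbo_c = [(o - alpha * r) / 2.0 for o, r in zip(hbo, hbr)]
    hbr_c = [-o / alpha for o in hbo_c]                 # enforce anti-correlation
    return hbo_c, hbr_c

# Anti-correlated toy signals with a shared motion-like spike at index 2:
hbo = [1.0, 2.0, 9.0, 2.0, 1.0]
hbr = [-0.5, -1.0, 6.0, -1.0, -0.5]
hbo_c, hbr_c = cbsi(hbo, hbr)
```

    The hard-coded anti-correlation in the last step is also the source of the instability noted above: when the physiological assumption fails, the corrected HRFs can be distorted.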

  7. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
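    The abstract does not reproduce the paper's exact virtual-backpointer encoding; one illustrative stand-in, assuming an XOR-coded backpointer over node indices, shows how a purely local O(1) consistency check during a forward move can flag a corrupted pointer without storing an explicit back link:

```python
class VNode:
    """List node holding a forward pointer and a 'virtual backpointer'
    stored as prev XOR next (node-table indices), so the predecessor
    can be recovered locally instead of stored explicitly."""
    def __init__(self, nxt, prev):
        self.next = nxt
        self.vb = prev ^ nxt

def check_step(nodes, cur, prev):
    """O(1) check during a forward move: decode the stored predecessor
    and compare it with the node we actually came from."""
    n = nodes[cur]
    return (n.vb ^ n.next) == prev

# Circular list 0 -> 1 -> 2 -> 3 -> 0
N = 4
nodes = [VNode((i + 1) % N, (i - 1) % N) for i in range(N)]
ok = all(check_step(nodes, (i + 1) % N, i) for i in range(N))

nodes[2].next = 0                 # corrupt one forward pointer
bad = check_step(nodes, 2, 1)     # the local check now fails
```

    Because each check uses only the current node and the node just left, a concurrent auditor or the traversing process itself can run it inline, which is the property the paper exploits.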

  8. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  9. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much more consistent with human vision. The algorithm is modeled in VHDL and implemented on an FPGA. Experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity, and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption, validating the approach.
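    The CORDIC core that such an FPGA design builds on can be sketched as follows (a generic rotation-mode CORDIC, not the authors' VHDL implementation); the sine/cosine pair it produces drives the polar-to-rectangular mapping during unwrapping, with bilinear interpolation then sampling the annular source image:

```python
import math

def cordic_sincos(theta, n=24):
    """Rotation-mode CORDIC: computes (cos(theta), sin(theta)) using
    only shifts, adds, and a small arctangent table -- the reason the
    algorithm maps well onto FPGA hardware. Converges for
    |theta| <= sum(atan(2**-i)) ~ 1.74 rad; wider angles need a
    quadrant pre-rotation."""
    k = 1.0
    for i in range(n):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative CORDIC gain
    x, y, z = k, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0           # rotate toward z = 0
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return x, y

# The pair drives the unwrapping map: source pixel =
# center + r * (cos(theta), sin(theta)) for each rectangular (theta, r).
c, s = cordic_sincos(math.pi / 6)
```

    In hardware the per-iteration multiplications by 2**-i become barrel shifts and the arctangent values a small ROM, which is what keeps resource usage low.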

  10. A formalism for reference dosimetry in photon beams in the presence of a magnetic field

    NASA Astrophysics Data System (ADS)

    van Asselen, B.; Woodings, S. J.; Hackett, S. L.; van Soest, T. L.; Kok, J. G. M.; Raaymakers, B. W.; Wolthaus, J. W. H.

    2018-06-01

    A generic formalism is proposed for reference dosimetry in the presence of a magnetic field. Besides the regular correction factors from conventional reference dosimetry formalisms, two factors are used to take into account magnetic field effects: (1) a dose conversion factor to correct for the change in the local dose distribution and (2) a correction of the reading of the dosimeter used for the reference dosimetry measurements. The formalism was applied to the Elekta MRI-Linac, for which the 1.5 T magnetic field is orthogonal to the 7 MV photon beam. For this setup at reference conditions it was shown that the dose decreases with increasing magnetic field strength. The reduction in local dose for a 1.5 T transverse field, compared to no field, is 0.51%  ±  0.03% at the reference point of 10 cm depth. The effect of the magnetic field on the reading of the dosimeter was measured for two waterproof ionization chamber types (PTW 30013 and IBA FC65-G) before and after multiple ramp-ups and ramp-downs of the magnetic field. The chambers were aligned perpendicular and parallel to the magnetic field. The corrections of the readings of the perpendicularly aligned chambers were 0.967  ±  0.002 and 0.957  ±  0.002 for the PTW and IBA ionization chambers, respectively. In the parallel alignment the corrections were small: 0.997  ±  0.001 and 1.002  ±  0.003 for the PTW and IBA chambers, respectively. The change in reading due to the magnetic field can be measured by individual departments. The proposed formalism can be used to determine the correction factors needed to establish the absorbed dose in a magnetic field. It requires Monte Carlo simulations of the local dose and measurements of the response of the dosimeter. The formalism was successfully implemented for the MRI-Linac and is applicable to other field strengths and geometries.
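    The two-factor idea can be illustrated with a TRS-398-style multiplicative dose chain extended by a magnetic-field reading correction. Here k_b = 0.967 is the perpendicular PTW 30013 value quoted above; the chamber reading, calibration coefficient, and beam-quality factor are made-up placeholders, not values from the paper:

```python
def absorbed_dose(m_reading, n_dw, k_q, k_b):
    """Reference dose as a chain of multiplicative corrections, with
    k_b the extra magnetic-field correction of the chamber reading
    (a sketch of the two-factor formalism, TRS-398-style symbols)."""
    return m_reading * n_dw * k_q * k_b

# m_reading = 20 nC, n_dw = 5.4e7 Gy/C, k_q = 0.990 are placeholders;
# k_b = 0.967 is the perpendicular PTW 30013 correction quoted above.
d_w = absorbed_dose(20.0e-9, 5.4e7, 0.990, 0.967)   # dose in Gy
```

    The separate dose conversion factor for the changed local dose distribution would multiply this chain as well, and is obtained from Monte Carlo simulation rather than measurement.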

  11. A formalism for reference dosimetry in photon beams in the presence of a magnetic field.

    PubMed

    van Asselen, B; Woodings, S J; Hackett, S L; van Soest, T L; Kok, J G M; Raaymakers, B W; Wolthaus, J W H

    2018-06-11

    A generic formalism is proposed for reference dosimetry in the presence of a magnetic field. Besides the regular correction factors from conventional reference dosimetry formalisms, two factors are used to take into account magnetic field effects: (1) a dose conversion factor to correct for the change in the local dose distribution and (2) a correction of the reading of the dosimeter used for the reference dosimetry measurements. The formalism was applied to the Elekta MRI-Linac, for which the 1.5 T magnetic field is orthogonal to the 7 MV photon beam. For this setup at reference conditions it was shown that the dose decreases with increasing magnetic field strength. The reduction in local dose for a 1.5 T transverse field, compared to no field, is 0.51%  ±  0.03% at the reference point of 10 cm depth. The effect of the magnetic field on the reading of the dosimeter was measured for two waterproof ionization chamber types (PTW 30013 and IBA FC65-G) before and after multiple ramp-ups and ramp-downs of the magnetic field. The chambers were aligned perpendicular and parallel to the magnetic field. The corrections of the readings of the perpendicularly aligned chambers were 0.967  ±  0.002 and 0.957  ±  0.002 for the PTW and IBA ionization chambers, respectively. In the parallel alignment the corrections were small: 0.997  ±  0.001 and 1.002  ±  0.003 for the PTW and IBA chambers, respectively. The change in reading due to the magnetic field can be measured by individual departments. The proposed formalism can be used to determine the correction factors needed to establish the absorbed dose in a magnetic field. It requires Monte Carlo simulations of the local dose and measurements of the response of the dosimeter. The formalism was successfully implemented for the MRI-Linac and is applicable to other field strengths and geometries.

  12. Implementation of an approximate self-energy correction scheme in the orthogonalized linear combination of atomic orbitals method of band-structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Z.; Ching, W.Y.

    Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy-dependent and k-dependent GW correction scheme to the orthogonalized linear-combination-of-atomic-orbitals-based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g…

  13. A Study of the Design and Implementation of the ASR-Based iCASL System with Corrective Feedback to Facilitate English Learning

    ERIC Educational Resources Information Center

    Wang, Yi-Hsuan; Young, Shelley Shwu-Ching

    2014-01-01

    The purpose of the study is to explore and describe how to implement a pedagogical ASR-based intelligent computer-assisted speaking learning (iCASL) system to support adult learners with a private, flexible and individual learning environment to practice English pronunciation. The iCASL system integrates multiple levels of corrective feedback and…

  14. Metrology for decommissioning nuclear facilities: Partial outcomes of joint research project within the European Metrology Research Program.

    PubMed

    Suran, Jiri; Kovar, Petr; Smoldasova, Jana; Solc, Jaroslav; Van Ammel, Raf; Garcia Miranda, Maria; Russell, Ben; Arnold, Dirk; Zapata-García, Daniel; Boden, Sven; Rogiers, Bart; Sand, Johan; Peräjärvi, Kari; Holm, Philip; Hay, Bruno; Failleau, Guillaume; Plumeri, Stephane; Laurent Beck, Yves; Grisa, Tomas

    2018-04-01

    Decommissioning of nuclear facilities incurs high costs regarding the accurate characterisation and correct disposal of the decommissioned materials. Therefore, there is a need for the implementation of new and traceable measurement technologies to select the appropriate release or disposal route of radioactive wastes. This paper addresses some of the innovative outcomes of the project "Metrology for Decommissioning Nuclear Facilities" related to mapping of contamination inside nuclear facilities, waste clearance measurement, Raman distributed temperature sensing for long term repository integrity monitoring and validation of radiochemical procedures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Digital Data Acquisition System for experiments with segmented detectors at National Superconducting Cyclotron Laboratory

    NASA Astrophysics Data System (ADS)

    Starosta, K.; Vaman, C.; Miller, D.; Voss, P.; Bazin, D.; Glasmacher, T.; Crawford, H.; Mantica, P.; Tan, H.; Hennig, W.; Walby, M.; Fallu-Labruyere, A.; Harris, J.; Breus, D.; Grudberg, P.; Warburton, W. K.

    2009-11-01

    A 624-channel Digital Data Acquisition System capable of instrumenting the Segmented Germanium Array at the National Superconducting Cyclotron Laboratory has been implemented using Pixie-16 Digital Gamma Finder modules by XIA LLC. The system opens an opportunity to determine the first interaction position of a γ ray in a SeGA detector through the implementation of γ-ray tracking. This will translate into a significantly improved determination of the angle of emission and, in consequence, much better Doppler corrections for experiments with fast beams. For stopped-beam experiments the system provides means for zero-dead-time measurements of rare decays, which occur on time scales of microseconds.
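    The Doppler correction that benefits from better interaction-position (hence angle) information is the standard relativistic formula for a photon emitted in flight (a generic textbook relation, not specific to this system):

```python
import math

def doppler_correct(e_lab, beta, theta_lab):
    """Rest-frame gamma-ray energy from the lab-frame energy, the
    emitter speed beta = v/c, and the lab-frame emission angle
    (radians) relative to the beam axis."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return e_lab * gamma * (1.0 - beta * math.cos(theta_lab))

# At 90 degrees the correction reduces to the factor gamma; the
# derivative with respect to theta shows how angle errors feed
# directly into the corrected energy, which is why knowing the first
# interaction position inside the segmented detector matters.
e0 = doppler_correct(1000.0, 0.4, math.pi / 2.0)
```

    For typical fast-beam velocities (beta around 0.3 to 0.4), reducing the angular uncertainty from a whole crystal to a single segment or interaction point sharpens the corrected line width considerably.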

  16. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
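    The phase-correction property noted above can be illustrated generically: rotating each complex spectral sample by its estimated phase error places the signal in the real part, leaving only noise in the imaginary part. This is a toy sketch of that principle, not the GIFTS algorithm (which estimates the phase from a smoothed spectrum):

```python
import cmath

def phase_correct(spectrum, phase):
    """Rotate each complex spectral sample by its estimated phase
    error, putting the signal in the real part; the imaginary part
    then carries only the noise component."""
    return [s * cmath.exp(-1j * p) for s, p in zip(spectrum, phase)]

# Toy spectrum: known real magnitudes with a slowly varying phase error
# (as would arise from sampling offsets in the interferogram).
mags = [1.0, 2.5, 0.8, 1.7]
errs = [0.30, 0.25, 0.20, 0.15]
uncorrected = [m * cmath.exp(1j * p) for m, p in zip(mags, errs)]
corrected = phase_correct(uncorrected, errs)
```

    In practice the phase estimate is imperfect, so the residual imaginary part provides a convenient noise diagnostic, which is exactly how the abstract uses it.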

  17. [Effectiveness of intermittent pneumatic compression (IPC) on thrombosis prophylaxis: a systematic literature review].

    PubMed

    Rohrer, Ottilia; Eicher, Manuela

    2006-06-01

    Despite changes in patient demographics and shortened length of hospital stay, deep vein thrombosis (DVT) remains a major health care problem which may lead to a variety of other high-risk complications. Current treatment guidelines focus on preventive measures. Besides drug therapy, physical measures executed by nursing professionals exist, the outcomes of which are discussed controversially. Based on 25 studies that were found in MEDLINE and the Cochrane Library, this systematic literature review identifies the effectiveness of intermittent pneumatic compression (IPC) for thrombosis prophylaxis. In almost all medical settings IPC contributes to a significant reduction in the incidence of DVT. At the same time, IPC has minimal negative side effects and is also cost-effective. Correct application of IPC and patient compliance are essential to achieve its effectiveness. An increased awareness within the healthcare team in identifying the risk for DVT and implementing measures against it is needed. Guidelines need to be developed in order to improve the effectiveness of thrombosis prophylaxis with the implementation of IPC.

  18. Correction factors for the ISO rod phantom, a cylinder phantom, and the ICRU sphere for reference beta radiation fields of the BSS 2

    NASA Astrophysics Data System (ADS)

    Behrens, R.

    2015-03-01

    The International Organization for Standardization (ISO) requires in its standard ISO 6980 that beta reference radiation fields for radiation protection be calibrated in terms of absorbed dose to tissue at a depth of 0.07 mm in a slab phantom (30 cm x 30 cm x 15 cm). However, many beta dosemeters are ring dosemeters and are, therefore, irradiated on a rod phantom (1.9 cm in diameter and 30 cm long), or they are eye dosemeters possibly irradiated on a cylinder phantom (20 cm in diameter and 20 cm high), or area dosemeters irradiated free in air with the conventional quantity value (true value) being defined in a sphere (30 cm in diameter, made of ICRU tissue (International Commission on Radiation Units and Measurements)). Therefore, the correction factors for the conventional quantity value in the rod, the cylinder, and the sphere instead of the slab (all made of ICRU tissue) were calculated for the radiation fields of 147Pm, 85Kr, 90Sr/90Y, and 106Ru/106Rh sources of the beta secondary standard BSS 2 developed at PTB. All correction factors were calculated for 0° up to 75° (in steps of 15°) radiation incidence. The results are ready for implementation in ISO 6980-3 and have recently been (partly) implemented in the software of the BSS 2.

  19. Atmospheric extinction in solar tower plants: the Absorption and Broadband Correction for MOR measurements

    NASA Astrophysics Data System (ADS)

    Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.

    2015-05-01

    Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrating solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by aerosol and water vapor concentration in the atmospheric boundary layer. Due to a high aerosol particle number, radiation losses can be significantly larger in desert environments compared to the standard atmospheric conditions which are usually considered in raytracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested and more than 19 months of measurements were collected at the Plataforma Solar de Almería and compared. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source emission and measures the strength of scattering processes in a small air volume mainly caused by aerosol particles. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for Concentrating Solar Power (CSP), a correction procedure for these two instruments is developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation. That means the attenuation is corrected for the actual, time-dependent by the collector reflected solar spectrum. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented. 
    To optimize the Absorption and Broadband Correction (ABC) procedure, additional measurement input of a nearby sun photometer is used to enhance on-site atmospheric assumptions for the description of the atmosphere in the algorithm. Comparing both uncorrected and spectral- and absorption-corrected extinction data from one year of measurements at the Plataforma Solar de Almería, the mean difference between the scatterometer and the transmissometer is reduced from 4.4 to 0.6%. Applying the ABC procedure without the usage of additional input data from a sun photometer still reduces the difference between both sensors to about 0.8%. Applying an expert guess assuming a standard aerosol profile for continental regions instead of additional sun photometer input results in a mean difference of 0.81%. Therefore, applying this new correction method, both instruments can now be utilized to determine the solar broadband extinction in tower plants with sufficient accuracy.
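    The monochromatic-to-broadband step of such a spectral correction can be sketched as weighting a modeled spectral extinction shape by the relative solar spectrum, scaled so it matches the instrument's single-wavelength measurement. The shape and weighting values below are toy assumptions for illustration, not the paper's data or its ABC algorithm:

```python
def spectral_to_broadband(ext_mono, lam_mono, ext_shape, spectrum):
    """Scale a single-wavelength extinction coefficient to a broadband
    value: ext_shape is a modeled relative extinction per wavelength,
    spectrum the relative (collector-reflected) solar weighting."""
    scale = ext_mono / ext_shape[lam_mono]    # anchor shape to measurement
    weighted = sum(scale * ext_shape[lam] * w for lam, w in spectrum.items())
    return weighted / sum(spectrum.values())

# Toy numbers: aerosol extinction falling with wavelength, and three
# representative solar bands; the transmissometer measures at 532 nm.
shape = {450: 1.3, 532: 1.0, 800: 0.7}
solar = {450: 0.2, 532: 0.4, 800: 0.4}
bb = spectral_to_broadband(0.10, 532, shape, solar)
```

    Because the solar weighting changes with air mass and the shape with aerosol type, the real procedure draws both from atmospheric modeling, optionally constrained by sun photometer data as described above.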

  20. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are (1) detector fabrication inaccuracies, (2) non-linearity and variations in the read-out electronics, and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based non-uniformity correction (CBNUC) algorithms and scene-based non-uniformity correction (SBNUC) algorithms. Because non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require, so SBNUC algorithms are an essential part of an infrared imaging system. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Moreover, their complicated calculation processes and large storage consumption make hardware implementation difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing such defects. Its hardware implementation, based solely on an FPGA, has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module and reduces both stripe non-uniformity and ripple non-uniformity.
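    The temporal high-pass principle underlying THP-style NUC can be sketched with the classic recursive per-pixel filter (a generic textbook form, not the THP and GM algorithm itself): each pixel's slowly varying component, dominated by the fixed-pattern offset, is estimated with a low-pass of time constant m frames and subtracted:

```python
def thpf_nuc(frames, m=32):
    """Temporal high-pass non-uniformity correction sketch: estimate
    each pixel's slowly varying (fixed-pattern) component with a
    recursive low-pass over m frames and subtract it, so only temporal
    changes pass through to the output."""
    f = [row[:] for row in frames[0]]          # per-pixel low-pass state
    out = []
    for frame in frames:
        corr = []
        for i, row in enumerate(frame):
            corr_row = []
            for j, x in enumerate(row):
                f[i][j] = (x + (m - 1) * f[i][j]) / m   # recursive low-pass
                corr_row.append(x - f[i][j])            # high-pass residue
            corr.append(corr_row)
        out.append(corr)
    return out

# A static 2x2 scene carrying a fixed-pattern offset is driven to zero:
frames = [[[10.0, 13.0], [9.0, 11.0]]] * 100
corrected = thpf_nuc(frames)
```

    The sketch also exhibits the defect the abstract warns about: static scene content is suppressed along with the fixed pattern (ghosting/over-correction), which is the kind of failure the grayscale-mapping refinement of THP and GM is designed to avoid.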

  1. Resource Conservation and Recovery Act (RCRA) Corrective Action Training: Strategies for Meeting the 2020 Vision

    EPA Pesticide Factsheets

    RCRA Corrective Action training to develop and enhance the skills of qualified personnel who will implement corrective actions for their sites by the year 2020 that are protective of human health and the environment while encouraging revitalization.

  2. Process Evaluation of Two Participatory Approaches: Implementing Total Worker Health® Interventions in a Correctional Workforce

    PubMed Central

    Dugan, Alicia G.; Farr, Dana A.; Namazi, Sara; Henning, Robert A.; Wallace, Kelly N.; El Ghaziri, Mazen; Punnett, Laura; Dussetschleger, Jeffrey L.; Cherniack, Martin G.

    2018-01-01

    Background: Correctional Officers (COs) have among the highest injury rates and poorest health of all the public safety occupations. The HITEC-2 (Health Improvement Through Employee Control-2) study uses Participatory Action Research (PAR) to design and implement interventions to improve health and safety of COs. Method: HITEC-2 compared two different types of participatory program, a CO-only “Design Team” (DT) and “Kaizen Event Teams” (KET) of COs and supervisors, to determine differences in implementation process and outcomes. The Program Evaluation Rating Sheet (PERS) was developed to document and evaluate program implementation. Results: Both programs yielded successful and unsuccessful interventions, dependent upon team-, facility-, organizational-, state-, facilitator-, and intervention-level factors. Conclusions: PAR in corrections, and possibly other sectors, depends upon factors including participation, leadership, continuity and timing, resilience, and financial circumstances. The new PERS instrument may be useful in other sectors to assist in assessing intervention success. PMID:27378470

  3. Process evaluation of two participatory approaches: Implementing total worker health® interventions in a correctional workforce.

    PubMed

    Dugan, Alicia G; Farr, Dana A; Namazi, Sara; Henning, Robert A; Wallace, Kelly N; El Ghaziri, Mazen; Punnett, Laura; Dussetschleger, Jeffrey L; Cherniack, Martin G

    2016-10-01

    Correctional Officers (COs) have among the highest injury rates and poorest health of all the public safety occupations. The HITEC-2 (Health Improvement Through Employee Control-2) study uses Participatory Action Research (PAR) to design and implement interventions to improve health and safety of COs. HITEC-2 compared two different types of participatory program, a CO-only "Design Team" (DT) and "Kaizen Event Teams" (KET) of COs and supervisors, to determine differences in implementation process and outcomes. The Program Evaluation Rating Sheet (PERS) was developed to document and evaluate program implementation. Both programs yielded successful and unsuccessful interventions, dependent upon team-, facility-, organizational-, state-, facilitator-, and intervention-level factors. PAR in corrections, and possibly other sectors, depends upon factors including participation, leadership, continuity and timing, resilience, and financial circumstances. The new PERS instrument may be useful in other sectors to assist in assessing intervention success. Am. J. Ind. Med. 59:897-918, 2016. © 2016 Wiley Periodicals, Inc.

  4. Motion Artifact Reduction in Pediatric Diffusion Tensor Imaging Using Fast Prospective Correction

    PubMed Central

    Alhamud, A.; Taylor, Paul A.; Laughton, Barbara; van der Kouwe, André J.W.; Meintjes, Ernesta M.

    2014-01-01

    Purpose: To evaluate the patterns of head motion in scans of young children and to examine the influence of corrective techniques, both qualitatively and quantitatively. We investigate changes that both retrospective (with and without diffusion table reorientation) and prospective (implemented with a short navigator sequence) motion correction induce in the resulting diffusion tensor measures. Materials and Methods: Eighteen pediatric subjects (aged 5–6 years) were scanned using 1) a twice-refocused, 2D diffusion pulse sequence, 2) a prospectively motion-corrected, navigated diffusion sequence with reacquisition of a maximum of five corrupted diffusion volumes, and 3) a T1-weighted structural image. Mean fractional anisotropy (FA) values in white and gray matter regions, as well as tractography in the brainstem and projection fibers, were evaluated to assess differences arising from retrospective (via FLIRT in FSL) and prospective motion correction. In addition to human scans, a stationary phantom was also used for further evaluation. Results: In several white and gray matter regions retrospective correction led to significantly (P < 0.05) reduced FA means and altered distributions compared to the navigated sequence. Spurious tractographic changes in the retrospectively corrected data were also observed in subject data, as well as in phantom and simulated data. Conclusion: Due to the heterogeneity of brain structures and the comparatively low resolution (~2 mm) of diffusion data using 2D single-shot sequencing, retrospective motion correction is susceptible to distortion from partial voluming. These changes often negatively bias diffusion tensor imaging parameters. Prospective motion correction was shown to produce smaller changes. PMID:24935904
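The FA values compared in this record come from the eigenvalues of the fitted diffusion tensor. A minimal sketch of the standard FA formula (not the paper's processing pipeline, which also involves the navigated sequence and FLIRT registration):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor."""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean()                       # mean diffusivity
    num = np.sum((evals - md) ** 2)
    den = np.sum(evals ** 2)
    return np.sqrt(1.5 * num / den) if den > 0 else 0.0

# Isotropic diffusion gives FA = 0; strongly anisotropic diffusion
# (e.g. coherent white matter) gives FA approaching 1.
print(fractional_anisotropy([1.0, 1.0, 1.0]))   # 0.0
print(fractional_anisotropy([1.7, 0.2, 0.2]))
```

Partial-volume distortion of the kind described above biases exactly these eigenvalue-derived quantities, which is why the mean FA comparison is a sensitive test of the correction schemes.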

  5. Motion artifact reduction in pediatric diffusion tensor imaging using fast prospective correction.

    PubMed

    Alhamud, A; Taylor, Paul A; Laughton, Barbara; van der Kouwe, André J W; Meintjes, Ernesta M

    2015-05-01

    To evaluate the patterns of head motion in scans of young children and to examine the influence of corrective techniques, both qualitatively and quantitatively. We investigate changes that both retrospective (with and without diffusion table reorientation) and prospective (implemented with a short navigator sequence) motion correction induce in the resulting diffusion tensor measures. Eighteen pediatric subjects (aged 5-6 years) were scanned using 1) a twice-refocused, 2D diffusion pulse sequence, 2) a prospectively motion-corrected, navigated diffusion sequence with reacquisition of a maximum of five corrupted diffusion volumes, and 3) a T1-weighted structural image. Mean fractional anisotropy (FA) values in white and gray matter regions, as well as tractography in the brainstem and projection fibers, were evaluated to assess differences arising from retrospective (via FLIRT in FSL) and prospective motion correction. In addition to human scans, a stationary phantom was also used for further evaluation. In several white and gray matter regions retrospective correction led to significantly (P < 0.05) reduced FA means and altered distributions compared to the navigated sequence. Spurious tractographic changes in the retrospectively corrected data were also observed in subject data, as well as in phantom and simulated data. Due to the heterogeneity of brain structures and the comparatively low resolution (∼2 mm) of diffusion data using 2D single-shot sequencing, retrospective motion correction is susceptible to distortion from partial voluming. These changes often negatively bias diffusion tensor imaging parameters. Prospective motion correction was shown to produce smaller changes. © 2014 Wiley Periodicals, Inc.

  6. First Industrial Tests of a Matrix Monitor Correction for the Differential Die-away Technique of Historical Waste Drums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, Rodolphe; Passard, Christian; Perot, Bertrand

    2015-07-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at AREVA NC La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the next years, old hulls and nozzles mixed with Ion-Exchange Resins will be measured. The ion-exchange resins increase neutron moderation in the matrix, compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (LMN) of CEA Cadarache has studied a matrix effect correction method, based on a drum monitor, namely a 3He proportional counter located inside the measurement cavity. After feasibility studies performed with LMN's PROMETHEE 6 laboratory measurement cell and with MCNPX simulations, this paper presents first experimental tests performed on the industrial ACC (hulls and nozzles compaction facility) measurement system. A calculation vs. experiment benchmark has been carried out by performing dedicated calibration measurements with a representative drum and {sup 235}U samples. The comparison between calculation and experiment shows a satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)
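In outline, a monitor-based matrix correction of this kind maps the drum-monitor reading to a correction factor through a calibration curve, then rescales the assay calibration. The sketch below is purely illustrative: the curve values, the calibration coefficient, and the use of a doubles rate are invented placeholders, not the actual DDT calibration.

```python
import numpy as np

# Hypothetical calibration: monitor count rate -> matrix correction factor.
monitor_rates = np.array([100.0, 200.0, 400.0, 800.0])   # counts/s
corr_factors  = np.array([1.00, 1.15, 1.45, 2.10])       # invented values

def fissile_mass(doubles_rate, monitor_rate, cal_g_per_cps=0.01):
    """Interpolate the matrix correction from the drum-monitor reading,
    then convert the (corrected) measured rate to grams of fissile
    material via an invented calibration coefficient."""
    f = np.interp(monitor_rate, monitor_rates, corr_factors)
    return doubles_rate * f * cal_g_per_cps

print(round(fissile_mass(50.0, 300.0), 3))   # interpolated factor 1.30
```

The real method additionally validates the curve against MCNPX simulations and calibration drums with {sup 235}U samples, as described above.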

  7. Real-time intraoperative fluorescence imaging system using light-absorption correction.

    PubMed

    Themelis, George; Yoo, Jung Sun; Soh, Kwang-Sup; Schulz, Ralf; Ntziachristos, Vasilis

    2009-01-01

    We present a novel fluorescence imaging system developed for real-time interventional imaging applications. The system implements a correction scheme that improves the accuracy of epi-illumination fluorescence images for light intensity variation in tissues. The implementation is based on the use of three cameras operating in parallel, utilizing a common lens, which allows for the concurrent collection of color, fluorescence, and light attenuation images at the excitation wavelength from the same field of view. The correction is based on a ratio approach of fluorescence over light attenuation images. Color images and video are used for surgical guidance and for registration with the corrected fluorescence images. We showcase the performance metrics of this system on phantoms and animals, and discuss the advantages over conventional epi-illumination systems developed for real-time applications and the limits of validity of corrected epi-illumination fluorescence imaging.
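The ratio approach described above can be sketched in a few lines: divide the raw fluorescence image by the co-registered attenuation image at the excitation wavelength, pixel by pixel. This is a minimal sketch, not the authors' calibrated implementation (the epsilon guard is an assumption added here):

```python
import numpy as np

def correct_fluorescence(fluo, excitation, eps=1e-6):
    """Ratio-based correction: raw fluorescence divided by the light
    attenuation image at the excitation wavelength. eps guards against
    division by zero in dark regions (our addition, not the paper's)."""
    fluo = np.asarray(fluo, dtype=float)
    excitation = np.asarray(excitation, dtype=float)
    return fluo / np.maximum(excitation, eps)

# A fluorophore under 2x stronger illumination looks 2x brighter in the
# raw image, but the ratio image recovers comparable values.
raw = np.array([[2.0, 1.0]])
exc = np.array([[2.0, 1.0]])
print(correct_fluorescence(raw, exc))   # both pixels -> 1.0
```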

  8. Implementing an indoor smoking ban in prison: enforcement issues and effects on tobacco use, exposure to second-hand smoke and health of inmates.

    PubMed

    Lasnier, Benoit; Cantinotti, Michael; Guyon, Louise; Royer, Ann; Brochu, Serge; Chayer, Lyne

    2011-01-01

    To describe the issues encountered during the implementation of an indoor smoking ban in prison and its effects on self-reported tobacco use, perceived exposure to second-hand smoke (SHS) and perceived health status of inmates in Quebec's provincial correctional facilities. Quantitative data were obtained from 113 inmates in three provincial correctional facilities in the province of Quebec, Canada. Qualitative data were obtained from 52 inmates and 27 staff members. Participants were recruited through a self-selection process. Particular efforts were made to enrol proportions of men, women, smokers and non-smokers similar to those generally found among correctional populations. Despite the indoor smoking ban, 93% of inmates who declared themselves smokers reported using tobacco products inside the correctional facilities and 48% did not report any reduction in their tobacco use. Only 46% of smokers declared having been caught smoking inside the facility, and more than half of them (58%) reported no disciplinary consequences to their smoking. A majority of inmates incarcerated before the implementation of the ban (66%) did not perceive a reduction of their exposure to SHS following the indoor ban. Enforcement issues were encountered during the implementation of the indoor ban, notably because of the amendment made to the original regulation (total smoking ban) and tolerance from smokers in the staff towards indoor smoking. They were also related to perceptions that banning indoor smoking is complex and poses management problems. This study's findings emphasize the importance of considering organizational and environmental factors when planning the implementation of an indoor smoking ban in correctional facilities.

  9. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Favalli, Andrea

    2017-10-01

    Neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three measured counting rates, singles, doubles and triples are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection, based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.
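The singles, doubles, triples and quads rates of shift-register analysis are built from reduced factorial moments of the recorded multiplicity histogram. As a minimal sketch, the moments themselves can be computed as below; the Dytlewski dead-time corrections and the point-model inversion involve detector-specific parameters that are not reproduced here:

```python
from math import comb

def factorial_moments(hist, kmax=4):
    """Reduced factorial moments m_k = sum_n C(n, k) * P(n) of a
    multiplicity histogram, where hist[n] counts gates containing n
    events. m_1..m_4 feed the singles/doubles/triples/quads analysis."""
    total = sum(hist)
    probs = [c / total for c in hist]
    return [sum(comb(n, k) * p for n, p in enumerate(probs))
            for k in range(1, kmax + 1)]

# Toy histogram: gates containing n = 0, 1, 2, 3 events.
m1, m2, m3, m4 = factorial_moments([10, 60, 25, 5])
print(m1, m2, m3, m4)
```

Note how m4 vanishes for this toy histogram: quads only become statistically viable when enough high-multiplicity gates are recorded, which is exactly the regime the extended correction addresses.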

  10. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three measured counting rates, singles, doubles and triples are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection, based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  11. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    DOE PAGES

    Croft, Stephen; Favalli, Andrea

    2017-07-16

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three measured counting rates, singles, doubles and triples are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection, based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  12. 40 CFR 16.5 - Request for correction or amendment of record.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Request for correction or amendment of record. 16.5 Section 16.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL IMPLEMENTATION OF PRIVACY ACT OF 1974 § 16.5 Request for correction or amendment of record. An individual may request correction or amendment of any record...

  13. 40 CFR 16.5 - Request for correction or amendment of record.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Request for correction or amendment of record. 16.5 Section 16.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL IMPLEMENTATION OF PRIVACY ACT OF 1974 § 16.5 Request for correction or amendment of record. An individual may request correction or amendment of any record...

  14. 40 CFR 16.5 - Request for correction or amendment of record.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 1 2013-07-01 2013-07-01 false Request for correction or amendment of record. 16.5 Section 16.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL IMPLEMENTATION OF PRIVACY ACT OF 1974 § 16.5 Request for correction or amendment of record. An individual may request correction or amendment of any record...

  15. 40 CFR 16.5 - Request for correction or amendment of record.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 1 2012-07-01 2012-07-01 false Request for correction or amendment of record. 16.5 Section 16.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL IMPLEMENTATION OF PRIVACY ACT OF 1974 § 16.5 Request for correction or amendment of record. An individual may request correction or amendment of any record...

  16. 40 CFR 16.5 - Request for correction or amendment of record.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Request for correction or amendment of record. 16.5 Section 16.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL IMPLEMENTATION OF PRIVACY ACT OF 1974 § 16.5 Request for correction or amendment of record. An individual may request correction or amendment of any record...

  17. Implementing Training for Correctional Educators. Correctional/Special Education Training Project.

    ERIC Educational Resources Information Center

    Wolford, Bruce I., Ed.; And Others

    These papers represent the collected thoughts of the contributors to a national training and dissemination conference dealing with identifying and developing linkages between postsecondary special education and criminal justice preservice education programs in order to improve training for correctional educators working with disabled clients. The…

  18. Experimental rheological procedure adapted to pasty dewatered sludge up to 45 % dry matter.

    PubMed

    Mouzaoui, M; Baudez, J C; Sauceau, M; Arlabosse, P

    2018-04-15

    Wastewater sludge is characterized by complex rheological properties, strongly dependent on solids concentration and temperature. These properties are required for process hydrodynamic modelling, but their correct measurement is often challenging at high solids concentrations. This is especially true when modelling the hydrodynamics of dewatered sludge during drying, where the solids content (TS) increases with residence time. Indeed, until now, the literature has mostly focused on the rheological characterization of sludge at low and moderate TS (between 4 and 8%). Limited attention has been paid to pasty and highly concentrated sludge, mainly because of the difficulty of carrying out the measurements. The reproducibility of results appeared to be poor, so results may not always be fully representative of the effective material properties. This work demonstrates that reproducible results can be obtained by controlling the cracks and fractures which always take place in classical rotational rheometry. For that purpose, a well-controlled experimental procedure has been developed, allowing the exact determination of the effectively sheared surface. This surface is calculated by interleaving a classical stress sweep with measurements at a reference strain value. The implementation of this procedure allows the correct determination of solid-like characteristics from 20 to 45% TS, but also shows that pasty and highly concentrated sludge exhibits normal forces caused by dilatancy. Moreover, the surface correction appears to be independent of TS in the studied range. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. 42 CFR 431.954 - Basis and scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... implementing any corrective action plans; requirements for State verification of an applicant's self-declaration or self-certification of eligibility for, and correct amount of, medical assistance under Medicaid...

  20. Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.

    PubMed

    Houston, Lauren; Probst, Yasmine; Humphries, Allison

    2015-01-01

    Health data has long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists for measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed whereby, if >5% of data variables were incorrect, a second 10% random sample would be extracted from the trial dataset. Error was coded: correct, incorrect (valid or invalid), not recorded or not entered. Audit-1 had a total error of 33% and audit-2 36%. The physiological section was the only audit section to have <5% error. Data not recorded to case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data were deemed correct or incorrect. Our study developed a straightforward method to perform an SDV audit. An audit rule was identified and error coding was implemented. Findings demonstrate that monitoring data quality by an SDV audit can identify data quality and integrity issues within clinical research settings, allowing quality improvements to be made. The authors suggest this approach be implemented for future research.
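The audit rule above (verify a 10% random sample; if more than 5% of checks fail, draw a second 10% sample) is simple to automate. A minimal sketch, with the four-level error coding collapsed to a boolean for brevity:

```python
import random

def sdv_audit(records, verify, sample_frac=0.10, threshold=0.05, seed=0):
    """Source data verification sketch: check a random sample of files;
    if the error rate exceeds the threshold, draw a second sample.
    `verify` returns True when a record matches its source document."""
    rng = random.Random(seed)

    def error_rate(pool):
        sample = rng.sample(pool, max(1, int(len(pool) * sample_frac)))
        return sum(not verify(r) for r in sample) / len(sample)

    rate1 = error_rate(records)
    rate2 = error_rate(records) if rate1 > threshold else None
    return rate1, rate2

# Toy dataset: every third record is deliberately "incorrect".
data = [{"id": i, "ok": i % 3 != 0} for i in range(200)]
print(sdv_audit(data, lambda r: r["ok"]))
```

In practice `verify` would encode the full coding scheme (correct / incorrect valid / incorrect invalid / not recorded / not entered) rather than a single flag.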

  1. Validation of ozone profile retrievals derived from the OMPS LP version 2.5 algorithm against correlative satellite measurements

    NASA Astrophysics Data System (ADS)

    Kramarova, Natalya A.; Bhartia, Pawan K.; Jaross, Glen; Moy, Leslie; Xu, Philippe; Chen, Zhong; DeLand, Matthew; Froidevaux, Lucien; Livesey, Nathaniel; Degenstein, Douglas; Bourassa, Adam; Walker, Kaley A.; Sheese, Patrick

    2018-05-01

    The Limb Profiler (LP) is a part of the Ozone Mapping and Profiler Suite launched on board the Suomi NPP satellite in October 2011. The LP measures solar radiation scattered from the atmospheric limb in the ultraviolet and visible spectral ranges between the surface and 80 km. These measurements of scattered solar radiances allow for the retrieval of ozone profiles from cloud tops up to 55 km. The LP started operational observations in April 2012. In this study we evaluate more than 5.5 years of ozone profile measurements from the OMPS LP processed with the new NASA GSFC version 2.5 retrieval algorithm. We provide a brief description of the key changes implemented in this new algorithm, including a pointing correction, new cloud height detection, an explicit aerosol correction and a reduction of the number of wavelengths used in the retrievals. The OMPS LP ozone retrievals have been compared with independent satellite profile measurements obtained from the Aura Microwave Limb Sounder (MLS), Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) and Odin Optical Spectrograph and InfraRed Imaging System (OSIRIS). We document observed biases and seasonal differences and evaluate the stability of the version 2.5 ozone record over 5.5 years. Our analysis indicates that the mean differences between LP and correlative measurements are well within the required ±10 % between 18 and 42 km. In the upper stratosphere and lower mesosphere (> 43 km) LP tends to have a negative bias. We find larger biases in the lower stratosphere and upper troposphere, but LP ozone retrievals have significantly improved in version 2.5 compared to version 2 due to the implemented aerosol correction. In the northern high latitudes we observe larger biases between 20 and 32 km due to a remaining thermal sensitivity issue.
Our analysis shows that LP ozone retrievals agree well with the correlative satellite observations in characterizing the vertical, spatial and temporal ozone distribution associated with natural processes, like the seasonal cycle and the quasi-biennial oscillation. We found a small positive drift of ~0.5 % yr⁻¹ in the LP ozone record against MLS and OSIRIS that is more pronounced at altitudes above 35 km. This pattern in the relative drift is consistent with a possible 100 m drift in the LP sensor pointing detected by one of our altitude-resolving methods.
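A per-year drift like the ~0.5 % yr⁻¹ quoted above is typically estimated as the slope of a least-squares line through the time series of relative differences between the two records. A sketch on synthetic data (the numbers below are simulated, not the LP/MLS record):

```python
import numpy as np

# Hypothetical monthly mean relative differences (%) between two ozone
# records over 5.5 years, simulated with a built-in 0.5 %/yr drift.
months = np.arange(66)                       # 66 monthly samples
rng = np.random.default_rng(1)
rel_diff = 0.5 * months / 12.0 + rng.normal(0.0, 0.2, months.size)

slope_per_month = np.polyfit(months, rel_diff, 1)[0]
drift_per_year = slope_per_month * 12.0
print(round(drift_per_year, 2))              # close to the simulated 0.5 %/yr
```

Real drift analyses also deseasonalize the differences and propagate the slope's uncertainty, which this sketch omits.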

  2. Advanced Fire Detector for Space Applications

    NASA Technical Reports Server (NTRS)

    Kutzner, Joerg

    2012-01-01

    A document discusses an optical carbon monoxide sensor for early fire detection. During the sensor development, a concept was implemented to allow reliable carbon monoxide detection in the presence of interfering absorption signals. Methane interference is present in the operating wavelength range of the developed prototype sensor for carbon monoxide detection. The operating parameters of the prototype sensor have been optimized so that interference with methane is minimized. In addition, simultaneous measurement of methane is implemented, and the instrument automatically corrects the carbon monoxide signal at high methane concentrations. This is possible because VCSELs (vertical cavity surface emitting lasers) with extended current tuning capabilities are implemented in the optical device. The tuning capabilities of these new laser sources are sufficient to cover the wavelength range of several absorption lines. The delivered carbon monoxide sensor (COMA 1) reliably measures low carbon monoxide levels even in the presence of high methane signals. The signal bleed-over is determined during system calibration and is then accounted for in the system parameters. The sensor reports carbon monoxide concentrations reliably for (interfering) methane concentrations up to several thousand parts per million.
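The bleed-over compensation described above amounts to subtracting a calibrated methane contribution from the raw CO reading. A minimal sketch; the bleed coefficient is determined during system calibration, and the numeric values here are invented for illustration:

```python
def correct_co(raw_co_ppm, ch4_ppm, bleed_coeff):
    """Subtract the methane bleed-over from the raw CO reading.
    bleed_coeff (ppm CO per ppm CH4) comes from system calibration;
    the example value below is made up, not the COMA 1 calibration."""
    return raw_co_ppm - bleed_coeff * ch4_ppm

# Illustrative numbers only: 12 ppm raw CO alongside 2000 ppm CH4 and a
# hypothetical bleed-over of 0.004 ppm/ppm leaves ~4 ppm corrected CO.
print(correct_co(12.0, 2000.0, 0.004))
```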

  3. How to implement a clinical pathway for intensive glucose regulation in acute coronary syndromes.

    PubMed

    de Mulder, Maarten; Zwaan, Esther; Wielinga, Yvonne; Stam, Frank; Umans, Victor A W M

    2009-06-01

    Hyperglycemia upon admission of myocardial infarction patients predicts inferior clinical outcomes. Current strategies investigating hyperglycemia correction mostly use glucose-driven protocols. Implementation of these often labor-intensive protocols might be facilitated with the approach of a clinical pathway. Therefore, we evaluated the implementation of our glucose-driven protocol. We adapted a protocol for use in our coronary care unit (CCU), which was implemented according to the steps of a clinical pathway. To compensate for carbohydrates in meals, we additionally developed a regimen of subcutaneous insulin. Protocol adherence was facilitated with a Web-based insulin calculator. All hyperglycemic patients admitted to the CCU were eligible for treatment according to this protocol. In a 4-month period, 643 glucose measurements were obtained in hyperglycemic patients admitted to our CCU. Patients were treated intensively with IV insulin for 35 hours and had 23 glucose measurements in this time span on average. This regimen achieved a median glucose of 6.2 mmol/L. Severe hypoglycemia occurred in only 1.1% of measurements and was without severe clinical side effects. Introduction of a new intensive insulin protocol according to the steps of a clinical pathway is safe and feasible. The presence of a clinical pathway coordinator and sound communication are important conditions for successful introduction, which can be further aided with a computerized calculator.
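To show the shape of a glucose-driven calculator like the Web-based one mentioned above, here is a toy titration function. It is purely illustrative: the target band, step size, and logic are invented and do not reproduce the CCU protocol; nothing here is clinical guidance.

```python
def next_iv_rate(current_rate_u_per_h, glucose_mmol_l,
                 target=(4.4, 6.1), step=0.5):
    """Toy glucose-driven titration: nudge the IV insulin rate up or
    down by a fixed step when glucose sits above or below a target band.
    All thresholds and steps are hypothetical, not the study protocol."""
    low, high = target
    if glucose_mmol_l > high:
        return current_rate_u_per_h + step
    if glucose_mmol_l < low:
        return max(0.0, current_rate_u_per_h - step)
    return current_rate_u_per_h

print(next_iv_rate(2.0, 9.3))   # above band -> 2.5
print(next_iv_rate(2.0, 5.0))   # within band -> unchanged, 2.0
```

The value of wrapping even simple rules like this in software is the point the abstract makes: adherence improves when the arithmetic is taken out of the clinician's hands.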

  4. Ocean Color Measurements from Landsat-8 OLI using SeaDAS

    NASA Technical Reports Server (NTRS)

    Franz, Bryan Alden; Bailey, Sean W.; Kuring, Norman; Werdell, P. Jeremy

    2014-01-01

    The Operational Land Imager (OLI) is a multi-spectral radiometer hosted on the recently launched Landsat-8 satellite. OLI includes a suite of relatively narrow spectral bands at 30-meter spatial resolution in the visible to shortwave infrared that make it a potential tool for ocean color radiometry: measurement of the reflected spectral radiance upwelling from beneath the ocean surface that carries information on the biogeochemical constituents of the upper ocean euphotic zone. To evaluate the potential of OLI to measure ocean color, processing support was implemented in SeaDAS, which is an open-source software package distributed by NASA for processing, analysis, and display of ocean remote sensing measurements from a variety of satellite-based multi-spectral radiometers. Here we describe the implementation of OLI processing capabilities within SeaDAS, including support for various methods of atmospheric correction to remove the effects of atmospheric scattering and absorption and retrieve the spectral remote-sensing reflectance (Rrs; sr⁻¹). The quality of the retrieved Rrs imagery will be assessed, as will the derived water column constituents such as the concentration of the phytoplankton pigment chlorophyll a.

  5. Analysis of measurement system as the mechatronics system

    NASA Astrophysics Data System (ADS)

    Giniotis, V.; Grattan, K. T. V.; Rybokas, M.; Bručas, D.

    2010-07-01

    The paper deals with a mechatronic arrangement for angle measuring system applications. The objects to be measured are circular raster scales, rotary encoders and coded scales. The task of the measuring system is to determine the bias of an angle measuring standard, such as a circular scale, and to use the results for error correction and accuracy improvement of metal cutting machines, coordinate measuring machines, robots, etc. Technical solutions are given that apply active materials in smart piezoactuators implemented at several positions in the angular measuring equipment. The mechatronic measuring system is analysed as a complex integrated system, and some of its elements can be used as separate units. All these functional elements are described and commented on in the paper, with diagrams and graphs of errors and examples of microdisplacement devices using mechatronic elements.

  6. A new correction method serving to eliminate the parabola effect of flatbed scanners used in radiochromic film dosimetry.

    PubMed

    Poppinga, D; Schoenfeld, A A; Doerner, K J; Blanck, O; Harder, D; Poppe, B

    2014-02-01

    The purpose of this study is the correction of the lateral scanner artifact, i.e., the effect that, on a large homogeneously exposed EBT3 film, a flatbed scanner measures different optical densities at different positions along the x axis, the axis parallel to the elongated light source. At constant dose, the measured optical density profiles along this axis have a parabolic shape with significant dose dependent curvature. Therefore, the effect is shortly called the parabola effect. The objective of the algorithm developed in this study is to correct for the parabola effect. Any optical density measured at given position x is transformed into the equivalent optical density c at the apex of the parabola and then converted into the corresponding dose via the calibration of c versus dose. For the present study EBT3 films and an Epson 10000XL scanner including transparency unit were used for the analysis of the parabola effect. The films were irradiated with 6 MV photons from an Elekta Synergy accelerator in a RW3 slab phantom. In order to quantify the effect, ten film pieces with doses graded from 0 to 20.9 Gy were sequentially scanned at eight positions along the x axis and at six positions along the z axis (the movement direction of the light source) both for the portrait and landscape film orientations. In order to test the effectiveness of the new correction algorithm, the dose profiles of an open square field and an IMRT plan were measured by EBT3 films and compared with ionization chamber and ionization chamber array measurement. The parabola effect has been numerically studied over the whole measuring field of the Epson 10000XL scanner for doses up to 20.9 Gy and for both film orientations. The presented algorithm transforms any optical density at position x into the equivalent optical density that would be measured at the same dose at the apex of the parabola. 
This correction method has been validated up to doses of 5.2 Gy all over the scanner bed with 2D dose distributions of an open square photon field and an IMRT distribution. The algorithm presented in this study quantifies and corrects the parabola effect of EBT3 films scanned in commonly used commercial flatbed scanners at doses up to 5.2 Gy. It is easy to implement, and no additional work steps are necessary in daily routine film dosimetry.
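The core of the correction above is a mapping from an optical density measured at lateral position x to the equivalent value at the parabola apex. As a simplified sketch (not the published algorithm's full dose-dependent fit), assume the parabola's curvature scales linearly with the apex density, OD(x) = c·(1 + k·(x − x0)²); the constants k and x0 below are invented calibration numbers:

```python
def apex_equivalent_od(od_measured, x, x0=0.0, k=2.0e-4):
    """Map an optical density measured at lateral scanner position x
    (in mm from the apex x0) to the equivalent apex value, under the
    simplifying model OD(x) = c * (1 + k*(x - x0)**2). k and x0 are
    hypothetical; the real method fits them per scanner and dose."""
    return od_measured / (1.0 + k * (x - x0) ** 2)

# The same dose read off-axis appears denser; the correction brings it
# back toward the apex value before dose calibration is applied.
print(round(apex_equivalent_od(0.52, x=10.0), 4))   # 0.5098
```

The corrected density c would then be converted to dose through the usual calibration curve of c versus dose, as the abstract describes.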

  7. An Ultrasonographic Periodontal Probe

    NASA Astrophysics Data System (ADS)

    Bertoncini, C. A.; Hinders, M. K.

    2010-02-01

    Periodontal disease, commonly known as gum disease, affects millions of people. The current method of detecting periodontal pocket depth is painful, invasive, and inaccurate. As an alternative to manual probing, an ultrasonographic periodontal probe is being developed to use ultrasound echo waveforms to measure periodontal pocket depth, which is the main measure of periodontal disease. Wavelet transforms and pattern classification techniques are implemented in artificial intelligence routines that can automatically detect pocket depth. The main pattern classification technique used here, called a binary classification algorithm, compares test objects with only two possible pocket depth measurements at a time and relies on dimensionality reduction for the final determination. This method correctly identifies up to 90% of the ultrasonographic probe measurements within the manual probe's tolerance.
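
    A minimal sketch of such a pairwise ("binary") classification tournament, with hypothetical depth templates standing in for the wavelet-derived, dimensionality-reduced features (the template values and distance-based decision rule are illustrative assumptions, not the authors' routine):

    ```python
    import numpy as np

    def binary_classify(features, templates):
        """Tournament of pairwise comparisons: each candidate pocket depth has
        a template feature vector; the test waveform is compared against two
        depths at a time, and the depth winning the most contests is returned."""
        depths = list(templates)
        wins = {d: 0 for d in depths}
        for i, a in enumerate(depths):
            for b in depths[i + 1:]:
                # pairwise decision: the nearer template (in feature space) wins
                da = np.linalg.norm(features - templates[a])
                db = np.linalg.norm(features - templates[b])
                wins[a if da <= db else b] += 1
        return max(wins, key=wins.get)

    # Hypothetical 2-D reduced features for pocket depths of 2, 4, and 6 mm
    templates = {2: np.array([0.1, 0.9]),
                 4: np.array([0.5, 0.5]),
                 6: np.array([0.9, 0.1])}
    depth = binary_classify(np.array([0.48, 0.55]), templates)
    ```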

  8. Research Implementation.

    ERIC Educational Resources Information Center

    Trochim, William M. K.

    Investigated is the topic of research implementation and how it can affect evaluation results. Even when evaluations are well planned, the obtained results can be misleading if the conscientiously-constructed research plan is not correctly implemented in practice. In virtually every research arena, one finds major difficulties in implementing the…

  9. The Quantum Socket: Wiring for Superconducting Qubits - Part 3

    NASA Astrophysics Data System (ADS)

    Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.

    The implementation of a quantum computer requires quantum error correction codes, which make it possible to correct errors occurring on physical quantum bits (qubits). Ensembles of physical qubits are grouped to form a logical qubit with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits; thus, a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction; however, a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is the realization of a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics are another fundamental element for scalability. We will present a proposal based on the quantum socket to interconnect classical control hardware with superconducting qubit hardware, where both are operated at millikelvin temperatures.

  10. [Effect of two-level community-based health education pattern on schistosomiasis control].

    PubMed

    Xia, Zhang; He-Hua, Hu; Xiong, Liu; Hua-Ming, Zhang; Shi-Hao, He; Chuan-Yun, Xiao; Rong, Tian; Wei-Rong, Zhang; Cai-Xia, Cui; Xiao-Hong, Wen; Jun, Liu; Li-Ying, Yang; Mei, Chen; Chun-Li, Cao; Shi-Zhu, Li

    2016-06-24

    To implement a two-level community-based health education pattern for schistosomiasis among residents of endemic areas in marshland and lake regions, so as to explore a suitable pattern of health education under a hypo-endemic situation. Two schistosomiasis-endemic villages in Jiangling County, Hubei Province were selected as study areas: one village was treated as an intervention group, where the two-level community-based health education pattern as well as regular control measures were implemented, and the other village was a control group, where only regular control measures were implemented. The awareness rates of schistosomiasis control knowledge, the rates of correct behavior, and the compliance rates of examination, treatment and chemotherapy of the two groups before and after the intervention were compared. According to the results of the baseline survey in 2014, the awareness rates of schistosomiasis control knowledge of the intervention and control groups were 84.00% and 77.45%, respectively, the rates of correct behavior of the two groups were 72.00% and 63.73%, respectively, and the compliance rates of treatment were 80.36% and 82.28%, respectively; none of the differences between the two groups were statistically significant (all P > 0.05). After the intervention with the two-level community-based health education, the rates of correct behavior and the compliance rates of examination and chemotherapy of the two groups were 92.31% and 80.37%, 95.11% and 82.55%, and 84.13% and 63.64%, respectively, and the differences between the two groups in all the above rates were statistically significant (all P < 0.05).
Compared with the pre-intervention values, the increases in the compliance rates of examination, treatment and chemotherapy in the intervention group were 20.97%, 15.33% and 23.29%, respectively, while those in the control group were 14.27%, 4.17% and -3.77%, respectively; the increases in the intervention group were higher than those in the control group. Through the two-level community-based pattern of health education, the compliance rates of examination and treatment of the residents improved; therefore, the pattern is suitable for popularization and application in marshland and lake regions.

  11. Development and implementation of an automated quantitative film digitizer quality control program

    NASA Astrophysics Data System (ADS)

    Fetterly, Kenneth A.; Avula, Ramesh T. V.; Hangiandreou, Nicholas J.

    1999-05-01

    A semi-automated, quantitative film digitizer quality control program, based on computer analysis of the image data from a single digitized test film, was developed. This program includes measurements of geometric accuracy, optical density performance, signal-to-noise ratio, and the presampled modulation transfer function. The variability of the measurements was less than ±5%. Measurements were made on a group of two clinical and two laboratory laser film digitizers during a trial period of approximately four months. Quality control limits were established based on clinical necessity, vendor specifications, and digitizer performance. During the trial period, one of the digitizers failed the performance requirements and was corrected by calibration.

  12. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced in an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
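
    A thin-plate-spline warp of the kind described can be sketched with SciPy's RBF interpolator: fit a TPS mapping from detected comb fiducial positions to their known ideal positions, then apply it to image coordinates. The fiducial coordinates below are made up, and the production pipeline described in the paper is considerably more involved.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Distorted fiducial centers detected in a comb-calibration image
    # (hypothetical values) and their known ideal positions on a regular grid
    detected = np.array([[10.2, 9.8], [50.5, 10.3], [10.1, 50.6], [49.7, 49.9]])
    ideal    = np.array([[10.0, 10.0], [50.0, 10.0], [10.0, 50.0], [50.0, 50.0]])

    # Thin-plate-spline mapping from distorted to ideal coordinates; applying
    # it to every pixel coordinate of a data image yields the warp correction
    warp = RBFInterpolator(detected, ideal, kernel='thin_plate_spline')

    # With zero smoothing, the spline interpolates exactly at the fiducials
    corrected = warp(np.array([[10.2, 9.8]]))
    ```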

  13. 78 FR 34264 - Technical Corrections to the HIPAA Privacy, Security, and Enforcement Rules

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ...-AA03 Technical Corrections to the HIPAA Privacy, Security, and Enforcement Rules AGENCY: Office for... corrections address certain inadvertent errors and omissions in the HIPAA Privacy, Security, and Enforcement... (HHS or ``the Department'') published a final rule to implement changes to the HIPAA Privacy, Security...

  14. 75 FR 17452 - PPL Susquehanna, LLC.; Susquehanna Steam Electric Station, Units 1 And 2; Correction to Federal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-06

    ...This document corrects a notice appearing in the Federal Register on March 19, 2010 (75 FR 13322), that incorrectly stated the number of exemptions requested by the licensee and the corresponding implementation date. This action is necessary to correct erroneous information.

  15. "Life after Prison." Successful Community Reintegration Programs Reduce Recidivism in Illinois.

    ERIC Educational Resources Information Center

    Black, Hartzel L.; And Others

    The Southeastern Illinois College Correctional Educational Division (SIC-CED) begins its involvement at the offender's entry into the correctional institution and continues through the community networking system upon his or her release from the Illinois Department of Corrections. Funding has been awarded for development and implementation of the…

  16. 75 FR 39042 - Solicitation for a Cooperative Agreement: The Norval Morris Project Implementation Phase

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-07

    ... model designed to provide correctional agencies with a step-by-step approach to promote systemic change..., evidence-based approaches, evaluate their potential to inform correctional policy and practice, create... outside the corrections field to develop interdisciplinary approaches and draw on professional networks...

  17. 76 FR 56949 - Biomass Crop Assistance Program; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    .... ACTION: Interim rule; correction. SUMMARY: The Commodity Credit Corporation (CCC) is amending the Biomass... funds in favor of the ``project area'' portion of BCAP. CCC is also correcting errors in the regulation... INFORMATION: Background CCC published a final rule on October 27, 2010 (75 FR 66202-66243) implementing BCAP...

  18. Detection and classification of human body odor using an electronic nose.

    PubMed

    Wongchoosuk, Chatchawal; Lutz, Mario; Kerdcharoen, Teerakiat

    2009-01-01

    An electronic nose (E-nose) has been designed and equipped with software that can detect and classify human armpit body odor. An array of metal oxide sensors was used for detecting volatile organic compounds. The measurement circuit employs a voltage divider resistor to measure the sensitivity of each sensor. This E-nose was controlled by in-house developed software through a portable USB data acquisition card with a principal component analysis (PCA) algorithm implemented for pattern recognition and classification. Because gas sensor sensitivity in the detection of armpit odor samples is affected by humidity, we propose a new method and algorithms combining hardware/software for the correction of the humidity noise. After the humidity correction, the E-nose showed the capability of detecting human body odor and distinguishing the body odors from two persons in a relative manner. The E-nose is still able to recognize people, even after application of deodorant. In conclusion, this is the first report of the application of an E-nose for armpit odor recognition.
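
    The combination of humidity correction followed by PCA projection might be sketched like this; the linear humidity model, sensitivity coefficients, and sensor responses below are illustrative assumptions, not the authors' actual algorithm or data.

    ```python
    import numpy as np

    def humidity_correct(responses, humidity, sensitivity):
        """Remove a linear humidity contribution from each sensor response.
        The linear model and its coefficients are illustrative assumptions."""
        return responses - np.outer(humidity, sensitivity)

    def pca(X, n_components=2):
        """Project sensor-array samples onto their first principal components."""
        Xc = X - X.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)          # sensor-sensor covariance
        vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        order = np.argsort(vals)[::-1][:n_components]
        return Xc @ vecs[:, order]

    # Hypothetical responses of a 4-sensor array to 6 odor samples,
    # with the relative humidity recorded for each sample
    X = np.array([[1.00, 2.00, 1.50, 0.90],
                  [1.10, 2.10, 1.40, 1.00],
                  [3.00, 0.50, 2.20, 2.80],
                  [2.90, 0.60, 2.30, 2.70],
                  [1.05, 2.05, 1.45, 0.95],
                  [3.10, 0.40, 2.10, 2.90]])
    humidity = np.array([0.40, 0.50, 0.40, 0.50, 0.45, 0.42])
    sens = np.array([0.10, 0.20, 0.05, 0.10])   # per-sensor humidity sensitivity

    scores = pca(humidity_correct(X, humidity, sens))
    ```

    The rows of `scores` would then feed the pattern classification step (e.g. clustering samples by person).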

  19. A microcontroller system for investigating the catch effect: functional electrical stimulation of the common peroneal nerve.

    PubMed

    Hart, D J; Taylor, P N; Chappell, P H; Wood, D E

    2006-06-01

    Correction of drop foot in hemiplegic gait is achieved by electrical stimulation of the common peroneal nerve with a series of pulses at a fixed frequency. However, during normal gait, the electromyographic signals from the tibialis anterior muscle indicate that muscle force is not constant but varies during the swing phase. The application of double pulses for the correction of drop foot may enhance the gait by generating greater torque at the ankle and thereby increase the efficiency of the stimulation with reduced fatigue. A flexible controller has been designed around the Odstock Drop Foot Stimulator to deliver different profiles of pulses implementing doublets and optimum series. A peripheral interface controller (PIC) microcontroller with some external circuits has been designed and tested to accommodate six profiles. Preliminary results of the measurements from a normal subject seated in a multi-moment chair (an isometric torque measurement device) indicate that profiles containing doublets and optimum spaced pulses look favourable for clinical use.

  20. Variability of adjacency effects in sky reflectance measurements.

    PubMed

    Groetsch, Philipp M M; Gege, Peter; Simis, Stefan G H; Eleveld, Marieke A; Peters, Steef W M

    2017-09-01

    Sky reflectance R_sky(λ) is used to correct in situ reflectance measurements in the remote detection of water color. We analyzed the directional and spectral variability in R_sky(λ) due to adjacency effects against an atmospheric radiance model. The analysis is based on one year of semi-continuous R_sky(λ) observations that were recorded in two azimuth directions. Adjacency effects contributed to the dependence of R_sky(λ) on season and viewing angle, predominantly in the near-infrared (NIR). For our test area, adjacency effects spectrally resembled a generic vegetation spectrum. The adjacency effect was weakly dependent on the magnitude of Rayleigh- and aerosol-scattered radiance. In the NIR, the reflectance differed between viewing directions by 5.4±6.3% for adjacency effects and by 21.0±19.8% for Rayleigh- and aerosol-scattered R_sky(λ). We discuss under which conditions in situ water reflectance observations require a dedicated correction for adjacency effects, and we provide an open-source implementation of our method to aid identification of such conditions.

  1. Remedial investigation work plan for Bear Creek Valley Operable Unit 2 (Rust Spoil Area, SY-200 Yard, Spoil Area 1) at the Oak Ridge Y-12 Plant, Oak Ridge, Tennessee. Environmental Restoration Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-05-01

    The enactment of the Resource Conservation and Recovery Act (RCRA) in 1976 and the Hazardous and Solid Waste Amendments (HSWA) to RCRA in 1984 created management requirements for hazardous waste facilities. The facilities within the Oak Ridge Reservation (ORR) were in the process of meeting the RCRA requirements when ORR was placed on the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) National Priorities List (NPL) on November 21, 1989. Under RCRA, the actions typically follow the RCRA Facility Assessment (RFA)/RCRA Facility Investigation (RFI)/Corrective Measures Study (CMS)/Corrective Measures Implementation process. Under CERCLA, the actions follow the PA/SI/Remedial Investigation (RI)/Feasibility Study (FS)/Remedial Design/Remedial Action process. The development of this document will incorporate requirements under both RCRA and CERCLA into an RI work plan for the characterization of Bear Creek Valley (BCV) Operable Unit (OU) 2.

  2. Detection and Classification of Human Body Odor Using an Electronic Nose

    PubMed Central

    Wongchoosuk, Chatchawal; Lutz, Mario; Kerdcharoen, Teerakiat

    2009-01-01

    An electronic nose (E-nose) has been designed and equipped with software that can detect and classify human armpit body odor. An array of metal oxide sensors was used for detecting volatile organic compounds. The measurement circuit employs a voltage divider resistor to measure the sensitivity of each sensor. This E-nose was controlled by in-house developed software through a portable USB data acquisition card with a principal component analysis (PCA) algorithm implemented for pattern recognition and classification. Because gas sensor sensitivity in the detection of armpit odor samples is affected by humidity, we propose a new method and algorithms combining hardware/software for the correction of the humidity noise. After the humidity correction, the E-nose showed the capability of detecting human body odor and distinguishing the body odors from two persons in a relative manner. The E-nose is still able to recognize people, even after application of deodorant. In conclusion, this is the first report of the application of an E-nose for armpit odor recognition. PMID:22399995

  3. Development of a Network RTK Positioning and Gravity-Surveying Application with Gravity Correction Using a Smartphone

    PubMed Central

    Kim, Jinsoo; Lee, Youngcheol; Cha, Sungyeoul; Choi, Chuluong; Lee, Seongkyu

    2013-01-01

    This paper proposes a smartphone-based network real-time kinematic (RTK) positioning and gravity-surveying application (app) that allows semi-real-time measurements using the built-in Bluetooth features of the smartphone and a third-generation or long-term evolution wireless device. The app was implemented on a single smartphone by integrating a global navigation satellite system (GNSS) controller, a laptop, and a field-note writing tool. The observation devices (i.e., a GNSS receiver and relative gravimeter) functioned independently of this system. The app included a gravity module, which converted the measured relative gravity reading into an absolute gravity value by applying corrections for tides, meter height, and instrument drift, together with network adjustments. The semi-real-time features of this app allowed data to be shared easily with other researchers. Moreover, the proposed smartphone-based gravity-survey app was easily adaptable to various locations and rough terrain due to its compact size. PMID:23857258
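
    The reduction chain of such a gravity module can be sketched under simple assumptions: linear instrument drift, a precomputed tidal correction, and the standard free-air gradient (~0.3086 mGal/m) for the meter-height reduction. The function name and every numeric value below are illustrative, not the app's actual implementation.

    ```python
    def absolute_gravity(reading_mgal, base_reading_mgal, base_abs_mgal,
                         drift_mgal_per_hr, hours_since_base,
                         tide_mgal=0.0, meter_height_m=0.0):
        """Convert a relative gravimeter reading to an absolute gravity value.

        Sketch assuming linear drift since the base-station reading, a
        precomputed tidal correction, and a free-air meter-height reduction.
        """
        delta = reading_mgal - base_reading_mgal     # relative difference
        drift = drift_mgal_per_hr * hours_since_base # linear drift since base
        height = 0.3086 * meter_height_m             # free-air correction
        return base_abs_mgal + delta - drift + tide_mgal + height

    # Hypothetical reading two hours after tying in at a known base station
    g = absolute_gravity(5012.345, 5010.000, 979875.0,
                         drift_mgal_per_hr=0.01, hours_since_base=2.0,
                         tide_mgal=0.05, meter_height_m=0.2)
    ```

    In the app, network adjustment would then distribute the remaining misclosures over the survey loop.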

  4. [Impact of quality-indicator-based measures to improve the treatment of acute poisoning in pediatric emergency patients].

    PubMed

    Martínez Sánchez, Lidia; Trenchs Sainz de la Maza, Victoria; Azkunaga Santibáñez, Beatriz; Nogué-Xarau, Santiago; Ferrer Bosch, Nuria; García González, Elsa; Luaces I Cubells, Carles

    2016-02-01

    To analyze the impact of quality-indicator-based measures for improving the quality of care for acute poisoning in pediatric emergency departments. Recent assessments of quality indicators were compared with benchmark targets and with results from previous studies. The first study evaluated 6 basic indicators in the pediatric emergency departments of members of the working group on poisoning of the Spanish Society of Pediatric Emergency Medicine (GTI-SEUP). The second study evaluated 20 indicators in a single emergency department of GTI-SEUP members. Based on the results of those studies, the departments implemented the following corrective measures: creation of a team for gastric lavage follow-up, preparation of a new GTI-SEUP manual on poisoning, implementation of a protocol for poisoning incidents, and creation of specific poisoning-related fields for computerized patient records. The benchmark targets were reached on 4 quality indicators in the first study. Improvements were seen in the availability of protocols, as indicators exceeded the target in all the pediatric emergency departments (vs 29.2% of the departments in an earlier study, P < .001). No other significant improvements were observed. In the second study the benchmarks were reached on 13 indicators. Improvements were seen in compliance with incident reporting to the police (recently, 44.4% vs 19.2% previously, P = .036), case registration in the minimum basic data set (51.0% vs 1.9%, P < .001), and a trend toward increased administration of activated carbon within 2 hours (93.1% vs 83.5%, P = .099). No other significant improvements were seen. The corrective measures led to improvements in some quality indicators. There is still room for improvement in these emergency departments' care of pediatric poisoning.

  5. Model development for MODIS thermal band electronic cross-talk

    NASA Astrophysics Data System (ADS)

    Chang, Tiejun; Wu, Aisheng; Geng, Xu; Li, Yonghong; Brinkmann, Jake; Keller, Graziela; Xiong, Xiaoxiong (Jack)

    2016-10-01

    The MODerate-resolution Imaging Spectroradiometer (MODIS) has 36 bands, among them 16 thermal emissive bands covering the wavelength range from 3.8 to 14.4 μm. After 16 years of on-orbit operation, a few Terra MODIS thermal emissive bands have developed substantial electronic crosstalk issues, which cause biases in the Earth view (EV) brightness temperature measurements and surface feature contamination. The crosstalk effects on band 27, with center wavelength at 6.7 μm, and band 29, at 8.5 μm, have increased significantly in recent years, affecting downstream products such as water vapor and cloud mask. The crosstalk issue can be observed from the nearly monthly scheduled lunar measurements, from which the crosstalk coefficients can be derived. Most MODIS thermal bands saturate at lunar surface temperatures, so the development of an alternative approach is very helpful for verification. In this work, a physical model was developed to assess the crosstalk impact on calibration as well as on Earth view brightness temperature retrieval. This model was applied empirically to Terra MODIS band 29 for the correction of Earth brightness temperature measurements. The model development takes the detector's nonlinear response into account. The impacts of the electronic crosstalk are assessed in two steps. The first step determines the impact on calibration using the on-board blackbody (BB): due to the detector's nonlinear response and the large background signal, both the linear and nonlinear calibration coefficients are affected by crosstalk from the sending bands, and the crosstalk impact on these coefficients was calculated. The second step calculates the effects on the Earth view brightness temperature retrieval, which include those from the affected calibration coefficients and from the contamination of the Earth view measurements. The model links the measurement bias with the crosstalk coefficients, the detector nonlinearity, and the ratio of Earth measurements between the sending and receiving bands.
The correction of the electronic crosstalk can be implemented empirically from the processed bias at different brightness temperatures, through one of two approaches. In the first, as part of the routine calibration assessment for the thermal infrared bands, trending over selected Earth scenes is processed for all detectors in a band and a band-averaged bias is derived for a given time period; the correction of an affected band is then made by regressing the model against the band-averaged bias, after which corrections for detector differences are applied. The second approach requires trending for individual detectors, and the bias for each detector is used for regression with the model. A test using the first approach was made for Terra MODIS band 29, with the biases derived from long-term trending of sea surface temperature and Dome-C surface temperature.
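
    The first-order removal of linear electronic crosstalk from a receiving band can be sketched as below: subtract from the receiving band's counts the coefficient-weighted counts of each sending band. The coefficients and counts are illustrative, not actual MODIS values, and the real correction described above also involves the nonlinear calibration terms.

    ```python
    import numpy as np

    def correct_crosstalk(dn_receiving, dn_sending, coeffs):
        """Remove linear electronic crosstalk from a receiving band's digital
        counts. dn_sending has one row per sending band; coeffs are the
        crosstalk coefficients (e.g. derived from lunar measurements)."""
        dn_receiving = np.asarray(dn_receiving, dtype=float)
        dn_sending = np.asarray(dn_sending, dtype=float)
        return dn_receiving - np.asarray(coeffs, dtype=float) @ dn_sending

    # One scan line of counts for the receiving band and two sending bands
    dn_rx = [1000.0, 1010.0, 995.0]
    dn_tx = [[400.0, 410.0, 395.0],
             [800.0, 805.0, 790.0]]
    corrected = correct_crosstalk(dn_rx, dn_tx, coeffs=[0.002, 0.001])
    ```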

  6. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically on the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of the current wavefields and on the anisotropy. As a result, the computational cost is independent of the nature of the anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angle-dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. It is also free of SV-wave artefacts. Applications to a simple homogeneous transversely isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. Reverse time migration applied to a modified BP VTI model reveals that anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  7. Modeling the vestibulo-ocular reflex of the squirrel monkey during eccentric rotation and roll tilt

    NASA Technical Reports Server (NTRS)

    Merfeld, D. M.; Paloski, W. H. (Principal Investigator)

    1995-01-01

    Model simulations of the squirrel monkey vestibulo-ocular reflex (VOR) are presented for two motion paradigms: constant velocity eccentric rotation and roll tilt about a naso-occipital axis. The model represents the implementation of three hypotheses: the "internal model" hypothesis, the "gravito-inertial force (GIF) resolution" hypothesis, and the "compensatory VOR" hypothesis. The internal model hypothesis is based on the idea that the nervous system knows the dynamics of the sensory systems and implements this knowledge as an internal dynamic model. The GIF resolution hypothesis is based on the idea that the nervous system knows that gravity minus linear acceleration equals GIF and implements this knowledge by resolving the otolith measurement of GIF into central estimates of gravity and linear acceleration, such that the central estimate of gravity minus the central estimate of acceleration equals the otolith measurement of GIF. The compensatory VOR hypothesis is based on the idea that the VOR compensates for the central estimates of angular velocity and linear velocity, which sum in a near-linear manner. During constant velocity eccentric rotation, the model correctly predicts that: (1) the peak horizontal response is greater while "facing-motion" than with "back-to-motion"; (2) the axis of eye rotation shifts toward alignment with GIF; and (3) a continuous vertical response, slow phase downward, exists prior to deceleration. The model also correctly predicts that a torsional response during the roll rotation is the only velocity response observed during roll rotations about a naso-occipital axis. The success of this model in predicting the observed experimental responses suggests that the model captures the essence of the complex sensory interactions engendered by eccentric rotation and roll tilt.

  8. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories in which quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states; thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph, and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct the code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties: its memory lifetime increases with system size, up to a temperature-dependent maximum. One strategy for increasing the energy barrier is mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength, and commute with the stabilizer group: under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly.
While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.

  9. The combination of the error correction methods of GAFCHROMIC EBT3 film

    PubMed Central

    Li, Yinghui; Chen, Lixin; Zhu, Jinhan; Liu, Xiaowei

    2017-01-01

    Purpose The aim of this study was to combine a set of methods for radiochromic film dosimetry, including calibration, correction for lateral effects, and a proposed triple-channel analysis. These methods can be applied to GAFCHROMIC EBT3 film dosimetry for radiation field analysis and verification of IMRT plans. Methods A single-film exposure was used to achieve dose calibration, and the accuracy was verified by comparison with the square-field calibration method. Before performing the dose analysis, the lateral effects on pixel values were corrected. The position dependence of the lateral effect was fitted by a parabolic function, and the curvature factors at different dose levels were obtained using a quadratic formula. After the lateral effect correction, a triple-channel analysis was used to reduce disturbances and convert scanned images from films into dose maps. The dose profiles of open fields were measured using EBT3 films and compared with the data obtained using an ionization chamber. Eighteen IMRT plans with different field sizes were measured and verified with EBT3 films, applying our methods, and compared to TPS dose maps to check the correct implementation of the film dosimetry proposed here. Results The uncertainty due to lateral effects can be reduced to ±1 cGy. Compared with the results of Micke A et al., the residual disturbances of the proposed triple-channel method at 48, 176 and 415 cGy are 5.3%, 20.9% and 31.4% smaller, respectively. Compared with the ionization chamber results, the differences in the off-axis ratio and percentage depth dose are within 1% and 2%, respectively. For IMRT verification, there was no difference between the two triple-channel methods. Compared with correction by the triple-channel method alone, the IMRT results of the combined method (including the lateral effect correction and the present triple-channel method) show a 2% improvement for large IMRT fields with the 3%/3 mm criteria. PMID:28750023
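
    A brute-force version of a triple-channel analysis — choosing the dose that minimizes the residual disturbance common to the three color channels — can be sketched as follows. The calibration curves and all coefficients are invented for illustration and are not the paper's; the published method also treats the disturbance model more carefully.

    ```python
    import numpy as np

    # Illustrative per-channel calibration: net optical density as a function
    # of dose (cGy). The functional form and coefficients are made up.
    CAL = {'R': (0.0040, 60.0), 'G': (0.0030, 90.0), 'B': (0.0015, 150.0)}

    def cal_od(channel, dose):
        a, b = CAL[channel]
        return a * dose + 0.1 * dose / (dose + b)  # hypothetical response curve

    def triple_channel_dose(od_meas, doses=np.linspace(0, 500, 5001)):
        """Pick the dose that minimizes the residual across the three channels,
        assuming a common multiplicative disturbance factor per pixel."""
        best = (np.inf, 0.0)
        for d in doses:
            model = np.array([cal_od(c, d) for c in 'RGB'])
            denom = model @ model
            scale = od_meas @ model / denom if denom else 1.0  # best-fit disturbance
            resid = np.sum((od_meas - scale * model) ** 2)
            if resid < best[0]:
                best = (resid, d)
        return best[1]

    # Simulated pixel: 200 cGy with a 2% multiplicative disturbance
    od_meas = 1.02 * np.array([cal_od(c, 200.0) for c in 'RGB'])
    dose = triple_channel_dose(od_meas)
    ```

    Because the disturbance is fitted per dose candidate, the common scaling is absorbed and the recovered dose lands on the true value despite the 2% perturbation.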

  10. Enhanced autocompensating quantum cryptography system.

    PubMed

    Bethune, Donald S; Navarro, Martha; Risk, William P

    2002-03-20

    We have improved the hardware and software of our autocompensating system for quantum key distribution by replacing bulk optical components at the end stations with fiber-optic equivalents and implementing software that synchronizes end-station activities, communicates basis choices, corrects errors, and performs privacy amplification over a local area network. The all-fiber-optic arrangement provides stable, efficient, and high-contrast routing of the photons. The low bit-error rate leads to high error-correction efficiency and minimizes data sacrifice during privacy amplification. Characterization measurements made on a number of commercial avalanche photodiodes are presented that highlight the need for improved devices tailored specifically for quantum information applications. A scheme for frequency-shifting the photons returning from Alice's station to allow them to be distinguished from backscattered noise photons is also described.
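Privacy amplification of the kind mentioned above is commonly realized with two-universal hashing; a random binary Toeplitz matrix is one standard construction. The sketch below shows that generic technique under that assumption; it is not the authors' implementation, and all names are illustrative.

```python
import random

# Generic privacy-amplification sketch: compress a partially secret bit
# string with a random binary Toeplitz matrix (a two-universal hash).
# The matrix is defined by out_len + n - 1 random "diagonal" bits.

def toeplitz_hash(key_bits, out_len, seed=0):
    """Compress key_bits (list of 0/1) to out_len bits over GF(2)."""
    n = len(key_bits)
    rng = random.Random(seed)
    diag = [rng.randrange(2) for _ in range(out_len + n - 1)]
    out = []
    for i in range(out_len):
        acc = 0
        for j in range(n):
            # Toeplitz entry T[i][j] depends only on i - j
            acc ^= diag[i - j + n - 1] & key_bits[j]
        out.append(acc)
    return out
```

Because the map is linear over GF(2), both parties obtain identical shortened keys from identical reconciled keys and a shared seed.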

  11. Increased use of malaria rapid diagnostic tests improves targeting of anti-malarial treatment in rural Tanzania: implications for nationwide rollout of malaria rapid diagnostic tests

    PubMed Central

    2012-01-01

    Background The World Health Organization recommends parasitological confirmation of all malaria cases. Tanzania is implementing a phased rollout of malaria rapid diagnostic tests (RDTs) for routine use at all levels of care as one strategy to increase parasitological confirmation of malaria diagnosis. This study was carried out to evaluate artemisinin combination therapy (ACT) prescribing patterns in febrile patients with and without uncomplicated malaria in one pre-RDT implementation and one post-RDT implementation area. Methods Cross-sectional health facility surveys were conducted during the high and low malaria transmission seasons in 2010 in both areas. Clinical information and a reference blood film were collected for all patients presenting for an initial illness consultation. Malaria was defined as a history of fever in the past 48 h and microscopically confirmed parasitaemia. Routine diagnostic testing was defined as RDT or microscopy ordered by the health worker and performed at the health facility as part of the health worker-patient consultation. Correct diagnostic testing was defined as a febrile patient tested with RDT or microscopy. Over-testing was defined as a non-febrile patient tested with RDT or microscopy. Correct treatment was defined as a patient with malaria prescribed ACT. Over-treatment was defined as a patient without malaria prescribed ACT. Results A total of 1,247 febrile patients (627 from the pre-implementation area and 620 from the post-implementation area) were included in the analysis. In the post-RDT implementation area, 80.9% (95% CI, 68.2-89.3) of patients with malaria received the recommended treatment with ACT, compared to 70.3% (95% CI, 54.7-82.2) of patients in the pre-RDT implementation area. Correct treatment was significantly higher in the post-implementation area during the high transmission season (85.9% (95% CI, 72.0-93.6) compared to 58.3% (95% CI, 39.4-75.1) in the pre-implementation area; p = 0.01).
Over-treatment with ACT of patients without malaria was less common in the post-RDT implementation area (20.9%; 95% CI, 14.7-28.8) than in the pre-RDT implementation area (45.8%; 95% CI, 37.2-54.6) (p < 0.01) during the high transmission season. The odds of over-treatment were significantly lower in the post-RDT area (adjusted OR 0.57; 95% CI, 0.36-0.89) and much higher with clinical diagnosis (adjusted OR 2.24; 95% CI, 1.37-3.67). Conclusion Implementation of RDTs increased the use of RDTs for parasitological confirmation and reduced over-treatment with ACT during the high malaria transmission season in one area of Tanzania. Continued monitoring of the national RDT rollout will be needed to assess whether these changes in case management practices will be replicated in other areas and sustained over time. Additional measures (such as refresher trainings, closer supervision, etc.) may be needed to improve ACT targeting during low transmission seasons. PMID:22747655
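The adjusted odds ratios above come from regression modelling, but the underlying crude calculation from a 2x2 table is straightforward. The sketch below computes an unadjusted odds ratio with a Woolf 95% confidence interval on hypothetical counts; it will not reproduce the adjusted figures quoted in the abstract.

```python
import math

# Crude (unadjusted) odds ratio with a Woolf 95% CI from a 2x2 table.
# Layout (hypothetical): a, b = outcome present / absent in the exposed
# group; c, d = outcome present / absent in the comparison group.

def odds_ratio(a, b, c, d):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```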

  12. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the functions of internal organs. The method allows diseases to be diagnosed at their early stages. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, implementations of random-coincidence and scatter correction algorithms are considered, as well as an algorithm for modeling PET projection data acquisition for verification of the corrections.
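The record does not spell out the correction algorithms, but one standard realization of random-coincidence correction is the delayed-window technique: coincidences counted in a delayed timing window estimate the randoms on each line of response (LOR) and are subtracted from the prompt counts, along with a separately estimated scatter term. A minimal sketch with illustrative names:

```python
# Delayed-window randoms correction sketch (a standard scheme, not
# necessarily the paper's): per line of response, subtract the delayed-
# window count (randoms estimate) and a scatter estimate from the
# prompts, flooring at zero to avoid negative counts.

def correct_projections(prompts, delayed, scatter_estimate):
    """Element-wise corrected counts per LOR, floored at zero."""
    return [max(p - d - s, 0.0)
            for p, d, s in zip(prompts, delayed, scatter_estimate)]
```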

  13. Hierarchical specification of the SIFT fault tolerant flight control system

    NASA Technical Reports Server (NTRS)

    Melliar-Smith, P. M.; Schwartz, R. L.

    1981-01-01

    The specification and mechanical verification of the Software Implemented Fault Tolerance (SIFT) flight control system is described. The methodology employed in the verification effort is discussed, and a description of the hierarchical models of the SIFT system is given. To meet NASA's objective for the reliability of safety-critical flight control systems, the SIFT computer must achieve a reliability well beyond the levels at which reliability can actually be measured. The methodology employed to demonstrate rigorously that the SIFT computer meets its reliability requirements is described. The hierarchy of design specifications, from very abstract descriptions of system function down to the actual implementation, is explained. The most abstract design specifications can be used to verify that the system functions correctly and with the desired reliability, since almost all details of the realization are abstracted out. A succession of lower-level models refines these specifications to the level of the actual implementation and can be used to demonstrate that the implementation has the properties claimed of the abstract design specifications.

  14. Vessel-Mounted ADCP Data Calibration and Correction

    NASA Astrophysics Data System (ADS)

    de Andrade, A. F.; Barreira, L. M.; Violante-Carvalho, N.

    2013-05-01

    A set of scripts for vessel-mounted ADCP (Acoustic Doppler Current Profiler) data processing is presented. The need for corrections to the data measured by a ship-mounted ADCP, and the complexities found during installation, implementation and identification of the tasks performed by currently available processing systems, are the main motivations for developing a system that is more practical to use, open-source, and more manageable for the user. The proposed processing system consists of a set of scripts developed in the Matlab TM programming language. The system is able to read the binary files provided by the data acquisition program VMDAS (Vessel Mounted Data Acquisition System), proprietary to Teledyne RD Instruments, calculate calibration factors, correct the data and visualize them after correction. To use the new system, the ADCP data collected with the VMDAS program need only be placed in a processing directory, with Matlab TM installed on the user's computer. The developed algorithms were extensively tested with ADCP data obtained during the Oceano Sul III (Southern Ocean III - OSIII) cruise, conducted by the Brazilian Navy aboard the R/V "Antares" from March 26th to May 10th 2007, in the oceanic region between the states of São Paulo and Rio Grande do Sul. The data were read with the function rdradcp.m, developed by Rich Pawlowicz and available on his website (http://www.eos.ubc.ca/~rich/#RDADCP). To calculate the calibration factors, the alignment error (α) and sensitivity error (β) in Water Tracking and Bottom Tracking modes, equations deduced by Joyce (1998), Pollard & Read (1989) and Trump & Marmorino (1996) were implemented in Matlab. To validate the calibration factors obtained with the developed processing system, the parameters were compared with the factors provided by the CODAS (Common Ocean Data Access System, available at http://currents.soest.hawaii.edu/docs/doc/index.html) post-processing program.
For the same data, the factors provided by both systems were similar. The values obtained were then used to correct the data, and the corrected matrices were saved and can be plotted. The volume transport of the Brazil Current (BC) calculated from the data corrected by the two systems proved quite close, confirming the quality of the system's correction.
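The alignment error (α) and sensitivity error (β) are estimated by comparing the ship's velocity over ground from navigation data with the bottom-track estimate. A compact way to sketch that comparison (simplified relative to the cited formulations, and not the Matlab scripts themselves) treats each horizontal velocity as a complex number u + iv:

```python
import cmath
import math

# Hedged sketch of ADCP alignment/sensitivity calibration: the complex
# ratio of navigation-derived to bottom-track ship velocity has phase
# alpha (misalignment angle) and magnitude 1 + beta (scale error).
# Variable names are illustrative.

def adcp_calibration(v_nav, v_bt):
    """v_nav, v_bt: lists of (u, v) ship velocities from navigation and
    from bottom tracking. Returns (alpha_deg, beta)."""
    ratios = [complex(*n) / complex(*b) for n, b in zip(v_nav, v_bt)]
    mean_ratio = sum(ratios) / len(ratios)
    alpha_deg = math.degrees(cmath.phase(mean_ratio))  # rotation to apply
    beta = abs(mean_ratio) - 1.0                       # scale error
    return alpha_deg, beta
```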

  15. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface; the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed using error frequency and the analysis-of-variance method of mathematical statistics. The accuracy of the measured data, the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors are also covered. By analyzing the measured data on the basis of error frequency, the paper provides reference material to promote the development of the garment industry.
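The analysis-of-variance step mentioned above can be sketched generically: for several groups of repeated measurements, the one-way ANOVA F statistic compares between-group to within-group variance. A pure-Python illustration on invented data, not the paper's measurements:

```python
# One-way ANOVA F statistic: F = (SS_between / (k-1)) / (SS_within / (N-k)).
# Large F indicates group means differ more than within-group scatter
# would explain.

def one_way_anova_f(groups):
    """groups: list of lists of measurements. Returns the F statistic."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))
```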

  16. Correlating behavioral responses to FMRI signals from human prefrontal cortex: examining cognitive processes using task analysis.

    PubMed

    DeSouza, Joseph F X; Ovaysikia, Shima; Pynn, Laura

    2012-06-20

    The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity using the blood-oxygen-level-dependent (BOLD) signal. We measure behavior so that correct trials, in which the subject performed the task correctly, can be sorted and the brain signals related to correct performance examined. Conversely, if error trials were included in the same analysis as correct trials, the analysis would mix in trials that do not reflect correct performance; in many cases, these errors can themselves be correlated with brain activity. We describe two complementary tasks used in our lab to examine the brain during suppression of an automatic response: the Stroop(1) and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the emotional 'word' superimposed on the affective faces or the facial 'expression' of the face stimuli(1,2). When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between the two simultaneously competing processes of word reading and facial-expression recognition. Our urge to read a word leads to strong stimulus-response (SR) associations; inhibiting these strong SRs is therefore difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus.
Similarly, in the anti-saccade task(3,4,5,6), an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position. Again we measure behavior by recording the eye movements of participants, which allows the behavioral responses to be sorted into correct and error trials(7) that can then be correlated with brain activity. Neuroimaging thus allows researchers to measure the different behaviors of correct and error trials, which are indicative of different cognitive processes, and to pinpoint the different neural networks involved.

  17. Calibration procedures for imaging spectrometers: improving data quality from satellite missions to UAV campaigns

    NASA Astrophysics Data System (ADS)

    Brachmann, Johannes F. S.; Baumgartner, Andreas; Lenhard, Karim

    2016-10-01

    The Calibration Home Base (CHB) at the Remote Sensing Technology Institute of the German Aerospace Center (DLR-IMF) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral and geometric characterization is realized in the CHB in a precise and highly automated fashion. This allows a wide range of time-consuming measurements to be performed efficiently. The implementation of ISO 9001 standards ensures traceable quality of results. DLR-IMF will support the calibration and characterization campaign of the future German spaceborne hyperspectral imager EnMAP. In the context of this activity, a procedure for the correction of imaging artifacts, such as those due to stray light, is currently being developed by DLR-IMF. The goal is the correction of in-band stray light as well as ghost images down to a level of a few digital numbers over the whole wavelength range of 420-2450 nm. DLR-IMF owns a Norsk Elektro Optikk HySpex airborne imaging spectrometer system that has been thoroughly characterized. This system will be used to test stray-light calibration procedures for EnMAP. Hyperspectral snapshot sensors offer the possibility of simultaneously acquiring hyperspectral data in two dimensions. Recently, these rather new spectrometers have attracted much interest in the remote sensing community. Different designs are currently used for local-area observation, for example from small unmanned aerial vehicles (sUAV). In this context, the CHB's measurement capabilities are currently being extended so that a standard measurement procedure for these new sensors can be implemented.
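The record does not detail the stray-light correction algorithm. One published approach characterizes the instrument with a spectral stray-light distribution matrix D measured from monochromatic (e.g. laser) stimuli, so that the measured spectrum is approximately (I + D) times the true in-band spectrum, and correction reduces to solving a linear system. A generic numpy sketch under that assumption, not DLR's actual EnMAP procedure:

```python
import numpy as np

# Generic matrix-based stray-light correction: if y_meas = (I + D) @ y_true,
# with D the (zero-diagonal) stray-light distribution matrix measured from
# monochromatic stimuli, the in-band signal is recovered by a linear solve.

def correct_stray_light(y_meas, D):
    """Recover the in-band spectrum from y_meas given matrix D."""
    n = len(y_meas)
    return np.linalg.solve(np.eye(n) + D, y_meas)
```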

  18. Correction of Pelvic Tilt and Pelvic Rotation in Cup Measurement after THA - An Experimental Study.

    PubMed

    Schwarz, Timo Julian; Weber, Markus; Dornia, Christian; Worlicek, Michael; Renkawitz, Tobias; Grifka, Joachim; Craiovan, Benjamin

    2017-09-01

    Purpose  Accurate assessment of cup orientation on postoperative pelvic radiographs is essential for evaluating outcome after THA. Here, we present a novel method for correcting measurement inaccuracies due to pelvic tilt and rotation. Method  In an experimental setting, a cup was implanted into a dummy pelvis, and its final position was verified via CT. To show the effect of pelvic tilt and rotation on cup position, the dummy was fixed to a rack to achieve a tilt between +15° anterior and -15° posterior and 0° to 20° rotation to the contralateral side. According to Murray's definitions of anteversion and inclination, we created a novel corrective procedure to measure cup position in the pelvic reference frame (anterior pelvic plane) to compensate for measurement errors due to pelvic tilt and rotation. Results  The cup anteversion measured on CT was 23.3°; on AP pelvic radiographs, however, variations in pelvic tilt (±15°) resulted in anteversion angles between 11.0° and 36.2° (mean error 8.3°± 3.9°). The cup inclination was 34.1° on CT and ranged between 31.0° and 38.7° (m. e. 2.3°± 1.5°) on radiographs. Pelvic rotation between 0° and 20° showed high variation in radiographic anteversion (21.2°-31.2°, m. e. 6.0°± 3.1°) and inclination (34.1°-27.2°, m. e. 3.4°± 2.5°). Our novel correction algorithm for pelvic tilt reduced the mean error in anteversion measurements to 0.6°± 0.2° and in inclination measurements to 0.7°± 0.2°. Similarly, the mean error due to pelvic rotation was reduced to 0.4°± 0.4° for anteversion and 1.3°± 0.8° for inclination. Conclusion  Pelvic tilt and pelvic rotation may lead to misinterpretation of cup position on anteroposterior pelvic radiographs. Mathematical correction concepts have the potential to significantly reduce these errors, and could be implemented in future radiological software tools. Key Points   · Pelvic tilt and rotation influence cup orientation after THA.
· Cup anteversion and inclination should be referenced to the pelvis. · Radiological measurement errors of cup position may be reduced by mathematical concepts. Citation Format · Schwarz TJ, Weber M, Dornia C et al. Correction of Pelvic Tilt and Pelvic Rotation in Cup Measurement after THA - An Experimental Study. Fortschr Röntgenstr 2017; 189: 864 - 873. © Georg Thieme Verlag KG Stuttgart · New York.
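The tilt-correction concept can be sketched as follows: express the cup axis as a unit vector from the radiographic inclination and anteversion angles (after Murray), undo the pelvic tilt as a rotation about the lateral axis, and read the corrected angles back off the rotated vector. The frame convention, sign of the tilt rotation and function names below are assumptions; the study's actual algorithm may differ.

```python
import numpy as np

# Sketch of a pelvic-tilt correction concept. Assumed frame: x lateral,
# y anterior, z superior; radiographic inclination (RI) is measured in the
# coronal (x-z) plane, radiographic anteversion (RA) out of that plane.

def axis_from_angles(ri_deg, ra_deg):
    """Unit cup axis from radiographic inclination/anteversion."""
    ri, ra = np.radians([ri_deg, ra_deg])
    return np.array([np.sin(ri) * np.cos(ra),
                     np.sin(ra),
                     np.cos(ri) * np.cos(ra)])

def angles_from_axis(a):
    """Invert axis_from_angles: returns (ri_deg, ra_deg)."""
    ra = np.degrees(np.arcsin(a[1]))
    ri = np.degrees(np.arctan2(a[0], a[2]))
    return ri, ra

def correct_for_tilt(ri_meas, ra_meas, tilt_deg):
    """Remove a pelvic tilt of tilt_deg, modeled as a rotation about x."""
    t = np.radians(-tilt_deg)
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(t), -np.sin(t)],
                   [0.0, np.sin(t), np.cos(t)]])
    return angles_from_axis(rx @ axis_from_angles(ri_meas, ra_meas))
```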

  19. High speed fault tolerant secure communication for muon chamber using FPGA based GBTx emulator

    NASA Astrophysics Data System (ADS)

    Sau, Suman; Mandal, Swagata; Saini, Jogender; Chakrabarti, Amlan; Chattopadhyay, Subhasis

    2015-12-01

    The Compressed Baryonic Matter (CBM) experiment is a part of the Facility for Antiproton and Ion Research (FAIR) at GSI in Darmstadt. The CBM experiment will investigate highly compressed nuclear matter using nucleus-nucleus collisions. It will examine heavy-ion collisions in fixed-target geometry and will be able to measure hadrons, electrons and muons. CBM requires precise time synchronization, compact hardware, radiation tolerance, self-triggered front-end electronics, efficient data aggregation schemes and the capability to handle high data rates (up to several TB/s). As a part of the implementation of the readout chain of the Muon Chamber (MUCH) [1] in India, we have implemented an FPGA-based emulator of the GBTx. The GBTx is a radiation-tolerant ASIC, developed by CERN, that can be used to implement multipurpose high-speed bidirectional optical links for high-energy physics (HEP) experiments. The GBTx will be used in highly irradiated areas and is therefore prone to multi-bit errors. To mitigate this effect, instead of a single-bit-error-correcting RS code we have used a two-bit-error-correcting (15, 7) BCH code. This increases the redundancy, which in turn increases the reliability of the coded data, making it less susceptible to radiation-induced noise. The data travel from the detector to the PC through multiple nodes of the communication channel. The computing resources are connected to a network that only authorized persons may access, but unauthorized data access could still occur if network security is compromised; data encryption is therefore essential. To make the data communication secure, the advanced encryption standard [2] (AES, a symmetric-key cryptosystem) and RSA [3], [4] (an asymmetric-key cryptosystem) are applied after the channel coding. We have implemented the GBTx emulator on two Xilinx Kintex-7 boards (KC705).
One board acts as the transmitter and the other as the receiver, connected by optical fiber through small form-factor pluggable (SFP) ports. We have tested the setup in the runtime environment using the Xilinx ChipScope Pro Analyzer. We also measure the resource utilization, throughput and power consumption of the implemented design.
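The (15, 7) BCH code mentioned above has minimum distance 5, so every error pattern of weight ≤ 2 has a unique syndrome and can be corrected by table lookup. The sketch below illustrates the coding scheme in software (the emulator implements it in FPGA logic); the generator polynomial x^8 + x^7 + x^6 + x^4 + 1 is the standard one for this code.

```python
# Two-bit-error-correcting BCH(15, 7): systematic encoding by polynomial
# division over GF(2), decoding by a precomputed syndrome table covering
# all error patterns of weight <= 2.

G = 0b111010001  # generator polynomial x^8 + x^7 + x^6 + x^4 + 1

def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (bits as integers)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode(msg7):
    """Systematic encode: 7 message bits -> 15-bit codeword."""
    shifted = msg7 << 8                    # make room for 8 parity bits
    return shifted | poly_mod(shifted, G)

# Since d_min = 5, weight-<=2 error patterns have distinct syndromes.
SYNDROMES = {}
for i in range(15):
    for j in range(i, 15):
        e = (1 << i) | (1 << j)            # i == j gives weight-1 patterns
        SYNDROMES[poly_mod(e, G)] = e

def decode(word15):
    """Correct up to two bit errors; return (msg7, corrected_word)."""
    s = poly_mod(word15, G)
    if s == 0:
        return word15 >> 8, word15
    e = SYNDROMES.get(s)
    if e is None:
        raise ValueError("more than two bit errors detected")
    fixed = word15 ^ e
    return fixed >> 8, fixed
```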

  20. Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs

    DOE PAGES

    Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo; ...

    2015-12-17

    Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.
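Conceptually, frame-level redundancy lets the scrubber vote three copies of each configuration frame bit by bit and rewrite any copy that disagrees, with no golden copy stored off-chip. A simplified software sketch of the voting step (real hardware works on frame words through the ICAP; the names here are illustrative):

```python
# Majority voting across three redundant configuration-frame copies
# (coarse-grain TMR): the bitwise majority both masks and locates upsets.

def majority_frame(f_a, f_b, f_c):
    """Bitwise majority of three equal-length frames (ints)."""
    return (f_a & f_b) | (f_a & f_c) | (f_b & f_c)

def scrub(frames):
    """frames: three redundant frame copies. Returns the voted frame
    and the indices of the copies that must be rewritten."""
    voted = majority_frame(*frames)
    return voted, [i for i, f in enumerate(frames) if f != voted]
```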

  1. Peer education programs in corrections: curriculum, implementation, and nursing interventions.

    PubMed

    Dubik-Unruh, S

    1999-01-01

    Despite the prevalence of HIV and other infectious diseases in U.S. prisons, and the mix of infected and high-risk prisoners in crowded and volatile living conditions, federal and state prisons have reduced or eliminated prevention education programs addressing HIV and other infectious diseases for incarcerated populations. Nurses' knowledge, education, and licensure place them in a position to influence prison policy in developing and implementing educational programs for inmates and staff. Their role as advocates for patients in prison and their separation from the more punitive aspects of corrections also enable nurses to earn the trust of inmate populations. These factors identify nurses as the staff best suited within corrections to implement inmate prevention education. Training inmate educators to provide peer prevention and risk-reduction strategies has the potential to modify inmate behaviors both within the facility and following release. Selection criteria for peer educator recruitment, prison-sensitive issues, and suggested training activities are discussed.

  2. Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo

    Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.

  3. Selective suppression of the incorrect response implementation in choice behavior assessed by transcranial magnetic stimulation.

    PubMed

    Tandonnet, Christophe; Garry, Michael I; Summers, Jeffery J

    2011-04-01

    Selecting the appropriate alternative in choice situations may involve an inhibition process. Here we assessed response implementation during the reaction time of a between-hand choice task with single- or paired-pulse (3 or 15 ms interstimulus intervals [ISIs]) transcranial magnetic stimulation of the motor cortex. The amplitude of the single-pulse motor evoked potential (MEP) initially increased for both hands. At around 130 ms, the single-pulse MEP kept increasing for the responding hand and decreased for the nonresponding hand. The paired-pulse MEP revealed a similar pattern for both ISIs, with no effect on short intracortical inhibition and intracortical facilitation measures. The results suggest that the incorrect response implementation was selectively suppressed before execution of the correct response, preventing errors in a choice context. The results favor models assuming that decision making involves an inhibition process. Copyright © 2010 Society for Psychophysiological Research.

  4. The numerical simulation tool for the MAORY multiconjugate adaptive optics system

    NASA Astrophysics Data System (ADS)

    Arcidiacono, C.; Schreiber, L.; Bregoli, G.; Diolaiti, E.; Foppiani, I.; Agapito, G.; Puglisi, A.; Xompero, M.; Oberti, S.; Cosentino, G.; Lombini, M.; Butler, R. C.; Ciliegi, P.; Cortecchia, F.; Patti, M.; Esposito, S.; Feautrier, P.

    2016-07-01

    The Multiconjugate Adaptive Optics RelaY (MAORY) is an Adaptive Optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid Natural and Laser Guide Star system that will perform the correction of the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations (MICADO) near-infrared spectro-imager. We developed an end-to-end Monte Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition and operation strategies. MAORY will implement Multiconjugate Adaptive Optics combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool implements the various aspects of MAORY in an end-to-end fashion. The code has been developed in IDL and uses libraries in C++ and CUDA for efficiency improvements. Here we recall the code architecture, describe the modeled instrument components and present the control strategies implemented in the code.

  5. Adaptive DFT-Based Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding-window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on a 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis of an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
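A sliding-window DFT of the kind described updates each tracked frequency bin in O(1) per sample instead of recomputing a full transform: S ← (S − x_oldest + x_new) · e^{j2πk/N}. The sketch below is a generic single-bin version in Python, not the ANSI C implementation used at IOTA.

```python
import cmath

# Sliding-window DFT of one bin k over the most recent N samples,
# updated recursively in O(1) per incoming sample. The window is
# zero-padded until it fills.

def sliding_dft(samples, N, k):
    """Yield the k-th DFT bin of the last N samples, per sample."""
    twiddle = cmath.exp(2j * cmath.pi * k / N)
    window = [0.0] * N          # circular buffer of the last N samples
    S = 0j
    for n, x in enumerate(samples):
        x_old = window[n % N]   # sample leaving the window
        window[n % N] = x
        S = (S - x_old + x) * twiddle
        yield S
```

In a fringe tracker, the magnitude and phase of the tracked bin give the fringe amplitude and the OPD error fed back to the piezo scanners.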

  6. An efficient implementation of semiempirical quantum-chemical orthogonalization-corrected methods for excited-state dynamics

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Thiel, Walter

    2018-04-01

    We present an efficient implementation of configuration interaction with single excitations (CIS) for semiempirical orthogonalization-corrected OMx methods and standard modified neglect of diatomic overlap (MNDO)-type methods for the computation of vertical excitation energies as well as analytical gradients and nonadiabatic couplings. This CIS implementation is combined with Tully's fewest switches algorithm to enable surface hopping simulations of excited-state nonadiabatic dynamics. We introduce an accurate and efficient expression for the semiempirical evaluation of nonadiabatic couplings, which offers a significant speedup for medium-size molecules and is suitable for use in long nonadiabatic dynamics runs. As a pilot application, the semiempirical CIS implementation is employed to investigate ultrafast energy transfer processes in a phenylene ethynylene dendrimer model.

  7. An efficient implementation of semiempirical quantum-chemical orthogonalization-corrected methods for excited-state dynamics.

    PubMed

    Liu, Jie; Thiel, Walter

    2018-04-21

    We present an efficient implementation of configuration interaction with single excitations (CIS) for semiempirical orthogonalization-corrected OMx methods and standard modified neglect of diatomic overlap (MNDO)-type methods for the computation of vertical excitation energies as well as analytical gradients and nonadiabatic couplings. This CIS implementation is combined with Tully's fewest switches algorithm to enable surface hopping simulations of excited-state nonadiabatic dynamics. We introduce an accurate and efficient expression for the semiempirical evaluation of nonadiabatic couplings, which offers a significant speedup for medium-size molecules and is suitable for use in long nonadiabatic dynamics runs. As a pilot application, the semiempirical CIS implementation is employed to investigate ultrafast energy transfer processes in a phenylene ethynylene dendrimer model.
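The hopping step of Tully's fewest-switches algorithm mentioned above can be written compactly: the probability of hopping from the current state k to state j over a time step dt is g_kj = dt · b_jk / a_kk with b_jk = −2 Re(c_j* c_k (v · d_jk)), clipped at zero. The sketch below is the generic FSSH expression, not code from the OMx/CIS program, and the sign convention for the coupling vector d_jk is an assumption.

```python
import numpy as np

# Tully fewest-switches hop probabilities g_kj = max(0, dt * b_jk / a_kk),
# where a_kk = |c_k|^2 and b_jk = -2 Re(c_j* c_k (v . d_jk)).

def hop_probabilities(c, k, v_dot_d, dt):
    """c: complex electronic amplitudes; k: current state index;
    v_dot_d[j]: nuclear velocity dotted with coupling d_{jk}."""
    a_kk = abs(c[k]) ** 2
    g = np.zeros(len(c))
    for j in range(len(c)):
        if j == k:
            continue
        b_jk = -2.0 * np.real(np.conj(c[j]) * c[k] * v_dot_d[j])
        g[j] = max(0.0, dt * b_jk / a_kk)
    return g
```

A uniform random number is then compared against the cumulative g to decide whether a hop occurs in this step.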

  8. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generations of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them.
We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.

  9. Implementing a Loosely Coupled Fluid Structure Interaction Finite Element Model in PHASTA

    NASA Astrophysics Data System (ADS)

    Pope, David

    Fluid Structure Interaction problems are an important multi-physics phenomenon in the design of aerospace vehicles and other engineering applications. A variety of computational fluid dynamics solvers capable of resolving the fluid dynamics exist; PHASTA is one such solver. Enhancing PHASTA to resolve Fluid-Structure Interaction first requires implementing a structural dynamics solver. It also requires correcting the mesh used to solve the fluid equations to account for the deformation of the structure. The resulting mesh motion necessitates an Arbitrary Lagrangian-Eulerian (ALE) modification of the fluid dynamics equations currently implemented in PHASTA. With the implementation of the structural dynamics solver, the mesh correction, and the ALE modification of the fluid dynamics equations, PHASTA is made capable of solving Fluid-Structure Interaction problems.

  10. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  11. Advanced Techniques of Artificial Networks Design for Radio Signal Detection

    NASA Astrophysics Data System (ADS)

    Danilin, S. N.; Shchanikov, S. A.; Iventev, A. A.; Zuev, A. D.

    2018-05-01

    This paper is concerned with the issue of secure radio communication of data between manned aircraft, unmanned drones, and control services. It is shown that the use of artificial neural networks (ANN) enables correct identification of messages transmitted through radio channels and enhances identification quality by every measure. The authors designed and implemented a simulation modeling technology for ANN development, which enables signal detection with the required accuracy in the presence of noise jamming, natural noise, and other types of noise.

  12. Corrective Action Decision Document/Closure Report for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada, Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Patrick

    2014-01-01

    The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.

  13. 77 FR 19095 - Privacy Act; Implementation; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ... DEPARTMENT OF DEFENSE Office of the Secretary [Docket ID: DOD-2012-OS-0031] 32 CFR Part 322... direct final rule, Department of Defense discovered that paragraphs (l)(2) through (l)(5) in Sec. 322.7...) published on March 16, 2012 (77 FR 15595-15596), make the following corrections: Sec. 322.7 [Corrected] On...

  14. A Three-Stage Model for Implementing Focused Written Corrective Feedback

    ERIC Educational Resources Information Center

    Chong, Sin Wang

    2017-01-01

    This article aims to show how the findings from written corrective feedback (WCF) research can be applied in practice. One particular kind of WCF--focused WCF--is brought into the spotlight. The article first summarizes major findings from focused WCF research to reveal the potential advantages of correcting a few preselected language items…

  15. Quantum error correction of continuous-variable states against Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ralph, T. C.

    2011-08-15

    We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

  16. Corrective Action Framework for the Office of Student Financial Assistance.

    ERIC Educational Resources Information Center

    Advanced Technology, Inc., Reston, VA.

    An ongoing corrective action framework for the Office of Student Financial Assistance (OSFA) is presented. Attention is directed to the formal management structure in OSFA and current initiatives to improve management, and the placement of the corrective action process in the organizational hierarchy. Four formal mechanisms needed to implement the…

  17. Atmospheric extinction in solar tower plants: absorption and broadband correction for MOR measurements

    NASA Astrophysics Data System (ADS)

    Hanrieder, N.; Wilbert, S.; Pitz-Paal, R.; Emde, C.; Gasteiger, J.; Mayer, B.; Polo, J.

    2015-08-01

    Losses of reflected Direct Normal Irradiance due to atmospheric extinction in concentrated solar tower plants can vary significantly with site and time. The losses of the direct normal irradiance between the heliostat field and receiver in a solar tower plant are mainly caused by atmospheric scattering and absorption by aerosol and water vapor concentration in the atmospheric boundary layer. Due to a high aerosol particle number, radiation losses can be significantly larger in desert environments compared to the standard atmospheric conditions which are usually considered in ray-tracing or plant optimization tools. Information about on-site atmospheric extinction is only rarely available. To measure these radiation losses, two different commercially available instruments were tested, and more than 19 months of measurements were collected and compared at the Plataforma Solar de Almería. Both instruments are primarily used to determine the meteorological optical range (MOR). The Vaisala FS11 scatterometer is based on a monochromatic near-infrared light source emission and measures the strength of scattering processes in a small air volume mainly caused by aerosol particles. The Optec LPV4 long-path visibility transmissometer determines the monochromatic attenuation between a light-emitting diode (LED) light source at 532 nm and a receiver and therefore also accounts for absorption processes. As the broadband solar attenuation is of interest for solar resource assessment for concentrated solar power (CSP), a correction procedure for these two instruments is developed and tested. This procedure includes a spectral correction of both instruments from monochromatic to broadband attenuation. That means the attenuation is corrected for the time-dependent solar spectrum which is reflected by the collector. Further, an absorption correction for the Vaisala FS11 scatterometer is implemented. 
To optimize the absorption and broadband correction (ABC) procedure, additional measurement input from a nearby sun photometer is used to refine the on-site atmospheric assumptions in the algorithm. Comparing uncorrected with spectral- and absorption-corrected extinction data from 1 year of measurements at the Plataforma Solar de Almería, the mean difference between the scatterometer and the transmissometer is reduced from 4.4 to 0.57 %. Applying the ABC procedure without the additional sun photometer input still reduces the difference between both sensors to about 0.8 %. Applying an expert guess that assumes a standard aerosol profile for continental regions instead of sun photometer input likewise results in a mean difference of 0.8 %. Additionally, a simulation approach that uses only sun photometer and common meteorological data to determine the on-site atmospheric extinction at the surface is presented, and corrected FS11 and LPV4 measurements are validated against the simulation results. For a 1 km broadband transmittance T_1km of 0.9 and a 10 min time resolution, an uncertainty analysis showed that an absolute uncertainty of about 0.038 is expected for the FS11 and about 0.057 for the LPV4. Combining both uncertainties results in an overall absolute uncertainty of 0.068, which is consistent with the mean RMSE between the two corrected data sets. For yearly averages, several error influences average out and absolute uncertainties of 0.020 and 0.054 can be expected for the FS11 and the LPV4, respectively. Therefore, applying this new correction method, both instruments can now be used to determine the solar broadband extinction in tower plants with sufficient accuracy.
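
    Both instruments report MOR, which is linked to the monochromatic extinction coefficient at the instrument wavelength by Koschmieder's relation (5 % contrast threshold), β = 3.912/MOR; broadband slant-path transmittance then follows from Beer-Lambert. A minimal sketch of this conversion, where the multiplicative spectral correction factor stands in for the ABC-procedure output and is an illustrative assumption, not a value from the paper:

    ```python
    import math

    def extinction_from_mor(mor_m):
        """Monochromatic extinction coefficient [1/m] from MOR [m],
        via Koschmieder's relation with a 5% contrast threshold."""
        return 3.912 / mor_m

    def broadband_transmittance(mor_m, path_m, spectral_corr=1.0):
        """Beer-Lambert transmittance over a slant path [m]; spectral_corr
        is a placeholder converting monochromatic to solar broadband
        extinction (hypothetical default)."""
        beta = extinction_from_mor(mor_m) * spectral_corr
        return math.exp(-beta * path_m)
    ```

    For example, an MOR of 39 120 m corresponds to β = 1e-4 m⁻¹, giving a transmittance of exp(-0.1) over a 1 km heliostat-to-receiver path.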

  18. Data consistency checks for Jefferson Lab Experiment E00-002

    NASA Astrophysics Data System (ADS)

    Telfeyan, John; Niculescu, Gabriel; Niculescu, Ioana

    2006-10-01

    Jefferson Lab experiment E00-002 aims to measure inclusive electron-proton and electron-deuteron scattering cross section at low Q squared and moderately low Bjorken x. Data in this kinematic region will further our understanding of the transition between the perturbative and non-perturbative regimes of Quantum Chromodynamics (QCD). As part of the data analysis effort underway at James Madison University (JMU) a comprehensive set of checks and tests was implemented. These tests ensure the quality and consistency of the experimental data, as well as providing, where appropriate, correction factors between the experimental apparatus as used and its idealized computer-simulated representation. This contribution will outline this testing procedure as implemented in the JMU analysis, highlighting the most important features/results.

  19. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. 
Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.

  20. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  1. Predicting Recovery Potential for Individual Stroke Patients Increases Rehabilitation Efficiency.

    PubMed

    Stinear, Cathy M; Byblow, Winston D; Ackerley, Suzanne J; Barber, P Alan; Smith, Marie-Claire

    2017-04-01

    Several clinical measures and biomarkers are associated with motor recovery after stroke, but none are used to guide rehabilitation for individual patients. The objective of this study was to evaluate the implementation of upper limb predictions in stroke rehabilitation, by combining clinical measures and biomarkers using the Predict Recovery Potential (PREP) algorithm. Predictions were provided for patients in the implementation group (n=110) and withheld from the comparison group (n=82). Predictions guided rehabilitation therapy focus for patients in the implementation group. The effects of predictive information on clinical practice (length of stay, therapist confidence, therapy content, and dose) were evaluated. Clinical outcomes (upper limb function, impairment and use, independence, and quality of life) were measured 3 and 6 months poststroke. The primary clinical practice outcome was inpatient length of stay. The primary clinical outcome was Action Research Arm Test score 3 months poststroke. Length of stay was 1 week shorter for the implementation group (11 days; 95% confidence interval, 9-13 days) than the comparison group (17 days; 95% confidence interval, 14-21 days; P=0.001), controlling for upper limb impairment, age, sex, and comorbidities. Therapists were more confident (P=0.004) and modified therapy content according to predictions for the implementation group (P<0.05). The algorithm correctly predicted the primary clinical outcome for 80% of patients in both groups. There were no adverse effects of algorithm implementation on patient outcomes at 3 or 6 months poststroke. PREP algorithm predictions modify therapy content and increase rehabilitation efficiency after stroke without compromising clinical outcome. URL: http://anzctr.org.au. Unique identifier: ACTRN12611000755932. © 2017 American Heart Association, Inc.

  2. Fluence correction factor for graphite calorimetry in a clinical high-energy carbon-ion beam.

    PubMed

    Lourenço, A; Thomas, R; Homer, M; Bouchard, H; Rossomme, S; Renaud, J; Kanai, T; Royle, G; Palmans, H

    2017-04-07

    The aim of this work is to develop and adapt a formalism to determine absorbed dose to water from graphite calorimetry measurements in carbon-ion beams. Fluence correction factors, k_fl, needed when using a graphite calorimeter to derive dose to water, were determined in a clinical high-energy carbon-ion beam. Measurements were performed in a 290 MeV/n carbon-ion beam with a field size of 11 × 11 cm², without modulation. In order to sample the beam, a plane-parallel Roos ionization chamber was chosen for its small collecting volume in comparison with the field size. Experimental information on fluence corrections was obtained from depth-dose measurements in water. This procedure was repeated with graphite plates in front of the water phantom. Fluence corrections were also obtained with Monte Carlo simulations through the implementation of three methods based on (i) the fluence distributions differential in energy, (ii) a ratio of calculated doses in water and graphite at equivalent depths, and (iii) simulations of the experimental setup. The k_fl term increased with depth from 1.00 at the entrance toward 1.02 at a depth near the Bragg peak, and the average difference between experimental and numerical simulations was about 0.13%. Compared to proton beams, there was no reduction of k_fl due to alpha particles because the secondary particle spectrum is dominated by projectile fragmentation. By developing a practical dose conversion technique, this work contributes to improving the determination of absolute dose to water from graphite calorimetry in carbon-ion beams.
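
    Method (ii) above amounts to dividing the dose scored in water by the dose scored in graphite at the water-equivalent depth. A minimal sketch under stated assumptions (the depth-scaling ratio `wer` and the dose arrays are illustrative placeholders, not the paper's Monte Carlo data):

    ```python
    import numpy as np

    def fluence_correction(dose_water, depths_water, dose_graphite,
                           depths_graphite, wer=1.0):
        """k_fl(z) = D_water(z) / D_graphite(z_g-equiv), where graphite
        depths are matched to water depths via the water-equivalence
        ratio 'wer' (hypothetical scaling)."""
        d_g_at_equiv = np.interp(depths_water / wer,
                                 depths_graphite, dose_graphite)
        return np.asarray(dose_water) / d_g_at_equiv
    ```

    With identical depth-dose curves in both media and `wer = 1`, the correction is unity at every depth, which is a useful sanity check before applying it to real simulation output.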

  3. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and matching colors between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminations for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
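
    The gamma-plus-matrix pipeline described above can be sketched as follows. The gamma values and the 3-by-3 matrix here are hypothetical stand-ins for the regression results the abstract describes, not the authors' calibration:

    ```python
    import numpy as np

    CAMERA_GAMMA = 2.2    # assumed camera transfer exponent (illustrative)
    MONITOR_GAMMA = 2.4   # assumed monitor transfer exponent (illustrative)
    M_CAMERA = np.array([[0.41, 0.36, 0.18],   # hypothetical camera RGB -> XYZ
                         [0.21, 0.72, 0.07],
                         [0.02, 0.12, 0.95]])
    M_MONITOR = M_CAMERA                        # monitor matrix assumed equal here

    def camera_to_xyz(rgb):
        """Linearize camera RGB in [0,1] with the camera gamma, then apply
        the color transformation matrix to estimate tristimulus XYZ."""
        linear = np.power(np.asarray(rgb, dtype=float), CAMERA_GAMMA)
        return M_CAMERA @ linear

    def xyz_to_monitor(xyz):
        """Invert the monitor matrix to get linear monitor RGB, then encode
        with the inverse monitor gamma for display."""
        linear = np.linalg.solve(M_MONITOR, xyz)
        return np.power(np.clip(linear, 0.0, 1.0), 1.0 / MONITOR_GAMMA)
    ```

    Chaining the two functions round-trips a camera pixel to monitor drive values; with equal matrices a neutral gray input simply picks up the ratio of the two gammas.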

  4. Absolute cerebral blood flow quantification with pulsed arterial spin labeling during hyperoxia corrected with the simultaneous measurement of the longitudinal relaxation time of arterial blood.

    PubMed

    Pilkinton, David T; Hiraki, Teruyuki; Detre, John A; Greenberg, Joel H; Reddy, Ravinder

    2012-06-01

    Quantitative arterial spin labeling (ASL) estimates of cerebral blood flow (CBF) during oxygen inhalation are important in several contexts, including functional experiments calibrated with hyperoxia and studies investigating the effect of hyperoxia on regional CBF. However, ASL measurements of CBF during hyperoxia are confounded by the reduction in the longitudinal relaxation time of arterial blood (T1a) from paramagnetic molecular oxygen dissolved in blood plasma. The aim of this study is to accurately quantify the effect of arbitrary levels of hyperoxia on T1a and correct ASL measurements of CBF during hyperoxia on a per-subject basis. To mitigate artifacts, including the inflow of fresh spins, partial voluming, pulsatility, and motion, a pulsed ASL approach was implemented for in vivo measurements of T1a in the rat brain at 3 Tesla. After accounting for the effect of deoxyhemoglobin dilution, the relaxivity of oxygen on blood was found to closely match phantom measurements. The results of this study suggest that the measured ASL signal changes are dominated by reductions in T1a for brief hyperoxic inhalation epochs, while the physiologic effects of oxygen on the vasculature account for most of the measured reduction in CBF for longer hyperoxic exposures. Copyright © 2011 Wiley-Liss, Inc.
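
    The correction rests on a linear relaxivity model: dissolved O2 adds a term to the longitudinal relaxation rate, 1/T1a(hyperoxia) = 1/T1a(normoxia) + r1·ΔpO2, and the shortened T1a is then used in the ASL quantification. A sketch under stated assumptions (the relaxivity `R1_O2`, the baseline T1, and the simplified single-compartment PASL formula are illustrative, not the paper's calibrated values or model):

    ```python
    import math

    R1_O2 = 2.0e-4        # s^-1 per mmHg dissolved O2 (hypothetical relaxivity)
    T1A_NORMOXIA = 1.65   # s, typical arterial blood T1 at 3 T

    def t1a_hyperoxia(delta_po2_mmhg):
        """Arterial T1 shortened by paramagnetic dissolved O2
        (linear relaxivity model)."""
        return 1.0 / (1.0 / T1A_NORMOXIA + R1_O2 * delta_po2_mmhg)

    def pasl_cbf(delta_m, m0, ti, t1a, alpha=0.98, lam=0.9):
        """Simplified single-compartment PASL quantification; passing the
        corrected t1a during hyperoxia avoids mistaking T1 shortening for
        a CBF decrease."""
        return (lam * delta_m) / (2.0 * alpha * m0 * ti * math.exp(-ti / t1a))
    ```

    Because the hyperoxic T1a is shorter, the uncorrected formula underestimates CBF; substituting the per-subject corrected T1a raises the estimate back toward its true value.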

  5. ToF-SIMS measurements with topographic information in combined images.

    PubMed

    Koch, Sabrina; Ziegler, Georg; Hutter, Herbert

    2013-09-01

    In 2D and 3D time-of-flight secondary ion mass spectrometric (ToF-SIMS) analysis, accentuated structures on the sample surface induce distorted element distributions in the measurement. The origin of this effect is the 45° incidence angle of the analysis beam, recording planar images with distortion of the sample surface. For the generation of correct element distributions, these artifacts associated with the sample surface need to be eliminated by measuring the sample surface topography and applying suitable algorithms. For this purpose, the next generation of ToF-SIMS instruments will feature a scanning probe microscope directly implemented in the sample chamber which allows the performance of topography measurements in situ. This work presents the combination of 2D and 3D ToF-SIMS analysis with topographic measurements by ex situ techniques such as atomic force microscopy (AFM), confocal microscopy (CM), and digital holographic microscopy (DHM). The concept of the combination of topographic and ToF-SIMS measurements in a single representation was applied to organic and inorganic samples featuring surface structures in the nanometer and micrometer ranges. The correct representation of planar and distorted ToF-SIMS images was achieved by the combination of topographic data with images of 2D as well as 3D ToF-SIMS measurements, using either AFM, CM, or DHM for the recording of topographic data.

  6. First measurement of proton's charge form factor at very low Q2 with initial state radiation

    NASA Astrophysics Data System (ADS)

    Mihovilovič, M.; Weber, A. B.; Achenbach, P.; Beranek, T.; Beričič, J.; Bernauer, J. C.; Böhm, R.; Bosnar, D.; Cardinali, M.; Correa, L.; Debenjak, L.; Denig, A.; Distler, M. O.; Esser, A.; Ferretti Bondy, M. I.; Fonvieille, H.; Friedrich, J. M.; Friščić, I.; Griffioen, K.; Hoek, M.; Kegel, S.; Kohl, Y.; Merkel, H.; Middleton, D. G.; Müller, U.; Nungesser, L.; Pochodzalla, J.; Rohrbeck, M.; Sánchez Majos, S.; Schlimme, B. S.; Schoth, M.; Schulz, F.; Sfienti, C.; Širca, S.; Štajner, S.; Thiel, M.; Tyukin, A.; Vanderhaeghen, M.; Weinriefer, M.

    2017-08-01

    We report on a new experimental method based on initial-state radiation (ISR) in e-p scattering, which exploits the radiative tail of the elastic peak to study the properties of electromagnetic processes and to extract the proton charge form factor (GEp) at extremely small Q². The ISR technique was implemented in an experiment at the three-spectrometer facility of the Mainz Microtron (MAMI). This led to a precise validation of radiative corrections far away from the elastic line and provided the first measurements of GEp for 0.001 ≤ Q² ≤ 0.004 (GeV/c)².

  7. National Transonic Facility Characterization Status

    NASA Technical Reports Server (NTRS)

    Bobbitt, C., Jr.; Everhart, J.; Foster, J.; Hill, J.; McHatton, R.; Tomek, W.

    2000-01-01

    This paper describes the current status of the characterization of the National Transonic Facility. The background and strategy for the tunnel characterization, as well as the current status of the four main areas of the characterization (tunnel calibration, flow quality characterization, data quality assurance, and support of the implementation of wall interference corrections) are presented. The target accuracy requirements for tunnel characterization measurements are given, followed by a comparison of the measured tunnel flow quality to these requirements based on current available information. The paper concludes with a summary of which requirements are being met, what areas need improvement, and what additional information is required in follow-on characterization studies.

  8. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian and a newly proposed Lagrangian implementation. For the latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for either stratiform precipitation or precipitation that is neither stratiform nor convective. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty show that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
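
    The core of any VPR correction is to divide the reflectivity measured at beam height by a normalized profile f(h) with f(0) = 1, which in dB space becomes a subtraction. A minimal sketch with a piecewise-linear profile (the breakpoints below are illustrative, not the paper's estimated VPR):

    ```python
    import numpy as np

    HEIGHTS = np.array([0.0, 1.5, 3.0, 6.0])   # km (illustrative breakpoints)
    VPR = np.array([1.0, 1.0, 0.4, 0.05])      # normalized profile f(h), f(0)=1

    def surface_reflectivity(z_measured_dbz, beam_height_km):
        """Correct a reflectivity measured aloft back to the surface using
        a piecewise-linear VPR; in dBZ the profile enters as -10*log10(f)."""
        f = np.interp(beam_height_km, HEIGHTS, VPR)
        return z_measured_dbz - 10.0 * np.log10(f)
    ```

    At far ranges the beam samples high in the profile where f is small, so the correction adds several dB back to the measurement, which is exactly where radar rainfall estimates otherwise degrade.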

  9. UltraTrack: Software for semi-automated tracking of muscle fascicles in sequences of B-mode ultrasound images.

    PubMed

    Farris, Dominic James; Lichtwark, Glen A

    2016-05-01

    Dynamic measurements of human muscle fascicle length from sequences of B-mode ultrasound images have become increasingly prevalent in biomedical research. Manual digitisation of these images is time consuming and algorithms for automating the process have been developed. Here we present a freely available software implementation of a previously validated algorithm for semi-automated tracking of muscle fascicle length in dynamic ultrasound image recordings, "UltraTrack". UltraTrack implements an affine extension to an optic flow algorithm to track movement of the muscle fascicle end-points throughout dynamically recorded sequences of images. The underlying algorithm has been previously described and its reliability tested, but here we present the software implementation with features for: tracking multiple fascicles in multiple muscles simultaneously; correcting temporal drift in measurements; manually adjusting tracking results; saving and re-loading of tracking results and loading a range of file formats. Two example runs of the software are presented detailing the tracking of fascicles from several lower limb muscles during a squatting and walking activity. We have presented a software implementation of a validated fascicle-tracking algorithm and made the source code and standalone versions freely available for download. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
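
    The tracking step described above reduces to applying a per-frame affine transform to the fascicle end-points and recomputing the end-point distance. A sketch of that core idea, assuming the 2x3 affine matrix has already been estimated elsewhere (e.g. by an optic-flow fit, which is not reproduced here):

    ```python
    import numpy as np

    def update_endpoints(points_xy, affine_2x3):
        """Apply an affine transform [A|t] (frame k -> k+1) to an (N,2)
        array of fascicle end-point coordinates."""
        A, t = affine_2x3[:, :2], affine_2x3[:, 2]
        return points_xy @ A.T + t

    def fascicle_length(p0, p1):
        """Fascicle length as the Euclidean distance between end-points."""
        return float(np.linalg.norm(np.asarray(p1) - np.asarray(p0)))
    ```

    Pure translations leave the length unchanged, while shear and scale components of the affine fit are what capture actual fascicle lengthening and shortening between frames.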

  10. Dynamic Collimator Angle Adjustments During Volumetric Modulated Arc Therapy to Account for Prostate Rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boer, Johan de; Wolf, Anne Lisa; Szeto, Yenny Z.

    2015-04-01

    Purpose: Rotations of the prostate gland induce considerable geometric uncertainties in prostate cancer radiation therapy. Collimator and gantry angle adjustments can correct these rotations in intensity modulated radiation therapy. Modern volumetric modulated arc therapy (VMAT) treatments, however, include a wide range of beam orientations that differ in modulation, and corrections require dynamic collimator rotations. The aim of this study was to implement a rotation correction strategy for VMAT dose delivery and validate it for left-right prostate rotations. Methods and Materials: Clinical VMAT treatment plans of 5 prostate cancer patients were used. Simulated left-right prostate rotations between +15° and −15° were corrected by collimator rotations. We compared corrected and uncorrected plans by dose volume histograms, minimum dose (D_min) to the prostate, bladder surface receiving ≥78 Gy (S78), and rectum equivalent uniform dose (EUD; n=0.13). Each corrected plan was delivered to a phantom, and its deliverability was evaluated by γ-evaluation between planned and delivered dose, which was reconstructed from portal images acquired during delivery. Results: On average, clinical target volume minimum dose (D_min) decreased up to 10% without corrections. Negative left-right rotations were corrected almost perfectly, whereas D_min remained within 4% for positive rotations. Bladder S78 and rectum EUD of the corrected plans matched those of the original plans. The average pass rate for the corrected plans delivered to the phantom was 98.9% at 3% per 3 mm gamma criteria. The measured dose in the planning target volume approximated the original dose, rotated around the simulated left-right angle, well. Conclusions: It is feasible to dynamically adjust the collimator angle during VMAT treatment delivery to correct for prostate rotations. This technique can safely correct for left-right prostate rotations up to 15°.
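
    Geometrically, only the component of the prostate's rotation vector that lies along the beam axis can be compensated by rotating the collimator, so the required offset varies with gantry angle along the arc. A small-angle sketch of that projection (this is an illustrative geometric approximation, not the planning-system algorithm from the paper):

    ```python
    import math

    def collimator_offset(prostate_rotation_deg, gantry_deg):
        """Small-angle projection of a rotation about the patient
        left-right axis onto the beam axis: for gantry angle g measured
        from anterior, the compensable component is roughly phi*sin(g)."""
        phi = math.radians(prostate_rotation_deg)
        g = math.radians(gantry_deg)
        return math.degrees(phi * math.sin(g))

    # Per-control-point offsets sampled along a VMAT arc:
    offsets = [collimator_offset(10.0, g) for g in range(0, 181, 45)]
    ```

    The offset vanishes for anterior beams and is largest for lateral beams, which is why a single static collimator angle cannot correct a full VMAT arc and the adjustment must be dynamic.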

  11. Implementation of the O(αt2) MSSM Higgs-mass corrections in FeynHiggs

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas; Paßehr, Sebastian

    2017-05-01

    We describe the implementation of the two-loop Higgs-mass corrections of O(αt2) in the complex MSSM in FeynHiggs. The program for the calculation is comprised of several scripts which flexibly use FeynArts and FormCalc together with other packages. It is included in FeynHiggs and documented here in some detail so that it can be re-used as a template for similar calculations.

  12. 3D refraction correction and extraction of clinical parameters from spectral domain optical coherence tomography of the cornea.

    PubMed

    Zhao, Mingtao; Kuo, Anthony N; Izatt, Joseph A

    2010-04-26

    Capable of three-dimensional imaging of the cornea with micrometer-scale resolution, spectral domain-optical coherence tomography (SDOCT) offers potential advantages over Placido ring and Scheimpflug photography based systems for accurate extraction of quantitative keratometric parameters. In this work, an SDOCT scanning protocol and motion correction algorithm were implemented to minimize the effects of patient motion during data acquisition. Procedures are described for correction of image data artifacts resulting from 3D refraction of SDOCT light in the cornea and from non-idealities of the scanning system geometry, performed as a pre-requisite for accurate parameter extraction. Zernike polynomial 3D reconstruction and a recursive half searching algorithm (RHSA) were implemented to extract clinical keratometric parameters including anterior and posterior radii of curvature, central cornea optical power, central corneal thickness, and thickness maps of the cornea. Accuracy and repeatability of the extracted parameters obtained using a commercial 859 nm SDOCT retinal imaging system with a corneal adapter were assessed using a rigid gas permeable (RGP) contact lens as a phantom target. Extraction of these parameters was performed in vivo in 3 patients and compared to commercial Placido topography and Scheimpflug photography systems. The repeatability of SDOCT central corneal power measured in vivo was 0.18 Diopters, and the difference between systems averaged 0.1 Diopters for SDOCT versus Scheimpflug photography and 0.6 Diopters for SDOCT versus Placido topography.
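
    The 3D refraction correction bends each measured A-scan ray at the anterior corneal surface according to Snell's law in vector form before depths inside the cornea are interpreted. A minimal sketch of that ray-bending step (the refractive indices are typical literature values, not the paper's calibration, and the full correction also rescales optical path length by the group index):

    ```python
    import numpy as np

    N_AIR, N_CORNEA = 1.0, 1.389   # assumed indices (illustrative)

    def refract(incident, normal, n1=N_AIR, n2=N_CORNEA):
        """Vector form of Snell's law. 'incident' is the ray direction,
        'normal' the surface normal pointing against the incident ray;
        both are normalized internally."""
        d = incident / np.linalg.norm(incident)
        n = normal / np.linalg.norm(normal)
        cos_i = -np.dot(n, d)
        eta = n1 / n2
        k = 1.0 - eta**2 * (1.0 - cos_i**2)
        if k < 0:
            raise ValueError("total internal reflection")
        return eta * d + (eta * cos_i - np.sqrt(k)) * n
    ```

    Rays at normal incidence pass through unbent, while oblique scan rays are deflected toward the surface normal, which is what distorts uncorrected corneal thickness maps away from the apex.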

  13. High-rate dead-time corrections in a general purpose digital pulse processing system

    PubMed Central

    Abbene, Leonardo; Gerardi, Gaetano

    2015-01-01

    Dead-time losses are well-recognized and well-studied drawbacks of counting and spectroscopic systems. In this work the dead-time correction capabilities of a real-time digital pulse processing (DPP) system for high-rate, high-resolution radiation measurements are presented. The DPP system, through a fast and a slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting-loss corrections even for variable or transient radiation. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities, through both theoretical and experimental approaches, was performed. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs), and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, are presented. The throughput formula for a series of two types of dead-time is also derived. The results of dead-time corrections, performed through different methods, are reported and discussed, pointing out the error on ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (with a 10 ns time bin width) of the detected pulses up to 2.2 Mcps. After simple parameter setting, the digital system enables different and sophisticated dead-time correction procedures traditionally implemented in complex/dedicated systems and time-consuming set-ups. PMID:26289270
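
    The classical single-channel building blocks behind such corrections are the non-paralyzable and paralyzable dead-time models. A minimal sketch follows; the DPP system's actual multi-parameter correction is more elaborate, and the dead-time τ and rates below are illustrative only.

```python
import numpy as np
from scipy.optimize import brentq

def throughput_nonparalyzable(n, tau):
    """Observed rate m for true rate n under a non-paralyzable dead-time tau."""
    return n / (1.0 + n * tau)

def throughput_paralyzable(n, tau):
    """Observed rate m = n * exp(-n * tau) under a paralyzable dead-time tau."""
    return n * np.exp(-n * tau)

def correct_nonparalyzable(m, tau):
    """Closed-form inversion: n = m / (1 - m * tau)."""
    return m / (1.0 - m * tau)

def correct_paralyzable(m, tau, n_max):
    """Numerically invert m = n * exp(-n * tau) on the rising branch (n < 1/tau)."""
    return brentq(lambda n: n * np.exp(-n * tau) - m, 0.0, n_max)

tau = 1.0e-6                                   # 1 microsecond dead-time (illustrative)
observed = throughput_paralyzable(2.0e5, tau)  # what the counter reports at 200 kcps
true_rate = correct_paralyzable(observed, tau, n_max=1.0 / tau)
```

The paralyzable model must be inverted numerically and is only single-valued below the throughput peak at n = 1/τ, which is why accurate ICR estimation at Mcps rates is nontrivial.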

  14. Implementation of an experimental fault-tolerant memory system

    NASA Technical Reports Server (NTRS)

    Carter, W. C.; Mccarthy, C. E.

    1976-01-01

    The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which, (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.
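
    The SEC/DED behavior can be illustrated with an extended Hamming code. A minimal sketch on 4 data bits follows; the memory system's actual code-word width and translator/corrector hardware are not reproduced here.

```python
def hamming84_encode(nibble):
    """Encode 4 data bits (list of 0/1) into an 8-bit SEC/DED codeword.
    Positions 1, 2, 4 hold Hamming parity; position 0 holds overall parity."""
    c = [0] * 8
    c[3], c[5], c[6], c[7] = nibble
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    c[0] = sum(c) % 2              # overall even parity over all 8 bits
    return c

def hamming84_decode(c):
    """Return (data, status) with status 'ok', 'corrected', or 'double'."""
    c = list(c)
    s = ((c[1] ^ c[3] ^ c[5] ^ c[7])
         | ((c[2] ^ c[3] ^ c[6] ^ c[7]) << 1)
         | ((c[4] ^ c[5] ^ c[6] ^ c[7]) << 2))
    overall = sum(c) % 2           # 0 if overall parity is still even
    if s == 0 and overall == 0:
        return [c[3], c[5], c[6], c[7]], 'ok'
    if overall == 1:               # odd number of flips -> single error, correctable
        c[s if s else 0] ^= 1      # s == 0 means the overall parity bit itself
        return [c[3], c[5], c[6], c[7]], 'corrected'
    return None, 'double'          # syndrome nonzero, parity even -> two errors
```

A single flipped bit gives a nonzero syndrome with odd overall parity and is corrected; two flips leave overall parity even and are flagged as an uncorrectable double error, the miscorrection hazard the test procedures above guard against.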

  15. Protecting ICS Systems Within the Energy Sector from Cyber Attacks

    NASA Astrophysics Data System (ADS)

    Barnes, Shaquille

    Advanced persistent threat (APT) groups are continuing to attack the energy sector through cyberspace, which poses a risk to our society, national security, and economy. Industrial control systems (ICSs) are not designed to handle cyber-attacks, which is why asset owners need to implement the correct proactive and reactive measures to mitigate the risk to their ICS environments. The Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) responded to 290 incidents in fiscal year 2016, 59 of which came from the energy sector. APT groups know how vulnerable energy-sector ICSs are and the destruction that can result when they go offline, such as loss of production, loss of life, and economic impact. Defending against APT groups requires more than passive controls such as firewalls and antivirus solutions. Asset owners should implement a combination of best practices and active defense in their environments to defend against APT groups. Cyber-attacks against critical infrastructure will become more complex and harder to detect and respond to with traditional security controls. The purpose of this paper is to provide asset owners with the correct security controls and methodologies to help defend against APT groups.

  16. Comparison of HORACE and PHOTOS Algorithms for Multi-Photon Emission in the Context of the W Boson Mass Measurement

    DOE PAGES

    Kotwal, Ashutosh V.; Jayatilaka, Bodhitha

    2016-01-01

    The W boson mass measurement is sensitive to QED radiative corrections due to virtual photon loops and real photon emission. The largest shift in the measured mass, which depends on the transverse momentum spectrum of the charged lepton from the boson decay, is caused by the emission of real photons from the final-state lepton. A number of calculations and codes are available to model final-state photon emission. We perform a detailed study comparing the results from the HORACE and PHOTOS implementations of final-state multiphoton emission in the context of a direct measurement of the W boson mass at the Tevatron. Mass fits are performed using a simulation of the CDF II detector.

  17. 50 CFR 501.6 - Requests for correction or amendment of a record.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Requests for correction or amendment of a record. 501.6 Section 501.6 Wildlife and Fisheries MARINE MAMMAL COMMISSION IMPLEMENTATION OF THE PRIVACY ACT OF 1974 § 501.6 Requests for correction or amendment of a record. (a) Any individual may request...

  18. Defense Logistics Agency Disposition Services Afghanistan Disposal Process Needed Improvement

    DTIC Science & Technology

    2013-11-08

    audit, and management was proactive in correcting the deficiencies we identified. DLA DS eliminated backlogs, identified and corrected system ... problems, provided additional system training, corrected coding errors, added personnel to key positions, addressed scale issues, submitted debit ... Service Automated Information System to the Reutilization Business Integration (RBI) solution. The implementation of RBI in Afghanistan occurred in

  19. The Rise and Fall of Boot Camps: A Case Study in Common-Sense Corrections

    ERIC Educational Resources Information Center

    Cullen, Francis T.; Blevins, Kristie R.; Trager, Jennifer S.; Gendreau, Paul

    2005-01-01

    "Common sense" is often used as a powerful rationale for implementing correctional programs that have no basis in criminology and virtually no hope of reducing recidivism. Within this context, we undertake a case study in "common-sense" corrections by showing how the rise of boot camps, although having multiple causes, was ultimately legitimized…

  20. 16 CFR 1014.7 - Agency review of request for correction or amendment of a record.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Agency review of request for correction or... POLICIES AND PROCEDURES IMPLEMENTING THE PRIVACY ACT OF 1974 § 1014.7 Agency review of request for... review the request and either make the requested correction or amendment or notify the individual of his...

  1. High Resolution Viscosity Measurement by Thermal Noise Detection

    PubMed Central

    Aguilar Sandoval, Felipe; Sepúlveda, Manuel; Bellon, Ludovic; Melo, Francisco

    2015-01-01

    An interferometric method is implemented in order to accurately assess the thermal fluctuations of a micro-cantilever sensor in liquid environments. The power spectrum density (PSD) of thermal fluctuations together with Sader’s model of the cantilever allows for the indirect measurement of the liquid viscosity with good accuracy. The good quality of the deflection signal and the characteristic low noise of the instrument allow for the detection and correction of artifacts due to both cantilever shape irregularities and uncertainties in the position of the laser spot at the fluctuating end of the cantilever. Variations of viscosity below 0.03 mPa·s were detected, with the option of performing measurements in a volume as low as 50 μL. PMID:26540061

  2. High Resolution Viscosity Measurement by Thermal Noise Detection.

    PubMed

    Sandoval, Felipe Aguilar; Sepúlveda, Manuel; Bellon, Ludovic; Melo, Francisco

    2015-11-03

    An interferometric method is implemented in order to accurately assess the thermal fluctuations of a micro-cantilever sensor in liquid environments. The power spectrum density (PSD) of thermal fluctuations together with Sader's model of the cantilever allows for the indirect measurement of the liquid viscosity with good accuracy. The good quality of the deflection signal and the characteristic low noise of the instrument allow for the detection and correction of artifacts due to both cantilever shape irregularities and uncertainties in the position of the laser spot at the fluctuating end of the cantilever. Variations of viscosity below 0.03 mPa·s were detected, with the option of performing measurements in a volume as low as 50 µL.
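
    The indirect viscosity measurement rests on fitting the thermal-noise power spectrum of the cantilever to a damped harmonic-oscillator response and feeding the fitted resonance frequency and quality factor into Sader's hydrodynamic model. A minimal sketch of the PSD-fitting step on synthetic data; the amplitude A stands in for the thermomechanical prefactor, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def sho_psd(f, f0, Q, A):
    """Damped harmonic-oscillator thermal-noise PSD:
    S(f) = A * f0^4 / ((f0^2 - f^2)^2 + (f*f0/Q)^2),
    where A lumps the thermomechanical prefactor 2*kB*T/(pi*k*Q*f0)."""
    return A * f0**4 / ((f0**2 - f**2) ** 2 + (f * f0 / Q) ** 2)

# Synthetic "measurement": a cantilever mode at f0 = 2 kHz with Q = 3 (liquid).
f = np.linspace(200.0, 8000.0, 2000)
rng = np.random.default_rng(0)
data = sho_psd(f, 2000.0, 3.0, 1.0) * (1.0 + 0.02 * rng.standard_normal(f.size))

popt, _ = curve_fit(sho_psd, f, data, p0=[1800.0, 2.0, 0.8])
f0_fit, Q_fit, A_fit = popt
```

The fitted f0 and Q, together with the cantilever geometry, then enter Sader's hydrodynamic function to yield the fluid viscosity; that inversion step is not sketched here.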

  3. A technique for measuring oxygen saturation in biological tissues based on diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Kleshnin, Mikhail; Orlova, Anna; Kirillin, Mikhail; Golubiatnikov, German; Turchin, Ilya

    2017-07-01

    A new approach to optically measuring blood oxygen saturation was developed and implemented. The technique is based on an original three-stage algorithm for reconstructing the relative concentrations of biological chromophores (hemoglobin, water, lipids) from the measured spectra of diffusely scattered light at different distances from the probing radiation source. Numerical experiments and validation of the proposed technique on a biological phantom showed high reconstruction accuracy and the possibility of correctly calculating hemoglobin oxygenation in the presence of additive noise and calibration errors. The results of animal studies agree with previously published results from other research groups and demonstrate that the developed technique can be applied to monitor oxygen saturation in tumor tissue.
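
    The chromophore-reconstruction idea can be sketched as a Beer-Lambert least-squares unmixing: absorption measured at several wavelengths is decomposed over known extinction spectra, and saturation is the ratio of oxygenated to total hemoglobin. The extinction values below are illustrative placeholders, not tabulated constants, and the paper's three-stage spatially resolved algorithm is not reproduced.

```python
import numpy as np

# Hypothetical extinction coefficients at three near-infrared wavelengths
# (arbitrary units; real values come from published hemoglobin spectra).
wavelengths = [760, 800, 850]                     # nm
eps_hb   = np.array([1.6, 0.8, 0.7])              # deoxyhemoglobin
eps_hbo2 = np.array([0.6, 0.8, 1.1])              # oxyhemoglobin

def fit_saturation(mu_a):
    """Least-squares chromophore unmixing: mu_a ~= E @ [Hb, HbO2]."""
    E = np.column_stack([eps_hb, eps_hbo2])
    conc, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
    hb, hbo2 = conc
    return hbo2 / (hb + hbo2)                     # oxygen saturation StO2

# Forward-simulate a tissue with 70% saturation, then recover it.
mu_a = eps_hb * 0.3 + eps_hbo2 * 0.7
sat = fit_saturation(mu_a)
```

With more wavelengths the same least-squares step also absorbs water and lipid contributions, as in the technique described above.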

  4. Development of a New Optical Measuring Set-Up

    NASA Astrophysics Data System (ADS)

    Miroshnichenko, I. P.; Parinov, I. A.

    2018-06-01

    The paper describes a newly developed optical measuring set-up for the contactless recording and processing of small spatial (linear and angular) displacements of control surfaces, based on laser technologies and optical interference methods. The set-up is designed to solve the measurement tasks arising in the study of the physical and mechanical properties of new materials and in diagnosing the state of structural materials by active acoustic methods of nondestructive testing. The structure of the set-up and its constituent parts are described, and the features of its construction and functioning during measurements are discussed. New technical solutions for implementing the components of the set-up have been obtained. The purpose and description of the original specialized software are presented; it performs a priori analysis of measurement results, supports the measurements themselves, and provides a posteriori analysis, including determining the influence of internal and external disturbances on the measurement results and correcting the results directly as they are acquired. The technical solutions used in the set-up are protected by patents of the Russian Federation for inventions, and the software is protected by certificates of state registration of computer programs. The proposed set-up is intended for use in instrumentation, mechanical engineering, shipbuilding, aviation, the energy sector, etc.

  5. Advances in iterative non-uniformity correction techniques for infrared scene projection

    NASA Astrophysics Data System (ADS)

    Danielson, Tom; Franks, Greg; LaVeigne, Joe; Prewarski, Marcus; Nehring, Brian

    2015-05-01

    Santa Barbara Infrared (SBIR) is continually developing improved methods for non-uniformity correction (NUC) of its Infrared Scene Projectors (IRSPs) as part of its comprehensive efforts to achieve the best possible projector performance. The most recent step forward, Advanced Iterative NUC (AI-NUC), improves upon previous NUC approaches in several ways. The key to NUC performance is achieving the most accurate possible input drive-to-radiance output mapping for each emitter pixel. This requires many highly-accurate radiance measurements of emitter output, as well as sophisticated manipulation of the resulting data set. AI-NUC expands the available radiance data set to include all measurements made of emitter output at any point. In addition, it allows the user to efficiently manage that data for use in the construction of a new NUC table that is generated from an improved fit of the emitter response curve. Not only does this improve the overall NUC by offering more statistics for interpolation than previous approaches, it also simplifies the removal of erroneous data from the set so that it does not propagate into the correction tables. AI-NUC is implemented by SBIR's IRWindows4 automated test software as part of its advanced turnkey IRSP product (the Calibration Radiometry System or CRS), which incorporates all necessary measurement, calibration and NUC table generation capabilities. By employing AI-NUC on the CRS, SBIR has demonstrated the best uniformity results on resistive emitter arrays to date.
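
    The core of any NUC table, including AI-NUC's improved fit, is a per-pixel mapping from drive to measured radiance that is inverted so every pixel produces the same commanded radiance. A minimal sketch with a toy emitter response; the response model, grid sizes, and numbers are illustrative, not SBIR's actual fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy per-pixel emitter response: radiance = gain * drive**1.5 + offset.
gains = 1.0 + 0.05 * rng.standard_normal((4, 4))
offsets = 0.02 * rng.standard_normal((4, 4))
drives = np.linspace(0.1, 1.0, 12)

# "Measured" radiance at each calibration drive level, for each pixel.
radiance = gains[..., None] * drives**1.5 + offsets[..., None]

def build_nuc_table(radiance, drives, targets):
    """For each pixel, invert its measured response by interpolation so that
    commanding table[pixel, j] produces the common radiance targets[j]."""
    ny, nx, _ = radiance.shape
    table = np.empty((ny, nx, targets.size))
    for i in range(ny):
        for j in range(nx):
            table[i, j] = np.interp(targets, radiance[i, j], drives)
    return table

targets = np.linspace(0.2, 0.7, 8)       # desired common radiance levels
table = build_nuc_table(radiance, drives, targets)
# After correction, each pixel driven through its table emits ~targets[j].
achieved = gains[..., None] * table**1.5 + offsets[..., None]
```

AI-NUC's advantage as described above is that the response curve is fit from all available radiance measurements rather than a fixed calibration grid, improving the interpolation statistics.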

  6. Towards good practice for health statistics: lessons from the Millennium Development Goal health indicators.

    PubMed

    Murray, Christopher J L

    2007-03-10

    Health statistics are at the centre of an increasing number of worldwide health controversies. Several factors are sharpening the tension between the supply and demand for high quality health information, and the health-related Millennium Development Goals (MDGs) provide a high-profile example. With thousands of indicators recommended but few measured well, the worldwide health community needs to focus its efforts on improving measurement of a small set of priority areas. Priority indicators should be selected on the basis of public-health significance and several dimensions of measurability. Health statistics can be divided into three types: crude, corrected, and predicted. Health statistics are necessary inputs to planning and strategic decision making, programme implementation, monitoring progress towards targets, and assessment of what works and what does not. Crude statistics that are biased have no role in any of these steps; corrected statistics are preferred. For strategic decision making, when corrected statistics are unavailable, predicted statistics can play an important part. For monitoring progress towards agreed targets and assessment of what works and what does not, however, predicted statistics should not be used. Perhaps the most effective method to decrease controversy over health statistics and to encourage better primary data collection and the development of better analytical methods is a strong commitment to provision of an explicit data audit trail. This initiative would make available the primary data, all post-data collection adjustments, models including covariates used for farcasting and forecasting, and necessary documentation to the public.

  7. Integrated PET/MR breast cancer imaging: Attenuation correction and implementation of a 16-channel RF coil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oehmigen, Mark, E-mail: mark.oehmigen@uni-due.de

    Purpose: This study aims to develop, implement, and evaluate a 16-channel radiofrequency (RF) coil for integrated positron emission tomography/magnetic resonance (PET/MR) imaging of breast cancer. The RF coil is designed for optimized MR imaging performance and PET transparency, and attenuation correction (AC) is applied for accurate PET quantification. Methods: A 16-channel breast array RF coil was designed for integrated PET/MR hybrid imaging of breast cancer lesions. The RF coil features a lightweight rigid design and is positioned with a spacer at a defined position on the patient table of an integrated PET/MR system. Attenuation correction is performed by generating and applying a dedicated 3D CT-based template attenuation map. Reposition accuracy of the RF coil on the system patient table while using the positioning frame was tested in repeated measurements using MR-visible markers. The MR, PET, and PET/MR imaging performances were systematically evaluated using modular breast phantoms. Attenuation correction of the RF coil was evaluated with difference measurements of the active breast phantoms filled with radiotracer in the PET detector with and without the RF coil in place, serving as a standard of reference measurement. The overall PET/MR imaging performance and PET quantification accuracy of the new 16-channel RF coil and its AC were then evaluated in first clinical examinations on ten patients with local breast cancer. Results: The RF breast array coil provides excellent signal-to-noise ratio and signal homogeneity across the volume of the breast phantoms in MR imaging and visualizes small structures in the phantoms down to 0.4 mm in plane. Difference measurements with PET revealed a global loss and thus attenuation of counts by 13% (mean value across the whole phantom volume) when the RF coil is placed in the PET detector.
    Local attenuation ranging from 0% in the middle of the phantoms up to 24% was detected in the peripheral regions of the phantoms at positions closer to attenuating hardware structures of the RF coil. The position accuracy of the RF coil on the patient table when using the positioning frame was determined to be well below 1 mm for all three spatial dimensions. This ensures a precise position match between the RF coil and its three-dimensional attenuation template during the PET data reconstruction process. When applying the CT-based AC of the RF coil, the global attenuation bias was mostly compensated, to within ±0.5% across the entire breast imaging volume. The patient study revealed high-quality MR, PET, and combined PET/MR imaging of breast cancer. Quantitative activity measurements in all 11 breast cancer lesions of the ten patients resulted in increased mean difference values of SUVmax of 11.8% (minimum 3.2%; maximum 23.2%) between non-AC images and images with AC of the RF breast coil applied. This supports the quantitative results of the phantom study as well as successful attenuation correction of the RF coil. Conclusions: A 16-channel breast RF coil was designed for optimized MR imaging performance and PET transparency and was successfully integrated with its dedicated attenuation correction template into a whole-body PET/MR system. Systematic PET/MR imaging evaluation with phantoms and an initial study on patients with breast cancer provided excellent MR and PET image quality and accurate PET quantification.

  8. Patients' understanding and use of advance directives.

    PubMed Central

    Jacobson, J A; White, B E; Battin, M P; Francis, L P; Green, D J; Kasworm, E S

    1994-01-01

    The Patient Self-Determination Act was implemented in December 1991. Before and after its implementation, we used a structured interview of 302 randomly selected patients to determine their awareness, understanding, and use of advance directives. Implementation of the Act did not have a major effect on these. Although more than 90% of patients were aware of the living will, only about a third selected the correct definition or the correct circumstances in which it applied, and less than 20% of patients had completed one. About a third of patients were aware of a Durable Power of Attorney for Health Care and chose the correct definition, and about half identified the correct circumstances in which it applies; less than 10% had completed such a document. Surprisingly, patients who said they had completed advance directives did not demonstrate better understanding of these documents. Our results indicate that many patients, including some who have completed advance directives, do not fully understand them. It may be unwise to regard these documents as carefully considered, compelling statements of patients' preferences. Appropriate responses to our findings include increased public education, revising state statutes to bring them into congruence with public perception, and expanding the dialogue between physicians and patients. PMID:8191755

  9. On the use of mobile phones and wearable microphones for noise exposure measurements: Calibration and measurement accuracy

    NASA Astrophysics Data System (ADS)

    Dumoulin, Romain

    Despite the fact that noise-induced hearing loss remains the number one occupational disease in developed countries, individual noise exposure levels are still rarely known and infrequently tracked. Indeed, efforts to standardize noise exposure levels present disadvantages such as costly instrumentation and difficulties associated with on-site implementation. Given their advanced technical capabilities and widespread daily usage, mobile phones could be used to measure noise levels and make noise monitoring more accessible. However, the use of mobile phones for measuring noise exposure is currently limited by the lack of formal procedures for their calibration and challenges regarding the measurement procedure. Our research investigated the calibration of mobile phone-based solutions for measuring noise exposure using a mobile phone's built-in microphones and wearable external microphones. The proposed calibration approach integrated corrections that took into account microphone placement error. The corrections were of two types: frequency-dependent, using a digital filter, and noise-level-dependent, based on the difference between the C-weighted and A-weighted levels of the noise measured by the phone. The electro-acoustical limitations and measurement calibration procedure of the mobile phone were investigated. The study also sought to quantify the effect of noise exposure characteristics on the accuracy of calibrated mobile phone measurements. Measurements were carried out in reverberant and semi-anechoic chambers with several mobile phone units of the same model, two types of external devices (an earpiece and a headset with an in-line microphone), and an acoustical test fixture (ATF). The proposed calibration approach significantly improved the accuracy of the noise level measurements in diffuse and free fields, with better results in the diffuse field and with ATF positions causing little or no acoustic shadowing.
Several sources of errors and uncertainties were identified including the errors associated with the inter-unit-variability, the presence of signal saturation and the microphone placement relative to the source and the wearer. The results of the investigations and validation measurements led to recommendations regarding the measurement procedure including the use of external microphones having lower sensitivity and provided the basis for a standardized and unique factory default calibration method intended for implementation in any mobile phone. A user-defined adjustment was proposed to minimize the errors associated with calibration and the acoustical field. Mobile phones implementing the proposed laboratory calibration and used with external microphones showed great potential as noise exposure instruments. Combined with their potential as training and prevention tools, the expansion of their use could significantly help reduce the risks of noise-induced hearing loss.
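
    The level-dependent part of the proposed correction uses the difference between C- and A-weighted levels as a compact descriptor of the spectrum reaching the microphone. A sketch using nominal IEC 61672 octave-band weightings; the correction coefficients a and b are hypothetical placeholders for laboratory-derived calibration values.

```python
import numpy as np

# Nominal octave-band A- and C-weightings in dB, 31.5 Hz .. 8 kHz (IEC 61672).
bands = [31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000]
A_w = np.array([-39.4, -26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1])
C_w = np.array([-3.0, -0.8, -0.2, 0.0, 0.0, 0.0, -0.2, -0.8, -3.0])

def weighted_level(band_spl, weighting):
    """Energy-sum weighted octave-band levels into a single overall level (dB)."""
    return 10.0 * np.log10(np.sum(10.0 ** ((band_spl + weighting) / 10.0)))

def corrected_la(band_spl, a=0.0, b=0.5):
    """Level-dependent correction of the phone's A-weighted reading, using
    LC - LA as the spectrum descriptor. Coefficients a and b are hypothetical
    placeholders; in practice they come from laboratory calibration."""
    la = weighted_level(band_spl, A_w)
    lc = weighted_level(band_spl, C_w)
    return la + a + b * (lc - la)

spectrum = np.full(9, 80.0)            # flat 80 dB in every octave band
la = weighted_level(spectrum, A_w)
lc = weighted_level(spectrum, C_w)
```

A large LC − LA flags low-frequency-dominated noise, where small microphones and their placement on the body typically deviate most from the reference instrument.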

  10. Waveform Synthesizer For Imaging And Ranging Applications

    DOEpatents

    Dudley, Peter A.; et al.

    2004-11-30

    Frequency-dependent corrections are provided for quadrature imbalance. An operational procedure filters imbalance effects without prior calibration or equalization. Waveform generation can be adjusted/corrected in a synthetic aperture radar (SAR) system: a rolling phase shift is applied to the SAR's QDWS signal where it is demodulated in a receiver; unwanted energies, such as imbalance energy, are separated from the desired signal in Doppler; the separated energy is filtered from the receiver, leaving the desired signal; and the separated energy in the receiver is measured to determine the degree of imbalance it represents. Calibration methods can also be implemented into synthesis. The degree of quadrature imbalance can be used to determine calibration values that are then applied as compensation for frequency-dependent errors in components affecting quadrature signal quality, such as the QDWS and SSB mixer.
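
    Once the degree of quadrature imbalance has been measured, compensation amounts to inverting a gain/phase error model on the Q channel. A minimal sketch, assuming the gain and phase errors are already known, e.g. from a Doppler-separation measurement like the one described above; the error model and values are illustrative.

```python
import numpy as np

def apply_imbalance(i, q, gain, phase):
    """Model a quadrature receiver whose Q channel suffers a gain error and a
    phase error (radians) relative to the I channel."""
    return i, gain * (q * np.cos(phase) + i * np.sin(phase))

def correct_imbalance(i, q, gain, phase):
    """Invert the imbalance model above, given measured gain/phase values."""
    return i, (q / gain - i * np.sin(phase)) / np.cos(phase)

# Round trip: distort an ideal quadrature tone, then compensate it.
theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
i0, q0 = np.cos(theta), np.sin(theta)
ib, qb = apply_imbalance(i0, q0, 1.1, np.radians(5.0))
ic, qc = correct_imbalance(ib, qb, 1.1, np.radians(5.0))
```

Uncorrected, the gain/phase error puts image energy at the negative of each signal frequency, which is exactly the energy the patented procedure separates in Doppler and measures.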

  11. Wall interference assessment and corrections

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.

    1989-01-01

    Wind tunnel wall interference assessment and correction (WIAC) concepts, applications, and typical results are discussed in terms of several nonlinear transonic codes and one panel method code developed for and being implemented at NASA-Langley. Contrasts between 2-D and 3-D transonic testing factors which affect WIAC procedures are illustrated using airfoil data from the 0.3 m Transonic Cryogenic Tunnel and Pathfinder 1 data from the National Transonic Facility. Initial results from the 3-D WIAC codes are encouraging; research on and implementation of WIAC concepts continue.

  12. The impact of water temperature on the measurement of absolute dose

    NASA Astrophysics Data System (ADS)

    Islam, Naveed Mehdi

    To standardize reference dosimetry in radiation therapy, Task Group 51 (TG-51) of the American Association of Physicists in Medicine (AAPM) recommends that dose calibration measurements be made in a water tank at a depth of 10 cm in a reference geometry. Methodologies are provided for calculating the various correction factors applied in calculating the absolute dose. However, the protocol does not specify the water temperature to be used. In practice, the temperature of the water during dosimetry may vary considerably between independent sessions and different centers. In this work the effect of water temperature on absolute dosimetry has been investigated. The density of water varies with temperature, which in turn may affect beam attenuation and scatter properties. Furthermore, due to thermal expansion or contraction, the air volume inside the chamber may change. All of these effects can alter the measurement. Dosimetric measurements were made using a Farmer-type ion chamber on a Varian linear accelerator for 6 MV and 23 MV photon energies at temperatures ranging from 10 to 40 °C. Thermal insulation was designed for the water tank in order to maintain a relatively stable temperature over the duration of the experiment. Doses measured at higher temperatures were found to be consistently higher by a very small margin. Although the differences in dose were smaller than the uncertainty in each measurement, a linear regression of the data suggests that the trend is statistically significant, with p-values of 0.002 and 0.013 for the 6 and 23 MV beams respectively. For a 10-degree difference in water phantom temperature, which is a realistic deviation across clinics, the final calculated reference dose can differ by 0.24% or more. To address this effect, a reference temperature (e.g., 22 °C) can first be set as the standard; subsequently, a correction factor can be implemented for deviations from this reference.
Such a correction factor is expected to be of similar magnitude as existing TG 51 recommended correction factors.
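
    The proposed correction amounts to regressing dose readings against water temperature and rescaling to a reference temperature. A sketch on synthetic readings whose slope is hypothetical, chosen only to mimic the reported ~0.24% change per 10 °C.

```python
import numpy as np

# Illustrative relative dose readings at several water temperatures; the
# 2.4e-4 per degC slope is a hypothetical stand-in for the measured trend.
temps = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
doses = 1.000 * (1.0 + 2.4e-4 * (temps - 22.0))

# Linear fit dose(T), then a multiplicative correction back to T_ref = 22 degC.
slope, intercept = np.polyfit(temps, doses, 1)

def temperature_correction(t, t_ref=22.0):
    """Factor that rescales a dose measured at temperature t to t_ref."""
    return (slope * t_ref + intercept) / (slope * t + intercept)

corrected = doses * np.array([temperature_correction(t) for t in temps])
```

After correction all readings collapse onto the reference-temperature value, which is the intended behavior of a TG-51-style multiplicative correction factor.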

  13. A Compact Laboratory Spectro-Goniometer (CLabSpeG) to Assess the BRDF of Materials. Presentation, Calibration and Implementation on Fagus sylvatica L. Leaves

    PubMed Central

    Biliouris, Dimitrios; Verstraeten, Willem W.; Dutré, Phillip; van Aardt, Jan A.N.; Muys, Bart; Coppin, Pol

    2007-01-01

    The design and calibration of a new hyperspectral Compact Laboratory Spectro-Goniometer (CLabSpeG) is presented. CLabSpeG effectively measures the bidirectional reflectance Factor (BRF) of a sample, using a halogen light source and an Analytical Spectral Devices (ASD) spectroradiometer. The apparatus collects 4356 reflectance data readings covering the spectrum from 350 nm to 2500 nm by independent positioning of the sensor, sample holder, and light source. It has an azimuth and zenith resolution of 30 and 15 degrees, respectively. CLabSpeG is used to collect BRF data and extract Bidirectional Reflectance Distribution Function (BRDF) data of non-isotropic vegetation elements such as bark, soil, and leaves. Accurate calibration has ensured robust geometric accuracy of the apparatus, correction for the conicality of the light source, while sufficient radiometric stability and repeatability between measurements are obtained. The bidirectional reflectance data collection is automated and remotely controlled and takes approximately two and half hours for a BRF measurement cycle over a full hemisphere with 125 cm radius and 2.4 minutes for a single BRF acquisition. A specific protocol for vegetative leaf collection and measurement was established in order to investigate the possibility to extract BRDF values from Fagus sylvatica L. leaves under laboratory conditions. Drying leaf effects induce a reflectance change during the BRF measurements due to the laboratory illumination source. Therefore, the full hemisphere could not be covered with one leaf. Instead 12 BRF measurements per leaf were acquired covering all azimuth positions for a single light source zenith position. Data are collected in radiance format and reflectance is calculated by dividing the leaf cycle measurement with a radiance cycle of a Spectralon reference panel, multiplied by a Spectralon reflectance correction factor and a factor to correct for the conical effect of the light source. 
BRF results of measured leaves are presented. PMID:28903201
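
    The reflectance computation described above reduces to a ratio against the Spectralon reference cycle with two calibration factors. A minimal sketch with illustrative numbers; the panel reflectance and conical correction factor are placeholders for the calibrated values.

```python
def brf(sample_radiance, panel_radiance, panel_reflectance, conical_factor=1.0):
    """Bidirectional reflectance factor: target radiance divided by the
    reference-panel radiance, scaled by the panel's calibrated reflectance
    and by a factor compensating the conicality of the light source."""
    return (sample_radiance / panel_radiance) * panel_reflectance * conical_factor

# Hypothetical single-band example: leaf radiance 12.3, Spectralon panel 25.1
# (arbitrary units), panel reflectance 0.99, conical correction 1.02.
r = brf(12.3, 25.1, 0.99, 1.02)
```

In the instrument this ratio is evaluated per wavelength over the full 350-2500 nm spectrum and per sensor/source geometry to populate the BRDF.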

  14. A Compact Laboratory Spectro-Goniometer (CLabSpeG) to Assess the BRDF of Materials. Presentation, Calibration and Implementation on Fagus sylvatica L. Leaves.

    PubMed

    Biliouris, Dimitrios; Verstraeten, Willem W; Dutré, Phillip; Van Aardt, Jan A N; Muys, Bart; Coppin, Pol

    2007-09-07

    The design and calibration of a new hyperspectral Compact Laboratory Spectro-Goniometer (CLabSpeG) is presented. CLabSpeG effectively measures the bidirectional reflectance Factor (BRF) of a sample, using a halogen light source and an Analytical Spectral Devices (ASD) spectroradiometer. The apparatus collects 4356 reflectance data readings covering the spectrum from 350 nm to 2500 nm by independent positioning of the sensor, sample holder, and light source. It has an azimuth and zenith resolution of 30 and 15 degrees, respectively. CLabSpeG is used to collect BRF data and extract Bidirectional Reflectance Distribution Function (BRDF) data of non-isotropic vegetation elements such as bark, soil, and leaves. Accurate calibration has ensured robust geometric accuracy of the apparatus, correction for the conicality of the light source, while sufficient radiometric stability and repeatability between measurements are obtained. The bidirectional reflectance data collection is automated and remotely controlled and takes approximately two and a half hours for a BRF measurement cycle over a full hemisphere with 125 cm radius and 2.4 minutes for a single BRF acquisition. A specific protocol for vegetative leaf collection and measurement was established in order to investigate the possibility to extract BRDF values from Fagus sylvatica L. leaves under laboratory conditions. Drying leaf effects induce a reflectance change during the BRF measurements due to the laboratory illumination source. Therefore, the full hemisphere could not be covered with one leaf. Instead 12 BRF measurements per leaf were acquired covering all azimuth positions for a single light source zenith position. Data are collected in radiance format and reflectance is calculated by dividing the leaf cycle measurement with a radiance cycle of a Spectralon reference panel, multiplied by a Spectralon reflectance correction factor and a factor to correct for the conical effect of the light source.
BRF results of measured leaves are presented.
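
    The reflectance calculation described above, a ratio of leaf to reference-panel radiance cycles scaled by two correction factors, can be sketched as follows. This is a minimal illustration; the function name and the sample values are hypothetical, not taken from the paper.

```python
import numpy as np

def brf_from_radiance(leaf_radiance, panel_radiance,
                      panel_reflectance_factor, conical_correction):
    """BRF from paired radiance cycles: divide the leaf radiance by the
    Spectralon panel radiance, then apply the panel reflectance correction
    factor and the conical-illumination correction factor."""
    return (leaf_radiance / panel_radiance) * panel_reflectance_factor * conical_correction

# toy spectra (arbitrary units, illustrative only)
leaf = np.array([0.10, 0.35, 0.40])
panel = np.array([0.95, 0.96, 0.97])
brf = brf_from_radiance(leaf, panel,
                        panel_reflectance_factor=0.99,
                        conical_correction=1.02)
```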

  15. Corrective Action Decision Document/Closure Report for Corrective Action Unit 571: Area 9 Yucca Flat Plutonium Dispersion Sites, Nevada National Security Site, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Patrick

    2014-08-01

    The purpose of this CADD/CR is to provide documentation and justification that no further corrective action is needed for the closure of CAU 571 based on the implementation of corrective actions. This includes a description of investigation activities, an evaluation of the data, and a description of corrective actions that were performed. The CAIP provides information relating to the scope and planning of the investigation. Therefore, that information will not be repeated in this document.

  16. The effect of systematic set-up deviations on the absorbed dose distribution for left-sided breast cancer treated with respiratory gating

    NASA Astrophysics Data System (ADS)

    Edvardsson, A.; Ceberg, S.

    2013-06-01

    The aim of this study was 1) to investigate inter-fraction set-up uncertainties for patients treated with respiratory gating for left-sided breast cancer, 2) to investigate the effect of the inter-fraction set-up on the absorbed dose distribution for the target and organs at risk (OARs), and 3) to optimize the set-up correction strategy. By acquiring multiple set-up images, the systematic set-up deviation was evaluated. The effect of the systematic set-up deviation on the absorbed dose distribution was evaluated by 1) simulation in the treatment planning system and 2) measurements with a biplanar diode array. The set-up deviations could be decreased using a no-action-level correction strategy. Not using the clinically implemented adaptive maximum likelihood factor for the gating patients resulted in a better set-up. When the uncorrected set-up deviations were simulated, the average mean absorbed dose increased from 1.38 to 2.21 Gy for the heart, 4.17 to 8.86 Gy for the left anterior descending coronary artery, and 5.80 to 7.64 Gy for the left lung. Respiratory gating can induce systematic set-up deviations that would increase the mean absorbed dose to the OARs if left uncorrected, and these deviations should therefore be corrected for with an appropriate correction strategy.

  17. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  18. Design and implementation of a low-cost multiple-range digital phase detector

    NASA Astrophysics Data System (ADS)

    Omran, Hesham; Albasha, Lutfi; Al-Ali, A. R.

    2012-06-01

    This article describes the design, simulation, implementation and testing of a novel low-cost multiple-range programmable digital phase detector. The detector receives two periodic signals and calculates the ratio of the time difference to the time period in order to measure and display the phase difference. The resulting output values are integers ranging from -180° to 180°. Users can select the detector's pre-set operation frequency ranges using a three-bit pre-scalar, which enables the detector to be used for various applications. The proposed detector can be programmed over a frequency range of 10 Hz to 25 kHz by configuring its clock divider circuit. Detector simulations were conducted and verified using ModelSim, and the design was implemented and tested using an Altera Cyclone II field-programmable gate array board. Both the simulation and actual circuit testing results showed that the phase detector has an error magnitude of only 1°. The detector is ideal for applications such as power factor measurement and correction, self-tuning resonant circuits and metal detection systems. Unlike other stand-alone phase detection systems, the reported system can be programmed to several frequency ranges, hence expanding its bandwidth.
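
    The core arithmetic of such a detector, the ratio of the time difference to the period scaled to an integer number of degrees and wrapped into the -180° to 180° range, can be modelled in a few lines. This is an illustrative software model of the computation, not the FPGA implementation from the article.

```python
def phase_degrees(delta_t, period):
    """Integer phase difference, in degrees, from the time offset between
    two periodic signals and their common period. The raw ratio is scaled
    to 360 degrees and wrapped into the (-180, 180] range."""
    phase = round(360.0 * (delta_t % period) / period)
    if phase > 180:
        phase -= 360
    return phase

# quarter-period lead of a 1 s signal corresponds to 90 degrees
lead_90 = phase_degrees(0.25, 1.0)
```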

  19. Training teachers in generalized writing of behavior modification programs for multihandicapped deaf children.

    PubMed

    Hundert, J

    1982-01-01

    In contrast to previous studies where teachers were instructed how to implement behavior modification programs designed by an experimenter, teachers in the present experiment were taught how to write as well as implement behavior modification programs. The generalized effects of two training conditions on teacher and pupil behaviors were assessed by a multiple baseline design in which, following baseline, two teachers of multihandicapped deaf children were taught to set objectives and measure pupil performance (measurement training). Later, through a training manual, they learned a general problem-solving approach to writing behavior modification programs (programming training). After both training conditions, experimenter feedback was given for teachers' application of training to a target behavior for one pupil, and generalization was measured across target behaviors for the same pupil and across pupils. It was found that measurement training had little general effect on either teacher behavior or pupil behavior. However, after programming training, teachers increased their program writing and correct use of behavior modification procedures and generalized this training across pupils and target behaviors. Along with these effects, there was improvement in pupil behaviors. Possible explanations for the generalized effects of teacher training were considered.

  20. 78 FR 52893 - Implementation of the 2008 National Ambient Air Quality Standards for Ozone: State Implementation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-27

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Parts 50, 51, 70 and 71 [EPA-HQ-OAR-2010-0885, FRL-9810-3] RIN 2060-AR34 Implementation of the 2008 National Ambient Air Quality Standards for Ozone: State Implementation Plan Requirements Correction In proposed rule document 2013-13233 appearing on pages 34178 through...

  1. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 97: Yucca Flat/Climax Mine Nevada National Security Site, Nevada, Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farnham, Irene

    This corrective action decision document (CADD)/corrective action plan (CAP) has been prepared for Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, Nevada National Security Site (NNSS), Nevada. The Yucca Flat/Climax Mine CAU is located in the northeastern portion of the NNSS and comprises 720 corrective action sites. A total of 747 underground nuclear detonations took place within this CAU between 1957 and 1992 and resulted in the release of radionuclides (RNs) in the subsurface in the vicinity of the test cavities. The CADD portion describes the Yucca Flat/Climax Mine CAU data-collection and modeling activities completed during the corrective action investigation (CAI) stage, presents the corrective action objectives, and describes the actions recommended to meet the objectives. The CAP portion describes the corrective action implementation plan. The CAP presents CAU regulatory boundary objectives and initial use-restriction boundaries identified and negotiated by DOE and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the groundwater flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The Underground Test Area (UGTA) strategy assumes that active remediation of subsurface RN contamination is not feasible with current technology. As a result, the corrective action is based on a combination of characterization and modeling studies, monitoring, and institutional controls. The strategy is implemented through a four-stage approach comprising the following: (1) corrective action investigation plan (CAIP), (2) CAI, (3) CADD/CAP, and (4) closure report (CR) stages.

  2. 50 CFR 501.7 - Agency review of requests for amendment or correction of a record.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Agency review of requests for amendment or correction of a record. 501.7 Section 501.7 Wildlife and Fisheries MARINE MAMMAL COMMISSION IMPLEMENTATION OF THE PRIVACY ACT OF 1974 § 501.7 Agency review of requests for amendment or correction of a record. (a...

  3. Erratum: Erratum to: Maximally symmetric two Higgs doublet model with natural standard model alignment

    NASA Astrophysics Data System (ADS)

    Bhupal Dev, P. S.; Pilaftsis, Apostolos

    2015-11-01

    Here we correct some typesetting errors in ref. [1]. These corrections have been implemented in the latest version of [1] on arXiv and the corrected equations have also been reproduced in ref. [2] for the reader's convenience. We clarify that all numerical results presented in ref. [1] remain unaffected by these typographic errors.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, Nadine; Prestel, S.; Ritzmann, M.

    We present the first public implementation of antenna-based QCD initial- and final-state showers. The shower kernels are 2→3 antenna functions, which capture not only the collinear dynamics but also the leading soft (coherent) singularities of QCD matrix elements. We define the evolution measure to be inversely proportional to the leading poles; hence gluon emissions are evolved in a p⊥ measure inversely proportional to the eikonal, while processes that only contain a single pole (e.g., g → qq̄) are evolved in virtuality. Non-ordered emissions are allowed, suppressed by an additional power of 1/Q². Recoils and kinematics are governed by exact on-shell 2→3 phase-space factorisations. This first implementation is limited to massless QCD partons and colourless resonances. Tree-level matrix-element corrections are included for QCD up to O(α_s^4) (4 jets), and for Drell–Yan and Higgs production up to O(α_s^3) (V/H + 3 jets). Finally, the resulting algorithm has been made publicly available in Vincia 2.0.

  5. Soft sensor based composition estimation and controller design for an ideal reactive distillation column.

    PubMed

    Vijaya Raghavan, S R; Radhakrishnan, T K; Srinivasan, K

    2011-01-01

    In this research work, the authors present the design and implementation of a recurrent neural network (RNN) based inferential state estimation scheme for an ideal reactive distillation column. Decentralized PI controllers are designed and implemented. The reactive distillation process is controlled by controlling the composition, which is estimated from the available temperature measurements using a type of RNN called a Time Delayed Neural Network (TDNN). The performance of the RNN-based state estimation scheme under both open-loop and closed-loop conditions has been compared with a standard Extended Kalman Filter (EKF) and a Feedforward Neural Network (FNN). Online training/correction is performed for both the RNN and FNN schemes every ten minutes, whenever new untrained measurements become available from a conventional composition analyzer. The RNN shows better state estimation capability than the other state estimation schemes in terms of qualitative and quantitative performance indices. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Design study of Software-Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Wensley, J. H.; Goldberg, J.; Green, M. W.; Kutz, W. H.; Levitt, K. N.; Mills, M. E.; Shostak, R. E.; Whiting-Okeefe, P. M.; Zeidler, H. M.

    1982-01-01

    Software-implemented fault tolerant (SIFT) computer design for commercial aviation is reported. A SIFT design concept is addressed. Alternate strategies for physical implementation are considered. Hardware and software design correctness is addressed. System modeling and effectiveness evaluation are considered from a fault-tolerant point of view.

  7. Wave front sensing for next generation earth observation telescope

    NASA Astrophysics Data System (ADS)

    Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.

    2017-09-01

    High resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance allows the Nyquist frequency to be set closer to the cut-off frequency, or equivalently, the pupil diameter to be minimized for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based upon ground estimation and upload of a refocusing command. For instance, the defocus of the Pleiades satellites is assessed from star acquisitions, and refocusing is done with a thermal actuation of the M2 mirror. Next generation systems under study at CNES should include active optics in order to accommodate evolving aberrations not limited to defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard Wave Front Sensor (WFS). One option is to use a Shack-Hartmann sensor, which can operate on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the control-loop wave-front error measurement. In the worst-case scenario, this measurement must be computed before each image acquisition. A robust and fast shift estimation algorithm between Shack-Hartmann images is then needed to fulfill this requirement. A fast gradient-based algorithm using optical flows with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the Wave Front Error (WFE), the landscape frequency content, the number of searched aberrations, the a priori knowledge of high order aberrations and the characteristics of the sensor. CNES has performed a full-scale sensitivity analysis over the whole parameter set with its internally developed algorithm.
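
    A gradient-based Lucas-Kanade shift estimate of the kind described reduces, for a pure global translation, to a small least-squares problem built from the image gradients. The sketch below is a simplified single-iteration illustration under that global-translation assumption, not the CNES flight algorithm.

```python
import numpy as np

def lk_shift(ref, img):
    """One Lucas-Kanade step: estimate the global (dx, dy) shift between
    two images by solving, in the least-squares sense, the optical-flow
    constraint  grad(ref) . d = -(img - ref)  over all pixels."""
    ref = ref.astype(float)
    img = img.astype(float)
    gy, gx = np.gradient(ref)          # np.gradient returns (d/dy, d/dx)
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -(img - ref).ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# synthetic test: a horizontal ramp shifted by +0.3 pixel along x
ref = np.tile(np.arange(32, dtype=float), (32, 1))
img = ref - 0.3                        # img[x] = ref[x - 0.3]
dx, dy = lk_shift(ref, img)
```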

  8. Evaluation of corrective measures implemented for the preventive conservation of fresco paintings in Ariadne’s house (Pompeii, Italy)

    PubMed Central

    2013-01-01

    Background A microclimate monitoring study was conducted in 2008 aimed at assessing the conservation risks affecting the valuable wall paintings decorating Ariadne’s House (Pompeii, Italy). It was found that thermohygrometric conditions were very unfavorable for the conservation of the frescoes. As a result, it was decided to implement corrective measures, and the transparent polycarbonate sheets covering three rooms (one of them delimited by four walls and the others composed of three walls) were replaced by opaque roofs. In order to examine the effectiveness of this measure, the same monitoring system, comprising 26 thermohygrometric probes, was installed again in summer 2010. Data recorded in 2008 and 2010 were compared. Results Microclimate conditions were also monitored in a control room with the same roof in both years. The average temperature in this room was lower in 2010, and it was decided to consider a time frame of 18 summer days with the same mean temperature in both years. In the rooms with three walls, the statistical analysis revealed that the diurnal maximum temperature decreased about 3.5°C due to the roof change, and the minimum temperature increased 0.5°C. As a result, the daily thermohygrometric variations were less pronounced in 2010, with a reduction of approximately 4°C, which is favorable for the preservation of the mural paintings. In the room with four walls, the daily fluctuations also decreased about 4°C. Based on the results, other alternative actions aimed at improving the conservation conditions of the wall paintings are discussed. Conclusions The roof change has reduced the most unfavorable thermohygrometric conditions affecting the mural paintings, but additional actions should be adopted for the long-term preservation of the Pompeian frescoes. PMID:23683173

  9. Evaluation of corrective measures implemented for the preventive conservation of fresco paintings in Ariadne's house (Pompeii, Italy).

    PubMed

    Merello, Paloma; García-Diego, Fernando-Juan; Zarzo, Manuel

    2013-05-17

    A microclimate monitoring study was conducted in 2008 aimed at assessing the conservation risks affecting the valuable wall paintings decorating Ariadne's House (Pompeii, Italy). It was found that thermohygrometric conditions were very unfavorable for the conservation of the frescoes. As a result, it was decided to implement corrective measures, and the transparent polycarbonate sheets covering three rooms (one of them delimited by four walls and the others composed of three walls) were replaced by opaque roofs. In order to examine the effectiveness of this measure, the same monitoring system, comprising 26 thermohygrometric probes, was installed again in summer 2010. Data recorded in 2008 and 2010 were compared. Microclimate conditions were also monitored in a control room with the same roof in both years. The average temperature in this room was lower in 2010, and it was decided to consider a time frame of 18 summer days with the same mean temperature in both years. In the rooms with three walls, the statistical analysis revealed that the diurnal maximum temperature decreased about 3.5°C due to the roof change, and the minimum temperature increased 0.5°C. As a result, the daily thermohygrometric variations were less pronounced in 2010, with a reduction of approximately 4°C, which is favorable for the preservation of the mural paintings. In the room with four walls, the daily fluctuations also decreased about 4°C. Based on the results, other alternative actions aimed at improving the conservation conditions of the wall paintings are discussed. The roof change has reduced the most unfavorable thermohygrometric conditions affecting the mural paintings, but additional actions should be adopted for the long-term preservation of the Pompeian frescoes.

  10. Implementation and performance of shutterless uncooled micro-bolometer cameras

    NASA Astrophysics Data System (ADS)

    Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.

    2015-06-01

    A shutterless algorithm has been implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated on board the camera. The limited resources in the camera require a compact algorithm, hence the efficiency of the coding is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) of the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very attractive for thermal infrared applications where small weight and size, and continuous operation, are important.
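
    A generic shutterless correction of the kind described combines calibration-derived per-pixel terms with a global temperature-dependent term. The exact Xenics algorithm is not spelled out in the abstract, so the function below is an illustrative assumption: a standard per-pixel gain/offset correction plus a global offset proportional to the sensor temperature deviation.

```python
import numpy as np

def shutterless_nuc(raw, gain, offset, temp_coeff, sensor_temp, t_ref):
    """Illustrative non-uniformity correction: per-pixel gain and offset
    from a (hypothetical) calibration set, plus a global temperature term
    scaled by a single coefficient. Not the vendor's actual algorithm."""
    return gain * raw + offset + temp_coeff * (sensor_temp - t_ref)

# toy 2x2 frame with made-up calibration maps
raw = np.array([[100.0, 102.0], [98.0, 101.0]])
gain = np.array([[1.00, 0.98], [1.02, 1.00]])
offset = np.array([[0.0, 2.0], [-2.0, 0.5]])
corrected = shutterless_nuc(raw, gain, offset,
                            temp_coeff=0.1, sensor_temp=35.0, t_ref=30.0)
```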

  11. Improving contact layer patterning using SEM contour based etch model

    NASA Astrophysics Data System (ADS)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka; Hertzsch, Tino; Moll, Hans-Peter

    2016-10-01

    The patterning of the contact layer is modulated by strong etch effects that are highly dependent on the geometry of the contacts. Such litho-etch biases need to be corrected to ensure good pattern fidelity. However, aggressive designs contain complex shapes that can hardly be compensated with an etch bias table and are difficult to characterize with standard CD metrology. In this work we propose to implement a model-based etch compensation method able to deal with any contact configuration. With the help of SEM contours, it was possible to obtain reliable 2D measurements that were particularly helpful for calibrating the etch model. The selection of calibration structures was optimized in combination with the model form to achieve an overall RMS error of 3 nm, allowing the implementation of the model in production.

  12. A proposal of an architecture for the coordination level of intelligent machines

    NASA Technical Reports Server (NTRS)

    Beard, Randall; Farah, Jeff; Lima, Pedro

    1993-01-01

    The issue of obtaining a practical, structured, and detailed description of an architecture for the Coordination Level of the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) Testbed Intelligent Controller is addressed. Previous theoretical and implementation work was the point of departure for the discussion. The document is organized as follows: after the introductory section, section 2 summarizes the overall view of the Intelligent Machine (IM) as a control system, proposing a performance measure on which to base its design. Section 3 addresses implementation issues in some detail; a hierarchical Petri net with feedback-based learning capabilities is proposed. Finally, section 4 addresses the feedback problem. Feedback is used for two functions: error recovery and reinforcement learning of the correct translations for the Petri net transitions.

  13. Compensation of X-ray mirror shape-errors using refractive optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawhney, Kawal, E-mail: Kawal.sawhney@diamond.ac.uk; Laundy, David; Pape, Ian

    2016-08-01

    Focusing of X-rays to nanometre-scale focal spots requires high precision X-ray optics. For nano-focusing mirrors, height errors in the mirror surface retard or advance the X-ray wavefront, and after propagation to the focal plane this distortion of the wavefront causes blurring of the focus, resulting in a limit on the spatial resolution. We describe here the implementation of a method for correcting the wavefront that is applied before a focusing mirror, using custom-designed refracting structures which locally cancel out the wavefront distortion from the mirror. We demonstrate, in measurements on a synchrotron radiation beamline, a reduction in the size of the focal spot of a characterized test mirror by a factor of more than 10. This technique could be used to correct existing synchrotron beamline focusing and nanofocusing optics, providing a highly stable wavefront with low distortion for obtaining smaller focus sizes. This method could also correct multilayer or focusing crystal optics, allowing larger numerical apertures to be used in order to reduce the diffraction-limited focal spot size.

  14. Implementation of surgical quality improvement: auditing tool for surgical site infection prevention practices.

    PubMed

    Hechenbleikner, Elizabeth M; Hobson, Deborah B; Bennett, Jennifer L; Wick, Elizabeth C

    2015-01-01

    Surgical site infections are a potentially preventable patient harm. Emerging evidence suggests that the implementation of evidence-based process measures for infection reduction is highly variable. The purpose of this work was to develop an auditing tool to assess compliance with infection-related process measures and establish a system for identifying and addressing defects in measure implementation. This was a retrospective cohort study using electronic medical records. We used the auditing tool to assess compliance with 10 process measures in a sample of colorectal surgery patients with and without postoperative infections at an academic medical center (January 2012 to March 2013). We investigated 59 patients with surgical site infections and 49 patients without surgical site infections. First, overall compliance rates for the 10 process measures were compared between patients with infection vs patients without infection to assess if compliance was lower among patients with surgical site infections. Then, because of the burden of data collection, the tool was used exclusively to evaluate quarterly compliance rates among patients with infection. The results were reviewed, and the key factors contributing to noncompliance were identified and addressed. Ninety percent of process measures had lower compliance rates among patients with infection. Detailed review of infection cases identified many defects that improved following the implementation of system-level changes: correct cefotetan redosing (education of anesthesia personnel), temperature at surgical incision >36.0°C (flags used to identify patients for preoperative warming), and the use of preoperative mechanical bowel preparation with oral antibiotics (laxative solutions and antibiotics distributed in clinic before surgery). Quarterly compliance improved for 80% of process measures by the end of the study period. This study was conducted on a small surgical cohort within a select subspecialty. 
The infection auditing tool is a useful strategy for identifying defects and guiding quality improvement interventions. This is an iterative process requiring dedicated resources and continuous patient and frontline provider engagement.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppinga, D., E-mail: daniela.poppinga@uni-oldenburg.de; Schoenfeld, A. A.; Poppe, B.

    Purpose: The purpose of this study is the correction of the lateral scanner artifact, i.e., the effect that, on a large homogeneously exposed EBT3 film, a flatbed scanner measures different optical densities at different positions along the x axis, the axis parallel to the elongated light source. At constant dose, the measured optical density profiles along this axis have a parabolic shape with a significant dose-dependent curvature. The effect is therefore called the parabola effect for short. The objective of the algorithm developed in this study is to correct for the parabola effect. Any optical density measured at a given position x is transformed into the equivalent optical density c at the apex of the parabola and then converted into the corresponding dose via the calibration of c versus dose. Methods: For the present study, EBT3 films and an Epson 10000XL scanner including a transparency unit were used for the analysis of the parabola effect. The films were irradiated with 6 MV photons from an Elekta Synergy accelerator in a RW3 slab phantom. In order to quantify the effect, ten film pieces with doses graded from 0 to 20.9 Gy were sequentially scanned at eight positions along the x axis and at six positions along the z axis (the movement direction of the light source), both for the portrait and landscape film orientations. In order to test the effectiveness of the new correction algorithm, the dose profiles of an open square field and an IMRT plan were measured by EBT3 films and compared with ionization chamber and ionization chamber array measurements. Results: The parabola effect has been numerically studied over the whole measuring field of the Epson 10000XL scanner for doses up to 20.9 Gy and for both film orientations. The presented algorithm transforms any optical density at position x into the equivalent optical density that would be measured at the same dose at the apex of the parabola. 
This correction method has been validated up to doses of 5.2 Gy all over the scanner bed with 2D dose distributions of an open square photon field and an IMRT distribution. Conclusions: The algorithm presented in this study quantifies and corrects the parabola effect for EBT3 films scanned in commonly used commercial flatbed scanners at doses up to 5.2 Gy. It is easy to implement, and no additional work steps are necessary in daily routine film dosimetry.
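
    The apex transformation described above can be illustrated with a simple model in which the parabola's curvature is proportional to its apex value, so that OD(x) = c · (1 + k·(x − x0)²) and the mapping back to c is a division. The proportional-curvature form and all numerical values here are assumptions for illustration; the paper fits its own dose-dependent curvature.

```python
def apex_equivalent_od(od, x, x0, k):
    """Map an optical density measured at scanner position x to the
    equivalent apex value c, under the illustrative parabola model
    OD(x) = c * (1 + k * (x - x0)**2) with curvature proportional to c."""
    return od / (1.0 + k * (x - x0) ** 2)

# round trip with made-up numbers: an apex value c observed off-axis
c, k, x0 = 0.8, 1e-4, 150.0
od_at_x = c * (1.0 + k * (40.0 - x0) ** 2)   # what the scanner would read at x = 40
c_back = apex_equivalent_od(od_at_x, 40.0, x0, k)
```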

  16. Impact and Implementation of Higher-Order Ionospheric Effects on Precise GNSS Applications

    NASA Astrophysics Data System (ADS)

    Hadas, T.; Krypiak-Gregorczyk, A.; Hernández-Pajares, M.; Kaplon, J.; Paziewski, J.; Wielgosz, P.; Garcia-Rigo, A.; Kazmierski, K.; Sosnica, K.; Kwasniak, D.; Sierny, J.; Bosy, J.; Pucilowski, M.; Szyszko, R.; Portasiak, K.; Olivares-Pulido, G.; Gulyaeva, T.; Orus-Perez, R.

    2017-11-01

    High precision Global Navigation Satellite Systems (GNSS) positioning and time transfer require correcting signal delays, in particular higher-order ionospheric (I2+) terms. We present a consolidated model to correct second- and third-order terms, geometric bending and differential STEC bending effects in GNSS data. The model has been implemented in an online service correcting observations from submitted RINEX files for I2+ effects. We performed GNSS data processing with and without including I2+ corrections, in order to investigate the impact of I2+ corrections on GNSS products. We selected three time periods representing different ionospheric conditions. We used GPS and GLONASS observations from a global network and two regional networks in Poland and Brazil. We estimated satellite orbits, satellite clock corrections, Earth rotation parameters, troposphere delays, horizontal gradients, and receiver positions using global GNSS solution, Real-Time Kinematic (RTK), and Precise Point Positioning (PPP) techniques. The satellite-related products captured most of the impact of I2+ corrections, with the magnitude up to 2 cm for clock corrections, 1 cm for the along- and cross-track orbit components, and below 5 mm for the radial component. The impact of I2+ on troposphere products turned out to be insignificant in general. I2+ corrections had limited influence on the performance of ambiguity resolution and the reliability of RTK positioning. Finally, we found that I2+ corrections caused a systematic shift in the coordinate domain that was time- and region-dependent and reached up to -11 mm for the north component of the Brazilian stations during the most active ionospheric conditions.

  17. High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke

    2018-04-01

    To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate GKP qubits with the squeezing level, 14.8 dB, required by existing fault-tolerant quantum computation schemes. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation scheme with GKP qubits, using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected measurement-based quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within reach of current experimental technology. Hence, this work considerably alleviates this experimental requirement and takes a step closer to the realization of large-scale quantum computation.

  18. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous owing to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  19. Measurement of specimen-induced aberrations of biological samples using phase stepping interferometry.

    PubMed

    Schwertner, M; Booth, M J; Neil, M A A; Wilson, T

    2004-01-01

    Confocal or multiphoton microscopes, which deliver optical sections and three-dimensional (3D) images of thick specimens, are widely used in biology. These techniques, however, are sensitive to aberrations that may originate from the refractive index structure of the specimen itself. The aberrations cause reduced signal intensity and the 3D resolution of the instrument is compromised. It has been suggested to correct for aberrations in confocal microscopes using adaptive optics. In order to define the design specifications for such adaptive optics systems, one has to know the amount of aberrations present for typical applications such as with biological samples. We have built a phase stepping interferometer microscope that directly measures the aberration of the wavefront. The modal content of the wavefront is extracted by employing Zernike mode decomposition. Results for typical biological specimens are presented. It was found for all samples investigated that higher order Zernike modes give only a small contribution to the overall aberration. Therefore, these higher order modes can be neglected in future adaptive optics sensing and correction schemes implemented into confocal or multiphoton microscopes, leading to more efficient designs.
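    The Zernike mode decomposition mentioned above amounts to a linear least-squares fit of the measured wavefront against a set of Zernike polynomials. A minimal sketch on a sampled unit disk, with an illustrative low-order mode set (not the authors' instrument code):

```python
import numpy as np

# Sample the unit disk
n = 64
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
r = np.hypot(x, y)
t = np.arctan2(y, x)
mask = r <= 1.0

# A few low-order Zernike modes (unnormalised): piston, tip, tilt,
# defocus, oblique/vertical astigmatism
modes = [np.ones_like(r), r * np.cos(t), r * np.sin(t),
         2 * r**2 - 1, r**2 * np.cos(2 * t), r**2 * np.sin(2 * t)]
A = np.column_stack([m[mask] for m in modes])

# Synthetic wavefront: 0.5 waves defocus + 0.2 waves astigmatism
w = 0.5 * (2 * r**2 - 1) + 0.2 * (r**2 * np.cos(2 * t))

# Modal coefficients by linear least squares
coeffs, *_ = np.linalg.lstsq(A, w[mask], rcond=None)
```

Because the synthetic wavefront lies in the span of the mode set, the fit recovers the defocus and astigmatism coefficients exactly; on real interferometer data the residual after the fit quantifies the neglected higher-order content.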

  20. Radiosondes Corrected for Inaccuracy in RH Measurements

    DOE Data Explorer

    Miloshevich, Larry

    2008-01-15

    Corrections for inaccuracy in Vaisala radiosonde RH measurements have been applied to ARM SGP radiosonde soundings. The magnitude of the corrections can vary considerably between soundings. The radiosonde measurement accuracy, and therefore the correction magnitude, is a function of atmospheric conditions, mainly T, RH, and dRH/dt (humidity gradient). The corrections are also very sensitive to the RH sensor type, and there are 3 Vaisala sensor types represented in this dataset (RS80-H, RS90, and RS92). Depending on the sensor type and the radiosonde production date, one or more of the following three corrections were applied to the RH data: Temperature-Dependence correction (TD), Contamination-Dry Bias correction (C), Time Lag correction (TL). The estimated absolute accuracy of NIGHTTIME corrected and uncorrected Vaisala RH measurements, as determined by comparison to simultaneous reference-quality measurements from Holger Voemel's (CU/CIRES) cryogenic frostpoint hygrometer (CFH), is given by Miloshevich et al. (2006).
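    The time lag (TL) correction can be illustrated by inverting a first-order sensor response, U_ambient = U_measured + τ·dU_measured/dt. The time constant and RH profile below are illustrative assumptions, not Vaisala calibration values:

```python
import numpy as np

def lag_correct(u_meas, dt, tau):
    """Invert a first-order sensor response: U_ambient = U_meas + tau*dU/dt.

    u_meas -- measured RH series; dt -- sample interval (s); tau -- lag (s).
    """
    dudt = np.gradient(u_meas, dt)
    return u_meas + tau * dudt

# Demo: simulate a lagged sensor on a linear RH ramp, then correct it
dt, tau = 2.0, 60.0
t = np.arange(0, 600, dt)
u_true = 20 + 0.1 * t                # ambient RH rises 0.1 %RH per second
u_meas = np.empty_like(u_true)
u_meas[0] = u_true[0]
for i in range(1, t.size):           # first-order lag, explicit Euler
    u_meas[i] = u_meas[i - 1] + dt / tau * (u_true[i - 1] - u_meas[i - 1])
u_corr = lag_correct(u_meas, dt, tau)
```

On the ramp, the raw sensor lags the ambient RH by roughly τ times the ramp rate (about 6 %RH here), while the corrected series tracks it closely; in practice τ is itself a strong function of temperature, which is why the correction magnitude varies between soundings.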

  1. Wavefront-guided correction of ocular aberrations: Are phase plate and refractive surgery solutions equal?

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Munger, Rejean; Priest, David

    2005-08-01

    Wavefront-guided laser eye surgery has been recently introduced and holds the promise of correcting not only defocus and astigmatism in patients but also higher-order aberrations. Research is just beginning on the implementation of wavefront-guided methods in optical solutions, such as phase-plate-based spectacles, as alternatives to surgery. We investigate the theoretical differences between the implementation of wavefront-guided surgical and phase plate corrections. The residual aberrations of 43 model eyes are calculated after simulated refractive surgery and also after a phase plate is placed in front of the untreated eye. In each case, the current wavefront-guided paradigm that applies a direct map of the ocular aberrations to the correction zone is used. The simulation results demonstrate that an ablation map that is a Zernike fit of a direct transform of the ocular wavefront phase error is not as efficient in correcting refractive errors of sphere, cylinder, spherical aberration, and coma as when the same Zernike coefficients are applied to a phase plate, with statistically significant improvements from 2% to 6%.

  2. Fever management in the emergency department of the Children's Hospital of Fudan University: a best practice implementation project.

    PubMed

    Hu, Fei; Zhang, Jiayan; Shi, Shupeng; Zhou, Zhang

    2016-09-01

    Febrile illness in young children usually indicates an underlying infection and is a cause of concern for parents and carers. It is very important that healthcare professionals know how to recognize fever, assess and treat children with fever, and understand the roles of nurses and parents. This paper outlines a best practice implementation project on the management of fever in children in an emergency department. The aims were to audit current practice of fever management for children in an emergency department and to implement strategies to standardize pediatric fever management based on evidence-based practice guidelines. We used the Joanna Briggs Institute's Practical Application of Clinical Evidence System and Getting Research into Practice to examine compliance with fever management criteria based on the best available evidence, before and after the implementation of strategies to spread the use of evidence-based practice protocols. We found significant improvements in pediatric fever management as measured by the knowledge scores of parents (54.5-83.7) and nurses (67.6-90.3), which suggested a need for continuous education. We found a noticeable improvement in compliance across all five criteria: using correct methods to measure temperature (86-98%), staff education (0-100%), parent education (0-100%), using assessment tools (0-100%) and observed management (0-98%). This best practice implementation project demonstrated the use of effective strategies to standardize the protocol for fever management, implement an assessment tool, develop multimedia materials, deliver continuous staff education, and update nursing documentation and patient education pamphlets to ensure best practice is delivered by nurses to improve patient outcomes.

  3. Does Your Optical Particle Counter Measure What You Think it Does? Calibration and Refractive Index Correction Methods.

    NASA Astrophysics Data System (ADS)

    Rosenberg, Phil; Dean, Angela; Williams, Paul; Dorsey, James; Minikin, Andreas; Pickering, Martyn; Petzold, Andreas

    2013-04-01

    Optical Particle Counters (OPCs) are the de facto standard for in-situ measurements of airborne aerosol size distributions and small cloud particles over a wide size range. This is particularly the case on airborne platforms, where fast response is important. OPCs measure the light scattered by individual particles and generally bin particles according to the measured peak amount of light scattered (the OPC's response). Most manufacturers provide a table with their instrument indicating the particle diameters that represent the edges of each bin. It is important to correct the particle size reported by OPCs for the refractive index of the particles being measured, which is often not the same as that of the particles used during calibration. However, the OPC's response is not a monotonic function of particle diameter, and obvious problems occur when refractive index corrections are attempted and multiple diameters correspond to the same OPC response. Here we recommend that OPCs be calibrated in terms of particle scattering cross section, as this is a monotonic (usually linear) function of an OPC's response. We present a method for converting a bin's boundaries in terms of scattering cross section into a bin centre and bin width in terms of diameter, for any aerosol species whose scattering properties are known. The relationship between diameter and scattering cross section can be arbitrarily complex and does not need to be monotonic; it can be based on Mie-Lorenz theory or any other scattering theory. Software has been provided on the Sourceforge open source repository for scientific users to implement such methods in their own measurement and calibration routines.
As a case study, we present data from a Passive Cavity Aerosol Spectrometer Probe (PCASP) and a Cloud Droplet Probe (CDP) calibrated using polystyrene latex spheres and glass beads before being deployed as part of the Fennec project to measure airborne dust in inaccessible regions of the Sahara.
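    The conversion from cross-section bin boundaries to diameter-space bins can be sketched as a lookup on a tabulated σ(d) curve with crossing detection. The toy σ(d) model and the outermost-crossing convention below are illustrative assumptions, not the released Sourceforge implementation:

```python
import numpy as np

def bin_edges_in_diameter(d_grid, sigma_grid, sigma_lo, sigma_hi):
    """Map a bin's scattering-cross-section boundaries to diameter space.

    d_grid, sigma_grid -- fine tabulation of the scattering model sigma(d)
    (from Mie-Lorenz or any other theory; need not be monotonic).
    Returns (d_centre, d_width) using the outermost crossings, one simple
    convention for handling a multi-valued sigma(d).
    """
    def crossings(level):
        s = sigma_grid - level
        idx = np.nonzero(s[:-1] * s[1:] <= 0)[0]
        # linear interpolation at each sign change
        return [d_grid[i] + (d_grid[i + 1] - d_grid[i]) * s[i] / (s[i] - s[i + 1])
                for i in idx]
    lo, hi = crossings(sigma_lo), crossings(sigma_hi)
    d_min, d_max = min(lo + hi), max(lo + hi)
    return 0.5 * (d_min + d_max), d_max - d_min

# Demo with a toy monotonic model sigma ~ d^2 (geometric-optics-like)
d = np.linspace(0.1, 10.0, 2000)
centre, width = bin_edges_in_diameter(d, d**2, 1.0, 4.0)
```

With the toy quadratic model the boundaries 1 and 4 (arbitrary units) map to diameters 1 and 2, giving a bin centre of 1.5 and width of 1; swapping in a Mie σ(d) table for the aerosol species of interest is the intended use.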

  4. Magnetically confined electron beam system for high resolution electron transmission-beam experiments

    NASA Astrophysics Data System (ADS)

    Lozano, A. I.; Oller, J. C.; Krupa, K.; Ferreira da Silva, F.; Limão-Vieira, P.; Blanco, F.; Muñoz, A.; Colmenares, R.; García, G.

    2018-06-01

    A novel experimental setup has been implemented to provide accurate electron scattering cross sections for molecules at low and intermediate impact energies (1-300 eV) by measuring the attenuation of a magnetically confined linear electron beam passing through a molecular target. High electron energy resolution is achieved through confinement in a magnetic gas trap, where electrons are cooled by successive collisions with N2. Additionally, we developed and present a method to correct systematic errors arising from energy and angular resolution limitations. The accuracy of the entire measurement procedure is validated by comparing the N2 total scattering cross section in the considered energy range with benchmark values available in the literature.
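    At its core the attenuation measurement reduces to the Beer-Lambert relation I = I0·exp(−nσL). A minimal sketch with illustrative numbers (not the apparatus' actual gas densities or beam intensities):

```python
import math

def total_cross_section(i0, i_t, number_density, path_length):
    """Total scattering cross section from beam attenuation.

    Beer-Lambert: I = I0 * exp(-n * sigma * L)  =>  sigma = ln(I0/I) / (n*L).
    number_density in m^-3, path_length in m; returns sigma in m^2.
    """
    return math.log(i0 / i_t) / (number_density * path_length)

# Demo: 30% beam attenuation over a 0.10 m cell at n = 3.3e20 m^-3
sigma = total_cross_section(1.0, 0.70, 3.3e20, 0.10)
```

The systematic corrections described in the record then adjust this raw σ for the finite energy and angular acceptance of the detector, which otherwise counts some forward-scattered electrons as unscattered.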

  5. First measurement of proton's charge form factor at very low Q 2 with initial state radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mihovilovič, M.; Weber, A. B.; Achenbach, P.

    Here we report on a new experimental method based on initial-state radiation (ISR) in e-p scattering, which exploits the radiative tail of the elastic peak to study the properties of electromagnetic processes and to extract the proton charge form factor (G_E^p) at extremely small Q^2. The ISR technique was implemented in an experiment at the three-spectrometer facility of the Mainz Microtron (MAMI). This led to a precise validation of radiative corrections far away from the elastic line and provided first measurements of G_E^p for 0.001 ≤ Q^2 ≤ 0.004 (GeV/c)^2.

  6. First measurement of proton's charge form factor at very low Q 2 with initial state radiation

    DOE PAGES

    Mihovilovič, M.; Weber, A. B.; Achenbach, P.; ...

    2017-05-15

    Here we report on a new experimental method based on initial-state radiation (ISR) in e-p scattering, which exploits the radiative tail of the elastic peak to study the properties of electromagnetic processes and to extract the proton charge form factor (G_E^p) at extremely small Q^2. The ISR technique was implemented in an experiment at the three-spectrometer facility of the Mainz Microtron (MAMI). This led to a precise validation of radiative corrections far away from the elastic line and provided first measurements of G_E^p for 0.001 ≤ Q^2 ≤ 0.004 (GeV/c)^2.

  7. Liquefaction assessment based on combined use of CPT and shear wave velocity measurements

    NASA Astrophysics Data System (ADS)

    Bán, Zoltán; Mahler, András; Győri, Erzsébet

    2017-04-01

    Soil liquefaction is one of the most devastating secondary effects of earthquakes and can cause significant damage to built infrastructure. For this reason, liquefaction hazard shall be considered in all regions where moderate-to-high seismic activity coincides with saturated, loose, granular soil deposits. Several approaches exist to take this hazard into account, of which the in-situ test based empirical methods are the most commonly used in practice. These methods are generally based on the results of CPT, SPT or shear wave velocity (VS) measurements. In more complex or high-risk projects, CPT and VS measurements are often performed at the same location, commonly in the form of seismic CPT. Furthermore, a VS profile determined by surface wave methods can also supplement a standard CPT measurement. However, the combined use of both in-situ indices in a single empirical method is limited. For this reason, the goal of this research was to develop such an empirical method within the framework of simplified empirical procedures, in which the results of CPT and VS measurements are used in parallel and can supplement each other. The combination of two in-situ indices, a small-strain property measurement with a large-strain measurement, can reduce the uncertainty of empirical methods. In the first step, the existing liquefaction case-history databases were carefully reviewed to select sites where records of both CPT and VS measurements are available. After implementing the necessary corrections to the 98 gathered case histories with respect to fines content, overburden pressure and magnitude, a logistic regression was performed to obtain the probability contours of liquefaction occurrence. Logistic regression is often used to explore the relationship between a binary response and a set of explanatory variables.
The occurrence or absence of liquefaction can be considered a binary outcome, and the equivalent clean sand value of the normalized, overburden-corrected cone tip resistance (qc1Ncs), the overburden-corrected shear wave velocity (VS1), and the magnitude- and effective-stress-corrected cyclic stress ratio (CSR at M = 7.5, σv' = 1 atm) were considered as input variables. In this case, the graphical representation of the cyclic resistance ratio curve for a given probability has been replaced by a surface that separates the liquefaction and non-liquefaction cases.
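    The logistic-regression step can be sketched with plain gradient ascent on synthetic stand-ins for the three predictors. The data-generating coefficients below are invented for illustration and are not the fitted model from the 98 case histories:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three predictors: qc1Ncs, VS1, CSR
n = 400
qc = rng.uniform(50, 200, n)        # clean-sand cone resistance
vs1 = rng.uniform(100, 300, n)      # overburden-corrected Vs (m/s)
csr = rng.uniform(0.05, 0.5, n)     # corrected cyclic stress ratio

# Toy ground truth: liquefaction more likely for low qc/Vs1, high CSR
logit_true = 4.0 - 0.02 * qc - 0.01 * vs1 + 10.0 * csr
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)

X = np.column_stack([np.ones(n), qc, vs1, csr])
X_s = X.copy()
X_s[:, 1:] = (X[:, 1:] - X[:, 1:].mean(0)) / X[:, 1:].std(0)  # standardise

w = np.zeros(4)
for _ in range(5000):               # plain gradient ascent on log-likelihood
    p = 1 / (1 + np.exp(-X_s @ w))
    w += 0.1 * X_s.T @ (y - p) / n

p_hat = 1 / (1 + np.exp(-X_s @ w))
accuracy = np.mean((p_hat > 0.5) == (y > 0.5))
```

Contours of constant fitted probability in (qc1Ncs, VS1, CSR) space are exactly the probability surfaces described in the record, generalising the usual 2D cyclic resistance ratio curve.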

  8. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    PubMed

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. Ten-minute seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can therefore be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
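    The estimation chain (a linear HR-VO2 calibration from the step test, then substitution of thermally corrected HR) can be sketched as follows; the calibration points and ΔHRT value are illustrative, not the study's measurements:

```python
import numpy as np

# Step-test calibration points (illustrative, not the study's data)
hr_cal = np.array([80.0, 100.0, 120.0, 140.0])   # bpm
vo2_cal = np.array([0.8, 1.3, 1.8, 2.3])         # L/min
b, a = np.polyfit(hr_cal, vo2_cal, 1)            # VO2 = a + b*HR

def vo2_from_hr(hr_work, delta_hr_thermal=0.0):
    """Estimate work VO2 from HR after removing the thermal component
    (Vogt et al.'s approach: corrected HR = HR - dHRT)."""
    return a + b * (hr_work - delta_hr_thermal)

vo2_raw = vo2_from_hr(130.0)                      # uncorrected estimate
vo2_corr = vo2_from_hr(130.0, delta_hr_thermal=20.0)
```

With a 20 bpm thermal component the corrected estimate is substantially lower than the raw one, illustrating the ~30% overestimation reported when the thermal pulses are left in.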

  9. Water vapor retrieval from near-IR measurements of polarized scanning atmospheric corrector

    NASA Astrophysics Data System (ADS)

    Qie, Lili; Ning, Yuanming; Zhang, Yang; Chen, Xingfeng; Ma, Yan; Li, Zhengqiang; Cui, Wenyu

    2018-02-01

    Water vapor and aerosol are two key atmospheric factors affecting remote sensing image quality. As water vapor is responsible for most of the solar radiation absorption occurring in the cloudless atmosphere, accurate measurement of water vapor content is important not only for atmospheric correction of remote sensing images, but also for many other applications, such as the study of energy balance and global climate change and land surface temperature retrieval in thermal remote sensing. A multi-spectral, single-angle, polarized radiometer called the Polarized Scanning Atmospheric Corrector (PSAC) was developed in China; it is designed to be mounted on the same satellite platform as the principal payload and to provide essential parameters for atmospheric correction of the principal payload's images. PSAC detects water vapor content by measuring atmospheric reflectance in a water vapor absorption channel (0.91 μm) and a nearby atmospheric window channel (0.865 μm). A near-IR channel ratio method was implemented to retrieve column water vapor (CWV) amount from PSAC measurements. Field experiments were performed at Yantai, in Shandong province of China, where PSAC aircraft observations were acquired. Comparison between PSAC retrievals and ground-based Sun-sky radiometer measurements of CWV during the experimental flights shows that the method retrieves CWV with relative deviations ranging from 4% to 13%. The method retrieves CWV more accurately over land than over ocean, as the water reflectance is low.
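    The near-IR channel ratio retrieval can be sketched with a Kaufman-Gao-style exponential transmittance model; the fit constants α and β below are illustrative assumptions, not PSAC's actual calibration:

```python
import numpy as np

# Two-channel ratio method (Kaufman-Gao style transmittance model):
# T(0.91um)/T(0.865um) ~ exp(alpha - beta*sqrt(W)), W = column water vapor.
# ALPHA and BETA are illustrative fit constants, not PSAC calibration values.
ALPHA, BETA = 0.02, 0.65

def cwv_from_ratio(l_091, l_0865):
    """Retrieve column water vapour (g/cm^2) from the 0.91/0.865 um
    radiance ratio by inverting the exponential transmittance model."""
    ratio = l_091 / l_0865
    return ((ALPHA - np.log(ratio)) / BETA) ** 2

# Round trip: synthesise the ratio for W = 2.0 g/cm^2, then invert it
w_true = 2.0
ratio = np.exp(ALPHA - BETA * np.sqrt(w_true))
w_ret = cwv_from_ratio(ratio * 100.0, 100.0)   # arbitrary window radiance
```

The window channel cancels the surface reflectance to first order, which is why the method degrades over dark ocean surfaces where the measured radiances, and hence the ratio, become noisy.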

  10. Digital algorithm for dispersion correction in optical coherence tomography for homogeneous and stratified media.

    PubMed

    Marks, Daniel L; Oldenburg, Amy L; Reynolds, J Joshua; Boppart, Stephen A

    2003-01-10

    The resolution of optical coherence tomography (OCT) often suffers from blurring caused by material dispersion. We present a numerical algorithm for computationally correcting the effect of material dispersion on OCT reflectance data for homogeneous and stratified media. This is experimentally demonstrated by correcting the image of a polydimethylsiloxane microfluidic structure and of glass slides. The algorithm can be implemented using the fast Fourier transform. With broad spectral bandwidths and highly dispersive media or thick objects, dispersion correction becomes increasingly important.
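    An FFT-based correction of this kind amounts to multiplying the spectral data by the conjugate of the dispersion phase before inverse transforming. A minimal sketch with an assumed quadratic (group-velocity-dispersion-like) phase; the numbers are illustrative, not tied to a particular medium:

```python
import numpy as np

n = 1024
w = np.fft.fftfreq(n)                  # normalised optical frequency axis
spectrum = np.exp(-(w / 0.05) ** 2)    # Gaussian source spectrum

phi = 2000.0 * w**2                    # assumed quadratic dispersion phase
blurred = spectrum * np.exp(1j * phi)       # dispersed interferogram
corrected = blurred * np.exp(-1j * phi)     # numerical compensation

# Axial point-spread peaks before and after correction
peak_blurred = np.abs(np.fft.ifft(blurred)).max()
peak_corrected = np.abs(np.fft.ifft(corrected)).max()
```

Removing the quadratic phase restores the transform-limited axial peak; for stratified media the same multiplication is applied depth-segment by depth-segment with the phase accumulated up to each interface.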

  11. Digital Algorithm for Dispersion Correction in Optical Coherence Tomography for Homogeneous and Stratified Media

    NASA Astrophysics Data System (ADS)

    Marks, Daniel L.; Oldenburg, Amy L.; Reynolds, J. Joshua; Boppart, Stephen A.

    2003-01-01

    The resolution of optical coherence tomography (OCT) often suffers from blurring caused by material dispersion. We present a numerical algorithm for computationally correcting the effect of material dispersion on OCT reflectance data for homogeneous and stratified media. This is experimentally demonstrated by correcting the image of a polydimethylsiloxane microfluidic structure and of glass slides. The algorithm can be implemented using the fast Fourier transform. With broad spectral bandwidths and highly dispersive media or thick objects, dispersion correction becomes increasingly important.

  12. Digital Holographic Demonstration Systems by Stanford University and Siros Technologies

    NASA Astrophysics Data System (ADS)

    Hesselink, L.

    The performance of a holographic data storage system (HDSS) is measured by its useful capacity, transfer rate and access time. Data should never be lost, requiring a corrected bit error rate (BER) of 10^-12 to 10^-15. To compete successfully in the large storage marketplace, an HDSS drive should be cost-competitive with improved performance over other drives. The exception could be certain niche markets, where unique HDSS attributes — all-solid-state implementation with extremely short access times or associative retrieval — are attractive or required.

  13. Automated Geometry assisted PEC for electron beam direct write nanolithography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ocola, Leonidas E.; Gosztola, David J.; Rosenmann, Daniel

    Nanoscale geometry assisted proximity effect correction (NanoPEC) is demonstrated to improve PEC for nanoscale structures over standard PEC in terms of feature sharpness for sub-100 nm structures. The method was implemented on top of existing commercially available PEC software. Plasmonic arrays of crosses were fabricated using regular PEC and NanoPEC, and optical absorbance was measured. The results confirm that the improved sharpness of the structures leads to increased sharpness of the optical absorbance spectrum features. We also demonstrate that this method of PEC is applicable to arbitrarily shaped structures beyond crosses.

  14. Hypergol Maintenance Facility Hazardous Waste South Staging Areas, SWMU 070

    NASA Technical Reports Server (NTRS)

    Wilson, Deborah M.; Miller, Ralinda R.

    2015-01-01

    The purpose of this CMI Year 9 AGWMR is to present the actions taken and results obtained during the ninth year of implementation of Corrective Measures (CM) at HMF. Groundwater monitoring activities were conducted in accordance with the CMI Work Plan (Tetra Tech, 2005a) and CMI Site-Specific Safety and Health Plan (Tetra Tech, 2005b). Groundwater monitoring activities detailed in this Year 9 report include pre-startup sampling in February 2014 (prior to restarting the air sparging system) and quarterly performance monitoring in March, July, and September 2014.

  15. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoro, J. P.; McNamara, J.; Yorke, E.

    2012-10-15

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of a weighted average of the residual GTV deviations measured from the RC-CBCT scans, representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT.
Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction, seven required a single correction, one required two corrections, and one required three corrections. Mean residual GTV deviation (3D distance) following GTV-based systematic correction (mean ± 1 standard deviation 4.8 ± 1.5 mm) is significantly lower than for systematic skeletal-based (6.5 ± 2.9 mm, p = 0.015) and weekly skeletal-based correction (7.2 ± 3.0 mm, p = 0.001), but is not significantly lower than daily skeletal-based correction (5.4 ± 2.6 mm, p = 0.34). In two cases, first-day CBCT images reveal tumor changes (one showing tumor growth, the other showing large tumor displacement) that are not readily observed in radiographs. Differences in computed GTV deviations between respiration-correlated and respiration-averaged images are 0.2 ± 1.8 mm in the superior-inferior direction and of similar magnitude in the other directions. Conclusions: An off-line protocol to correct GTV-based systematic error in locally advanced lung tumor cases can be effective at reducing tumor deviations, although the findings need confirmation with larger patient statistics. In some cases, a single cone-beam CT can be useful for assessing tumor changes early in treatment, if more than a few days elapse between simulation and the start of treatment. Tumor deviations measured with respiration-averaged CT and CBCT images are consistent with those measured with respiration-correlated images; the respiration-averaged method is more easily implemented in the clinic.

  16. Method of measuring blood oxygenation based on spectroscopy of diffusely scattered light

    NASA Astrophysics Data System (ADS)

    Kleshnin, M. S.; Orlova, A. G.; Kirillin, M. Yu.; Golubyatnikov, G. Yu.; Turchin, I. V.

    2017-05-01

    A new approach to the measurement of blood oxygenation is developed and implemented, based on an original two-step algorithm that reconstructs the relative concentrations of biological chromophores (haemoglobin, water, lipids) from the spectra of diffusely scattered light measured at different distances from the radiation source. Numerical experiments and validation of the proposed approach on a biological phantom have shown the high accuracy of the reconstruction of the optical properties of the object in question, as well as the possibility of correctly calculating haemoglobin oxygenation in the presence of additive noise without calibration of the measuring device. The results of experimental studies in animals agree with previously published results obtained by other research groups and demonstrate the possibility of applying the developed method to the monitoring of blood oxygenation in tumour tissues.
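    The chromophore-reconstruction step can be sketched as linear spectral unmixing followed by an oxygenation ratio; the extinction matrix below is made up for illustration and is not a real HbO2/Hb extinction table:

```python
import numpy as np

# Linear spectral unmixing sketch: absorption at each wavelength is a
# weighted sum of chromophore extinction spectra. The matrix below is an
# illustrative placeholder, NOT real HbO2/Hb extinction coefficients.
wavelengths = [690, 750, 808, 850]            # nm, illustrative
E = np.array([                                # rows: wavelengths
    [0.30, 1.20],                             # cols: [HbO2, Hb]
    [0.45, 1.00],
    [0.80, 0.80],
    [1.10, 0.70],
])

def oxygenation(mu_a):
    """Least-squares chromophore concentrations from absorption spectra,
    then StO2 = HbO2 / (HbO2 + Hb)."""
    c, *_ = np.linalg.lstsq(E, np.asarray(mu_a), rcond=None)
    return c[0] / (c[0] + c[1])

# Round trip: synthesise mu_a for 70% oxygenation, total Hb = 100 uM
c_true = np.array([70.0, 30.0])
sto2 = oxygenation(E @ c_true)
```

Because StO2 is a ratio of concentrations, multiplicative instrument factors common to all wavelengths cancel, which is the intuition behind the calibration-free claim in the record.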

  17. Implementation of Coupled Skin Temperature Analysis and Bias Correction in a Global Atmospheric Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo

    2004-01-01

    The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction where the analysis increment and bias correction term is passed into the CLM2 and considered a forcing term in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observation Period (CEOP). We will concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature will be used for validation. Several assimilations were conducted and preliminary results will be presented.
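    The idea of feeding the analysis increment and a bias correction term back into the tendency equation can be sketched with a scalar toy model; the gains, drift rate, and cycle lengths are illustrative, not fvDAS/CLM2 values:

```python
# Toy illustration of bias-aware assimilation: a scalar "skin temperature"
# with a constant warm model drift is nudged toward observations, while a
# bias estimate (updated only at analysis times) is subtracted from the
# tendency at every timestep. All numbers are illustrative.
obs = 300.0          # observed skin temperature, assumed constant
drift = 0.05         # model warm bias per timestep
gain, beta = 0.5, 0.2
T, b_hat = 300.0, 0.0

for cycle in range(80):
    for _ in range(6):                 # forecast steps between analyses
        T += drift - b_hat             # bias correction enters the tendency
    inc = gain * (obs - T)             # analysis increment
    T += inc
    b_hat -= beta * inc                # slowly absorb the increment as bias
```

In this toy setting the bias estimate converges to the true drift and the analyzed temperature locks onto the observations; without the `b_hat` term the forecast would re-drift between every analysis.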

  18. Quantum Error Correction with a Globally-Coupled Array of Neutral Atom Qubits

    DTIC Science & Technology

    2013-02-01

    magneto - optical trap ) located at the center of the science cell. Fluorescence...Bottle beam trap GBA Gaussian beam array EMCCD electron multiplying charge coupled device microsec. microsecond MOT Magneto - optical trap QEC quantum error correction qubit quantum bit ...developed and implemented an array of neutral atom qubits in optical traps for studies of quantum error correction. At the end of the three year

  19. Bias reduction in repeated-measures observational studies by the use of propensity score: the case of enteral sedation for critically ill patients.

    PubMed

    Umbrello, Michele; Mistraletti, Giovanni; Corbella, Davide; Cigada, Marco; Salini, Silvia; Morabito, Alberto; Iapichino, Gaetano

    2012-12-01

    Within the evidence-based medicine paradigm, randomized controlled trials represent the "gold standard" for producing reliable evidence. However, planning and implementing randomized controlled trials in critical care medicine faces limitations because of intrinsic and structural problems. As a consequence, observational studies still occur frequently. In these cases, the propensity score (PS) (the probability of receiving a treatment conditional on observed covariates) is an increasingly used technique to adjust the results. Few studies have addressed the specific issue of PS correction in repeated-measures designs. Three techniques for correcting the analysis of nonrandomized designs (matching, stratification, regression adjustment) are presented in tutorial form and applied to a real case study: the comparison between intravenous and enteral sedative therapy in the intensive care unit setting. After showing the results before and after the use of PS, we suggest that such a tool allows the bias associated with the observational nature of the study to be partially overcome. It permits correction of the estimates for any observed covariate, while unobserved confounders cannot be controlled for. The propensity score represents a useful additional tool for estimating the effects of treatments in nonrandomized studies. In the case study, an enteral sedation approach was as effective as an intravenous regimen, allowing for a lower level of sedation and sparing of resources. Copyright © 2012 Elsevier Inc. All rights reserved.
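    The matching technique can be sketched as 1:1 nearest-neighbour matching on precomputed propensity scores; the data below are synthetic and the with-replacement convention is an illustrative choice, not the study's estimator:

```python
import numpy as np

def att_by_ps_matching(ps_treated, y_treated, ps_control, y_control):
    """1:1 nearest-neighbour propensity-score matching (with replacement):
    each treated unit is matched to the control with the closest PS, and
    the average treatment effect on the treated (ATT) is the mean outcome
    difference over matched pairs."""
    ps_t = np.asarray(ps_treated)
    ps_c = np.asarray(ps_control)
    idx = np.abs(ps_c[None, :] - ps_t[:, None]).argmin(axis=1)
    return float(np.mean(np.asarray(y_treated) - np.asarray(y_control)[idx]))

# Synthetic demo: outcome = 2*PS baseline, plus a treatment effect of 1.0
ps_t = np.array([0.3, 0.5, 0.7])
ps_c = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y_t = 2 * ps_t + 1.0        # treated: baseline + effect
y_c = 2 * ps_c              # controls: baseline only
att = att_by_ps_matching(ps_t, y_t, ps_c, y_c)
```

With exact PS matches available, the matched-pair differences recover the treatment effect exactly; real data additionally require a caliper and balance diagnostics on the matched sample.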

  20. Closure Report for Corrective Action Unit 573: Alpha Contaminated Sites Nevada National Security Site, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Patrick

    This Closure Report (CR) presents information supporting the closure of Corrective Action Unit (CAU) 573: Alpha Contaminated Sites, Nevada National Security Site, Nevada. CAU 573 comprises two corrective action sites (CASs): 05-23-02, GMX Alpha Contaminated Area (closure in place), and 05-45-01, Atmospheric Test Site - Hamilton (clean closure). The purpose of this CR is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 573 based on the implementation of the corrective actions. Corrective action activities were performed at Hamilton from May 25 through June 30, 2016, and at GMX from May 25 to October 27, 2016, as set forth in the Corrective Action Decision Document (CADD)/Corrective Action Plan (CAP) for Corrective Action Unit 573: Alpha Contaminated Sites, and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices. Verification sample results were evaluated against data quality objective criteria developed by stakeholders that included representatives from the Nevada Division of Environmental Protection and the DOE, National Nuclear Security Administration Nevada Field Office (NNSA/NFO) during the corrective action alternative (CAA) meeting held on November 24, 2015. Radiological doses exceeding the final action level were assumed to be present within the high contamination areas associated with CAS 05-23-02, thus requiring corrective action. It was also assumed that radionuclides were present at levels requiring corrective action within the soil/debris pile associated with CAS 05-45-01.
During the CAU 573 CAA meeting, the CAA of closure in place with a use restriction (UR) was selected by the stakeholders as the preferred corrective action of the high contamination areas at CAS 05-23-02 (GMX), which contain high levels of removable contamination; and the CAA of clean closure was selected by the stakeholders as preferred corrective action for the debris pile at CAS 05-45-01 (Hamilton). The closure in place was accomplished by posting signs containing a warning label on the existing contamination area fence line; and recording the FFACO UR and administrative UR in the FFACO database, the NNSA/NFO CAU/CAS files, and the management and operating contractor Geographic Information Systems. The clean closure was accomplished by excavating the soil/debris pile, disposing of the contents at the Area 5 Radioactive Waste Management Complex, and collecting verification samples. The corrective actions were implemented as stipulated in the CADD/CAP, and verification sample results confirm that the criteria for the completion of corrective actions have been met. Based on the implementation of these corrective actions, NNSA/NFO provides the following recommendations: No further corrective actions are necessary for CAU 573; The Nevada Division of Environmental Protection should issue a Notice of Completion to NNSA/NFO for closure of CAU 573; CAU 573 should be moved from Appendix III to Appendix IV of the FFACO.

  1. A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors

    PubMed Central

    Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca

    2012-01-01

    Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. 
Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
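The LTI baseline described above can be illustrated with a one-exponential lag model (the paper's LTI correction uses an N = 4 multiexponential IRF, and the NLCSC method further makes the coefficients exposure dependent; this sketch uses hypothetical parameters). Because the IRF tail is exponential, deconvolution can be done frame by frame with a single recursive state:

```python
import numpy as np

# Single-exponential lag model: IRF h[0] = 1 - f, h[k>=1] = f*(1-a)*a**(k-1),
# which sums to 1 so the DC gain is unity. a, f are illustrative values only.
a, f = 0.85, 0.06

x = np.zeros(200)
x[20:120] = 1.0            # step exposure: rising and falling edges

# Forward-simulate the lagged detector signal y = x * h (causal convolution),
# keeping the running tail sum S[n] = sum_{k>=1} h[k] x[n-k] as state.
y = np.empty_like(x)
S = 0.0
for n in range(len(x)):
    y[n] = (1 - f) * x[n] + S
    S = a * S + f * (1 - a) * x[n]

# Recursive deconvolution: invert the same model frame by frame.
xhat = np.empty_like(y)
S = 0.0
for n in range(len(y)):
    xhat[n] = (y[n] - S) / (1 - f)
    S = a * S + f * (1 - a) * xhat[n]

first_frame_lag = y[120]   # residual signal in the first frame after beam-off
print(f"uncorrected 1st-frame lag: {first_frame_lag:.4f}, corrected: {xhat[120]:.2e}")
```

In this idealized LTI setting the inversion is exact; the point of the NLCSC method in the abstract is that real a-Si panels violate the LTI assumption, so the coefficients (here a and f) must track exposure intensity and stored charge.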

  2. 7 CFR 275.19 - Monitoring and evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING SYSTEM Corrective Action § 275.19... Project Area/Management Unit Corrective Action Plan is implemented and achieves the anticipated results...

  3. California State Implementation Plan; Navajo Nation; Salt River Pima-Maricopa Indian Community; Correcting Amendments

    EPA Pesticide Factsheets

    EPA published final rules in the Federal Register approving certain revisions to the California SIP. EPA included inaccurate amendatory instructions preventing incorporation of the actions into the CFR. All the errors are being corrected by this action.

  4. 75 FR 33573 - Rural Housing Service

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-14

    ... Housing Program (GRRHP) Demonstration Program for Fiscal Year 2010; Correction AGENCY: Rural Housing... in the Federal Register of May 10, 2010, announcing the implementation of a demonstration program under the section 538 Guaranteed Rural Rental Housing Program (GRRHP) for Fiscal Year 2010. A correction...

  5. Successful reentry: the perspective of private correctional health care providers.

    PubMed

    Mellow, Jeff; Greifinger, Robert B

    2007-01-01

    Due to public health and safety concerns, discharge planning is increasingly prioritized by correctional systems when preparing prisoners for their reintegration into the community. Annually, private correctional health care vendors provide $3 billion of health care services to inmates in correctional facilities throughout the U.S., but are rarely contracted to provide transitional health care. A discussion with 12 people representing five private nationwide correctional health care providers highlighted the barriers they face when implementing transitional health care and what templates of services health care companies could provide to states and counties to enhance the reentry process.

  6. Corrective Action Decision Document/Closure Report for Corrective Action Unit 371: Johnnie Boy Crater and Pin Stripe Nevada Test Site, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patrick Matthews

    This Corrective Action Decision Document/Closure Report has been prepared for Corrective Action Unit 371, Johnnie Boy Crater and Pin Stripe, located within Areas 11 and 18 at the Nevada Test Site, Nevada, in accordance with the Federal Facility Agreement and Consent Order (FFACO). Corrective Action Unit (CAU) 371 comprises two corrective action sites (CASs): • 11-23-05, Pin Stripe Contamination Area • 18-45-01, U-18j-2 Crater (Johnnie Boy) The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 371 based on the implementation of corrective actions. The corrective action of closure in place with administrative controls was implemented at both CASs. Corrective action investigation (CAI) activities were performed from January 8, 2009, through February 16, 2010, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 371: Johnnie Boy Crater and Pin Stripe. The approach for the CAI was divided into two facets: investigation of the primary release of radionuclides and investigation of other releases (migration in washes and chemical releases). The purpose of the CAI was to fulfill data needs as defined during the data quality objective (DQO) process. The CAU 371 dataset of investigation results was evaluated based on the data quality indicator parameters. This evaluation demonstrated the dataset is acceptable for use in fulfilling the DQO data needs. Analytes detected during the CAI were evaluated against final action levels (FALs) established in this document. Radiological doses exceeding the FAL of 25 millirem per year were not found to be present in the surface soil. However, it was assumed that radionuclides are present in subsurface media within the Johnnie Boy crater and the fissure at Pin Stripe.
Due to the assumption of radiological dose exceeding the FAL, corrective actions were undertaken that consist of implementing a use restriction and posting warning signs at each site. These use restrictions were recorded in the FFACO database; the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO) Facility Information Management System; and the NNSA/NSO CAU/CAS files. Therefore, NNSA/NSO provides the following recommendations: • No further corrective actions are necessary for CAU 371. • A Notice of Completion to NNSA/NSO is requested from the Nevada Division of Environmental Protection for closure of CAU 371. • Corrective Action Unit 371 should be moved from Appendix III to Appendix IV of the FFACO.

  7. Waveform synthesis for imaging and ranging applications

    DOEpatents

    Doerry, Armin W.; Dudley, Peter A.; Dubert, Dale F.; Tise, Bertice L.

    2004-12-07

    Frequency dependent corrections are provided for quadrature imbalance and Local Oscillator (LO) feed-through. An operational procedure filters imbalance and LO feed-through effects without prior calibration or equalization. Waveform generation can be adjusted/corrected in a synthetic aperture radar system (SAR), where a rolling phase shift is applied to the SAR's QDWS signal where it is demodulated in a receiver; unwanted energies, such as LO feed-through and/or imbalance energy, are separated from a desired signal in Doppler; the separated energy is filtered from the receiver leaving the desired signal; and the separated energy in the receiver is measured to determine the degree of imbalance that is represented by it. Calibration methods can also be implemented into synthesis. The degree of LO feed-through and imbalance can be used to determine calibration values that can then be provided as compensation for frequency dependent errors in components, such as the QDWS and SSB mixer, affecting quadrature signal quality.
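The Doppler-separation idea in this abstract can be sketched numerically. This is a hedged toy model, not the patented implementation: a rolling phase shift rides on the generated waveform, so the desired (band-limited) signal is tagged with a known Doppler offset, while LO feed-through leaks in after generation and stays at DC, where it can be measured and notched.

```python
import numpy as np

N = 512
n = np.arange(N)
phi = 2 * np.pi * 64 / N            # rolling phase increment: 64-bin Doppler offset

rng = np.random.default_rng(2)
# Band-limited desired signal (Doppler bandwidth << PRF), built in the frequency domain.
D = np.zeros(N, complex)
D[:32] = rng.standard_normal(32) + 1j * rng.standard_normal(32)
desired = np.fft.ifft(D)

# The desired echo carries the rolling phase shift; the LO feed-through does not.
lo_feedthrough = 0.5                # constant leakage term, stays at DC in Doppler
received = desired * np.exp(1j * phi * n) + lo_feedthrough

# Separate in the Doppler (frequency) domain: measure the DC bin, then notch it.
spec = np.fft.fft(received)
leak_power = abs(spec[0])           # measured feed-through -> usable as a calibration value
spec[0] = 0.0
filtered = np.fft.ifft(spec)

# Undo the rolling phase shift to recover the desired signal.
recovered = filtered * np.exp(-1j * phi * n)
err = np.max(np.abs(recovered - desired))
print(f"feed-through magnitude: {leak_power / N:.2f}, max recovery error: {err:.2e}")
```

Because the shifted desired spectrum occupies bins 64 to 95, the DC bin contains only the leakage; measuring it before notching mirrors the abstract's step of quantifying the separated energy to derive calibration values.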

  8. Updating finite element dynamic models using an element-by-element sensitivity methodology

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Hemez, Francois M.

    1993-01-01

    A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L sub 2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
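The "correct the parameters assuming the modes are exact" step can be shown on a minimal example. This sketch (not the authors' algorithm, which also handles mode expansion and error localization) updates one element stiffness of a two-DOF spring-mass chain so the model's first eigenvalue matches a "measured" one, using the classical eigenvalue sensitivity:

```python
import numpy as np

# Two-DOF spring-mass chain: the stiffness matrix assembles from elements k1, k2.
M = np.diag([1.0, 1.0])
def K(k1, k2):
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

k_true = (4.0, 2.0)
lam_meas = np.sort(np.linalg.eigvalsh(K(*k_true)))   # "test" eigenvalues (M = I)

# Start from an erroneous model of element 1 and correct it iteratively using
# the sensitivity d(lambda)/d(k1) = phi^T (dK/dk1) phi for a mass-normalized mode.
dK1 = np.array([[1.0, 0.0], [0.0, 0.0]])
k1 = 6.0                                             # initial (wrong) stiffness
for _ in range(20):
    lam, V = np.linalg.eigh(K(k1, k_true[1]))
    phi = V[:, 0] / np.sqrt(V[:, 0] @ M @ V[:, 0])   # mass-normalize mode 1
    residual = lam[0] - lam_meas[0]
    sens = phi @ dK1 @ phi
    k1 -= residual / sens                            # Newton step on the modal residual
print(f"updated k1 = {k1:.4f} (truth {k_true[0]})")
```

The element-level sensitivity matrix dK1 is what makes the element-by-element "zooming" of the paper possible: each candidate element contributes its own term to the residual gradient.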

  9. Study on SOC wavelet analysis for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Improving the accuracy of state-of-charge (SOC) prediction can reduce the conservatism and complexity of control strategies for LiFePO4 battery systems, such as scheduling, optimization, and planning. Based on an analysis of the relationship between historical SOC data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A wavelet neural network provides a high-precision forecast step, while measured external stress data are used to update the parameter estimates of the model in a correction step, so that the forecast model can adapt to the operating point of the LiFePO4 battery across its variable operating region under rated charge and discharge conditions. Test results show that the method yields a high-precision prediction model even when the input and output of the LiFePO4 battery change frequently.

  10. Bi-dimensional empirical mode decomposition based fringe-like pattern suppression in polarization interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Ren, Wenyi; Cao, Qizhi; Wu, Dan; Jiang, Jiangang; Yang, Guoan; Xie, Yingge; Wang, Guodong; Zhang, Sheqi

    2018-01-01

    Many observers using interference imaging spectrometers are plagued by the fringe-like pattern (FP) that occurs at optical wavelengths in the red and near-infrared regions. It complicates data processing steps such as spectrum calibration and information retrieval. An adaptive method based on bi-dimensional empirical mode decomposition was developed to suppress the nonlinear FP in a polarization interference imaging spectrometer. The FP and the corrected interferogram were separated effectively, and the stripes introduced by the CCD mosaic were suppressed. Nonlinear interferogram background removal and spectrum distortion correction were implemented as well. The method adaptively suppresses the nonlinear FP without prior experimental data or knowledge, and is potentially a powerful tool in Fourier transform spectroscopy, holographic imaging, optical measurement based on moire fringes, etc.

  11. Waveform Synthesizer For Imaging And Ranging Applications

    DOEpatents

    Dubbert, Dale F.; Dudley, Peter A.; Doerry, Armin W.; Tise, Bertice L.

    2004-12-28

    Frequency dependent corrections are provided for Local Oscillator (LO) feed-through. An operational procedure filters LO feed-through effects without prior calibration or equalization. Waveform generation can be adjusted/corrected in a synthetic aperture radar system (SAR), where a rolling phase shift is applied to the SAR's QDWS signal where it is demodulated in a receiver, unwanted energies, such as LO feed-through energy, are separated from a desired signal in Doppler; the separated energy is filtered from the receiver leaving the desired signal; and the separated energy in the receiver is measured to determine the degree of imbalance that is represented by it. Calibration methods can also be implemented into synthesis. The degree of LO feed-through can be used to determine calibration values that can then be provided as compensation for frequency dependent errors in components, such as the QDWS and SSB mixer, affecting quadrature signal quality.

  12. 40 CFR 258.56 - Assessment of corrective measures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Assessment of corrective measures. 258... Assessment of corrective measures. (a) Within 90 days of finding that any of the constituents listed in... assessment of corrective measures. Such an assessment must be completed within a reasonable period of time...

  13. 40 CFR 258.56 - Assessment of corrective measures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Assessment of corrective measures. 258... Assessment of corrective measures. (a) Within 90 days of finding that any of the constituents listed in... assessment of corrective measures. Such an assessment must be completed within a reasonable period of time...

  14. 40 CFR 258.56 - Assessment of corrective measures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Assessment of corrective measures. 258... Assessment of corrective measures. (a) Within 90 days of finding that any of the constituents listed in... assessment of corrective measures. Such an assessment must be completed within a reasonable period of time...

  15. Infant mortality surveillance in Recife, Pernambuco, Brazil: operationalization, strengths and limitations.

    PubMed

    Oliveira, Conceição Maria de; Bonfim, Cristine Vieira do; Guimarães, Maria José Bezerra; Frias, Paulo Germano; Antonino, Verônica Cristina Sposito; Medeiros, Zulma Maria

    2017-01-01

    To report the experience of infant mortality surveillance (IMS) in the municipality of Recife-PE, Brazil. A documentary research study and a query with key informants who participated in the implementation and consolidation of the IMS were conducted; data from the Mortality Information System (SIM) and from the surveillance worksheets were used to measure the coverage of the investigated deaths. The implementation of the IMS has occurred gradually since 2003; the strategy comprises (i) identification of deaths, (ii) investigation, (iii) discussion, and (iv) recommendations and correction of vital statistics. Upon completion of implementation (2006), 98.5% (256) of the deaths had been investigated and discussed, with the participation of those involved in the cases; in 2015, this coverage corresponded to 97.7%. The main recommendations consisted of expanding access to, and improving the coverage and quality of, primary, secondary, and tertiary care. IMS is able to support changes in health care practices, as well as the planning and organization of maternal and child care.

  16. Practice-based learning and improvement for institutions: a case report.

    PubMed

    Kirk, Susan E; Howell, R Edward

    2010-12-01

    In 2006, the University of Virginia became one of the first academic medical institutions to be placed on probation, after the Accreditation Council for Graduate Medical Education (ACGME) Institutional Review Committee implemented a new classification system for institutional reviews. After University of Virginia reviewed its practices and implemented needed changes, the institution was able to have probation removed and full accreditation restored. Whereas graduate medical education committees and designated institutional officials are required to conduct internal reviews of each ACGME-accredited program midway through its accreditation cycle, no similar requirement exists for institutions. As we designed corrective measures at the University of Virginia, we realized that regularly scheduled audits of the entire institution would have prevented the accumulation of deficiencies. We suggest that institutional internal reviews be implemented to ensure that the ACGME institutional requirements for graduate medical education are met. This process represents practice-based learning and improvement at the institutional level and may prevent other institutions from receiving unfavorable accreditation decisions.

  17. Practice-Based Learning and Improvement for Institutions: A Case Report

    PubMed Central

    Kirk, Susan E.; Howell, R. Edward

    2010-01-01

    Background In 2006, the University of Virginia became one of the first academic medical institutions to be placed on probation, after the Accreditation Council for Graduate Medical Education (ACGME) Institutional Review Committee implemented a new classification system for institutional reviews. Intervention After University of Virginia reviewed its practices and implemented needed changes, the institution was able to have probation removed and full accreditation restored. Whereas graduate medical education committees and designated institutional officials are required to conduct internal reviews of each ACGME–accredited program midway through its accreditation cycle, no similar requirement exists for institutions. Learning As we designed corrective measures at the University of Virginia, we realized that regularly scheduled audits of the entire institution would have prevented the accumulation of deficiencies. We suggest that institutional internal reviews be implemented to ensure that the ACGME institutional requirements for graduate medical education are met. This process represents practice-based learning and improvement at the institutional level and may prevent other institutions from receiving unfavorable accreditation decisions. PMID:22132290

  18. Entrance dose measurements for in‐vivo diode dosimetry: Comparison of correction factors for two types of commercial silicon diode detectors

    PubMed Central

    Zhu, X. R.

    2000-01-01

    Silicon diode dosimeters have been used routinely for in‐vivo dosimetry. Despite their popularity, an appropriate implementation of an in‐vivo dosimetry program using diode detectors remains a challenge for clinical physicists. One common approach is to relate the diode readout to the entrance dose, that is, dose to the reference depth of maximum dose such as dmax for the 10×10 cm2 field. Various correction factors are needed in order to properly infer the entrance dose from the diode readout, depending on field sizes, target‐to‐surface distances (TSD), and accessories (such as wedges and compensating filters). In some clinical practices, however, no correction factor is used. In this case, a diode‐dosimeter‐based in‐vivo dosimetry program may not serve the purpose effectively; that is, to provide an overall check of the dosimetry procedure. In this paper, we provide a formula to relate the diode readout to the entrance dose. Correction factors for TSD, field size, and wedges used in this formula are also clearly defined. Two types of commercial diode detectors, ISORAD (n‐type) and the newly available QED (p‐type) (Sun Nuclear Corporation), are studied. We compared correction factors for TSDs, field sizes, and wedges. Our results are consistent with the theory of radiation damage of silicon diodes. Radiation damage has been shown to be more serious for n‐type than for p‐type detectors. In general, both types of diode dosimeters require correction factors depending on beam energy, TSD, field size, and wedge. The magnitudes of corrections for QED (p‐type) diodes are smaller than ISORAD detectors. PACS number(s): 87.66.–a, 87.52.–g PMID:11674824
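A formula of the general multiplicative kind described here can be sketched as follows. The factor names and all numeric values below are illustrative assumptions, not the paper's measured data; clinical correction factors must be commissioned per diode model and beam energy.

```python
def entrance_dose(reading, f_cal, cf_tsd=1.0, cf_field=1.0, cf_wedge=1.0):
    """Infer entrance dose from a diode reading (hypothetical multiplicative model).

    D_entrance = M * F_cal * CF_TSD * CF_field * CF_wedge, where F_cal is the
    calibration factor under reference conditions (10x10 cm^2 field, reference
    TSD, open beam) and each CF corrects one departure from those conditions.
    """
    return reading * f_cal * cf_tsd * cf_field * cf_wedge

# Example: a reading of 98.2 nC with made-up correction factors for a wedged,
# non-reference-TSD, non-reference-field setup.
dose = entrance_dose(98.2, f_cal=1.02, cf_tsd=1.013, cf_field=0.996, cf_wedge=1.025)
print(f"entrance dose ~ {dose:.1f} cGy")
```

Skipping the CF terms (setting them to 1.0) reproduces the "no correction factor" practice the abstract warns about: the check then silently absorbs TSD, field-size, and wedge effects into apparent dose discrepancies.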

  19. Orbit-orbit relativistic correction calculated with all-electron molecular explicitly correlated Gaussians.

    PubMed

    Stanke, Monika; Palikot, Ewa; Kȩdziera, Dariusz; Adamowicz, Ludwik

    2016-12-14

    An algorithm for calculating the first-order electronic orbit-orbit magnetic interaction correction for an electronic wave function expanded in terms of all-electron explicitly correlated molecular Gaussian (ECG) functions with shifted centers is derived and implemented. The algorithm is tested in calculations concerning the H 2 molecule. It is also applied in calculations for LiH and H 3 + molecular systems. The implementation completes our work on the leading relativistic correction for ECGs and paves the way for very accurate ECG calculations of ground and excited potential energy surfaces (PESs) of small molecules with two and more nuclei and two and more electrons, such as HeH - , H 3 + , HeH 2 + , and LiH 2 + . The PESs will be used to determine rovibrational spectra of the systems.
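For orientation, the orbit-orbit operator whose first-order expectation value is computed here is conventionally written in the Breit-Pauli form below (atomic units). This is the standard textbook form; the precise convention used by the authors should be checked against the paper itself:

```latex
\hat{H}_{\mathrm{OO}}
  = -\frac{\alpha^{2}}{2}\sum_{i<j}\frac{1}{r_{ij}}
    \left[\mathbf{p}_{i}\cdot\mathbf{p}_{j}
    + \frac{\mathbf{r}_{ij}\,(\mathbf{r}_{ij}\cdot\mathbf{p}_{i})\cdot\mathbf{p}_{j}}{r_{ij}^{2}}\right],
\qquad
E_{\mathrm{OO}}^{(1)} = \langle \Psi \,|\, \hat{H}_{\mathrm{OO}} \,|\, \Psi \rangle ,
```

where the wave function Psi is the ECG expansion and the first-order correction is its expectation value over that operator.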

  20. Coded Modulation in C and MATLAB

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Andrews, Kenneth S.

    2011-01-01

    This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
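The decoder side of such software can be illustrated with a toy example. The parity-check matrix below is a tiny hand-made code, not one of the AR4JA LDPC codes (whose matrices are far larger and defined by the CCSDS standard); it only demonstrates the hard-decision bit-flipping style of LDPC decoding.

```python
import numpy as np

# Toy (n=6) parity-check matrix -- illustrative only, not an AR4JA code.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

def bit_flip_decode(r, H, max_iter=10):
    """Hard-decision bit-flipping: repeatedly flip the bit in the most failed checks."""
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                   # all parity checks satisfied
        fails = H.T @ syndrome         # failed-check count per bit
        r[np.argmax(fails)] ^= 1
    return r

codeword = np.array([1, 0, 1, 1, 1, 0])    # satisfies H @ c % 2 == 0
received = codeword.copy()
received[2] ^= 1                           # inject a single bit error
decoded = bit_flip_decode(received, H)
print("corrected:", np.array_equal(decoded, codeword))
```

Production decoders for the AR4JA codes use soft-decision message passing rather than bit flipping, but the syndrome check shown here (H times the candidate word, mod 2) is the same stopping criterion.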
