Sample records for systematic lossy error

  1. A microcontroller-based microwave free-space measurement system for permittivity determination of lossy liquid materials.

    PubMed

    Hasar, U C

    2009-05-01

    A microcontroller-based noncontact and nondestructive microwave free-space measurement system for real-time and dynamic determination of the complex permittivity of lossy liquid materials has been proposed. The system comprises two main sections: microwave and electronic. While the microwave section measures only the amplitudes of reflection coefficients, the electronic section processes these data and determines the complex permittivity using a general-purpose microcontroller. The proposed method eliminates elaborate liquid sample holder preparation and requires only microwave components that perform reflection measurements from one side of the holder. In addition, it explicitly determines the permittivity of lossy liquid samples from reflection measurements at different frequencies without any knowledge of the sample thickness. In order to reduce systematic errors in the system, we propose a simple calibration technique, which employs simple and readily available standards. The measurement system is a good candidate for industrial applications.

  2. IPTV multicast with peer-assisted lossy error control

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate impulse noises in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution in which the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable, and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also improves resistance to impulse noise.

  3. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    PubMed Central

    Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi

    2015-01-01

    Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) can be applied to achieve efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
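
    A minimal sketch of the random-sampling model described above, assuming the signal is sparse in the DCT domain and using plain orthogonal matching pursuit (OMP) at the receiver; the loss rate, signal size, and recovery method are illustrative assumptions, not the paper's exact configuration.

    ```python
    import numpy as np
    from scipy.fft import idct

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: recover a k-sparse s from y = A @ s."""
        residual, support, coef = y.copy(), [], None
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            sub = A[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            residual = y - sub @ coef
        s_hat = np.zeros(A.shape[1])
        s_hat[support] = coef
        return s_hat

    rng = np.random.default_rng(0)
    n, k, loss = 256, 8, 0.3
    s = np.zeros(n)
    s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    Psi = idct(np.eye(n), norm="ortho", axis=0)   # signal is sparse in the DCT domain
    x = Psi @ s                                   # samples packetized and transmitted

    kept = rng.random(n) > loss                   # packet loss = random sampling of x
    s_hat = omp(Psi[kept], x[kept], k)            # receiver-side CS reconstruction

    print("relative error:", np.linalg.norm(Psi @ s_hat - x) / np.linalg.norm(x))
    ```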

  4. A radio-aware routing algorithm for reliable directed diffusion in lossy wireless sensor networks.

    PubMed

    Kim, Yong-Pyo; Jung, Euihyun; Park, Yong-Jin

    2009-01-01

    In Wireless Sensor Networks (WSNs), transmission errors occur frequently due to node failure, battery discharge, contention, or interference by objects. Although Directed Diffusion has been considered a prominent data-centric routing algorithm, it has some weaknesses in the face of unexpected network errors. In order to address these problems, we propose a radio-aware routing algorithm to improve the reliability of Directed Diffusion in lossy WSNs. The proposed algorithm is aware of the network status based on radio information from the MAC and PHY layers obtained through a cross-layer design. The cross-layer design provides detailed information about the current status of the wireless network, such as link quality or transmission errors of communication links. The radio information, reflecting varying network conditions and link quality, is used to determine an alternative route that provides reliable data transmission in lossy WSNs. According to the simulation results, the radio-aware reliable routing algorithm showed better performance in both grid and random topologies with various error rates. The proposed solution suggests the possibility of providing a reliable transmission method for QoS requests in lossy WSNs based on radio awareness. Energy and mobility issues will be addressed in future work.

  5. Constitutive parameter measurements of lossy materials

    NASA Technical Reports Server (NTRS)

    Dominek, A.; Park, A.

    1989-01-01

    The electrical constitutive parameters of lossy materials are considered. A discussion of the NRL arch for lossy coatings is presented, involving analysis of the reflected field using the geometrical theory of diffraction (GTD) and physical optics (PO). The actual values of these parameters can be obtained through a traditional transmission technique, which is examined from an error-analysis standpoint. Alternate sample geometries are suggested for this technique to reduce the sample tolerance requirements for accurate parameter determination. The performance of one alternate geometry is given.

  6. Causal impulse response for circular sources in viscous media

    PubMed Central

    Kelly, James F.; McGough, Robert J.

    2008-01-01

    The causal impulse response of the velocity potential for the Stokes wave equation is derived for calculations of transient velocity potential fields generated by circular pistons in viscous media. The causal Green’s function is numerically verified using the material impulse response function approach. The causal, lossy impulse response for a baffled circular piston is then calculated within the near field and the far field regions using expressions previously derived for the fast near field method. Transient velocity potential fields in viscous media are computed with the causal, lossy impulse response and compared to results obtained with the lossless impulse response. The numerical error in the computed velocity potential field is quantitatively analyzed for a range of viscous relaxation times and piston radii. Results show that the largest errors are generated in locations near the piston face and for large relaxation times, and errors are relatively small otherwise. Unlike previous frequency-domain methods that require numerical inverse Fourier transforms for the evaluation of the lossy impulse response, the present approach calculates the lossy impulse response directly in the time domain. The results indicate that this causal impulse response is ideal for time-domain calculations that simultaneously account for diffraction and quadratic frequency-dependent attenuation in viscous media. PMID:18397018

  7. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content, and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and, through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with the Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that compression ratios of 4 to 7.5 can be obtained with the above methods without any perceptible change in the appearance of the video sequences.
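
    For readers unfamiliar with the staircase algorithm mentioned above, the sketch below simulates a 1-up/1-down adaptive staircase against a synthetic observer; the logistic observer model, step-size schedule, and stopping rule are illustrative assumptions, not the authors' exact protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def observer_detects(ratio, threshold=6.0, slope=1.5):
        """Simulated observer: detection probability rises with compression ratio."""
        p = 1.0 / (1.0 + np.exp(-slope * (ratio - threshold)))
        return rng.random() < p

    ratio, step, direction = 2.0, 1.0, +1
    reversals = []
    while len(reversals) < 8:                     # stop after 8 reversals
        seen = observer_detects(ratio)
        new_direction = -1 if seen else +1        # 1-up/1-down rule
        if new_direction != direction:
            reversals.append(ratio)
            step = max(step * 0.5, 0.25)          # shrink the step at each reversal
        direction = new_direction
        ratio = max(1.0, ratio + direction * step)

    print("estimated visually lossless limit ~", np.mean(reversals[-4:]))
    ```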

  8. A Real-Time High Performance Data Compression Technique For Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on block-transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desirable compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.

  9. MQCC: Maximum Queue Congestion Control for Multipath Networks with Blockage

    DTIC Science & Technology

    2015-10-19

    ... higher error rates in wireless networks result in a great deal of "false" congestion indications, resulting in underutilization of the network [4] ... approaches that are relevant to lossy wireless networks. Multipath TCP (MPTCP) schemes [9], [10] explore the design and implementation of multipath ... attempts to "fix" TCP to work with lossy wireless networks using existing techniques. The authors have taken the view that because packet losses are ...

  10. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    PubMed

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection, and save power in wireless transmission. Applying the method to electrocardiogram (ECG) signals, the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, on the MIT-BIH database. The power reduction is demonstrated using a Bluetooth transceiver: transmission power is reduced to 18% for lossy and 53% for lossless transmission, respectively. Options for hybrid transmission mode, adaptive rate selection, and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
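
    The "lossy plus residual" structure described above can be sketched in a few lines: a crude DCT-thresholding stage stands in for the paper's lossy ECG compressor, and zlib stands in for its entropy coder; both substitutions are illustrative assumptions.

    ```python
    import zlib
    import numpy as np
    from scipy.fft import dct, idct

    def lossy_encode(x, keep=0.1):
        """Crude lossy stage: keep only the largest 10% of DCT coefficients."""
        c = dct(x, norm="ortho")
        c[np.abs(c) < np.quantile(np.abs(c), 1 - keep)] = 0.0
        return c

    rng = np.random.default_rng(0)
    ecg = np.cumsum(rng.standard_normal(4096)).astype(np.int16)   # stand-in signal

    c = lossy_encode(ecg.astype(float))
    lossy_rec = np.rint(idct(c, norm="ortho")).astype(np.int16)

    residual = ecg - lossy_rec                      # small amplitude, compresses well
    packed = zlib.compress(residual.tobytes(), 9)   # stand-in for entropy coding

    restored = lossy_rec + np.frombuffer(zlib.decompress(packed), dtype=np.int16)
    assert np.array_equal(restored, ecg)            # lossless restoration on demand
    print("residual bytes:", len(packed), "raw bytes:", ecg.nbytes)
    ```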

  11. Systematic network coding for two-hop lossy transmissions

    NASA Astrophysics Data System (ADS)

    Li, Ye; Blostein, Steven; Chan, Wai-Yip

    2015-12-01

    In this paper, we consider network transmissions over a single or multiple parallel two-hop lossy paths. These scenarios occur in applications such as sensor networks or WiFi offloading. Random linear network coding (RLNC), where previously received packets are re-encoded at intermediate nodes and forwarded, is known to be a capacity-achieving approach for these networks. However, a major drawback of RLNC is its high encoding and decoding complexity. In this work, a systematic network coding method is proposed. We show through both analysis and simulation that the proposed method achieves higher end-to-end rate as well as lower computational cost than RLNC for finite field sizes and finite-sized packet transmissions.
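
    A toy version of the systematic scheme, assuming GF(2) coefficients for simplicity (the paper's coding field and protocol details differ): the source packets are sent uncoded first, followed by random XOR combinations, and the receiver decodes by Gaussian elimination.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K, loss = 8, 0.3                        # packets per generation, path loss rate
    src = rng.integers(0, 256, size=(K, 64), dtype=np.uint8)  # K packets, 64 bytes

    def gf2_combine(coeffs, pkts):
        """XOR together the packets selected by a GF(2) coefficient vector."""
        out = np.zeros(pkts.shape[1], dtype=np.uint8)
        for c, p in zip(coeffs, pkts):
            if c:
                out ^= p
        return out

    # Systematic phase: the K originals go out uncoded; repair phase: random mixes.
    sent = [(np.eye(K, dtype=np.uint8)[i], src[i]) for i in range(K)]
    while len(sent) < 2 * K:
        c = rng.integers(0, 2, K, dtype=np.uint8)
        sent.append((c, gf2_combine(c, src)))

    recv = [pkt for pkt in sent if rng.random() > loss]       # lossy two-hop path

    def gf2_solve(C, P, K):
        """Gauss-Jordan elimination over GF(2); returns K decoded packets or None."""
        C, P = C.copy(), P.copy()
        for col in range(K):
            rows = [r for r in range(col, len(C)) if C[r, col]]
            if not rows:
                return None                 # rank deficient: request more repairs
            r0 = rows[0]
            C[[col, r0]], P[[col, r0]] = C[[r0, col]].copy(), P[[r0, col]].copy()
            for r in range(len(C)):
                if r != col and C[r, col]:
                    C[r] ^= C[col]
                    P[r] ^= P[col]
        return P[:K]

    C = np.array([c for c, _ in recv], dtype=np.uint8)
    P = np.array([p for _, p in recv], dtype=np.uint8)
    out = gf2_solve(C, P, K)
    print("decoded:", out is not None and np.array_equal(out, src))
    ```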

  12. The compression–error trade-off for large gridded data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silver, Jeremy D.; Zender, Charles S.

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
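
    The difference between scalar linear packing and layer-packing is easy to demonstrate: with one scale/offset pair for the whole array, the layer with the smallest magnitudes loses most of its precision, while per-layer parameters spread the 16-bit budget evenly. The synthetic field below is an illustrative stand-in for the geoscientific data the paper uses.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic field: 30 vertical layers whose magnitudes span ~6 decades,
    # mimicking a quantity that varies strongly in the vertical.
    field = np.array([10.0 ** (k / 5.0) * (1 + 0.01 * rng.standard_normal((64, 64)))
                      for k in range(30)])

    def pack(data, axis=None):
        """Linear scaling to uint16; scalar (axis=None) or per-layer parameters."""
        lo = data.min(axis=axis, keepdims=axis is not None)
        hi = data.max(axis=axis, keepdims=axis is not None)
        scale = (hi - lo) / 65535.0
        idx = np.rint((data - lo) / scale).astype(np.uint16)
        return idx * scale + lo          # unpacked values, for error measurement

    scalar_err = np.abs(pack(field) - field)
    layer_err = np.abs(pack(field, axis=(1, 2)) - field)
    print("max relative error, scalar packing:", (scalar_err / field).max())
    print("max relative error, layer packing :", (layer_err / field).max())
    ```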

  13. The compression–error trade-off for large gridded data sets

    DOE PAGES

    Silver, Jeremy D.; Zender, Charles S.

    2017-01-27

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.

  14. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset, such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
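
    The XOR-leading-zero idea is concrete enough to show directly: two nearby doubles share their high-order bits, so the XOR of their bit patterns starts with a run of zeros that an encoder can exploit, and an additive shift can lengthen that run when the pair straddles a power of two. This standalone illustration is not the paper's algorithm, just the bit-level effect it optimizes.

    ```python
    import struct

    def f64_bits(x):
        """IEEE-754 bit pattern of a double, as an integer."""
        return struct.unpack("<Q", struct.pack("<d", x))[0]

    def xor_leading_zeros(a, b):
        """Length of the leading zero run in the XOR of two double bit patterns."""
        x = f64_bits(a) ^ f64_bits(b)
        return 64 if x == 0 else 64 - x.bit_length()

    # Nearby values with the same exponent share many leading bits.
    print(xor_leading_zeros(1.2345678, 1.2345679))        # long zero run

    # A pair straddling a power of two already differs in the exponent field...
    print(xor_leading_zeros(0.9999, 1.0001))              # short zero run
    # ...but shifting both by an offset moves them into a common binade.
    print(xor_leading_zeros(0.9999 + 1.5, 1.0001 + 1.5))  # run lengthens again
    ```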

  15. Metamaterial-based lossy anisotropic epsilon-near-zero medium for energy collimation

    NASA Astrophysics Data System (ADS)

    Shen, Nian-Hai; Zhang, Peng; Koschny, Thomas; Soukoulis, Costas M.

    2016-06-01

    A lossy anisotropic epsilon-near-zero (ENZ) medium may lead to a counterintuitive phenomenon of omnidirectional bending-to-normal refraction [S. Feng, Phys. Rev. Lett. 108, 193904 (2012), 10.1103/PhysRevLett.108.193904], which offers a fabulous strategy for energy collimation and energy harvesting. Here, in the scope of effective medium theory, we systematically investigate two simple metamaterial configurations, i.e., metal-dielectric-layered structures and the wire medium, to explore the possibility of fulfilling the conditions of such an anisotropic lossy ENZ medium by playing with materials' parameters. Both realistic metamaterial structures and their effective medium equivalences have been numerically simulated, and the results are in excellent agreement with each other. Our study provides clear guidance and therefore paves the way towards the search for proper designs of anisotropic metamaterials for a decent effect of energy collimation and wave-front manipulation.

  16. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

    In this paper, we evaluate lossless and lossy compression techniques to compress quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding of JPEG lossless (JPEG-LS), JPEG2000 (JP2k), and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JP2k outperforms the other methods by achieving the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time in the lossy case. In addition, our compression results with both algorithms demonstrate that with high CR values the three-dimensional profile of the RBC can be preserved and the morphological and biochemical parameters can still be within the range of reported values.

  17. A singular-value method for reconstruction of nonradial and lossy objects.

    PubMed

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.

  18. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.

  19. Analysis of counterfactual quantum key distribution using error-correcting theory

    NASA Astrophysics Data System (ADS)

    Li, Yan-Bing

    2014-10-01

    Counterfactual quantum key distribution is an interesting direction in quantum cryptography and has been realized by several researchers. However, it has been pointed out that it is information-theoretically insecure when used over a highly lossy channel. In this paper, we revisit its security from an error-correcting-theory point of view. The analysis indicates that the security flaw stems from the fact that the error rate in the users' raw key pair is as high as that under Eve's attack when the loss rate exceeds 50%.

  20. Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions

    NASA Technical Reports Server (NTRS)

    Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina

    2002-01-01

    OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group (JPEG) compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between readings. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic value of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was a significant difference between the mean absolute errors of uncompressed and compressed images (P < .05). After converting the five-point scores to two-level diagnostic values, diagnostic accuracy was strongly correlated (R^2 = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.

  1. Lossy compression for Animated Web Visualisation

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.

    2017-12-01

    This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high definition videos. This enabled us to achieve high rates of compression while remaining compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.

  2. On lossy transform compression of ECG signals with reference to deformation of their parameter values.

    PubMed

    Koski, Antti; Tossavainen, Timo; Juhola, Martti

    2004-01-01

    Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature, compression efficacy has been investigated only in terms of how much known or newly developed methods reduced the storage required by the compressed forms of the original ECG signals. Sometimes statistical signal evaluations based on, for example, the root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques on other biomedical signals. Our method of so-called successive approximation quantization used with wavelets was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods alter the values of medical parameters (medical information) computed from the signals. Since the methods are lossy, some information is lost due to compression when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space while the values of their medical parameters changed less than 5% due to compression, which indicates reliable results.

  3. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
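
    The square-root trick can be sketched independently of Bernstein et al.'s exact algorithm: because Poisson noise scales as sqrt(N), quantizing sqrt(N) with a fixed step adds an error that stays below the shot noise at every signal level. The sky level and step size below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sky = 200.0                                   # mean counts per pixel (assumed)
    pixels = rng.poisson(sky, size=100_000).astype(float)

    # Shot noise is sqrt(N), so sqrt(N) has roughly constant noise of ~0.5;
    # quantizing sqrt(N) with a step well below that keeps the added error
    # below the shot noise by construction.
    step = 0.5
    codes = np.rint(np.sqrt(pixels) / step).astype(np.uint16)   # values transmitted
    decoded = (codes * step) ** 2

    print("rms error added:", (decoded - pixels).std(), "vs shot noise:", np.sqrt(sky))
    print("bits per pixel before the lossless stage:", int(codes.max()).bit_length())
    ```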

  4. Recent advances in lossy compression of scientific floating-point data

    NASA Astrophysics Data System (ADS)

    Lindstrom, P.

    2017-12-01

    With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
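
    ZFP and FPZIP are real libraries with their own algorithms (block transforms and predictive coding, respectively); the sketch below shows only the underlying precision-for-compressibility trade in its simplest form, zeroing low-order mantissa bits so a lossless coder can squeeze the result. The synthetic field and bit counts are illustrative.

    ```python
    import zlib
    import numpy as np

    def truncate_mantissa(a, keep_bits):
        """Zero all but the top keep_bits of each float64 mantissa (52 bits)."""
        mask = np.uint64(0xFFFFFFFFFFFFFFFF) << np.uint64(52 - keep_bits)
        return (a.view(np.uint64) & mask).view(np.float64)

    rng = np.random.default_rng(0)
    data = np.cumsum(rng.standard_normal(1_000_000)) * 1e-3   # smooth stand-in field

    for keep in (52, 24, 12):                                 # 52 = lossless
        t = truncate_mantissa(data, keep)
        ratio = data.nbytes / len(zlib.compress(t.tobytes(), 6))
        print(f"{keep:2d} mantissa bits: ratio {ratio:4.1f}, "
              f"max abs err {np.abs(t - data).max():.2e}")
    ```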

  5. Stability Analysis of Multi-Sensor Kalman Filtering over Lossy Networks

    PubMed Central

    Gao, Shouwan; Chen, Pengpeng; Huang, Dan; Niu, Qiang

    2016-01-01

    This paper studies the remote Kalman filtering problem for a distributed system setting with multiple sensors that are located at different physical locations. Each sensor encapsulates its own measurement data into a single packet and transmits the packet to the remote filter via a distinct lossy channel. For each communication channel, a time-homogeneous Markov chain is used to model the normal operating condition of packet delivery and losses. Based on the Markov model, a necessary and sufficient condition is obtained that guarantees the stability of the mean estimation error covariance. In particular, the stability condition is explicitly expressed as a simple inequality whose parameters are the spectral radius of the system state matrix and the transition probabilities of the Markov chains. In contrast to existing related results, our method imposes less restrictive conditions on the systems. Finally, the results are illustrated by simulation examples. PMID:27104541
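
    A quick simulation of the setup, with a two-state (Gilbert-type) Markov loss channel and a single sensor for clarity; the system matrices and transition probabilities are illustrative assumptions. The error covariance recursion is standard Kalman filtering in which the measurement update is simply skipped when the packet is lost.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[1.2, 0.1], [0.0, 0.9]])        # unstable mode: rho(A) = 1.2
    C = np.array([[1.0, 0.0]])
    Q, R = 0.1 * np.eye(2), np.array([[0.5]])

    # Two-state Markov channel: p = P(loss -> ok), q = P(ok -> loss).
    p, q = 0.7, 0.2
    ok = True
    P = np.eye(2)                                 # estimation error covariance
    trace = []
    for _ in range(500):
        ok = rng.random() < (1 - q if ok else p)  # evolve the channel state
        P = A @ P @ A.T + Q                       # time update always happens
        if ok:                                    # measurement update only on arrival
            K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
            P = (np.eye(2) - K @ C) @ P
        trace.append(np.trace(P))

    print("mean steady-state trace(P):", np.mean(trace[100:]))
    ```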

  6. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.

  7. QualComp: a new lossy compressor for quality scores based on rate distortion theory

    PubMed Central

    2013-01-01

    Background: Next Generation Sequencing technologies have revolutionized many fields in biology by reducing the time and cost required for sequencing. As a result, large amounts of sequencing data are being generated. A typical sequencing data file may occupy tens or even hundreds of gigabytes of disk space, prohibitively large for many users. This data consists of both the nucleotide sequences and per-base quality scores that indicate the level of confidence in the readout of these sequences. Quality scores account for about half of the required disk space in the commonly used FASTQ format (before compression), and therefore the compression of the quality scores can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Results: In this paper, we present a new scheme for the lossy compression of the quality scores, to address the problem of storage. Our framework allows the user to specify the rate (bits per quality score) prior to compression, independent of the data to be compressed. Our algorithm can work at any rate, unlike other lossy compression algorithms. We envisage our algorithm as being part of a more general compression scheme that works with the entire FASTQ file. Numerical experiments show that we can achieve a better mean squared error (MSE) for small rates (bits per quality score) than other lossy compression schemes. For the organism PhiX, whose assembled genome is known and assumed to be correct, we show that it is possible to achieve a significant reduction in size with little compromise in performance on downstream applications (e.g., alignment). Conclusions: QualComp is an open source software package, written in C and freely available for download at https://sourceforge.net/projects/qualcomp. PMID:23758828
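
    QualComp itself models the quality scores statistically and allocates bits via rate-distortion theory; the sketch below shows only the user-facing contract, a fixed rate in bits per quality score honored here by a plain uniform quantizer. The Phred range and synthetic score distribution are assumptions.

    ```python
    import numpy as np

    def quantize_quals(q, bits):
        """Uniform quantizer for Phred scores at a chosen rate (bits per score)."""
        levels = 2 ** bits
        edges = np.linspace(0, 41, levels + 1)          # assumed Phred range 0..41
        reps = (edges[:-1] + edges[1:]) / 2             # midpoint reconstruction
        idx = np.clip(np.digitize(q, edges[1:-1]), 0, levels - 1)
        return idx, reps[idx]

    rng = np.random.default_rng(0)
    quals = np.clip(rng.normal(32, 6, 10_000), 0, 41)   # synthetic score column

    for b in (1, 2, 3):
        _, rec = quantize_quals(quals, b)
        print(f"{b} bit/score -> MSE {np.mean((rec - quals) ** 2):.2f}")
    ```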

  8. Efficiency of coherent-state quantum cryptography in the presence of loss: Influence of realistic error correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heid, Matthias; Luetkenhaus, Norbert

    2006-05-15

    We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.

  9. Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression

    NASA Astrophysics Data System (ADS)

    Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin

    1994-04-01

    The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from using original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods, ROC analysis and free-response ROC (FROC) methods. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.

  10. Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2017-10-01

    The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to transfer, save, and disseminate them. Lossy compression is becoming more popular in such situations. But lossy compression has to be applied carefully, keeping the introduced distortions at an acceptable level so as not to lose valuable information contained in the data. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze the possibility of predicting the mean square error or, equivalently, PSNR for coders based on the discrete cosine transform (DCT), applied either to compressing single-channel RS images or to multichannel data in component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. One more innovation deals with the possibility of employing a limited number (percentage) of blocks for which the DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than the compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified for real-life RS data.
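
    A small numerical check of the prediction idea, assuming an orthonormal 8×8 DCT and uniform quantization: by Parseval's relation, the mean squared quantization error of the coefficients in a sample of blocks predicts the pixel-domain MSE without running the full coder. The image, step size, and 10% sampling rate are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    img = rng.normal(120.0, 30.0, (512, 512))      # stand-in for an RS image
    Qstep = 20.0                                   # uniform quantization step

    blocks = img.reshape(64, 8, 64, 8).transpose(0, 2, 1, 3).reshape(-1, 8, 8)

    # Prediction from a 10% sample: for an orthonormal DCT, Parseval ties
    # coefficient-domain quantization error directly to pixel-domain MSE.
    sample = rng.choice(len(blocks), size=len(blocks) // 10, replace=False)
    errs = []
    for i in sample:
        c = dctn(blocks[i], norm="ortho")
        qerr = c - Qstep * np.rint(c / Qstep)
        errs.append(np.mean(qerr ** 2))
    mse_pred = np.mean(errs)

    # Reference: actually quantize and reconstruct every block.
    mse_true = np.mean([
        np.mean((idctn(Qstep * np.rint(dctn(b, norm="ortho") / Qstep),
                       norm="ortho") - b) ** 2)
        for b in blocks])

    print(f"predicted MSE {mse_pred:.2f}, measured MSE {mse_true:.2f}")
    print(f"predicted PSNR {10 * np.log10(255**2 / mse_pred):.2f} dB")
    ```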

  11. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages compared with other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We have developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, and used the CAD system to classify the cases at the different compression ratios, then compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.

  12. Security of a discretely signaled continuous variable quantum key distribution protocol for high rate systems.

    PubMed

    Zhang, Zheshen; Voss, Paul L

    2009-07-06

    We propose a continuous variable based quantum key distribution protocol that makes use of discretely signaled coherent light and reverse error reconciliation. We present a rigorous security proof against collective attacks with realistic lossy, noisy quantum channels, imperfect detector efficiency, and detector electronic noise. This protocol is promising for convenient, high-speed operation at link distances up to 50 km with the use of post-selection.

  13. The Linear Bicharacteristic Scheme for Computational Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Chan, Siew-Loong

    2000-01-01

    The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to treat lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media; the treatment of perfect electrical conductors (PECs) is shown to follow directly in the limit of high conductivity. Heterogeneous media are treated through implementation of surface boundary conditions, and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.

  14. A Two-Dimensional Linear Bicharacteristic Scheme for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2002-01-01

    The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on one-dimensional electromagnetic wave propagation problems. This memorandum extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors in two dimensions. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Both the Transverse Electric and Transverse Magnetic polarizations are considered. Computational requirements and a Fourier analysis are also discussed. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for two-dimensional model problems on uniform grids, and the Finite Difference Time Domain (FDTD) algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the two-dimensional explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has less phase velocity error.

  15. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to effectively protect the streams. A novel technique for the adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for the transmission of H.264/AVC streams.

  16. Wide angle and narrow-band asymmetric absorption in visible and near-infrared regime through lossy Bragg stacks

    PubMed Central

    Shu, Shiwei; Zhan, Yawen; Lee, Chris; Lu, Jian; Li, Yang Yang

    2016-01-01

    The absorber is an important component in various optical devices. Here we report a novel type of asymmetric absorber in the visible and near-infrared spectrum which is based on lossy Bragg stacks. The lossy Bragg stacks can achieve near-perfect absorption at one side and high reflection at the other within the narrow bands (several nm) of resonance wavelengths, while displaying almost identical absorption/reflection responses for the rest of the spectrum. Meanwhile, this interesting wavelength-selective asymmetric absorption behavior persists over wide angles, does not depend on polarization, and can be ascribed to the lossy characteristics of the Bragg stacks. Moreover, interesting Fano resonances with easily tailorable peak profiles can be realized using the lossy Bragg stacks. PMID:27251768
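
    The asymmetry is reproducible with a textbook transfer-matrix calculation: a quarter-wave stack of two lossy materials absorbs very differently depending on which side is illuminated, while transmission is the same both ways. The refractive indices, layer count, and wavelengths below are illustrative assumptions, not the paper's structures.

    ```python
    import numpy as np

    def layer_matrix(n, d, lam):
        """Characteristic matrix of one homogeneous layer at normal incidence."""
        delta = 2 * np.pi * n * d / lam
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])

    def r_t(layers, lam, n_in=1.0, n_out=1.0):
        """Reflectance and transmittance of a stack between two real media."""
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            M = M @ layer_matrix(n, d, lam)
        (m11, m12), (m21, m22) = M
        den = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
        r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / den
        t = 2 * n_in / den
        return abs(r) ** 2, (n_out / n_in) * abs(t) ** 2

    lam0 = 600e-9                                  # design wavelength
    nH, nL = 2.3 + 0.25j, 1.45 + 0.05j             # lossy high/low-index pair
    stack = [(nH, lam0 / (4 * nH.real)), (nL, lam0 / (4 * nL.real))] * 10

    for lam in (550e-9, 600e-9, 650e-9):
        Rf, Tf = r_t(stack, lam)
        Rb, Tb = r_t(stack[::-1], lam)             # illuminate from the back side
        print(f"{lam*1e9:.0f} nm  A_front={1-Rf-Tf:.2f}  A_back={1-Rb-Tb:.2f}  "
              f"T={Tf:.3f} (same both ways: {np.isclose(Tf, Tb)})")
    ```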

  17. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
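
    The paper's distinction is easy to make concrete: PRD is relative to signal energy, so the same absolute coding noise yields very different PRD values on low- and high-amplitude EEG, while RMSE reports the distortion in the signal's own units regardless. A minimal illustration with synthetic signals (amplitudes and noise level are assumptions):

    ```python
    import numpy as np

    def prd(x, x_rec):
        """Percentage root-mean-square difference: relative, unit-free."""
        return 100 * np.linalg.norm(x - x_rec) / np.linalg.norm(x)

    def rmse(x, x_rec):
        """Root-mean-square error in the signal's own units (e.g., microvolts)."""
        return np.sqrt(np.mean((x - x_rec) ** 2))

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(5000)              # fixed "coding error" in uV
    for amplitude in (10.0, 100.0):                # same noise, different EEG scale
        eeg = amplitude * np.sin(np.linspace(0, 40 * np.pi, 5000))
        rec = eeg + noise
        print(f"amplitude {amplitude:>5}: PRD {prd(eeg, rec):5.2f}%  "
              f"RMSE {rmse(eeg, rec):.2f} uV")
    ```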

  18. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
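
    A stripped-down illustration of the embedding idea in the claims, under the simplifying assumption that moving a quantization index to an adjacent value is imperceptible: the parity of each index then carries one bit of auxiliary data. This is not the patented method itself, only the one-unit-adjacency mechanism it relies on.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    indices = rng.integers(0, 256, size=64)             # stand-in quantization indices

    message = np.unpackbits(np.frombuffer(b"lossy :)", dtype=np.uint8))   # 64 bits

    # Embed: nudge each index to an adjacent value whose parity matches the bit;
    # the +/-1 change stays within the one-unit uncertainty noted in the abstract.
    stego = indices.copy()
    mismatch = (stego & 1) != message
    stego[mismatch] += np.where(stego[mismatch] > 0, -1, 1)

    # Extract: the auxiliary data is just the parity sequence of the indices.
    recovered = np.packbits((stego & 1).astype(np.uint8)).tobytes()
    print("recovered:", recovered, "| max index change:", np.abs(stego - indices).max())
    ```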

  19. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  20. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE PAGES

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; ...

    2016-12-07

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  1. Evaluating lossy data compression on climate simulation data within a large ensemble

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-01

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  2. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  3. A novel shape-based coding-decoding technique for an industrial visual inspection system.

    PubMed

    Mukherjee, Anirban; Chaudhuri, Subhasis; Dutta, Pranab K; Sen, Siddhartha; Patra, Amit

    2004-01-01

    This paper describes a unique single camera-based dimension storage method for image-based measurement. The system has been designed and implemented in one of the integrated steel plants of India. The purpose of the system is to encode the frontal cross-sectional area of an ingot. The encoded data will be stored in a database to facilitate the future manufacturing diagnostic process. The compression efficiency and reconstruction error of the lossy encoding technique have been reported and found to be quite encouraging.

  4. Security of coherent-state quantum cryptography in the presence of Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heid, Matthias; Luetkenhaus, Norbert

    2007-08-15

    We investigate the security against collective attacks of a continuous variable quantum key distribution scheme in the asymptotic key limit for a realistic setting. The quantum channel connecting the two honest parties is assumed to be lossy and imposes Gaussian noise on the observed quadrature distributions. Secret key rates are given for direct and reverse reconciliation schemes including post-selection in the collective attack scenario. The effect of a nonideal error correction and two-way communication in the classical post-processing step is also taken into account.

  5. Research and Analysis on the Localization of a 3-D Single Source in Lossy Medium Using Uniform Circular Array

    PubMed Central

    Xue, Bing; Qu, Xiaodong; Fang, Guangyou; Ji, Yicai

    2017-01-01

    In this paper, methods and analysis for estimating the location of a three-dimensional (3-D) single source buried in a lossy medium with a uniform circular array (UCA) are presented. A mathematical model of the signal in the lossy medium is proposed. Using information in the covariance matrix obtained from the sensors’ outputs, equations for the source location (azimuth angle, elevation angle, and range) are derived. The phase and amplitude of the covariance matrix function are then used to perform source localization in the lossy medium. By analyzing the characteristics of the proposed methods and the multiple signal classification (MUSIC) method, the computational complexity and the valid scope of these methods are given. From the results, whether the loss is known or not, the best method can be chosen for the problem at hand (localization in a lossless or a lossy medium). PMID:28574467
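
    For context on the MUSIC baseline mentioned above, here is a minimal narrowband MUSIC sketch for an 8-element UCA in a lossless medium. All parameters are illustrative; the paper's lossy-medium estimators, which additionally exploit the amplitude information of the covariance matrix, are not reproduced here.

    ```python
    import numpy as np

    M, r_lam = 8, 0.5                 # 8-element UCA, radius of half a wavelength
    gamma = 2 * np.pi * np.arange(M) / M
    k = 2 * np.pi                     # wavenumber in units of 1/lambda

    def steer(theta):
        """Far-field UCA steering vector for azimuth theta (lossless medium)."""
        return np.exp(1j * k * r_lam * np.cos(theta - gamma))

    # Simulate snapshots from one source at 40 degrees plus sensor noise.
    rng = np.random.default_rng(1)
    theta0 = np.deg2rad(40.0)
    s = rng.normal(size=500) + 1j * rng.normal(size=500)
    noise = 0.1 * (rng.normal(size=(M, 500)) + 1j * rng.normal(size=(M, 500)))
    X = np.outer(steer(theta0), s) + noise

    R = X @ X.conj().T / X.shape[1]   # sample covariance matrix
    w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = V[:, :-1]                    # noise subspace (one source assumed)

    scan = np.deg2rad(np.arange(0.0, 360.0, 0.5))
    P = [1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in scan]
    print(f"MUSIC azimuth estimate: {np.degrees(scan[int(np.argmax(P))]):.1f} deg")
    ```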

  6. StirMark Benchmark: audio watermarking attacks based on lossy compression

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms such as MPEG-2 Audio Layer 3, Ogg, or VQF on a selection of audio test data. Our focus is on changes to the basic characteristics of the audio data, such as spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms; (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  7. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  8. High thermal conductivity lossy dielectric using co-densified multilayer configuration

    DOEpatents

    Tiegs, Terry N.; Kiggans, Jr., James O.

    2003-06-17

    Systems and methods are described for lossy dielectrics. A method of manufacturing a lossy dielectric includes providing at least one high-dielectric-loss layer, providing at least one high-thermal-conductivity, electrically insulating layer adjacent the at least one high-dielectric-loss layer, and then densifying the layers together. The systems and methods provide advantages because the lossy dielectrics are less costly and more environmentally friendly than the available alternatives.

  9. A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless

    DTIC Science & Technology

    1993-12-01

    Naval Postgraduate School, Monterey, California. Thesis: A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless, by Walter D. Abbott III. Approved for public release; distribution is unlimited.

  10. A simple circular-polarized antenna: Circular waveguide horn coated with lossy magnetic material

    NASA Technical Reports Server (NTRS)

    Lee, C. S.; Lee, S. W.; Justice, D. W.

    1986-01-01

    A circular waveguide horn coated with a lossy material on its interior wall can be used as an alternative to a corrugated waveguide for radiating a circularly polarized (CP) field. To achieve good CP radiation, the diameter of the structure must be larger than the free-space wavelength, and the coating material must be sufficiently lossy and magnetic. This device is cheaper and lighter in weight than the corrugated one.

  11. Effect of the losses in the vocal tract on determination of the area function.

    PubMed

    Gülmezoğlu, M Bilginer; Barkana, Atalay

    2003-01-01

    In this work, the cross-sectional areas of the vocal tract are determined for the lossy and lossless cases by using the pole-zero models obtained from the electrical equivalent circuit model of the vocal tract and the system identification method. The cross-sectional areas are used to compare the lossy and lossless cases. In the lossy case, the internal losses due to wall vibration, heat conduction, air friction, and viscosity are considered; that is, the complex poles and zeros obtained from the models are used directly. In the lossless case, by contrast, only the imaginary parts of these poles and zeros are used. The vocal tract shapes obtained for the lossy case are close to the actual ones.

  12. Ammonia sensing using lossy mode resonances in a tapered optical fibre coated with porphyrin-incorporated titanium dioxide

    NASA Astrophysics Data System (ADS)

    Tiwari, Divya; Mullaney, Kevin; Korposh, Serhiy; James, Stephen W.; Lee, Seung-Woo; Tatam, Ralph P.

    2016-05-01

    The development of an ammonia sensor, formed by the deposition of a functionalised titanium dioxide film onto a tapered optical fibre, is presented. The titanium dioxide coating allows the coupling of light from the fundamental core mode to a lossy mode supported by the coating, thus creating a lossy mode resonance (LMR) in the transmission spectrum. The porphyrin compound used to functionalise the coating was removed from the titanium dioxide upon exposure to ammonia, causing a change in the refractive index of the coating and a concomitant shift in the central wavelength of the lossy mode resonance. Concentrations of ammonia as small as 1 ppm were detected with a response time of less than 1 min.

  13. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    The Automatic Ground Collision Avoidance System (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique, resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.

  14. Towards Holography via Quantum Source-Channel Codes.

    PubMed

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-14

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  15. Towards Holography via Quantum Source-Channel Codes

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  16. GTD analysis of airborne antennas radiating in the presence of lossy dielectric layers

    NASA Technical Reports Server (NTRS)

    Rojas-Teran, R. G.; Burnside, W. D.

    1981-01-01

    The patterns of monopole or aperture antennas mounted on a perfectly conducting convex surface radiating in the presence of a dielectric or metal plate are computed. The geometrical theory of diffraction is used to analyze the radiating system and extended here to include diffraction by flat dielectric slabs. Modified edge diffraction coefficients valid for wedges whose walls are lossy or lossless thin dielectric or perfectly conducting plates are developed. The width of the dielectric plates cannot exceed a quarter of a wavelength in free space, and the interior angle of the wedge is assumed to be close to 0 deg or 180 deg. Systematic methods for computing the individual components of the total high frequency field are discussed. The accuracy of the solutions is demonstrated by comparisons with measured results, where a 2λ × 4λ prolate spheroid is used as the convex surface. A jump or kink appears in the calculated pattern when important higher-order terms are not included in the final solution. The most immediate application of the results presented here is in the modelling of structures such as aircraft which are composed of nonmetallic parts that play a significant role in the pattern.

  17. Numerical methods for analyzing electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Lee, S. W.; Lo, Y. T.; Chuang, S. L.; Lee, C. S.

    1985-01-01

    Numerical methods to analyze electromagnetic scattering are presented. The dispersions and attenuations of the normal modes in a circular waveguide coated with lossy material were completely analyzed. The radar cross section (RCS) from a circular waveguide coated with lossy material was calculated. The following is observed: (1) the interior irradiation contributes to the RCS much more than does the rim diffraction; (2) at low frequency, the RCS from the circular waveguide terminated by a perfect electric conductor (PEC) can be reduced by more than 13 dB with a coating thickness of less than 1% of the radius, using the best available lossy material in a cylinder six radii long; (3) at high frequency, a modal separation between the highly attenuated and the lowly attenuated modes is evident if the coating material is too lossy; however, a large RCS reduction can be achieved for a small incident angle with a thin layer of coating. It is found that the waveguide coated with a lossy magnetic material can be used as a substitute for a corrugated waveguide to produce a circularly polarized radiation yield.

  18. High-quality lossy compression: current and future trends

    NASA Astrophysics Data System (ADS)

    McLaughlin, Steven W.

    1995-01-01

    This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech, and music. We put all lossy compression schemes into a common framework in which each can be characterized in terms of three well-defined advantages: cell-shape, region-shape, and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape, and memory gains, resulting in high fidelity and high compression.

  19. Cosmological Particle Data Compression in Practice

    NASA Astrophysics Data System (ADS)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations, trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years, several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.

  20. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
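
    A minimal sketch of the block-wise idea follows, assuming smooth telemetry-like data. numpy's least-squares chebfit stands in for the flight algorithm's (unspecified) coefficient-selection procedure, so the min-max property described above is only approximated here.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def compress_block(block, max_err):
        """Fit the shortest Chebyshev series whose max deviation stays under max_err."""
        x = np.linspace(-1.0, 1.0, len(block))   # map the fitting interval to [-1, 1]
        for deg in range(len(block)):            # deg = len(block)-1 interpolates, so this terminates
            coef = C.chebfit(x, block, deg)
            if np.max(np.abs(C.chebval(x, coef) - block)) <= max_err:
                return coef                      # store coefficients instead of samples
        return coef

    def decompress_block(coef, n):
        return C.chebval(np.linspace(-1.0, 1.0, n), coef)

    t = np.linspace(0.0, 1.0, 256)
    signal = np.sin(2 * np.pi * 3 * t) + 0.2 * t         # smooth telemetry-like stream
    coef = compress_block(signal, max_err=1e-3)
    print(f"samples: {signal.size}, coefficients kept: {coef.size}")  # factor well above 2
    rec = decompress_block(coef, signal.size)
    assert np.max(np.abs(rec - signal)) <= 1e-3
    ```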

  1. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  2. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  3. Electromagnetic scattering by a straight thin wire

    NASA Technical Reports Server (NTRS)

    Shamansky, Harry T.; Dominek, Allen K.; Peters, Leon, Jr.

    1989-01-01

    The traveling-wave energy, which multiply diffracts on a straight thin wire, is represented as a sum of terms, each with a distinct physical meaning, that can be individually examined in the time domain. Expressions for each scattering mechanism on a straight thin wire are cast in the form of four basic electromagnetic wave concepts: diffraction, attachment, launch, and reflection. Using the basic mechanisms from P. Ya. Ufimtsev (1962), each of the scattering mechanisms is included into the total scattered field for the straight thin wire. Scattering as a function of angle and frequency is then compared to the moment-method solution. These analytic expressions are then extended to a lossy wire with a simple approximate modification using the propagation velocity on the wire as derived from the Sommerfeld wave on a straight lossy wire. Both the perfectly conducting and lossy wire solutions are compared to moment-method results, and excellent agreement is found. As is common with asymptotic solutions, when the electrical length of the wire is smaller than 0.2λ the results lose accuracy. The expressions modified to approximate the scattering for the lossy thin wire yield excellent agreement even for lossy wires where the wire radius is on the order of the skin depth.

  4. LSST Probes of Dark Energy: New Energy vs New Gravity

    NASA Astrophysics Data System (ADS)

    Bradshaw, Andrew; Tyson, A.; Jee, M. J.; Zhan, H.; Bard, D.; Bean, R.; Bosch, J.; Chang, C.; Clowe, D.; Dell'Antonio, I.; Gawiser, E.; Jain, B.; Jarvis, M.; Kahn, S.; Knox, L.; Newman, J.; Wittman, D.; Weak Lensing, LSST; LSS Science Collaborations

    2012-01-01

    Is the late time acceleration of the universe due to new physics in the form of stress-energy or a departure from General Relativity? LSST will measure the shape, magnitude, and color of 4×10⁹ galaxies to high S/N over 18,000 square degrees. These data will be used to separately measure the gravitational growth of mass structure and distance vs redshift to unprecedented precision by combining multiple probes in a joint analysis. Of the five LSST probes of dark energy, weak gravitational lensing (WL) and baryon acoustic oscillation (BAO) probes are particularly effective in combination. By measuring the 2-D BAO scale in ugrizy-band photometric redshift-selected samples, LSST will determine the angular diameter distance to a dozen redshifts with sub-percent-level errors. Reconstruction of the WL shear power spectrum on linear and weakly non-linear scales, and of the cross-correlation of shear measured in different photometric redshift bins provides a constraint on the evolution of dark energy that is complementary to the purely geometric measures provided by supernovae and BAO. Cross-correlation of the WL shear and BAO signal within redshift shells minimizes the sensitivity to systematics. LSST will also detect shear peaks, providing independent constraints. Tomographic study of the shear of background galaxies as a function of redshift allows a geometric test of dark energy. To extract the dark energy signal and distinguish between the two forms of new physics, LSST will rely on accurate stellar point-spread functions (PSF) and unbiased reconstruction of galaxy image shapes from hundreds of exposures. Although a weighted co-added deep image has high S/N, it is a form of lossy compression. Bayesian forward modeling algorithms can in principle use all the information. We explore systematic effects on shape measurements and present tests of an algorithm called Multi-Fit, which appears to avoid PSF-induced shear systematics in a computationally efficient way.

  5. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, owing to its lower complexity and better compression ratios than the lossless JPEG standard. However, because of the context dependence of the algorithm it cannot prevent error diffusion, and it has a low compression rate compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. Then we adopt a block-based image compression technique to decrease the range of error diffusion. We provide JPEG-LS lossless compression for the image blocks which include the whole or part of the region of interest (ROI) and JPEG-LS near-lossless compression for the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
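
    The block-routing logic described above can be sketched compactly. In the toy Python below, a uniform quantizer with bound NEAR stands in for JPEG-LS near-lossless mode and identity coding stands in for the lossless mode; the block size and error bound are illustrative.

    ```python
    import numpy as np

    BLOCK, NEAR = 16, 2   # block size and near-lossless error bound (illustrative)

    def encode_block(block, lossless):
        """Stand-in for JPEG-LS: identity for ROI blocks, uniform quantization
        with |error| <= NEAR for non-ROI blocks (the near-lossless guarantee)."""
        if lossless:
            return block.copy()
        step = 2 * NEAR + 1
        return ((block.astype(int) + NEAR) // step * step).clip(0, 255).astype(np.uint8)

    def compress(image, roi_mask):
        out = np.empty_like(image)
        for y in range(0, image.shape[0], BLOCK):
            for x in range(0, image.shape[1], BLOCK):
                sl = (slice(y, y + BLOCK), slice(x, x + BLOCK))
                out[sl] = encode_block(image[sl], roi_mask[sl].any())  # route per block
        return out

    img = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
    roi = np.zeros_like(img, bool)
    roi[20:40, 20:40] = True
    rec = compress(img, roi)
    assert (rec[roi] == img[roi]).all()                                  # ROI is exact
    assert np.max(np.abs(rec.astype(int) - img.astype(int))) <= NEAR     # bounded elsewhere
    ```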

  6. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  7. Toward maximum transmittance into absorption layers in solar cells: investigation of lossy-film-induced mismatches between reflectance and transmittance extrema.

    PubMed

    Chang, Yin-Jung; Lai, Chi-Sheng

    2013-09-01

    The mismatch in film thickness and incident angle between reflectance and transmittance extrema due to the presence of lossy film(s) is investigated, with a view toward maximizing transmittance into the active region of solar cells. A planar air/lossy film/silicon double-interface geometry illustrates important and quite opposite mismatch behaviors associated with TE and TM waves. In a typical thin-film CIGS solar cell, mismatches contributed by TM waves generally dominate. The angular mismatch is at least 10° in about 37%-53% of the spectrum, depending on the thickness combination of all lossy interlayers. The largest thickness mismatch of a specific interlayer generally increases with the thickness of the layer itself. Antireflection coating designs for solar cells should therefore be optimized in terms of the maximum transmittance into the active region, even if the corresponding reflectance is not at its minimum.
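
    The reflectance/transmittance mismatch is easy to reproduce with a single-film calculation at normal incidence. In the sketch below, the wavelength and all refractive indices are illustrative stand-ins, not values from the paper; with a lossless film the R minimum and T maximum would coincide, while absorption pulls them apart.

    ```python
    import numpy as np

    lam = 600e-9                      # wavelength (m), illustrative
    n1, n3 = 1.0, 3.94 + 0.02j        # air and a silicon-like substrate (illustrative)
    n2 = 2.2 + 0.35j                  # a lossy interlayer (illustrative)

    def r_t(d):
        """Reflectance/transmittance of air / lossy film / Si at normal incidence."""
        r12 = (n1 - n2) / (n1 + n2); r23 = (n2 - n3) / (n2 + n3)
        t12 = 2 * n1 / (n1 + n2);    t23 = 2 * n2 / (n2 + n3)
        ph = np.exp(2j * np.pi * n2 * d / lam)       # one-pass complex phase in the film
        denom = 1 + r12 * r23 * ph ** 2
        r = (r12 + r23 * ph ** 2) / denom
        t = t12 * t23 * ph / denom
        return abs(r) ** 2, (n3.real / n1.real) * abs(t) ** 2

    d = np.linspace(1e-9, 200e-9, 2000)
    R, T = np.vectorize(r_t)(d)
    print(f"R minimum at d = {d[np.argmin(R)] * 1e9:.1f} nm")
    print(f"T maximum at d = {d[np.argmax(T)] * 1e9:.1f} nm  (mismatch when the film is lossy)")
    ```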

  8. A hierarchical storage management (HSM) scheme for cost-effective on-line archival using lossy compression.

    PubMed

    Avrin, D E; Andriole, K P; Yin, L; Gould, R G; Arenson, R L

    2001-03-01

    A hierarchical storage management (HSM) scheme for cost-effective on-line archival of image data using lossy compression is described. This HSM scheme also provides an off-site tape backup mechanism and disaster recovery. The full-resolution image data are viewed originally for primary diagnosis, then losslessly compressed and sent off site to a tape backup archive. In addition, the original data are wavelet lossy compressed (at approximately 25:1 for computed radiography, 10:1 for computed tomography, and 5:1 for magnetic resonance) and stored on a large RAID device for maximum cost-effective, on-line storage and immediate retrieval of images for review and comparison. This HSM scheme provides a solution to 4 problems in image archiving, namely cost-effective on-line storage, disaster recovery of data, off-site tape backup for the legal record, and maximum intermediate storage and retrieval through the use of on-site lossy compression.
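
    A compact sketch of the two-tier routing described above follows. It is illustrative only: zlib stands in for the lossless codec, plain decimation stands in for the wavelet codec, and the modality-to-ratio table mirrors the approximate ratios quoted in the abstract.

    ```python
    import zlib
    import numpy as np

    # Per-modality lossy ratios for the on-line RAID tier (approximate values above);
    # the lossless copy always goes to the off-site tape archive.
    WAVELET_RATIO = {"CR": 25, "CT": 10, "MR": 5}

    def archive(pixels: np.ndarray, modality: str):
        """Two-tier HSM routing sketch: one lossless copy, one lossy on-line copy."""
        tape_copy = zlib.compress(pixels.tobytes())      # off-site legal record / recovery
        ratio = WAVELET_RATIO.get(modality, 1)           # unknown modality stays lossless
        stride = max(1, round(ratio ** 0.5))
        online_copy = pixels[::stride, ::stride].copy()  # crude ~ratio:1 review copy
        return tape_copy, online_copy

    ct = np.random.default_rng(3).integers(0, 4096, (512, 512), dtype=np.uint16)
    tape, online = archive(ct, "CT")
    print(f"on-line copy reduced by ~{ct.size / online.size:.0f}:1")
    ```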

  9. Normal modes in an overmoded circular waveguide coated with lossy material

    NASA Technical Reports Server (NTRS)

    Lee, C. S.; Lee, S. W.; Chuang, S. L.

    1985-01-01

    The normal modes in an overmoded waveguide coated with a lossy material are analyzed, particularly for their attenuation properties as a function of coating material, layer thickness, and frequency. When the coating material is not too lossy, the low-order modes are highly attenuated even with a thin layer of coating. This coated guide serves as a mode suppressor of the low-order modes, which can be particularly useful for reducing the radar cross section (RCS) of a cavity structure such as a jet inlet. When the coating material is very lossy, low-order modes fall into two distinct groups: highly and lowly attenuated modes. However, as a/λ (a = radius of the cylinder; λ = free-space wavelength) increases, the separation between these two groups becomes less distinctive. The attenuation constants of most of the low-order modes become small, and decrease as λ²/a³.

  10. Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.

    PubMed

    Larsson, Elisabeth; Abrahamsson, Leif

    2003-05-01

    The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.

  11. Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression

    NASA Astrophysics Data System (ADS)

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-08-01

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
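
    The essential contract of such an error-bounded online compressor, a user-specified absolute tolerance honored by every compress/decompress round trip, can be sketched as follows. Uniform scalar quantization plus zlib is a stand-in for the actual compressor used in the study.

    ```python
    import zlib
    import numpy as np

    def compress_field(field: np.ndarray, tol: float) -> bytes:
        """Error-bounded lossy compression: quantize to a grid of spacing 2*tol
        (so |x - decode(encode(x))| <= tol), then entropy-code the integers."""
        q = np.round(field / (2.0 * tol)).astype(np.int32)
        return zlib.compress(q.tobytes())

    def decompress_field(blob: bytes, shape, tol: float) -> np.ndarray:
        q = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
        return q * (2.0 * tol)

    # Mock strain field: smooth along one axis, like a propagating wavefield.
    strain = np.cumsum(np.random.default_rng(4).normal(size=(64, 64, 64)), axis=0)
    blob = compress_field(strain, tol=1e-3)
    rec = decompress_field(blob, strain.shape, tol=1e-3)
    assert np.max(np.abs(rec - strain)) <= 1e-3          # the error bound holds
    print(f"compression ratio: {strain.nbytes / len(blob):.1f}:1")
    ```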

  12. Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression

    DOE PAGES

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-05-05

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.

  13. Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression

    NASA Astrophysics Data System (ADS)

    Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard

    2017-09-01

    We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster programme. The Deeper Wider Faster programme uses 20 telescopes from radio to gamma rays to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. The programme's search demands involve a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of Deeper Wider Faster led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image comprises 70 charge-coupled devices and is saved as a 1.2 gigabyte FITS file. Near real-time data processing and fast transient candidate identification (in minutes, for rapid follow-up triggers on other telescopes) requires computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a foreign location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise our gain in transfer time and still achieve our science goals, we opt for lossy data compression, keeping in mind that raw data is archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor. Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time, validating its introduction to the Deeper Wider Faster programme science pipeline and enabling science that was otherwise too difficult with current technology.
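
    The linear relation between compression ratio and transmission speed-up follows directly from a fixed link bandwidth. A back-of-envelope check for the 1.2 gigabyte DECam files, where the 100 Mbit/s effective link rate is our assumption rather than a figure from the paper:

    ```python
    # Transfer-time gain for one DECam exposure over an assumed effective link.
    FILE_GB, LINK_MBPS = 1.2, 100.0

    def transfer_seconds(ratio: float) -> float:
        return (FILE_GB * 8e3 / ratio) / LINK_MBPS   # GB -> megabits, then divide by rate

    for ratio in (1, 10, 25):
        print(f"{ratio:>2}:1  ->  {transfer_seconds(ratio):6.1f} s")
    # Uncompressed takes ~96 s per image; at the paper's acceptable 25:1 it drops
    # to ~4 s, a speed-up factor equal to the compression ratio for a fixed link.
    ```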

  14. Electromagnetic pulse excitation of finite- and infinitely-long lossy conductors over a lossy ground plane

    DOE PAGES

    Campione, Salvatore; Warne, Larry K.; Basilio, Lorena I.; ...

    2017-01-13

    This study details a model for the response of a finite- or an infinite-length wire interacting with a conducting ground to an electromagnetic pulse excitation. We develop a frequency-domain method based on transmission line theory that we name ATLOG – Analytic Transmission Line Over Ground. This method is developed as an alternative to full-wave methods, as it delivers a fast and reliable solution. It allows for the treatment of finite or infinite lossy, coated wires, and lossy grounds. The cases of wire above ground, as well as resting on the ground and buried beneath the ground are treated. The reported method is general and the time response of the induced current is obtained using an inverse Fourier transform of the current in the frequency domain. The focus is on the characteristics and propagation of the transmission line mode. Comparisons with full-wave simulations strengthen the validity of the proposed method.

  15. Electromagnetic pulse excitation of finite- and infinitely-long lossy conductors over a lossy ground plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Basilio, Lorena I.

    This study details a model for the response of a finite- or an infinite-length wire interacting with a conducting ground to an electromagnetic pulse excitation. We develop a frequency-domain method based on transmission line theory that we name ATLOG – Analytic Transmission Line Over Ground. This method is developed as an alternative to full-wave methods, as it delivers a fast and reliable solution. It allows for the treatment of finite or infinite lossy, coated wires, and lossy grounds. The cases of wire above ground, as well as resting on the ground and buried beneath the ground are treated. The reported method is general and the time response of the induced current is obtained using an inverse Fourier transform of the current in the frequency domain. The focus is on the characteristics and propagation of the transmission line mode. Comparisons with full-wave simulations strengthen the validity of the proposed method.

  16. The Localized Scleroderma Skin Severity Index and Physician Global Assessment of Disease Activity: A Work in Progress Toward Development of Localized Scleroderma Outcome Measures

    PubMed Central

    ARKACHAISRI, THASCHAWEE; VILAIYUK, SOAMARAT; LI, SUZANNE; O’NEIL, KATHLEEN M.; POPE, ELENA; HIGGINS, GLORIA C.; PUNARO, MARILYNN; RABINOVICH, EGLA C.; ROSENKRANZ, MARGALIT; KIETZ, DANIEL A.; ROSEN, PAUL; SPALDING, STEVEN J.; HENNON, TERESA R.; TOROK, KATHRYN S.; CASSIDY, ELAINE; MEDSGER, THOMAS A.

    2013-01-01

    Objective To develop and evaluate a Localized Scleroderma (LS) Skin Severity Index (LoSSI) and global assessments’ clinimetric property and effect on quality of life (QOL). Methods A 3-phase study was conducted. The first phase involved 15 patients with LS and 14 examiners who assessed LoSSI [surface area (SA), erythema (ER), skin thickness (ST), and new lesion/extension (N/E)] twice for inter/intrarater reliability. Patient global assessment of disease severity (PtGA-S) and Children’s Dermatology Life Quality Index (CDLQI) were collected for intrarater reliability evaluation. The second phase was aimed to develop clinical determinants for physician global assessment of disease activity (PhysGA-A) and to assess its content validity. The third phase involved 2 examiners assessing LoSSI and PhysGA-A on 27 patients. Effect of training on improving reliability/validity and sensitivity to change of the LoSSI and PhysGA-A was determined. Results Interrater reliability was excellent for ER [intraclass correlation coefficient (ICC) 0.71], ST (ICC 0.70), LoSSI (ICC 0.80), and PhysGA-A (ICC 0.90) but poor for SA (ICC 0.35); thus, LoSSI was modified to mLoSSI. Examiners’ experience did not affect the scores, but training/practice improved reliability. Intrarater reliability was excellent for ER, ST, and LoSSI (Spearman’s rho = 0.71–0.89) and moderate for SA. PtGA-S and CDLQI showed good intrarater agreement (ICC 0.63 and 0.80). mLoSSI correlated moderately with PhysGA-A and PtGA-S. Both mLoSSI and PhysGA-A were sensitive to change following therapy. Conclusion mLoSSI and PhysGA-A are reliable and valid tools for assessing LS disease severity and show high sensitivity to detect change over time. These tools are feasible for use in routine clinical practice. They should be considered for inclusion in a core set of LS outcome measures for clinical trials. PMID:19833758

  17. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972

  18. Wave attenuation and mode dispersion in a waveguide coated with lossy dielectric material

    NASA Technical Reports Server (NTRS)

    Lee, C. S.; Chuang, S. L.; Lee, S. W.; Lo, Y. T.

    1984-01-01

    The modal attenuation constants in a cylindrical waveguide coated with a lossy dielectric material are studied as functions of frequency, dielectric constant, and thickness of the dielectric layer. A dielectric material best suited for a large attenuation is suggested. Using Kirchhoff's approximation, the field attenuation in a coated waveguide which is illuminated by a normally incident plane wave is also studied. For a circular guide which has a diameter of two wavelengths and is coated with a thin lossy dielectric layer (εr = 9.1 - j2.3, thickness = 3% of the radius), a 3 dB attenuation is achieved within 16 diameters.

  19. An Evaluation Framework for Lossy Compression of Genome Sequencing Quality Values.

    PubMed

    Alberti, Claudio; Daniels, Noah; Hernaez, Mikel; Voges, Jan; Goldfeder, Rachel L; Hernandez-Lopez, Ana A; Mattavelli, Marco; Berger, Bonnie

    2016-01-01

    This paper provides the specification and an initial validation of an evaluation framework for the comparison of lossy compressors of genome sequencing quality values. The goal is to define reference data, test sets, tools and metrics that shall be used to evaluate the impact of lossy compression of quality values on human genome variant calling. The functionality of the framework is validated referring to two state-of-the-art genomic compressors. This work has been spurred by the current activity within the ISO/IEC SC29/WG11 technical committee (a.k.a. MPEG), which is investigating the possibility of starting a standardization activity for genomic information representation.

  20. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Due to the voluminous medical image data and the image streams generated at interactive frame rates in this application, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression methods on the volume data are conducted. Favorable results are obtained, showing that a substantial compression ratio is achievable within distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g., CT). For computer-simulated data, a much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
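
    Since the 30 dB PSNR threshold above anchors the paper's quality conclusions, a reference PSNR computation is worth spelling out; the 12-bit image and noise level below are synthetic stand-ins.

    ```python
    import numpy as np

    def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float) -> float:
        """Peak signal-to-noise ratio in dB; 'peak' is the maximum possible value."""
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    ct = np.random.default_rng(5).integers(0, 4096, (256, 256))      # mock 12-bit CT slice
    lossy = ct + np.random.default_rng(6).normal(0, 40, ct.shape)    # mock compression error
    print(f"PSNR = {psnr(ct, lossy, peak=4095):.1f} dB")  # ~40 dB here; 30 dB is the floor cited above
    ```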

  1. Comparative study of lossy and lossless data compression in distributed optical fiber sensing systems

    NASA Astrophysics Data System (ADS)

    Atubga, David; Wu, Huijuan; Lu, Lidong; Sun, Xiaoyan

    2017-02-01

    A typical fully distributed optical fiber sensor (DOFS) spanning dozens of kilometers is equivalent to tens of thousands of point sensors along the whole monitoring line, which means tens of thousands of data points are generated in each pulse-launching period. Round-the-clock monitoring therefore creates large volumes of data, demanding large storage space and high-speed data transmission. The data volume also grows rapidly as the monitoring length and the number of channels increase. Mitigating this data accumulation and its storage and transmission burden is therefore the aim of this paper. To demonstrate our idea, we carried out a comparative study of two lossless methods, Huffman and Lempel-Ziv-Welch (LZW), and a lossy data compression algorithm, the fast wavelet transform (FWT), on data from three distinct DOFS systems: Φ-OTDR, P-OTDR, and B-OTDA. Our results show that FWT yielded the best compression ratio with acceptable computation time, despite introducing some error in signal reconstruction, for all three DOFS data types. These outcomes indicate the promise of FWT, making it suitable, reliable, and convenient for real-time compression of DOFS data. Finally, it was observed that differences in the DOFS data structure have some influence on both the compression ratio and the computational cost.
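
    To make the comparison concrete, the sketch below contrasts a lossless LZ-family codec with single-level Haar FWT thresholding on a synthetic trace. zlib stands in for LZW, one Haar level stands in for the study's multi-level FWT, and the threshold is arbitrary.

    ```python
    import zlib
    import numpy as np

    def haar_fwt(x):
        """One-level orthonormal Haar fast wavelet transform."""
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
        return a, d

    trace = np.cumsum(np.random.default_rng(7).normal(size=4096))  # stand-in OTDR trace

    # Lossless baseline: LZ77-family codec on the raw bytes.
    lossless_ratio = trace.nbytes / len(zlib.compress(trace.tobytes()))

    # Lossy FWT: keep approximation coefficients, zero out small details.
    a, d = haar_fwt(trace)
    d[np.abs(d) < 1.0] = 0.0
    fwt_ratio = trace.size / (np.count_nonzero(d) + a.size)  # coefficient-count proxy

    x_rec = np.empty_like(trace)                 # inverse one-level Haar
    x_rec[0::2] = (a + d) / np.sqrt(2.0)
    x_rec[1::2] = (a - d) / np.sqrt(2.0)
    print(f"lossless ~{lossless_ratio:.1f}:1, FWT ~{fwt_ratio:.1f}:1, "
          f"max reconstruction error {np.max(np.abs(x_rec - trace)):.3f}")
    ```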

  2. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of individual readers. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio onward the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four levels: original, 5:1, 10:1, and 15:1. The six readers agreed more often than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within readers. The displacement-estimated interframe coding algorithm was significantly better in quality than the 2-D block DCT at a significance level of 0.05. Also, 10:1 compressed images produced with the interframe coding algorithm did not show any significant difference from the original at the 0.05 level.

  3. Security of modified Ping-Pong protocol in noisy and lossy channel

    PubMed Central

    Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    The “Ping-Pong” (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical. PMID:24816899

  5. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
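
    The general idea of judging compression with a physics-based metric rather than a signal-processing one can be sketched as follows; the field, the quantizer standing in for a real compressor, and the conserved quantity (total kinetic energy) are our illustrative choices, not the paper's codes or metrics:

        import numpy as np

        def quantize(field, bits):
            # Uniform scalar quantization: a crude stand-in for a real lossy compressor.
            lo, hi = field.min(), field.max()
            levels = 2 ** bits - 1
            return np.round((field - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

        rng = np.random.default_rng(2)
        velocity = rng.normal(size=(64, 64, 64))       # hypothetical simulation field
        ke = 0.5 * np.sum(velocity ** 2)               # physically meaningful quantity

        for bits in (12, 10, 8):
            ke_q = 0.5 * np.sum(quantize(velocity, bits) ** 2)
            print(f"{bits}-bit: relative kinetic-energy error {abs(ke_q - ke) / ke:.2e}")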

  6. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

    Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ, and Model-Based VQ) gives a better effective radiometric resolution than TLLC for a given channel rate.
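
    A minimal sketch of the TLLC baseline as described above, with zlib standing in for whichever lossless coder is appropriate (our choice, not the paper's); it drops b least significant bits per pixel and compresses the remainder:

        import zlib
        import numpy as np

        def tllc_ratio(image, dropped_bits):
            # Truncate least significant bits, then compress losslessly.
            truncated = image >> dropped_bits
            return image.nbytes / len(zlib.compress(truncated.tobytes(), 9))

        rng = np.random.default_rng(3)
        image = rng.integers(0, 4096, size=(512, 512)).astype(np.uint16)  # 12-bit pixels
        for b in (0, 2, 4):
            print(f"drop {b} LSBs -> compression ratio ~{tllc_ratio(image, b):.2f}x")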

  7. A Systematic Approach to Error Free Telemetry

    DTIC Science & Technology

    2017-06-28

    This Technical Information Memorandum, A Systematic Approach to Error-Free Telemetry (412TW-TIM-17-03), was submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524. Distribution A: approved for public release; distribution is unlimited.

  8. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high-fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
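
    The linear interframe predictor can be illustrated roughly as follows (our reconstruction of the general technique, not the authors' code): each frame is predicted by linear extrapolation from the two previously decoded frames and the quantized residual is stored, which bounds the positional error by half the quantization step:

        import numpy as np

        def encode(frames, step=1e-2):
            # Closed-loop predictive coder: predict from previously *decoded*
            # frames so that quantization error cannot accumulate over time.
            decoded = np.empty_like(frames)
            symbols = []
            for t in range(frames.shape[0]):
                if t == 0:
                    pred = np.zeros_like(frames[0])
                elif t == 1:
                    pred = decoded[0]
                else:
                    pred = 2 * decoded[t - 1] - decoded[t - 2]  # linear extrapolation
                q = np.round((frames[t] - pred) / step).astype(np.int32)
                decoded[t] = pred + q * step
                symbols.append(q)  # small integers, suitable for entropy coding
            return symbols, decoded

        rng = np.random.default_rng(4)
        traj = np.cumsum(0.02 * rng.normal(size=(100, 500, 3)), axis=0)  # toy trajectory
        symbols, decoded = encode(traj, step=1e-2)
        print("max positional error:", np.abs(decoded - traj).max())  # <= step / 2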

  9. The Visual Uncertainty Paradigm for Controlling Screen-Space Information in Visualization

    ERIC Educational Resources Information Center

    Dasgupta, Aritra

    2012-01-01

    The information visualization pipeline serves as a lossy communication channel for presentation of data on a screen-space of limited resolution. The lossy communication is not just a machine-only phenomenon due to information loss caused by translation of data, but also a reflection of the degree to which the human user can comprehend visual…

  10. Electromagnetic backscattering from a random distribution of lossy dielectric scatterers

    NASA Technical Reports Server (NTRS)

    Lang, R. H.

    1980-01-01

    Electromagnetic backscattering from a sparse distribution of discrete lossy dielectric scatterers occupying a region V was studied. The scatterers are assumed to have random position and orientation. Scattered fields are calculated by first finding the mean field and then using it to define an equivalent medium within the volume V. The scatterers are then viewed as being embedded in the equivalent medium, and the distorted Born approximation is used to find the scattered fields. This technique represents an improvement over the standard Born approximation since it takes into account the attenuation of the incident and scattered waves in the equivalent medium. The method is used to model a leaf canopy in which the leaves are modeled by lossy dielectric discs.

  11. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large as or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as for the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
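
    A toy version of such air-mass dependent bias tuning, with an invented predictor and invented numbers rather than the paper's algorithm: regress observed-minus-computed radiances on an air-mass proxy and subtract the fitted bias:

        import numpy as np

        rng = np.random.default_rng(5)
        n = 2000
        airmass = rng.uniform(1.0, 3.0, n)                            # hypothetical predictor
        omc = 0.4 + 0.25 * (airmass - 2.0) + rng.normal(0.0, 0.3, n)  # obs minus calc (K)

        # Fit bias = a + b * (airmass - 2) by least squares, then remove it.
        A = np.column_stack([np.ones(n), airmass - 2.0])
        coef, *_ = np.linalg.lstsq(A, omc, rcond=None)
        corrected = omc - A @ coef
        print(f"mean bias before {omc.mean():+.2f} K, after {corrected.mean():+.2f} K")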

  12. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  13. Loss/gain-induced ultrathin antireflection coatings

    PubMed Central

    Luo, Jie; Li, Sucheng; Hou, Bo; Lai, Yun

    2016-01-01

    Traditional antireflection coatings composed of dielectric layers usually require the thickness to be larger than a quarter wavelength. Here, we demonstrate that materials with permittivity or permeability dominated by imaginary parts, i.e., lossy or gain media, can realize non-resonant antireflection coatings at deep sub-wavelength scales. Interestingly, while the reflected waves are eliminated as in traditional dielectric antireflection coatings, the transmitted waves can be enhanced or reduced, depending on whether gain or lossy media are applied, respectively. We provide a unified theory for the design of such ultrathin antireflection coatings, showing that under different polarizations and incident angles, different types of ultrathin coatings should be applied. In particular, under transverse magnetic polarization, the requirement switches between gain and lossy media at the Brewster angle. As a proof of principle, using conductive films as a special type of lossy antireflection coating, we experimentally demonstrate the suppression of Fabry-Pérot resonances in a broad frequency range for microwaves. This functionality can be applied to remove undesired resonant effects, such as the frequency-dependent side lobes induced by resonances in dielectric coverings of antennas. Our work provides a guide for the design of ultrathin antireflection coatings as well as their applications in broadband reflectionless devices. PMID:27349750

  14. Time domain analysis of thin-wire antennas over lossy ground using the reflection-coefficient approximation

    NASA Astrophysics Data System (ADS)

    FernáNdez Pantoja, M.; Yarovoy, A. G.; Rubio Bretones, A.; GonzáLez GarcíA, S.

    2009-12-01

    This paper presents a procedure for extending the time-domain method of moments for transient analysis of thin-wire antennas to cases where the antennas are located over a lossy half-space. The extended technique is based on the reflection coefficient (RC) approach, which approximates the fields incident on the ground interface as plane waves and calculates the time-domain RC as the inverse Fourier transform of the Fresnel equations. The implementation presented here uses general expressions for the RC that extend its range of applicability to lossy grounds, and it is shown to be accurate and fast for antennas located not too close to the ground. The resulting general-purpose procedure, able to treat arbitrarily oriented thin-wire antennas, is appropriate for all kinds of half-spaces, including lossy ones, and it turns out to be as computationally fast for an arbitrary ground as for a perfect electric conductor ground plane. Results provide a numerical validation of the method for different half-spaces, paying special attention to the influence of the antenna-to-ground distance on the accuracy of the results.
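
    The frequency-domain ingredient of the RC approach is the Fresnel reflection coefficient of the lossy half-space, which becomes complex through the ground conductivity; a small sketch with illustrative soil parameters (the paper's time-domain RC is the inverse Fourier transform of this quantity over frequency):

        import numpy as np

        EPS0 = 8.854e-12  # vacuum permittivity, F/m

        def fresnel_rc(theta_i, eps_r, sigma, freq, pol="TM"):
            # Plane-wave reflection coefficient at an air / lossy-ground interface.
            eps_c = eps_r - 1j * sigma / (2 * np.pi * freq * EPS0)  # complex permittivity
            cos_i = np.cos(theta_i)
            cos_t = np.sqrt(eps_c - np.sin(theta_i) ** 2)  # = n2 * cos(theta_t)
            if pol == "TE":
                return (cos_i - cos_t) / (cos_i + cos_t)
            return (eps_c * cos_i - cos_t) / (eps_c * cos_i + cos_t)

        # Moderately conducting soil at 100 MHz, 45 degree incidence.
        for pol in ("TM", "TE"):
            rc = fresnel_rc(np.radians(45.0), eps_r=10.0, sigma=1e-2, freq=1e8, pol=pol)
            print(pol, abs(rc), np.angle(rc))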

  15. Causal Inference for fMRI Time Series Data with Systematic Errors of Measurement in a Balanced On/Off Study of Social Evaluative Threat.

    PubMed

    Sobel, Michael E; Lindquist, Martin A

    2014-07-01

    Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. In general, however, the potential BOLD responses are measured with stimulus-dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment, where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task-dependent systematic error leads to biased estimates. We therefore adjust for systematic error by including measured "noise covariates" in a linear mixed model that estimates both the causal effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.

  16. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
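
    A minimal numerical sketch of the balanced one-factor random-effects estimate (hypothetical setup-error data; all names and values are ours): the within-patient mean square estimates the random variance, and the excess of the between-patient mean square over it yields the systematic variance:

        import numpy as np

        rng = np.random.default_rng(6)
        m, n = 20, 10                          # balanced: m patients, n fractions each
        sys_sd, rand_sd = 1.5, 2.0             # true SDs in mm (for the simulation only)
        shifts = rng.normal(0.0, sys_sd, size=(m, 1))         # per-patient systematic part
        errors = shifts + rng.normal(0.0, rand_sd, size=(m, n))

        ms_between = n * np.sum((errors.mean(axis=1) - errors.mean()) ** 2) / (m - 1)
        ms_within = np.sum((errors - errors.mean(axis=1, keepdims=True)) ** 2) / (m * (n - 1))

        # E[MS_within] = sigma^2 and E[MS_between] = sigma^2 + n * Sigma^2.
        print(f"random SD ~{np.sqrt(ms_within):.2f} mm, "
              f"systematic SD ~{np.sqrt(max(ms_between - ms_within, 0.0) / n):.2f} mm")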

  17. A strip-shield improves the efficiency of a solenoid coil in probes for high-field solid-state NMR of lossy biological samples.

    PubMed

    Wu, Chin H; Grant, Christopher V; Cook, Gabriel A; Park, Sang Ho; Opella, Stanley J

    2009-09-01

    A strip-shield inserted between a high-inductance double-tuned solenoid coil and the glass tube containing the sample improves the efficiency of probes used for high-field solid-state NMR experiments on lossy aqueous samples of proteins and other biopolymers. A strip-shield is a coil liner consisting of thin copper strips layered on a PTFE (polytetrafluoroethylene) insulator. With lossy samples, the shift in tuning frequency, the reduction in Q, and RF-induced heating are all significantly reduced when the strip-shield is present. The performance of 800 MHz (1)H/(15)N and (1)H/(13)C double-resonance probes is demonstrated on aqueous samples of membrane proteins in phospholipid bilayers.

  18. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfactory overall. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on the site features; in some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic from non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are discussed.
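
    The value of discriminating systematic from non-systematic stage errors can be illustrated with a small Monte Carlo propagation through a power-law rating curve Q = a(h - b)^c; the parameters and error magnitudes are invented, and the paper's actual Bayesian framework is considerably richer:

        import numpy as np

        rng = np.random.default_rng(7)
        a, b, c = 30.0, 0.2, 1.6                       # hypothetical rating-curve parameters
        stage = np.linspace(0.5, 2.0, 365)             # one year of daily stage (m)

        n_mc = 2000
        annual_mean = np.empty(n_mc)
        for k in range(n_mc):
            offset = rng.normal(0.0, 0.01)              # systematic: one draw per realization
            noise = rng.normal(0.0, 0.005, stage.size)  # non-systematic: fresh draw each day
            h = stage + offset + noise
            annual_mean[k] = np.mean(a * np.clip(h - b, 0.0, None) ** c)

        # Non-systematic errors average out over 365 days; the gauge offset does not.
        print(f"relative SD of annual mean flow: {annual_mean.std() / annual_mean.mean():.3%}")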

  19. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen

    2005-10-01

    Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, parotid D50, and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error. Combined random and systematic errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% to 5.4% of the plans simulated for the Σ = σ = 3.0 mm case. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.

  20. Excitation of the Uller-Zenneck electromagnetic surface waves in the prism-coupled configuration

    NASA Astrophysics Data System (ADS)

    Rasheed, Mehran; Faryad, Muhammad

    2017-08-01

    A configuration to excite Uller-Zenneck surface electromagnetic waves at the planar interfaces of homogeneous and isotropic dielectric materials is proposed and theoretically analyzed. The Uller-Zenneck waves are surface waves that can exist at the planar interface of two dissimilar dielectric materials of which at least one is lossy. In this paper, a slab of a lossy dielectric material with lossless dielectric materials on both sides was considered. A canonical boundary-value problem was set up and solved to find the possible Uller-Zenneck waves and waveguide modes. The Uller-Zenneck waves guided by the slab of the lossy dielectric material were found to be either symmetric or antisymmetric, and they transmuted into waveguide modes when the thickness of the slab was increased. A prism-coupled configuration was then successfully devised to excite the Uller-Zenneck waves. The results showed that the Uller-Zenneck waves are excited at the same angle of incidence for any thickness of the slab of the lossy dielectric material, whereas the waveguide modes can be excited only when the slab is sufficiently thick. The excitation of Uller-Zenneck waves at planar interfaces between homogeneous, all-dielectric materials can usher in new avenues for applications of electromagnetic surface waves.

  1. Video quality assessment using M-SVD

    NASA Astrophysics Data System (ADS)

    Tao, Peining; Eskicioglu, Ahmet M.

    2007-01-01

    Objective video quality measurement is a challenging problem in a variety of video processing applications, ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, for evaluating distorted video sequences based on singular value decomposition, and develop a computationally efficient approach to full-reference (FR) video quality assessment. The measure is tested on the Video Quality Experts Group (VQEG) Phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
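
    The core of an SVD-based distortion measure can be sketched as follows (our reconstruction of the general idea rather than the exact M-SVD definition): for each block, the distance between the singular-value vectors of the original and distorted blocks gives the graphical measure, and a summary statistic over blocks gives the numerical one:

        import numpy as np

        def svd_distortion_map(orig, dist, block=8):
            rows, cols = orig.shape[0] // block, orig.shape[1] // block
            dmap = np.zeros((rows, cols))
            for i in range(rows):
                for j in range(cols):
                    sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
                    s_o = np.linalg.svd(orig[sl].astype(float), compute_uv=False)
                    s_d = np.linalg.svd(dist[sl].astype(float), compute_uv=False)
                    dmap[i, j] = np.linalg.norm(s_o - s_d)  # per-block distortion
            return dmap  # the map itself is the graphical measure

        rng = np.random.default_rng(8)
        frame = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
        degraded = np.clip(frame + rng.normal(0, 10, frame.shape), 0, 255).astype(np.uint8)
        dmap = svd_distortion_map(frame, degraded)
        print("numerical score (median block distortion):", np.median(dmap))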

  2. Noise tolerance in wavelength-selective switching of optical differential quadrature-phase-shift-keying pulse train by collinear acousto-optic devices.

    PubMed

    Goto, Nobuo; Miyazaki, Yasumitsu

    2014-06-01

    Optical switching of high-bit-rate quadrature-phase-shift-keying (QPSK) pulse trains using collinear acousto-optic (AO) devices is theoretically discussed. Since collinear AO devices are wavelength selective, the switched optical pulse trains suffer from distortion when the bandwidth of the pulse train is comparable to the pass bandwidth of the AO device. As the AO device, a sidelobe-suppressed device with a tapered surface-acoustic-wave (SAW) waveguide and a Butterworth-type filter device with a lossy SAW directional coupler are considered. Phase distortion of optical pulse trains at 40 to 100 Gsymbols/s in QPSK format is numerically analyzed. Bit-error-rate performance with additive Gaussian noise is also evaluated by the Monte Carlo method.
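
    The Monte Carlo BER evaluation mentioned above follows a standard recipe, sketched generically below: generate Gray-mapped QPSK symbols, add complex Gaussian noise, hard-detect, and count bit errors (the paper additionally passes the pulse train through the AO device response, which is omitted here):

        import numpy as np

        rng = np.random.default_rng(9)
        n_sym, ebn0_db = 200_000, 7.0
        bits = rng.integers(0, 2, size=(n_sym, 2))
        # Gray-mapped QPSK with unit symbol energy (Es = 2 * Eb).
        tx = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2.0)

        ebn0 = 10 ** (ebn0_db / 10.0)
        sigma = np.sqrt(1.0 / (4.0 * ebn0))   # per-dimension noise std for Es = 1
        rx = tx + sigma * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

        detected = np.column_stack([rx.real > 0, rx.imag > 0]).astype(int)
        print(f"BER at Eb/N0 = {ebn0_db} dB: {np.mean(detected != bits):.2e}")
        # Theory for comparison: Q(sqrt(2 * Eb/N0)) ~ 7.7e-4 at 7 dB.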

  3. Performance of device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhao, Qi; Ma, Xiongfeng

    2016-07-01

    Quantum key distribution provides information-theoretically secure communication. In practice, device imperfections may jeopardise system security. Device-independent quantum key distribution solves this problem by providing secure keys even when the quantum devices are untrusted and uncharacterized. Following a recent security proof of device-independent quantum key distribution, we improve the key rate by tightening the parameter choice in the security proof. In practice, where the system is lossy, we further improve the key rate by taking into account the loss position information. In our numerical simulations, our method outperforms existing results. Meanwhile, we outline clear experimental requirements for implementing device-independent quantum key distribution. The maximal tolerable error rate is 1.6%, the minimal required transmittance is 97.3%, and the minimal required visibility is 96.8%.

  4. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Systematic Error Study for ALICE charged-jet v2 Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinz, M.; Soltz, R.

    We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
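
    The equivalence demonstrated above rests on a standard construction: a fully correlated systematic error s folds into the covariance matrix as C = diag(stat^2) + s s^T, and chi^2 = v^T C^{-1} v against the null; a schematic recipe with made-up numbers:

        import numpy as np

        v = np.array([0.05, 0.04, 0.06, 0.03])            # hypothetical v2 measurements
        stat = np.array([0.01, 0.01, 0.02, 0.015])        # statistical errors
        corr_sys = np.array([0.005, 0.005, 0.01, 0.008])  # fully correlated systematics

        cov = np.diag(stat ** 2) + np.outer(corr_sys, corr_sys)
        chi2 = v @ np.linalg.solve(cov, v)                # chi^2 against a null (zero) result
        print(f"chi2/ndf = {chi2:.1f}/{len(v)}")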

  6. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Sen; Li, Guangjun; Wang, Maojie

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulating plans and the clinical plans using evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected, and systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors were simulated. Dose distributions were highly sensitive to systematic MLC leaf position errors, with a dependence on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be conducted regularly for MLC leaves to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.

  7. National Information Systems Security Conference (19th) held in Baltimore, Maryland on October 22-25, 1996. Volume 1

    DTIC Science & Technology

    1996-10-25

    ...it has been demonstrated that steganography is ineffective when images are stored using this compression algorithm [2]. Difficulty in designing a general... Despite the relative ease of employing steganography to covertly transport data in an uncompressed 24-bit image, lossy compression algorithms based on... the security threat that steganography poses cannot be completely eliminated by application of a transform-based lossy compression algorithm.

  8. Evaluation of Efficient XML Interchange (EXI) for Large Datasets and as an Alternative to Binary JSON Encodings

    DTIC Science & Technology

    2015-03-01

    ...fall in the lossy category (Gonzalez, Woods, & Eddins, 2009, p. 420). For the textual or numeric data in XML, however, lossy compression is... Professional Notes: Being Efficient with Bandwidth, by Lieutenant Commander Steve Debich, Lieutenant Bruce Hill, Captain Scot Miller (Retired)... (2005). XML Binary Characterization. Retrieved from http://www.w3.org/TR/xbc-characterization/ Gonzalez, R., Woods, R., & Eddins, S. (2009)

  9. A comprehensive review of lossy mode resonance-based fiber optic sensors

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Zhao, Wan-Ming

    2018-01-01

    This review paper presents the achievements and current developments in lossy-mode-resonance-based optical fiber sensors across different sensing fields, such as physical, chemical, and biological sensing, and briefly looks ahead to future development trends. Lossy mode resonance (LMR) is a relatively new physical optics phenomenon put forward in recent years, and fiber sensors utilizing LMR offer a new way to improve sensing capability. LMR fiber sensors have diverse structures, such as D-shaped, cladding-removed, fiber-tip, U-shaped, and tapered fiber structures. Major applications of LMR sensors include refractive index sensors and biosensors. LMR-based fiber sensors have attracted considerable research and development interest because of their distinct advantages, such as high sensitivity and label-free measurement. The field also remains of academic interest, and novel designs continue to be developed.

  10. Full Wave Analysis of RF Signal Attenuation in a Lossy Rough Surface Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2005-10-31

    We present a computational study of signal propagation and attenuation of a 200 MHz planar loop antenna in a cave environment. The cave is modeled as a straight and lossy random rough wall. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The numerical technique is first verified against theoretical results for a planar loop antenna in a smooth lossy cave. The simulation is then performed for a series of random rough surface meshes in order to generate statistical data for the propagation and attenuation properties of the antenna in a cave environment. Results for the mean and variance of the power spectral density of the electric field are presented and discussed.

  11. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  12. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  13. Possibility of perfect concealment by lossy conventional and lossy metamaterial cylindrical invisibility cloaks

    NASA Astrophysics Data System (ADS)

    Dehbashi, Reza; Shahabadi, Mahmoud

    2013-12-01

    The commonly used coordinate transformation for cylindrical cloaks is generalized. This transformation is utilized to determine anisotropic inhomogeneous diagonal material tensors of a shell-type cloak for various material types, i.e., double-positive (DPS: ɛ, μ > 0), double-negative (DNG: ɛ, μ < 0), ɛ-negative (ENG), and μ-negative (MNG). To obtain conditions of perfect cloaking for the various material types, a rigorous analysis is performed. It is shown that perfect cloaking is achieved when the cloak and its surrounding medium are of the same material type. Moreover, material losses are included in the analysis to demonstrate that perfect cloaking with lossy materials can be achieved when the loss tangents of the cloak and its surrounding material are identical. The sensitivity of the cloaking performance to losses for different material types is also investigated. The obtained analytical results are verified using a finite-element computational analysis.

  14. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sales, J. S.; Silva, L. F. da; Almeida, N. G. de

    2011-03-15

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  16. [Remote access to a web-based image distribution system].

    PubMed

    Bergh, B; Schlaefke, A; Frankenbach, R; Vogl, T J

    2004-06-01

    To assess different network and security technologies for remote access to a web-based image distribution system of a hospital intranet. Following preparatory testing, the time-to-display (TTD) was measured for three image types (CR, CT, MR). The evaluation included two remote access technologies, direct ISDN dial-up and VPN (Virtual Private Network) connections, at three connection speeds, 64 and 128 Kbit/s (ISDN) and 768 Kbit/s (ADSL, Asymmetric Digital Subscriber Line), with both lossless and lossy compression. Depending on the image type, the TTD with lossless compression varied from 1:00 to 2:40 minutes at 64 Kbit/s, from 0:35 to 1:15 minutes at 128 Kbit/s, and from 0:15 to 0:45 minutes with ADSL. The ISDN dial-up connection was superior to VPN technology at 64 Kbit/s but did not allow higher connection speeds. Lossy compression halved the TTD in all measurements. VPN technology is preferable to direct dial-up connections since it offers higher connection speeds and advantages in usage and security. For occasional usage, 128 Kbit/s (ISDN) can be considered sufficient, especially in conjunction with lossy compression. ADSL should be chosen when more frequent usage is anticipated, in which case lossy compression may be omitted. Due to higher bandwidths and improved usability, the web-based approach appears superior to conventional teleradiology systems.

  17. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  18. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  19. Representation of deformable motion for compression of dynamic cardiac image data

    NASA Astrophysics Data System (ADS)

    Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André

    2012-02-01

    We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data such as 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by a displacement vector field indicating, for each voxel of a slice at a fixed position in the third dimension, the position in the corresponding slice at the previous time from which it has moved. Our deformation model represents this motion compactly using a down-sampled potential function of the displacement vector field. This function is obtained by Gauss-Newton minimization of the estimation error image, i.e., the difference between the current slice and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can then be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method achieves better compression ratios for medical volume data than conventional block-based motion compensation known from video coding. Thanks to the smooth prediction without block artifacts, whole-image transforms like wavelet decomposition as well as intra-slice prediction methods can particularly benefit from this approach. We show that, with the discrete cosine transform as well as the Karhunen-Loève transform, the method can achieve better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
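
    A rough sketch of the prediction step under this model (our illustration, with invented sizes and data): the displacement field is taken as the gradient of the upsampled potential, and the previous slice is warped with it; the codec would store the down-sampled potential together with the coded residual:

        import numpy as np
        from scipy.ndimage import map_coordinates, zoom

        def predict_slice(prev_slice, coarse_potential):
            # Upsample the compact potential to full resolution, take its gradient
            # as the displacement field, and warp the previous slice with it.
            factor = np.array(prev_slice.shape) / np.array(coarse_potential.shape)
            potential = zoom(coarse_potential, factor, order=3)
            dy, dx = np.gradient(potential)
            yy, xx = np.mgrid[0:prev_slice.shape[0], 0:prev_slice.shape[1]]
            return map_coordinates(prev_slice, [yy + dy, xx + dx], order=1, mode="nearest")

        rng = np.random.default_rng(10)
        prev_slice = rng.normal(size=(128, 128))               # slice at time t-1
        coarse_potential = rng.normal(scale=2.0, size=(8, 8))  # compact motion description
        prediction = predict_slice(prev_slice, coarse_potential)
        # The codec would code (current_slice - prediction) plus the 8x8 potential.
        print("prediction shape:", prediction.shape)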

  20. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews.

    PubMed

    Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing

    2017-09-05

    Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 as reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for refractive error are of low methodological quality. Following widely accepted guidance, such as Cochrane or Institute of Medicine standards for conducting systematic reviews, would contribute to improved patient care and inform future research.

  1. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on Voyager 2 measurement distributions.
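
    The rank-degeneracy issue can be probed numerically: with poor sky coverage the design matrix of a pointing model becomes ill-conditioned, and a column subset selection (here via pivoted QR) retains only the well-determined parameters. The toy pointing model and numbers below are ours, not the DSN model:

        import numpy as np
        from scipy.linalg import qr

        rng = np.random.default_rng(11)
        az = np.radians(rng.uniform(10, 40, 30))   # poor coverage: narrow azimuth band
        el = np.radians(rng.uniform(30, 45, 30))
        # Toy pointing model: constant offset plus elevation- and azimuth-dependent terms.
        A = np.column_stack([np.ones_like(az), np.cos(el), np.sin(el),
                             np.cos(az) * np.cos(el), np.sin(az) * np.cos(el)])
        y = A @ np.array([0.01, 0.02, -0.015, 0.005, 0.0]) + rng.normal(0, 1e-3, az.size)

        print("condition number:", np.linalg.cond(A))   # large: near-degenerate columns
        _, _, piv = qr(A, pivoting=True)                # rank columns by independence
        keep = piv[:3]                                  # retain best-determined subset
        coef, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        print("kept columns:", keep, "fitted coefficients:", coef)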

  2. Quantum repeaters using continuous-variable teleportation

    NASA Astrophysics Data System (ADS)

    Dias, Josephine; Ralph, T. C.

    2017-02-01

    Quantum optical states are fragile and can become corrupted when passed through a lossy communication channel. Unlike for classical signals, optical amplifiers cannot be used to recover quantum signals. Quantum repeaters have been proposed as a way of reducing errors and hence increasing the range of quantum communications. Current protocols target specific discrete encodings, for example quantum bits encoded on the polarization of single photons. We introduce a more general approach that can reduce the effect of loss on any quantum optical encoding, including those based on continuous variables such as the field amplitudes. We show that in principle the protocol incurs a resource cost that scales polynomially with distance. We analyze the simplest implementation and find that while its range is limited it can still achieve useful improvements in the distance over which quantum entanglement of field amplitudes can be distributed.

  3. ELLIPTICAL WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. III. THE EFFECT OF RANDOM COUNT NOISE ON IMAGE MOMENTS IN WEAK LENSING ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp

    This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid shift error; the first-order effects cancel in averaging, but the second-order effects do not. We derive formulae that correct for this systematic error due to random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.

  4. Conference Proceedings on Applied Computational Electromagnetics (3rd) Held in Monterey, California on 24-26 March 1987

    DTIC Science & Technology

    1987-03-01

    the VLSI Implementation of the Electromagnetic Field of an Arbitrary Current Source" B.A. Hoyt, A.J. Terzuoli, A.V. Lair ., Air Force Institute of...method is that cavities of arbitrary three dimensional shapes and nonuniform lossy materials can be analyzed. THEORY OF VECTOR POTENTIAL FINITE...elements used to model the cavity. The method includes the effects of nonuniform lossy materials and can analyze cavities of a wide variety of two- and

  5. SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kathuria, K; Siebers, J

    2014-06-01

    Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (the same one-leaf offset per segment per beam), and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Very low dose error was found in all reasonably possible machine configurations, rare or otherwise, that could be simulated. Very low error in dose to the PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or of only one leaf per beam being offset (<0.2%). The errors resulting from a large number of adjacent leaves (maximum of 5 out of 60 total leaf pairs) being simultaneously offset in many (5) of the control points (10-18 total across all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2-3%). Conclusions: The above results show that patient shifts and anatomical changes, not machine delivery, are the main source of errors in delivered dose. These two sources of error are "visually complementary" and uncorrelated (albeit not additive in the final error), and error resulting from machine delivery can easily be incorporated into an error model based purely on tumor motion.

  6. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

    This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.

  7. Numerical methods for analyzing electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Lee, S. W.; Lo, Y. T.; Chuang, S. L.; Lee, C. S.

    1985-01-01

    Attenuation properties of the normal modes in an overmoded waveguide coated with a lossy material were analyzed. It is found that the low-order modes can be significantly attenuated even with a thin layer of coating if the coating material is not too lossy. A thinner layer of coating is required for large attenuation of the low-order modes if the coating material is magnetic rather than dielectric. The Radar Cross Section (RCS) from an uncoated circular guide terminated by a perfect electric conductor was calculated and compared with available experimental data. It is confirmed that the interior irradiation contributes to the RCS. The equivalent-current method based on the geometrical theory of diffraction (GTD) was chosen for the calculation of the contribution from the rim diffraction. The RCS reduction from a coated circular guide terminated by a PEC, together with planned schemes for the experiments, is included. The waveguide coated with a lossy magnetic material is suggested as a substitute for the corrugated waveguide.

  8. FaStore - a space-saving solution for raw sequencing data.

    PubMed

    Roguski, Lukasz; Ochoa, Idoia; Hernaez, Mikel; Deorowicz, Sebastian

    2018-03-29

    The affordability of DNA sequencing has led to the generation of unprecedented volumes of raw sequencing data. These data must be stored, processed, and transmitted, which poses significant challenges. To facilitate this effort, we introduce FaStore, a specialized compressor for FASTQ files. FaStore does not use any reference sequences for compression, and permits the user to choose from several lossy modes to improve the overall compression ratio, depending on the specific needs. FaStore in the lossless mode achieves a significant improvement in compression ratio with respect to previously proposed algorithms. We perform an analysis on the effect that the different lossy modes have on variant calling, the most widely used application for clinical decision making, especially important in the era of precision medicine. We show that lossy compression can offer significant compression gains, while preserving the essential genomic information and without affecting the variant calling performance. FaStore can be downloaded from https://github.com/refresh-bio/FaStore. sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.

  9. Analysis of the electromagnetic scattering from an inlet geometry with lossy walls

    NASA Technical Reports Server (NTRS)

    Myung, N. H.; Pathak, P. H.; Chunang, C. D.

    1985-01-01

    One of the primary goals is to develop an approximate but sufficiently accurate analysis for the problem of electromagnetic (EM) plane wave scattering by an open ended, perfectly-conducting, semi-infinite hollow circular waveguide (or duct) with a thin, uniform layer of lossy or absorbing material on its inner wall, and with a simple termination inside. The less difficult but useful problem of the EM scattering by a two-dimensional (2-D), semi-infinite parallel plate waveguide with an impedance boundary condition on the inner walls was chosen initially for analysis. The impedance boundary condition in this problem serves to model a thin layer of lossy dielectric/ferrite coating on the otherwise perfectly-conducting interior waveguide walls. An approximate but efficient and accurate ray solution was obtained recently. That solution is presently being extended to the case of a moderately thick dielectric/ferrite coating on the walls so as to be valid for situations where the impedance boundary condition may not remain sufficiently accurate.

  10. Antenna pattern control using impedance surfaces

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Liu, Kefeng

    1992-01-01

    During this research period, we have effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserved the accuracy of the numerical computations while giving a much better turn-around time than the CRAY supercomputer. Such a task relieved us of the heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After a further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.

  11. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
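
    A minimal numerical sketch of the distinction between the two error models is given below: residuals are formed in linear space (additive model) and in log space (multiplicative model), and the growth of the residual spread with rain intensity reveals which model fits a multiplicatively corrupted signal. The synthetic data and noise levels are assumptions, not the satellite data used in the study.

    ```python
    # Contrast additive (y = x + e) and multiplicative (log y = log x + log e) error models
    # on synthetic "daily precipitation" data corrupted by multiplicative noise.
    import numpy as np

    rng = np.random.default_rng(1)
    truth = rng.gamma(shape=0.5, scale=10.0, size=5000)                  # skewed, rain-like amounts
    obs = truth * rng.lognormal(mean=0.1, sigma=0.4, size=truth.size)    # multiplicative noise

    add_resid = obs - truth                                              # additive-model residuals
    wet = truth > 0.1                                                    # avoid log(0) on dry days
    mul_resid = np.log(obs[wet]) - np.log(truth[wet])                    # multiplicative-model residuals

    # Under the appropriate (multiplicative) model the residual spread stops growing with intensity
    for name, resid, x in [("additive", add_resid, truth), ("multiplicative", mul_resid, truth[wet])]:
        light, heavy = x < np.median(x), x >= np.median(x)
        print(f"{name:15s} std(light rain) = {resid[light].std():.3f}   std(heavy rain) = {resid[heavy].std():.3f}")
    ```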

  12. Using off-the-shelf lossy compression for wireless home sleep staging.

    PubMed

    Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu

    2015-05-15

    Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting large amounts of Polysomnography (PSG) data over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to automatically classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving with a high accuracy (>84%) in classifying sleep stages by using a lossy compression algorithm like SPIHT. As far as we know, our study is the first that focuses on how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    PubMed Central

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  14. Achieving minimum-error discrimination of an arbitrary set of laser-light pulses

    NASA Astrophysics Data System (ADS)

    da Silva, Marcus P.; Guha, Saikat; Dutton, Zachary

    2013-05-01

    Laser light is widely used for communication and sensing applications, so the optimal discrimination of coherent states—the quantum states of light emitted by an ideal laser—has immense practical importance. Due to fundamental limits imposed by quantum mechanics, such discrimination has a finite minimum probability of error. While concrete optical circuits for the optimal discrimination between two coherent states are well known, the generalization to larger sets of coherent states has been challenging. In this paper, we show how to achieve optimal discrimination of any set of coherent states using a resource-efficient quantum computer. Our construction leverages a recent result on discriminating multicopy quantum hypotheses [Blume-Kohout, Croke, and Zwolak, arXiv:1201.6625]. As illustrative examples, we analyze the performance of discriminating a ternary alphabet and show how the quantum circuit of a receiver designed to discriminate a binary alphabet can be reused in discriminating multimode hypotheses. Finally, we show that our result can be used to achieve the quantum limit on the rate of classical information transmission on a lossy optical channel, which is known to exceed the Shannon rate of all conventional optical receivers.

  15. Regionalized PM2.5 Community Multiscale Air Quality model performance evaluation across a continuous spatiotemporal domain.

    PubMed

    Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L

    2017-01-01

    The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error with only a minority of error being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.

  16. Filled and Unfilled Temperature-Dependent Epoxy Resin Blends for Lossy Transducer Substrates

    PubMed Central

    Eames, Matthew D.C.; Hossack, John A.

    2016-01-01

    In the context of our ongoing investigation of low-cost 2-dimensional (2-D) arrays, we studied the temperature-dependent acoustic properties of epoxy blends that could serve as an acoustically lossy backing material in compact 2-D array-based devices. This material should be capable of being machined during array manufacture, while also providing adequate signal attenuation to mitigate backing block reverberation artifacts. The acoustic impedance and attenuation of 5 unfilled epoxy blends and 2 filled epoxy blends—tungsten and fiberglass fillers—were analyzed across a 35°C temperature range in 5°C increments. Unfilled epoxy materials possessed an approximately linear variation of impedance and sigmoidal variation of attenuation properties over the range of temperatures of interest. An intermediate epoxy blend was fitted to a quadratic trend line with R2 values of 0.94 and 0.99 for attenuation and impedance, respectively. It was observed that a fiberglass filler induces a strong quadratic trend in the impedance data with temperature, which results in increased error in the characterization of attenuation and impedance. The tungsten-filled epoxy was not susceptible to such problems because a different method of fabrication was required. At body temperature, the tungsten-filled epoxy could provide a 44 dB attenuation of the round-trip backing block echo in our application, in which the center frequency is 5 MHz and the backing material is 1.1 mm thick. This is an 11 dB increase in attenuation compared with the fiberglass-filled epoxy in the context of our application. This work provides motivation for exploring the use of custom-made tungsten-filled epoxy materials as a substitute PCB-based substrate to provide electrical signal interconnect. PMID:19406716

  17. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to check-pointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean prove the high potential of this approach with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate in both finite-difference and finite-element wave propagation codes.

  18. Errors in radial velocity variance from Doppler wind lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  19. Errors in radial velocity variance from Doppler wind lidar

    DOE PAGES

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...

    2016-08-29

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
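
    The interplay between the two error types can be illustrated with a small Monte Carlo sketch: an AR(1) surrogate for the point radial velocity is volume-averaged (which biases the variance low, a systematic error) and its variance is estimated from finite 30-min records (which adds scatter, a random error). The surrogate process and window sizes are assumptions, not the lidar model used in the paper.

    ```python
    # Monte Carlo sketch: systematic (bias) vs. random (scatter) error of a variance estimate
    # when the signal is volume-averaged and sampled over a finite duration.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, duration = 1.0, 1800.0                  # 1 Hz samples, 30-min records
    n = int(duration / dt)
    phi = np.exp(-dt / 10.0)                    # AR(1) with ~10 s integral time scale
    true_var = 1.0

    estimates = []
    for _ in range(200):
        w = rng.normal(0.0, np.sqrt(true_var * (1 - phi**2)), size=n)
        u = np.zeros(n)
        for i in range(1, n):
            u[i] = phi * u[i - 1] + w[i]                       # "point" radial velocity
        u_avg = np.convolve(u, np.ones(5) / 5, mode="same")    # crude volumetric averaging
        estimates.append(u_avg.var())

    estimates = np.array(estimates)
    print(f"systematic error (bias of the mean estimate): {estimates.mean() - true_var:+.3f}")
    print(f"random error (scatter across records):        {estimates.std():.3f}")
    ```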

  20. On the Treatment of Electric and Magnetic Loss in the Linear Bicharacteristic Scheme for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2000-01-01

    The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been extended to treat lossy dielectric and magnetic materials. This paper examines different methodologies for treatment of the electric loss term in the Linear Bicharacteristic Scheme for computational electromagnetics. Several different treatments of the electric loss term using the LBS are explored and compared on one-dimensional model problems involving reflection from lossy dielectric materials on both uniform and nonuniform grids. Results using these LBS implementations are also compared with the FDTD method for convenience.

  1. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to better allocate the transmission bits which they have been allocated.

  2. Current distribution on a cylindrical antenna with parallel orientation in a lossy magnetoplasma

    NASA Technical Reports Server (NTRS)

    Klein, C. A.; Klock, P. W.; Deschamps, G. A.

    1972-01-01

    The current distribution and impedance of a thin cylindrical antenna with parallel orientation to the static magnetic field of a lossy magnetoplasma is calculated with the method of moments. The electric field produced by an infinitesimal current source is first derived. Results are presented for a wide range of plasma parameters. Reasonable answers are obtained for all cases except for the overdense hyperbolic case. A discussion of the numerical stability is included which applies not only to this problem but also to other applications of the method of moments.

  3. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after corrected with empirical model and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled especially when they are significant. Therefore, a very first question is how to statistically validate the significance of unmodeled errors. In this research, we will propose a procedure to examine the significance of these unmodeled errors by the combined use of the hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying the time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, the spatiotemporally correlated unmodeled errors are commonly existent in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
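
    A toy version of such a significance test is sketched below: under the white-noise hypothesis the lag-1 autocorrelation of the residuals is approximately normal with variance 1/n, so a large standardized value flags a significant unmodeled (time-correlated) component. This is only an illustration of the idea, not the combined hypothesis-test procedure of the paper.

    ```python
    # Flag significant time-correlated (unmodeled) error in a residual series with a
    # lag-1 autocorrelation z-test; synthetic residuals only.
    import math
    import numpy as np

    def lag1_whiteness_test(residuals, alpha=0.05):
        r = residuals - residuals.mean()
        rho1 = float(np.dot(r[:-1], r[1:]) / np.dot(r, r))   # lag-1 autocorrelation
        z = rho1 * math.sqrt(len(r))                          # ~N(0, 1) under pure white noise
        p = math.erfc(abs(z) / math.sqrt(2.0))                # two-sided p-value
        return rho1, p, p < alpha

    rng = np.random.default_rng(3)
    white = rng.normal(size=2000)                             # white-noise-only residuals
    colored = white + 0.8 * np.r_[0.0, white[:-1]]            # residuals with a correlated component
    for name, series in [("white noise only", white), ("with unmodeled signal", colored)]:
        rho1, p, significant = lag1_whiteness_test(series)
        print(f"{name:22s} rho1 = {rho1:+.3f}, p = {p:.4f}, significant = {significant}")
    ```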

  4. Design and analysis of a sub-aperture scanning machine for the transmittance measurements of large-aperture optical system

    NASA Astrophysics Data System (ADS)

    He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo

    2010-11-01

    For measuring large-aperture optical system transmittance, a novel sub-aperture scanning machine with double-rotating arms (SSMDA) was designed to obtain a sub-aperture beam spot. Optical system full-aperture transmittance measurements can be achieved by applying sub-aperture beam spot scanning technology. The mathematical model of the SSMDA, based on a homogeneous coordinate transformation matrix, is established to develop a detailed methodology for analyzing the beam spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) the length systematic errors and (2) the rotational systematic errors. When the systematic errors of the parameters are given beforehand, the computed scanning errors are between −0.007 and 0.028 mm while the scanning radius is not larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
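
    The error-propagation idea can be sketched by pushing small length and angle perturbations through the homogeneous transform of a single rotating arm, as below. The nominal geometry and perturbation magnitudes are illustrative assumptions rather than the SSMDA parameters.

    ```python
    # Propagate arm-length and rotation-angle errors through a homogeneous transform
    # to the scanned beam-spot position.
    import numpy as np

    def spot_position(arm_length_mm, angle_rad):
        """Spot position of a single rotating arm, expressed in homogeneous coordinates."""
        T = np.array([[np.cos(angle_rad), -np.sin(angle_rad), 0.0],
                      [np.sin(angle_rad),  np.cos(angle_rad), 0.0],
                      [0.0,                0.0,               1.0]])
        return (T @ np.array([arm_length_mm, 0.0, 1.0]))[:2]

    L, theta = 400.0, np.deg2rad(30.0)          # scanning radius up to 400 mm, as in the abstract
    dL, dtheta = 0.01, np.deg2rad(0.002)        # assumed systematic parameter errors

    nominal = spot_position(L, theta)
    perturbed = spot_position(L + dL, theta + dtheta)
    err = perturbed - nominal
    print(f"spot position error: dx = {err[0]:+.4f} mm, dy = {err[1]:+.4f} mm, |d| = {np.hypot(*err):.4f} mm")
    ```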

  5. Scattering from binary optics

    NASA Technical Reports Server (NTRS)

    Ricks, Douglas W.

    1993-01-01

    There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.

  6. Adaptive real time selection for quantum key distribution in lossy and turbulent free-space channels

    NASA Astrophysics Data System (ADS)

    Vallone, Giuseppe; Marangon, Davide G.; Canale, Matteo; Savorgnan, Ilaria; Bacco, Davide; Barbieri, Mauro; Calimani, Simon; Barbieri, Cesare; Laurenti, Nicola; Villoresi, Paolo

    2015-04-01

    The unconditional security in the creation of cryptographic keys obtained by quantum key distribution (QKD) protocols will induce a quantum leap in free-space communication privacy in the same way that we are beginning to realize secure optical fiber connections. However, free-space channels, in particular those with long links and the presence of atmospheric turbulence, are affected by losses, fluctuating transmissivity, and background light that impair the conditions for secure QKD. Here we introduce a method to counteract the atmospheric turbulence in QKD experiments. Our adaptive real time selection (ARTS) technique at the receiver is based on the selection of the intervals with higher channel transmissivity. We demonstrate, using data from the Canary Island 143-km free-space link, that conditions with an unacceptable average quantum bit error rate, which would prevent the generation of a secure key, can be used once parsed according to the instantaneous scintillation using the ARTS technique.

  7. Experimental quantum data locking

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Cao, Zhu; Wu, Cheng; Fukuda, Daiji; You, Lixing; Zhong, Jiaqiang; Numata, Takayuki; Chen, Sijing; Zhang, Weijun; Shi, Sheng-Cai; Lu, Chao-Yang; Wang, Zhen; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2016-08-01

    Classical correlation can be locked via quantum means: quantum data locking. With a short secret key, one can lock an exponentially large amount of information in order to make it inaccessible to unauthorized users without the key. Quantum data locking presents a resource-efficient alternative to one-time pad encryption which requires a key no shorter than the message. We report experimental demonstrations of a quantum data locking scheme originally proposed by D. P. DiVincenzo et al. [Phys. Rev. Lett. 92, 067902 (2004), 10.1103/PhysRevLett.92.067902] and a loss-tolerant scheme developed by O. Fawzi et al. [J. ACM 60, 44 (2013), 10.1145/2518131]. We observe that the unlocked amount of information is larger than the key size in both experiments, exhibiting strong violation of the incremental proportionality property of classical information theory. As an application example, we show the successful transmission of a photo over a lossy channel with quantum data (un)locking and error correction.

  8. Distillation of mixed-state continuous-variable entanglement by photon subtraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Shengli; Loock, Peter van

    2010-12-15

    We present a detailed theoretical analysis for the distillation of one copy of a mixed two-mode continuous-variable entangled state using beam splitters and coherent photon-detection techniques, including conventional on-off detectors and photon-number-resolving detectors. The initial Gaussian mixed-entangled states are generated by transmitting a two-mode squeezed state through a lossy bosonic channel, corresponding to the primary source of errors in current approaches to optical quantum communication. We provide explicit formulas to calculate the entanglement in terms of logarithmic negativity before and after distillation, including losses in the channel and the photon detection, and show that one-copy distillation is still possible even for losses near the typical fiber channel attenuation length. A lower bound for the transmission coefficient of the photon-subtraction beam splitter is derived, representing the minimal value that still allows enhancement of the entanglement.

  9. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.

  10. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  11. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination dispositive. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157

  12. Sources of variability and systematic error in mouse timing behavior.

    PubMed

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  13. Lossy compression of quality scores in genomic data.

    PubMed

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2014-08-01

    Next-generation sequencing technologies are revolutionizing medicine. Data from sequencing technologies are typically represented as a string of bases, an associated sequence of per-base quality scores and other metadata, and in aggregate can require a large amount of space. The quality scores show how accurate the bases are with respect to the sequencing process, that is, how confident the sequencer is of having called them correctly, and are the largest component in datasets in which they are retained. Previous research has examined how to store sequences of bases effectively; here we add to that knowledge by examining methods for compressing quality scores. The quality values originate in a continuous domain, and so if a fidelity criterion is introduced, it is possible to introduce flexibility in the way these values are represented, allowing lossy compression over the quality score data. We present existing compression options for quality score data, and then introduce two new lossy techniques. Experiments measuring the trade-off between compression ratio and information loss are reported, including quantifying the effect of lossy representations on a downstream application that carries out single nucleotide polymorphism and insert/deletion detection. The new methods are demonstrably superior to other techniques when assessed against the spectrum of possible trade-offs between storage required and fidelity of representation. An implementation of the methods described here is available at https://github.com/rcanovas/libCSAM. rcanovas@student.unimelb.edu.au Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
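
    The core of such a lossy representation can be illustrated by binning quality scores to a small set of representative values before entropy coding, as in the sketch below. The 8-bin table, score distribution, and entropy estimate are assumptions for illustration, not the techniques introduced in the paper.

    ```python
    # Quantize Phred quality scores into coarse bins and compare the per-symbol entropy
    # before and after, as a proxy for achievable compression.
    import numpy as np

    BINS = [(0, 1, 0), (2, 9, 6), (10, 19, 15), (20, 24, 22),
            (25, 29, 27), (30, 34, 33), (35, 39, 37), (40, 60, 40)]   # (low, high, representative)

    def quantize(q):
        for lo, hi, rep in BINS:
            if lo <= q <= hi:
                return rep
        return q

    def entropy_bits(symbols):
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(4)
    qual = np.clip(rng.normal(34, 6, size=100_000).round().astype(int), 2, 41)
    qual_q = np.array([quantize(q) for q in qual])

    print(f"entropy before binning: {entropy_bits(qual):.2f} bits/score")
    print(f"entropy after binning:  {entropy_bits(qual_q):.2f} bits/score")
    print(f"max absolute quality change: {np.abs(qual - qual_q).max()}")
    ```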

  14. Progress with lossy compression of data from the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.

    2017-12-01

    Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.

  15. A database for assessment of effect of lossy compression on digital mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2018-03-01

    With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.

  16. Error analysis and system optimization of non-null aspheric testing system

    NASA Astrophysics Data System (ADS)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL for short) and a reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray tracing software, the parameters of each optical element are optimized, which makes the system modeling more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system modeling, and the remainder from the non-null interferometer, obtained by the approach of error storage subtraction. Experimental results show that, after the systematic error is removed from the testing result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and the consideration of systematic error greatly increases the test accuracy of the non-null aspheric testing system.

  17. Synoptic scale forecast skill and systematic errors in the MASS 2.0 model. [Mesoscale Atmospheric Simulation System

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.

    1985-01-01

    The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.

  18. Method and apparatus for powering an electrodeless lamp with reduced radio frequency interference

    DOEpatents

    Simpson, James E.

    1999-01-01

    An electrodeless lamp waveguide structure includes tuned absorbers for spurious RF signals. A lamp waveguide with an integral frequency selective attenuation includes resonant absorbers positioned within the waveguide to absorb spurious out-of-band RF energy. The absorbers have a negligible effect on energy at the selected frequency used to excite plasma in the lamp. In a first embodiment, one or more thin slabs of lossy magnetic material are affixed to the sidewalls of the waveguide at approximately one quarter wavelength of the spurious signal from an end wall of the waveguide. The positioning of the lossy material optimizes absorption of power from the spurious signal. In a second embodiment, one or more thin slabs of lossy magnetic material are used in conjunction with band rejection waveguide filter elements. In a third embodiment, one or more microstrip filter elements are tuned to the frequency of the spurious signal and positioned within the waveguide to couple and absorb the spurious signal's energy. All three embodiments absorb negligible energy at the selected frequency and so do not significantly diminish the energy efficiency of the lamp.

  19. Method and apparatus for powering an electrodeless lamp with reduced radio frequency interference

    DOEpatents

    Simpson, J.E.

    1999-06-08

    An electrodeless lamp waveguide structure includes tuned absorbers for spurious RF signals. A lamp waveguide with an integral frequency selective attenuation includes resonant absorbers positioned within the waveguide to absorb spurious out-of-band RF energy. The absorbers have a negligible effect on energy at the selected frequency used to excite plasma in the lamp. In a first embodiment, one or more thin slabs of lossy magnetic material are affixed to the sidewalls of the waveguide at approximately one quarter wavelength of the spurious signal from an end wall of the waveguide. The positioning of the lossy material optimizes absorption of power from the spurious signal. In a second embodiment, one or more thin slabs of lossy magnetic material are used in conjunction with band rejection waveguide filter elements. In a third embodiment, one or more microstrip filter elements are tuned to the frequency of the spurious signal and positioned within the waveguide to couple and absorb the spurious signal's energy. All three embodiments absorb negligible energy at the selected frequency and so do not significantly diminish the energy efficiency of the lamp. 18 figs.

  20. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    NASA Astrophysics Data System (ADS)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision: those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
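
    The underlying mechanism, trimming mantissa bits that encode only false precision so that a lossless back end compresses better, can be sketched as below. The bit budget and zlib back end are assumptions; note that simply clearing bits, as here, truncates toward zero, whereas Bit Grooming alternates clearing and setting bits to remain statistically unbiased.

    ```python
    # Clear trailing float64 mantissa bits, then compare lossless (zlib) compressibility.
    import zlib
    import numpy as np

    def trim_precision(values, keep_mantissa_bits):
        """Clear the trailing (52 - keep_mantissa_bits) mantissa bits of float64 values."""
        bits = np.asarray(values, dtype=np.float64).view(np.uint64)
        drop = 52 - keep_mantissa_bits
        mask = np.uint64(((1 << 64) - 1) ^ ((1 << drop) - 1))   # keep sign, exponent, leading mantissa
        return (bits & mask).view(np.float64)

    rng = np.random.default_rng(5)
    field = rng.normal(288.0, 5.0, size=1_000_000)               # e.g. a temperature field in kelvin

    # ~3 significant decimal digits need roughly ceil(3 * log2(10)) = 10 mantissa bits
    trimmed = trim_precision(field, keep_mantissa_bits=10)
    print(f"max relative error: {np.abs((trimmed - field) / field).max():.2e}")
    ratio = len(zlib.compress(trimmed.tobytes())) / len(zlib.compress(field.tobytes()))
    print(f"zlib-compressed size, trimmed vs. original: {ratio:.2f}")
    ```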

  1. Receiver-Assisted Congestion Control to Achieve High Throughput in Lossy Wireless Networks

    NASA Astrophysics Data System (ADS)

    Shi, Kai; Shu, Yantai; Yang, Oliver; Luo, Jiarong

    2010-04-01

    Many applications nowadays require fast data transfer in high-speed wireless networks. However, due to its conservative congestion control algorithm, the Transmission Control Protocol (TCP) cannot effectively utilize the network capacity in lossy wireless networks. In this paper, we propose a receiver-assisted congestion control mechanism (RACC) in which the sender performs loss-based control, while the receiver performs delay-based control. The receiver measures the network bandwidth based on the packet interarrival interval and uses it to compute a congestion window size deemed appropriate for the sender. After receiving the advertised value feedback from the receiver, the sender then uses the additive increase and multiplicative decrease (AIMD) mechanism to compute the correct congestion window size to be used. By integrating the loss-based and the delay-based congestion controls, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of network bandwidth. Simulation and experiment results in various scenarios show that our mechanism can outperform conventional TCP in high-speed and lossy wireless environments.
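
    A compact sketch of the sender/receiver split described above: the receiver converts a packet inter-arrival measurement into an advertised window (roughly the bandwidth-delay product), and the sender applies AIMD but halves its window only on losses attributed to congestion, never exceeding the advertised cap. All parameter values and the loss-classification input are illustrative assumptions, not the RACC protocol details.

    ```python
    # Toy receiver-assisted AIMD congestion window update.
    def receiver_advertised_window(packet_size_bytes, interarrival_s, rtt_s):
        bandwidth_bps = 8 * packet_size_bytes / interarrival_s
        # bandwidth-delay product expressed in packets
        return max(1, int(bandwidth_bps * rtt_s / (8 * packet_size_bytes)))

    def sender_update(cwnd, advertised, loss, congestion_loss):
        if loss and congestion_loss:
            cwnd = max(1, cwnd // 2)        # multiplicative decrease only for congestion losses
        elif not loss:
            cwnd += 1                       # additive increase per RTT
        # wireless (non-congestion) losses leave cwnd unchanged
        return min(cwnd, advertised)        # never exceed the receiver's advertised cap

    adv = receiver_advertised_window(packet_size_bytes=1500, interarrival_s=0.002, rtt_s=0.05)
    cwnd = 10
    events = [(False, False)] * 5 + [(True, False)] + [(True, True)]
    for rtt, (loss, congestion) in enumerate(events):
        cwnd = sender_update(cwnd, adv, loss, congestion)
        print(f"RTT {rtt}: cwnd = {cwnd} (advertised cap = {adv})")
    ```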

  2. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) has been the core component of both military and civil navigation systems. Before the INS is put into application, it is supposed to be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration for the mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theories of error excitation and separation in detail and presents a new systematic calibration method for the ring laser gyroscope inertial navigation system. Error models and equations of the calibrated inertial measurement unit are given. Proper rotation arrangement orders are then depicted in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments have been set up to compare the systematic errors calculated from the filtering calibration results with those obtained from the discrete calibration results. The largest position and velocity errors of the filtering calibration are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  3. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  4. Systematic Error Modeling and Bias Estimation

    PubMed Central

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
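
    A hedged sketch of bias estimation by weighted nonlinear least squares is given below: constant range and bearing biases are recovered from measurements of landmarks at known positions, with residuals weighted by the measurement noise. The geometry, noise levels, and use of scipy.optimize.least_squares are assumptions for illustration, not the models proposed in the paper.

    ```python
    # Estimate constant range and bearing biases by weighted nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(6)
    landmarks = rng.uniform(-100, 100, size=(30, 2))          # known (x, y) positions in m
    true_bias = np.array([1.5, np.deg2rad(0.8)])              # range bias [m], bearing bias [rad]
    sigma = np.array([0.2, np.deg2rad(0.1)])                  # measurement noise std

    rng_true = np.hypot(landmarks[:, 0], landmarks[:, 1])
    brg_true = np.arctan2(landmarks[:, 1], landmarks[:, 0])
    meas = np.column_stack([rng_true + true_bias[0] + rng.normal(0, sigma[0], len(landmarks)),
                            brg_true + true_bias[1] + rng.normal(0, sigma[1], len(landmarks))])

    def residuals(bias):
        r = (meas[:, 0] - bias[0] - rng_true) / sigma[0]      # weighted range residuals
        b = (meas[:, 1] - bias[1] - brg_true) / sigma[1]      # weighted bearing residuals
        return np.concatenate([r, b])

    fit = least_squares(residuals, x0=np.zeros(2))
    print(f"estimated range bias:   {fit.x[0]:+.3f} m   (true {true_bias[0]:+.3f})")
    print(f"estimated bearing bias: {np.rad2deg(fit.x[1]):+.3f} deg (true {np.rad2deg(true_bias[1]):+.3f})")
    ```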

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5σ to 1.5σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: Dose error dependence for PE was quantitatively and systematically characterized and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility specific spot position error thresholds.
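
    A one-dimensional toy model reproduces the qualitative trend reported above: a nominally uniform field is built from Gaussian spots at spacing SS, one spot is displaced by a 1.2 mm position error, and the resulting hot/cold pair is summarized as a percent dose error that grows as SS approaches and exceeds 1σ. The geometry and normalization are illustrative assumptions, not the treatment-planning calculation.

    ```python
    # 1-D sketch: percent dose error from shifting a single Gaussian spot, vs. spot spacing.
    import numpy as np

    def dose_profile(x, spot_centers, sigma):
        return sum(np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in spot_centers)

    sigma = 4.0                                    # spot size (1 sigma) in mm
    x = np.linspace(-40, 40, 2001)
    for ss_factor in (0.75, 1.0, 1.25, 1.5):
        ss = ss_factor * sigma
        centers = np.arange(-30, 30 + ss, ss)
        nominal = dose_profile(x, centers, sigma)
        shifted = centers.copy()
        shifted[len(shifted) // 2] += 1.2          # 1.2 mm position error on one spot
        perturbed = dose_profile(x, shifted, sigma)
        pde = 100 * np.max(np.abs(perturbed - nominal)) / nominal.max()
        print(f"SS = {ss_factor:.2f} sigma: max percent dose error = {pde:.2f}%")
    ```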

  6. Quantum optics of lossy asymmetric beam splitters.

    PubMed

    Uppu, Ravitej; Wolterink, Tom A W; Tentrup, Tristan B H; Pinkse, Pepijn W H

    2016-07-25

    We theoretically investigate quantum interference of two single photons at a lossy asymmetric beam splitter, the most general passive 2×2 optical circuit. The losses in the circuit result in a non-unitary scattering matrix with a non-trivial set of constraints on the elements of the scattering matrix. Our analysis using the noise operator formalism shows that the loss allows tunability of quantum interference to an extent not possible with a lossless beam splitter. Our theoretical studies support the experimental demonstrations of programmable quantum interference in highly multimodal systems such as opaque scattering media and multimode fibers.

  7. Theory and Circuit Model for Lossy Coaxial Transmission Line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genoni, T. C.; Anderson, C. N.; Clark, R. E.

    2017-04-01

    The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.
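
    For orientation, the classic first-order skin-effect and dielectric-loss approximations for a coaxial line are sketched below. These are the standard textbook formulas, not the new analytic formulas or circuit model derived in the report, and the cable dimensions are assumptions.

    ```python
    # Textbook first-order impedance and attenuation of a lossy coaxial line.
    import numpy as np

    def coax_params(a_m, b_m, eps_r, tan_delta, sigma_cond, f_hz):
        mu0, eps0, c = 4e-7 * np.pi, 8.854e-12, 2.998e8
        z0 = np.sqrt(mu0 / (eps0 * eps_r)) / (2 * np.pi) * np.log(b_m / a_m)    # line impedance
        rs = np.sqrt(np.pi * f_hz * mu0 / sigma_cond)                           # surface resistance
        r_per_m = rs / (2 * np.pi) * (1 / a_m + 1 / b_m)                        # series resistance per m
        alpha_c = r_per_m / (2 * z0)                                            # conductor loss, Np/m
        alpha_d = np.pi * f_hz * np.sqrt(eps_r) * tan_delta / c                 # dielectric loss, Np/m
        return z0, 8.686 * (alpha_c + alpha_d)                                  # impedance, total dB/m

    for f in (1e6, 1e9, 1e10, 1e11):
        z0, att = coax_params(a_m=0.5e-3, b_m=1.75e-3, eps_r=2.1, tan_delta=2e-4,
                              sigma_cond=5.8e7, f_hz=f)
        print(f"f = {f:8.1e} Hz: Z0 = {z0:5.1f} ohm, attenuation = {att:8.3f} dB/m")
    ```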

  8. Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms

    NASA Astrophysics Data System (ADS)

    Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.

    2017-08-01

    Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of this kind of measurement have been historically underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to get rid of these systematic errors and in the process improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms, and the quantification of an upper limit below which meridional flow measurements cannot be trusted as a function of latitude.
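
    The measurement step whose biases are being corrected, cross-correlation tracking with sub-pixel peak interpolation, can be sketched on synthetic data as below. The parabolic interpolation and test signal are illustrative assumptions, not the HMI processing pipeline.

    ```python
    # Estimate a sub-pixel shift between two 1-D strips via cross-correlation with
    # parabolic peak interpolation; synthetic data only.
    import numpy as np

    def measure_shift(strip_a, strip_b):
        """Estimate the shift of strip_b relative to strip_a, in pixels."""
        a = strip_a - strip_a.mean()
        b = strip_b - strip_b.mean()
        corr = np.correlate(b, a, mode="full")
        k = int(np.argmax(corr))
        # parabolic interpolation around the integer peak gives a sub-pixel estimate
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
        return (k - (len(a) - 1)) + frac

    rng = np.random.default_rng(7)
    x = np.arange(200.0)
    true_shift = 0.37                                  # pixels
    base = np.sin(2 * np.pi * x / 23) + 0.3 * rng.normal(size=x.size)
    shifted = np.interp(x - true_shift, x, base)       # shifted copy of the same pattern
    print(f"true shift {true_shift:.3f} px, measured {measure_shift(base, shifted):.3f} px")
    ```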

  9. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  10. Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere

    DTIC Science & Technology

    2013-01-01

    measurements include assessment of the time delays in electronic circuits and mechanical hardware (e.g., drivers and microphones) of a tomography array ... hardware and electronic circuits of the tomography array and errors in synchronization of the transmitted and recorded signals. For example, if ... coordinates can be as large as 30 cm. These errors are equivalent to systematic errors in the travel times of 0.9 ms. Third, loudspeakers which are used

  11. A study of selected radiation and propagation problems related to antennas and probes in magneto-ionic media

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Research consisted of computations toward the solution of the problem of the current distribution on a cylindrical antenna in a magnetoplasma. The case of an antenna parallel to the applied magnetic field was investigated. A systematic method of asymptotic expansion was found which simplifies the solution in the general case by giving the field of a dipole even at relatively short range. Some useful properties of the dispersion surfaces in a lossy medium have also been found. A laboratory experiment was directed toward evaluating nonlinear effects, such as those due to power level, bias voltage and electron heating. The problem of reflection and transmission of waves in an electron heated plasma was treated theoretically. The profile inversion problem has been pursued. Some results are very encouraging, however, the general question of stability of the solution remains unsolved.

  12. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.

  13. Fast and memory efficient text image compression with JBIG2.

    PubMed

    Ye, Yan; Cosman, Pamela

    2003-01-01

    In this paper, we investigate ways to reduce encoding time, memory consumption and substitution errors for text image compression with JBIG2. We first look at page striping where the encoder splits the input image into horizontal stripes and processes one stripe at a time. We propose dynamic dictionary updating procedures for page striping to reduce the bit rate penalty it incurs. Experiments show that splitting the image into two stripes can save 30% of encoding time and 40% of physical memory with a small coding loss of about 1.5%. Using more stripes brings further savings in time and memory but the return diminishes. We also propose an adaptive way to update the dictionary only when it has become out-of-date. The adaptive updating scheme can resolve the time versus bit rate tradeoff and the memory versus bit rate tradeoff well simultaneously. We then propose three speedup techniques for pattern matching, the most time-consuming encoding activity in JBIG2. When combined together, these speedup techniques can save up to 75% of the total encoding time with at most 1.7% of bit rate penalty. Finally, we look at improving reconstructed image quality for lossy compression. We propose enhanced prescreening and feature monitored shape unifying to significantly reduce substitution errors in the reconstructed images.

  14. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    NASA Astrophysics Data System (ADS)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atom interferometer gravimeters.
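
    For orientation, a hedged sketch of the underlying relations (standard atom-interferometry formulas with the commonly quoted ⁸⁷Rb clock-shift coefficient, not the paper's own derivation): the second-order Zeeman shift of the magnetically insensitive clock transition is

        \Delta\nu_Z = K B^2, \qquad K \approx 575.15\ \mathrm{Hz/G^2}\quad(^{87}\mathrm{Rb}),

    and the resulting gravity bias of a Mach-Zehnder interferometer with effective Raman wave vector k_eff and pulse separation T is

        \Delta g \approx \frac{\Delta\phi_Z}{k_{\mathrm{eff}}\,T^{2}},

    where Δφ_Z is the interferometer phase accumulated from Δν_Z(t) along the atomic trajectory, weighted by the interferometer sensitivity function.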

  15. Effects of vertical distribution of water vapor and temperature on total column water vapor retrieval error

    NASA Technical Reports Server (NTRS)

    Sun, Jielun

    1993-01-01

    Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.

  16. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  17. Celiac disease biodetection using lossy-mode resonances generated in tapered single-mode optical fibers

    NASA Astrophysics Data System (ADS)

    Socorro, A. B.; Corres, J. M.; Del Villar, I.; Matias, I. R.; Arregui, F. J.

    2014-05-01

    This work presents the development and testing of an anti-gliadin antibody biosensor based on lossy mode resonances (LMRs) to detect celiac disease. Several polyelectrolytes were used in layer-by-layer assembly processes in order to generate the LMR and to fabricate a gliadin-embedded thin film. The LMR shifted 20 nm when immersed in a 5 ppm anti-gliadin antibodies-PBS solution, which makes this bioprobe suitable for detecting celiac disease. This is the first time, to our knowledge, that LMRs have been used to detect celiac disease, and these results suggest promising prospects for the use of such phenomena as biological detectors.

  18. Multiferroic properties in NdFeO3-PbTiO3 solid solutions

    NASA Astrophysics Data System (ADS)

    Kumar, Sunil; Pal, Jaswinder; Kaur, Shubhpreet; Agrawal, P.; Singh, Mandeep; Singh, Anupinder

    2018-05-01

    The x(NdFeO3)-(1-x)(PbTiO3) solid solution with x = 0.2 was prepared using the solid-state reaction route. The X-ray diffraction (XRD) data reveal single-phase formation. The microstructure shows grain growth with low porosity. Energy-dispersive analysis confirms the presence of the elements in stoichiometric proportion. The polarization vs. electric field loop established a ferroelectric-type behavior, albeit lossy in nature. This lossy nature may be due to the presence of a large leakage current in the solid solution. The magnetization vs. magnetic field plot exhibits an unsaturated hysteresis loop, indicating that the sample is not purely ferromagnetic.

  19. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.

  20. A Locally Modal B-Spline Based Full-Vector Finite-Element Method with PML for Nonlinear and Lossy Plasmonic Waveguide

    NASA Astrophysics Data System (ADS)

    Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan

    2016-09-01

    In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, wave propagation in various kinds of linear, nonlinear, lossless and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only more accurate and computationally efficient than conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.

  1. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of bit budget distribution between the lossy and the residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with an arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.
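
    A minimal sketch of the first approach's two-stage idea (coarse quantization stands in here for an arbitrary progressive lossy coder; all names are ours):

        import numpy as np

        # Two-stage coding: a lossy base layer plus a residual layer that
        # restores the exact samples at the decoder.
        def lossy_round_trip(frame, step=8):
            # Stand-in for any progressive lossy coder's encode/decode cycle.
            return (np.round(frame / step) * step).astype(frame.dtype)

        frame = np.random.randint(0, 256, (4, 4), dtype=np.int16)
        base = lossy_round_trip(frame)       # what the lossy layer reconstructs
        residual = frame - base              # entropy-code this as the second layer

        # Decoder side: lossless reconstruction = base layer + residual layer
        assert np.array_equal(base + residual, frame)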

  2. ChIPWig: a random access-enabling lossless and lossy compression method for ChIP-seq data.

    PubMed

    Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica

    2018-03-15

    Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary-statistics lookups, and is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original, and offered a 6-fold compression rate improvement compared to bigWig. The lossy feature further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. The compression and decompression speeds are of the order of 0.2 sec/MB using general-purpose computers. The source code and binaries, implemented in C++, are freely available for download at https://github.com/vidarmehr/ChIPWig-v2. Contact: milenkov@illinois.edu. Supplementary data are available at Bioinformatics online.

  3. A study for systematic errors of the GLA forecast model in tropical regions

    NASA Technical Reports Server (NTRS)

    Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin

    1988-01-01

    From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.

  4. Determination of the precision error of the pulmonary artery thermodilution catheter using an in vitro continuous flow test rig.

    PubMed

    Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M

    2011-01-01

    Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig, through which water circulated at different constant rates, with ports to insert catheters into a flow chamber, was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter-system (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for a single reading and ± 13.0% for triplicate readings, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
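
    The quoted totals follow from adding the two components in quadrature, with averaging reducing only the random part; a sketch of that arithmetic (our reading of the paper's numbers, using the catheter-averaged CVs):

        import numpy as np

        # Random and systematic components add in quadrature; averaging n
        # injections reduces only the random part. z ~ 1.96 for 95% limits.
        def precision_error(cv_random, cv_systematic, n_readings=1, z=1.96):
            random_pct = z * cv_random / np.sqrt(n_readings)
            systematic_pct = z * cv_systematic
            return np.hypot(random_pct, systematic_pct)

        print(precision_error(5.1, 5.9, 1))   # single reading   -> ~15.3%
        print(precision_error(5.1, 5.9, 3))   # triplicate mean  -> ~13.0%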

  5. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Wavefront-aberration measurement and systematic-error analysis of a high numerical-aperture objective

    NASA Astrophysics Data System (ADS)

    Liu, Zhixiang; Xing, Tingwen; Jiang, Yadong; Lv, Baobin

    2018-02-01

    A two-dimensional (2-D) shearing interferometer based on an amplitude chessboard grating was designed to measure the wavefront aberration of a high numerical-aperture (NA) objective. Chessboard gratings offer better diffraction efficiencies and fewer disturbing diffraction orders than traditional cross gratings. The wavefront aberration of the tested objective was retrieved from the shearing interferogram using the Fourier transform and differential Zernike polynomial-fitting methods. Grating manufacturing errors, including the duty-cycle and pattern-deviation errors, were analyzed with the Fourier transform method. Then, according to the relation between the spherical pupil and planar detector coordinates, the influence of the distortion of the pupil coordinates was simulated. Finally, the systematic error attributable to grating alignment errors was deduced through the geometrical ray-tracing method. Experimental results indicate that the measuring repeatability (3σ) of the wavefront aberration of an objective with NA 0.4 was 3.4 mλ. The systematic-error results were consistent with previous analyses. Thus, the correct wavefront aberration can be obtained after calibration.

  7. The Systematics of Strong Lens Modeling Quantified: The Effects of Constraint Selection and Redshift Information on Magnification, Mass, and Multiple Image Predictability

    NASA Astrophysics Data System (ADS)

    Johnson, Traci L.; Sharon, Keren

    2016-11-01

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Ke; Li Yanqiu; Wang Hai

    Characterization of the measurement accuracy of the phase-shifting point diffraction interferometer (PS/PDI) is usually performed by a two-pinhole null test. In this procedure, the geometrical coma and detector tilt astigmatism systematic errors are almost one to two orders of magnitude higher than the desired accuracy of the PS/PDI. These errors must be accurately removed from the null test result to achieve high accuracy. Published calibration methods, which can remove the geometrical coma error successfully, have some limitations in calibrating the astigmatism error. In this paper, we propose a method to simultaneously calibrate the geometrical coma and detector tilt astigmatism errors in the PS/PDI null test. Based on the measurement results obtained from two pinhole pairs in orthogonal directions, the method utilizes the orthogonal and rotational symmetry properties of Zernike polynomials over the unit circle to calculate the systematic errors introduced in the null test of the PS/PDI. An experiment using a PS/PDI operated at visible light was performed to verify the method. The results show that the method is effective in isolating the systematic errors of the PS/PDI, and the measurement accuracy of the calibrated PS/PDI is 0.0088λ rms (λ = 632.8 nm).

  9. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NASA Astrophysics Data System (ADS)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.

  10. Analyse des erreurs et grammaire generative: La syntaxe de l'interrogation en francais (Error Analysis and Generative Grammar: The Syntax of Interrogation in French).

    ERIC Educational Resources Information Center

    Py, Bernard

    A progress report is presented of a study which applies a system of generative grammar to error analysis. The objective of the study was to reconstruct the grammar of students' interlanguage, using a systematic analysis of errors. (Interlanguage refers to the linguistic competence of a student who possesses a relatively systematic body of rules,…

  11. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
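
    A minimal sketch of the bookkeeping such an evaluation relies on (an individuals control chart with ±3σ limits plus capability and acceptability ratios against fixed specification limits; data and limits below are hypothetical):

        import numpy as np

        # Control limits from the measured data, capability (Cp) and
        # acceptability (Cpk) against fixed specification limits.
        def control_chart(x, spec_lo, spec_hi):
            mean, sigma = np.mean(x), np.std(x, ddof=1)
            ucl, lcl = mean + 3 * sigma, mean - 3 * sigma      # control limits
            cp = (spec_hi - spec_lo) / (6 * sigma)             # process capability
            cpk = min(spec_hi - mean, mean - spec_lo) / (3 * sigma)
            return ucl, lcl, cp, cpk

        energy_ratio = np.random.normal(1.000, 0.004, 30)      # hypothetical QA data
        print(control_chart(energy_ratio, 0.98, 1.02))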

  12. Thirty Years of Improving the NCEP Global Forecast System

    NASA Astrophysics Data System (ADS)

    White, G. H.; Manikin, G.; Yang, F.

    2014-12-01

    Current eight day forecasts by the NCEP Global Forecast System are as accurate as five day forecasts 30 years ago. This revolution in weather forecasting reflects increases in computer power, improvements in the assimilation of observations, especially satellite data, improvements in model physics, improvements in observations and international cooperation and competition. One important component has been and is the diagnosis, evaluation and reduction of systematic errors. The effect of proposed improvements in the GFS on systematic errors is one component of the thorough testing of such improvements by the Global Climate and Weather Modeling Branch. Examples of reductions in systematic errors in zonal mean temperatures and winds and other fields will be presented. One challenge in evaluating systematic errors is uncertainty in what reality is. Model initial states can be regarded as the best overall depiction of the atmosphere, but can be misleading in areas of few observations or for fields not well observed such as humidity or precipitation over the oceans. Verification of model physics is particularly difficult. The Environmental Modeling Center emphasizes the evaluation of systematic biases against observations. Recently EMC has placed greater emphasis on synoptic evaluation and on precipitation, 2-meter temperatures and dew points and 10 meter winds. A weekly EMC map discussion reviews the performance of many models over the United States and has helped diagnose and alleviate significant systematic errors in the GFS, including a near surface summertime evening cold wet bias over the eastern US and a multi-week period when the GFS persistently developed bogus tropical storms off Central America. The GFS exhibits a wet bias for light rain and a dry bias for moderate to heavy rain over the continental United States. Significant changes to the GFS are scheduled to be implemented in the fall of 2014. These include higher resolution, improved physics and improvements to the assimilation. These changes significantly improve the tropospheric flow and reduce a tropical upper tropospheric warm bias. One important error remaining is the failure of the GFS to maintain deep convection over Indonesia and in the tropical west Pacific. This and other current systematic errors will be presented.

  13. Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2014-01-01

    This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.

  14. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.

    2006-01-01

    Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding suitable amount of rain re-evaporation or cumulus momentum transport. However, the reason(s) for these systematic errors and solutions has remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.

  15. Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Tian, Yudong

    2011-01-01

    Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance to global assimilation of satellite-based precipitation data. Insights gained from these results, and how they could help with GPM, will be highlighted.
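
    A minimal sketch of the decomposition idea under a rain/no-rain threshold (our own illustration of the scheme described; with negligible sub-threshold amounts the three components sum to the total bias):

        import numpy as np

        # Total bias = hit bias + missed precipitation (negative contribution)
        # + false precipitation (positive contribution).
        def decompose(sat, ref, thr=0.1):
            hit = (sat >= thr) & (ref >= thr)
            miss = (sat < thr) & (ref >= thr)
            fa = (sat >= thr) & (ref < thr)
            hit_bias = np.sum(sat[hit] - ref[hit])   # over/under-estimation on hits
            missed = -np.sum(ref[miss])              # rain the product failed to see
            false_rain = np.sum(sat[fa])             # rain reported where there was none
            return hit_bias, missed, false_rain

        sat = np.array([0.0, 1.2, 3.0, 0.5])
        ref = np.array([0.8, 0.0, 2.5, 0.6])
        print(decompose(sat, ref), np.sum(sat - ref))   # components sum to total bias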

  16. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  17. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    PubMed

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
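
    A minimal sketch of the detection step (a moving-average filter stands in for the paper's nonparametric smoother; array sizes and the flag threshold are illustrative):

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        # A dilution-type error moves all metabolites in a sample together, so
        # flag timepoints where the median percent deviation from each trend's
        # smooth fit, taken across all metabolites, exceeds a threshold.
        def flag_systematic(trends, window=5, threshold=2.5):
            smooth = uniform_filter1d(trends, size=window, axis=1, mode='nearest')
            pct_dev = 100.0 * (trends - smooth) / smooth
            median_dev = np.median(pct_dev, axis=0)       # across metabolites
            return np.abs(median_dev) > threshold         # flagged timepoints

        rng = np.random.default_rng(0)
        trends = 5.0 + 0.1 * rng.standard_normal((30, 20))  # 30 metabolites, 20 times
        trends[:, 7] *= 1.05                                # inject a 5% dilution error
        print(np.where(flag_systematic(trends))[0])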

  18. Flexible thin broadband microwave absorber based on a pyramidal periodic structure of lossy composite.

    PubMed

    Huang, Yixing; Yuan, Xujin; Wang, Changxian; Chen, Mingji; Tang, Liqun; Fang, Daining

    2018-06-15

    A microwave absorber combining broadband absorption with small thickness is one of the main research interests in this field. A flexible, ultrathin, broadband microwave absorber comprising multiwall carbon nanotubes, spherical carbonyl iron, and silicone rubber is fabricated in a newly proposed pyramidal spatial periodic structure (SPS). The SPS, with an equivalent thickness of 3.73 mm, covers the -10 dB and -15 dB absorption bandwidths in the frequency ranges 2-40 GHz and 10-40 GHz, respectively. The excellent absorption performance is achieved by concentration and dissipation of the electromagnetic field inside different parts of the magnetic-dielectric lossy protrusions in different frequency ranges.

  19. Loss compensation symmetry in dimers made of gain and lossy nanoparticles

    NASA Astrophysics Data System (ADS)

    Klimov, V. V.; Zabkov, I. V.; Guzatov, D. V.; Vinogradov, A. P.

    2018-03-01

    The eigenmodes of a two-dimensional dimer made of gain and lossy nanoparticles have been investigated within an exact analytical approach. It has been shown that there are eigenmodes for which all Joule losses are exactly compensated by the gain. Among such solutions there are solutions with a new type of symmetry, which we refer to as loss compensation symmetry, as well as well-known parity-time (PT) symmetric solutions. Unlike the PT symmetric ones, the modes with loss compensation symmetry allow one to achieve full loss compensation with significantly less gain than in the case of PT symmetry. This effect paves the way to new loss compensation methods in optics.

  20. SU-E-J-87: Building Deformation Error Histogram and Quality Assurance of Deformable Image Registration.

    PubMed

    Park, S B; Kim, H; Yao, M; Ellis, R; Machtay, M; Sohn, J W

    2012-06-01

    To quantify the systematic error of a Deformable Image Registration (DIR) system and establish a Quality Assurance (QA) procedure. To address the shortfall of the landmark approach, which is only available at significant visible feature points, we adopted a Deformation Vector Map (DVM) comparison approach. We used two CT image sets (R and T image sets) taken of the same patient at different times and generated a DVM, which includes the DIR systematic error. The DVM was calculated using fine-tuned B-Spline DIR and an L-BFGS optimizer. Utilizing this DVM, we generated an R' image set in which the systematic error in the DVM is eliminated. We thus have a truth data set, the R' and T image sets, and the truth DVM. To test a DIR system, we feed the R' and T image sets into it and compare the test DVM to the truth DVM. If there is no systematic error, they should be identical. We built a Deformation Error Histogram (DEH) for quantitative analysis. The test registration was performed with an in-house B-Spline DIR system using a stochastic gradient descent optimizer. Our example data set was generated from a head and neck patient case. We also tested CT-to-CBCT deformable registration. We found that skin regions which interface with the air have relatively larger errors. Mobile joints such as the shoulders also had larger errors. Average errors for the ROIs were as follows; CTV: 0.4 mm, brain stem: 1.4 mm, shoulders: 1.6 mm, and normal tissues: 0.7 mm. We succeeded in building the DEH approach to quantify the DVM uncertainty. Our data sets are available on our web page for testing other systems. Utilizing the DEH, users can decide how much systematic error they will accept. The DEH and our data can be a tool for an AAPM task group to compose a DIR system QA guideline. This project is partially supported by the Agency for Healthcare Research and Quality (AHRQ) grant 1R18HS017424-01A2. © 2012 American Association of Physicists in Medicine.
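
    A minimal sketch of building a DEH once a truth DVM and a test DVM are in hand (our own illustration; array shapes and noise level are arbitrary):

        import numpy as np

        # Histogram the per-voxel vector error magnitudes between the DVM
        # returned by the system under test and the truth DVM (ideally all zero).
        def deformation_error_histogram(dvm_test, dvm_truth, bins=50):
            err = np.linalg.norm(dvm_test - dvm_truth, axis=-1)   # mm, per voxel
            hist, edges = np.histogram(err, bins=bins)
            return hist, edges, err.mean(), np.percentile(err, 95)

        dvm_truth = np.zeros((64, 64, 32, 3))                     # toy truth DVM
        dvm_test = dvm_truth + np.random.normal(0, 0.4, dvm_truth.shape)
        print(deformation_error_histogram(dvm_test, dvm_truth)[2:])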

  1. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

  2. Global Warming Estimation from MSU

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, Robert; Yoo, Jung-Moon

    1998-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz) from sequential, sun-synchronous, polar-orbiting NOAA satellites contain small systematic errors. Some of these errors are time-dependent and some are time-independent. Small errors in Ch 2 data of successive satellites arise from calibration differences. Also, successive NOAA satellites tend to have different Local Equatorial Crossing Times (LECT), which introduce differences in Ch 2 data due to the diurnal cycle. These two sources of systematic error are largely time-independent. However, because of atmospheric drag, there can be a drift in the LECT of a given satellite, which introduces time-dependent systematic errors. One of these errors is due to the progressive change in the diurnal cycle and the other is due to associated changes in instrument heating by the sun. In order to infer the global temperature trend from these MSU data, we have eliminated the time-independent systematic errors explicitly. The two time-dependent errors cannot be assessed separately for each satellite; for this reason, their cumulative effect on the global temperature trend is evaluated implicitly. Christy et al. (1998) (CSL), based on their method of analysis of the MSU Ch 2 data, infer a global temperature cooling trend (-0.046 K per decade) from 1979 to 1997, although their near-nadir measurements yield a near-zero trend (0.003 K/decade). Utilising an independent method of analysis, we infer that global temperature warmed by 0.12 ± 0.06 °C per decade from the observations of MSU Ch 2 during the period 1980 to 1997.

  3. Archie's law - a reappraisal

    NASA Astrophysics Data System (ADS)

    Glover, Paul W. J.

    2016-07-01

    When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
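
    For reference, the two forms at issue, written for formation factor F (the ratio of saturated-rock resistivity R_0 to pore-fluid resistivity R_w), porosity φ, and cementation exponent m:

        F = \frac{R_0}{R_w} = \phi^{-m} \quad\text{(Archie, original; } a \equiv 1\text{)},
        \qquad
        F = a\,\phi^{-m} \quad\text{(Winsauer et al., 1952)}.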

  4. [Errors in Peruvian medical journals references].

    PubMed

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion of this theme. Keywords: references, periodicals, research, bibliometrics.

  5. Capacity of optical communications over a lossy bosonic channel with a receiver employing the most general coherent electro-optic feedback control

    NASA Astrophysics Data System (ADS)

    Chung, Hye Won; Guha, Saikat; Zheng, Lizhong

    2017-07-01

    We study the problem of designing optical receivers to discriminate between multiple coherent states using coherent processing receivers—i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection—which was shown by Dolinar to achieve the minimum error probability in discriminating any two coherent states. We first derive and reinterpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate M coherent states, each of which could now be a codeword, i.e., a sequence of N coherent states, each drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon number regime and compare it with the capacity achievable with direct detection and the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We show compelling evidence that despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (either in error probability or mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely long codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and direct detection, with no feedback within the receiver.
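
    For reference, the binary benchmark that Dolinar's receiver attains exactly is the Helstrom bound; a minimal sketch for equal-prior binary phase-shift keyed coherent states (standard textbook relations, not taken from the paper):

        import numpy as np

        # Helstrom (minimum) error probability for discriminating |alpha> from
        # |-alpha> with equal priors: P_e = (1 - sqrt(1 - |<a|b>|^2)) / 2,
        # where |<alpha|-alpha>|^2 = exp(-|2 alpha|^2).
        def helstrom_bpsk(alpha):
            overlap_sq = np.exp(-abs(2 * alpha) ** 2)
            return 0.5 * (1.0 - np.sqrt(1.0 - overlap_sq))

        for n_bar in (0.1, 0.5, 1.0):            # mean photon number |alpha|^2
            print(n_bar, helstrom_bpsk(np.sqrt(n_bar)))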

  6. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    NASA Astrophysics Data System (ADS)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels covering the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in the detection and estimation of rainfall rate. The residuals are decomposed into hit, miss and false-alarm (FA) estimation biases, while systematic and random error components are also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability on PERSIANN estimations", is introduced, and the behavior of existing categorical/statistical measures and error components is also analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, it is observed that as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably for heavier rainfalls. It is also important to note that PERSIANN's error characteristics vary with season, owing to the conditions and rainfall patterns of each season, which shows the necessity of a seasonally distinct approach to the calibration of this product. Overall, we believe that the different error-component analyses performed in this study can substantially help further local studies on post-calibration and bias reduction of PERSIANN estimations.

  7. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    PubMed

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
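
    A minimal numerical sketch of the modeled deviation (our own illustration; bandpass, source spectrum, and extinction slope are hypothetical): the measured absorbance is the negative log of the band-averaged transmittance, so a sloped ε(λ) bends the Beer-Lambert line as concentration grows.

        import numpy as np

        # Apparent absorbance for a finite bandpass: average the transmitted
        # spectrum over the band before taking the logarithm.
        def apparent_absorbance(c, lam, I0, eps, path=1.0):
            T = I0 * 10.0 ** (-eps * c * path)             # transmitted spectrum
            return -np.log10(np.trapz(T, lam) / np.trapz(I0, lam))

        lam = np.linspace(249, 251, 101)                   # 2 nm bandpass
        I0 = np.ones_like(lam)                             # flat source over the band
        eps = 1.0e4 + 2.0e3 * (lam - 250.0)                # sloped extinction coeff.
        for c in (1e-5, 5e-5, 1e-4):
            print(c, apparent_absorbance(c, lam, I0, eps),
                  eps.mean() * c)                          # vs ideal Beer-Lambert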

  8. A service evaluation of on-line image-guided radiotherapy to lower extremity sarcoma: Investigating the workload implications of a 3 mm action level for image assessment and correction prior to delivery.

    PubMed

    Taylor, C; Parker, J; Stratford, J; Warren, M

    2018-05-01

    Although all systematic and random positional setup errors can be corrected for in entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. The following service evaluation aimed to investigate the use of this 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individual moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correctional data derived from cone beam CT analysis was retrieved. The timing, frequency and magnitude of corrections were evaluated. The population systematic and random error was derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one observed translational error greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm, 0.14 cm and the random error was 0.27 cm, 0.22 cm, 0.23 cm in the lateral, caudocranial and anteroposterior directions. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients that exhibited greater positional variability. This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
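
    The quoted margins are reproduced (to rounding) by the widely used van Herk recipe 2.5Σ + 0.7σ, which therefore appears to be the formula applied; a quick check:

        import numpy as np

        # van Herk margin recipe: margin = 2.5 * Sigma + 0.7 * sigma, with
        # Sigma the population systematic error and sigma the random error.
        def ptv_margin(Sigma, sigma):
            return 2.5 * np.asarray(Sigma) + 0.7 * np.asarray(sigma)

        Sigma = [0.14, 0.10, 0.14]   # cm: lateral, caudocranial, anteroposterior
        sigma = [0.27, 0.22, 0.23]
        print(ptv_margin(Sigma, sigma))   # -> ~[0.55, 0.40, 0.51] cm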

  9. Impact of JPEG2000 compression on endmember extraction and unmixing of remotely sensed hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada

    2010-07-01

    Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by the different methods is also assessed. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
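
    For readers unfamiliar with the unmixing step being evaluated: once endmembers are extracted, per-pixel fractional abundances are commonly estimated by constrained least squares on the linear mixing model x ≈ E a. A minimal nonnegativity-constrained sketch with synthetic data (an illustration of the model, not the paper's pipeline):

        import numpy as np
        from scipy.optimize import nnls

        bands, n_end = 50, 3
        rng = np.random.default_rng(1)
        E = rng.uniform(0.1, 0.9, size=(bands, n_end))   # endmember signatures (columns)
        a_true = np.array([0.6, 0.3, 0.1])               # true fractional abundances
        x = E @ a_true + rng.normal(0, 0.005, bands)     # observed mixed-pixel spectrum

        a_hat, _ = nnls(E, x)            # nonnegative least-squares abundance estimate
        a_hat /= a_hat.sum()             # optional sum-to-one renormalization
        print(a_hat)                     # close to a_true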

  10. Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.

    PubMed

    Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J

    2012-01-01

    The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) in canine nasal tumor patients immobilized with a mattress and bite block versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, and 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.

  11. Dynamically correcting two-qubit gates against any systematic logical error

    NASA Astrophysics Data System (ADS)

    Calderon Vargas, Fernando Antonio

    The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.

  12. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  13. Internal robustness: systematic search for systematic bias in SN Ia data

    NASA Astrophysics Data System (ADS)

    Amendola, Luca; Marra, Valerio; Quartin, Miguel

    2013-04-01

    A great deal of effort is currently being devoted to understanding, estimating and removing systematic errors in cosmological data. In the particular case of Type Ia supernovae, systematics are starting to dominate the error budget. Here we propose a Bayesian tool for carrying out a systematic search for systematic contamination. This serves as an extension to the standard goodness-of-fit tests and allows one not only to cross-check raw or processed data for the presence of systematics but also to pinpoint the data that are most likely contaminated. We successfully test our tool with mock catalogues and conclude that the Union2.1 data do not possess a significant amount of systematics. Finally, we show that if one includes in Union2.1 the supernovae that originally failed the quality cuts, our tool signals the presence of systematics at over 3.8σ confidence level.

  14. High-Accuracy Decoupling Estimation of the Systematic Coordinate Errors of an INS and Intensified High Dynamic Star Tracker Based on the Constrained Least Squares Method

    PubMed Central

    Jiang, Jie; Yu, Wenbo; Zhang, Guangjun

    2017-01-01

    Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. To address this, a high-accuracy method for decoupling and estimating the above systematic coordinate errors, based on the constrained least squares (CLS) method, is proposed in this paper. The reference frame of the IHDST is first converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate the errors accurately. After error compensation, the attitude accuracy of an INS can be accurately assessed based on the IHDST. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
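
    A generic equality-constrained least-squares step of the kind such methods build on can be written as a KKT system. The sketch below solves min ||Ax - b|| subject to Cx = d with illustrative matrices; the paper's actual error model is more specific.

        import numpy as np

        def constrained_lstsq(A, b, C, d):
            """Solve min ||A x - b||_2 subject to C x = d via the KKT system."""
            n, m = A.shape[1], C.shape[0]
            K = np.block([[A.T @ A, C.T],
                          [C, np.zeros((m, m))]])
            rhs = np.concatenate([A.T @ b, d])
            sol = np.linalg.solve(K, rhs)
            return sol[:n]                      # x; sol[n:] are Lagrange multipliers

        rng = np.random.default_rng(2)
        A, b = rng.normal(size=(20, 4)), rng.normal(size=20)
        C, d = np.array([[1.0, 1.0, 0.0, 0.0]]), np.array([1.0])  # enforce x0 + x1 = 1
        x = constrained_lstsq(A, b, C, d)
        print(x, C @ x)                         # constraint satisfied to machine precision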

  15. Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects

    PubMed Central

    Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro

    2011-01-01

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target locations to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649

  16. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  17. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  18. WE-H-BRC-08: Examining Credentialing Criteria and Poor Performance Indicators for IROC Houston’s Anthropomorphic Head and Neck Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carson, M; Molineu, A; Taylor, P

    Purpose: To analyze the most recent results of IROC Houston's anthropomorphic H&N phantom to determine the nature of failing irradiations and the feasibility of altering pass/fail credentialing criteria. Methods: IROC Houston's H&N phantom, used for IMRT credentialing for NCI-sponsored clinical trials, requires that an institution's treatment plan must agree with measurement within 7% (TLD doses) and ≥85% of pixels must pass 7%/4 mm gamma analysis. 156 phantom irradiations (November 2014 – October 2015) were re-evaluated using tighter criteria: 1) 5% TLD and 5%/4 mm, 2) 5% TLD and 5%/3 mm, 3) 4% TLD and 4%/4 mm, and 4) 3% TLD and 3%/3 mm. Failure/poor performance rates were evaluated with respect to individual film and TLD performance by location in the phantom. Overall poor phantom results were characterized qualitatively as systematic (dosimetric) errors, setup errors/positional shifts, global but non-systematic errors, and errors affecting only a local region. Results: The pass rate for these phantoms using current criteria is 90%. Substituting criteria 1-4 reduces the overall pass rate to 77%, 70%, 63%, and 37%, respectively. Statistical analyses indicated the probability of noise-induced TLD failure at the 5% criterion was <0.5%. Using criteria 1, TLD results were most often the cause of failure (86% failed TLD while 61% failed film), with most failures identified in the primary PTV (77% of cases). Other criteria posed similar results. Irradiations that failed from film only were overwhelmingly associated with phantom shifts/setup errors (≥80% of cases). Results failing criteria 1 were primarily diagnosed as systematic: 58% of cases. 11% were setup/positioning errors, 8% were global non-systematic errors, and 22% were local errors. Conclusion: This study demonstrates that 5% TLD and 5%/4 mm gamma criteria may be both practically and theoretically achievable. Further work is necessary to diagnose and resolve dosimetric inaccuracy in these trials, particularly for systematic dose errors. This work is funded by NCI Grant CA180803.

  19. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  20. Variation across mitochondrial gene trees provides evidence for systematic error: How much gene tree variation is biological?

    PubMed

    Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C

    2018-02-19

    The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.

  1. The accuracy of the measurements in Ulugh Beg's star catalogue

    NASA Astrophysics Data System (ADS)

    Krisciunas, K.

    1992-12-01

    The star catalogue compiled by Ulugh Beg and his collaborators in Samarkand (ca. 1437) is the only catalogue primarily based on original observations between the times of Ptolemy and Tycho Brahe. Evans (1987) has given convincing evidence that Ulugh Beg's star catalogue was based on measurements made with a zodiacal armillary sphere graduated to 15′, with interpolation to 0.2 units. He and Shevchenko (1990) were primarily interested in the systematic errors in ecliptic longitude. Shevchenko's analysis of the random errors was limited to the twelve zodiacal constellations. We have analyzed all 843 ecliptic longitudes and latitudes attributed to Ulugh Beg by Knobel (1917). This required multiplying all the longitude errors by the respective values of the cosine of the celestial latitudes. We find a random error of ±17.7′ for ecliptic longitude and ±16.5′ for ecliptic latitude. On the whole, the random errors are largest near the ecliptic, decreasing towards the ecliptic poles. For all of Ulugh Beg's measurements (excluding outliers) the mean systematic error is −10.8′ ± 0.8′ for ecliptic longitude and 7.5′ ± 0.7′ for ecliptic latitude, with the errors in the sense "computed minus Ulugh Beg". For the brighter stars (those designated alpha, beta, and gamma in the respective constellations), the mean systematic errors are −11.3′ ± 1.9′ for ecliptic longitude and 9.4′ ± 1.5′ for ecliptic latitude. Within the errors this matches the systematic error in both coordinates for alpha Vir. With greater confidence we may conclude that alpha Vir was the principal reference star in the catalogues of Ulugh Beg and Ptolemy. References: Evans, J. 1987, J. Hist. Astr. 18, 155. Knobel, E. B. 1917, Ulugh Beg's Catalogue of Stars, Washington, D.C.: Carnegie Institution. Shevchenko, M. 1990, J. Hist. Astr. 21, 187.
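
    The cos-latitude weighting mentioned above converts a raw longitude difference into a true angular offset on the sky; a small illustration with made-up values:

        import numpy as np

        # Longitude errors (arcmin) must be scaled by cos(latitude) to become
        # true angular offsets on the sky; the values here are illustrative.
        dlon = np.array([12.0, -20.0, 5.0])          # raw longitude errors, arcmin
        lat = np.radians([0.0, 45.0, 80.0])          # celestial latitudes
        print(dlon * np.cos(lat))                    # effective errors shrink toward the poles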

  2. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    NASA Astrophysics Data System (ADS)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  3. 13Check_RNA: A tool to evaluate 13C chemical shifts assignments of RNA.

    PubMed

    Icazatti, A A; Martin, O A; Villegas, M; Szleifer, I; Vila, J A

    2018-06-19

    Chemical shifts (CS) are an important source of structural information for macromolecules such as RNA. In addition to the scarce availability of CS for RNA, the observed values are prone to errors due to wrong re-calibration or misassignments. Different groups have dedicated their efforts to correcting systematic CS errors in RNA. Despite this, there are no automated, freely available algorithms to check the assignments of RNA 13C CS before their deposition to the BMRB, or to re-reference already deposited CS with systematic errors. Based on an existing method, we have implemented an open-source Python module to correct systematic errors in the 13C CS (from here on 13Cexp) of RNAs and return the results in 3 formats, including the NMR-STAR one. This software is available on GitHub at https://github.com/BIOS-IMASL/13Check_RNA under a MIT license. Supplementary data are available at Bioinformatics online.

  4. The Gnomon Experiment

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin

    2007-12-01

    A gnomon, or vertical pointed stick, can be used to determine the north-south direction at a site, as well as one's latitude. If one has accurate time and knows one's time zone, it is also possible to determine one's longitude. From observations on the first day of winter and the first day of summer one can determine the obliquity of the ecliptic. Since we can obtain accurate geographical coordinates from Google Earth or a GPS device, analysis of a set of shadow-length measurements can be used by students to learn about astronomical coordinate systems, time systems, systematic errors, and random errors. Systematic latitude errors of student datasets are typically 30 nautical miles (0.5 degree) or more, but with care one can achieve systematic and random errors of less than 8 nautical miles. One of the advantages of this experiment is that it can be carried out during the day. Also, it is possible to determine whether a student has made up his data.
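
    A sketch of the underlying noon-shadow arithmetic (northern-hemisphere case with the sun crossing the meridian to the south; the solar declination must be supplied, e.g., from tables):

        import math

        def latitude_from_noon_shadow(gnomon_height, shadow_length, solar_declination_deg):
            """Latitude from the shortest (local-noon) shadow, assuming the sun
            transits to the south of the observer."""
            sun_altitude = math.degrees(math.atan2(gnomon_height, shadow_length))
            zenith_angle = 90.0 - sun_altitude
            return zenith_angle + solar_declination_deg   # latitude, degrees north

        # Example: 1 m stick, 0.8 m noon shadow, near an equinox (declination ~ 0)
        print(latitude_from_noon_shadow(1.0, 0.8, 0.0))   # ~38.7 deg N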

  5. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single catchment or a few catchments. A more important issue, namely how a model's response to input data errors changes as the catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of the mean monthly precipitation values was added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law with zero mean and a constant standard deviation taken as 5, 10, 15, 20, or 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other things, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to random error than to systematic error. Catchments with smaller runoff coefficients were more influenced by input data errors than were catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
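
    The two corruption schemes described above are straightforward to reproduce; a sketch with illustrative settings and made-up precipitation data:

        import numpy as np

        rng = np.random.default_rng(3)
        precip = rng.gamma(3.0, 20.0, size=(30, 12))      # 30 years x 12 months, mm (made up)

        # Systematic scenario: add a fixed fraction of the mean monthly value.
        bias_frac = 0.10
        systematic = precip + bias_frac * precip.mean(axis=0)

        # Random scenario: zero-mean Gaussian noise with standard deviation equal
        # to a fraction of the monthly standard deviation, independent between months.
        noise_frac = 0.15
        random_err = precip + rng.normal(0.0, noise_frac * precip.std(axis=0), precip.shape)

        print(systematic.mean() - precip.mean(),          # nonzero: a bias
              random_err.mean() - precip.mean())          # near zero: noise only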

  6. Detecting and overcoming systematic errors in genome-scale phylogenies.

    PubMed

    Rodríguez-Ezpeleta, Naiara; Brinkmann, Henner; Roure, Béatrice; Lartillot, Nicolas; Lang, B Franz; Philippe, Hervé

    2007-06-01

    Genome-scale data sets result in an enhanced resolution of the phylogenetic inference by reducing stochastic errors. However, there is also an increase of systematic errors due to model violations, which can lead to erroneous phylogenies. Here, we explore the impact of systematic errors on the resolution of the eukaryotic phylogeny using a data set of 143 nuclear-encoded proteins from 37 species. The initial observation was that, despite the impressive amount of data, some branches had no significant statistical support. To demonstrate that this lack of resolution is due to a mutual annihilation of phylogenetic and nonphylogenetic signals, we created a series of data sets with slightly different taxon sampling. As expected, these data sets yielded strongly supported but mutually exclusive trees, thus confirming the presence of conflicting phylogenetic and nonphylogenetic signals in the original data set. To decide on the correct tree, we applied several methods expected to reduce the impact of some kinds of systematic error. Briefly, we show that (i) removing fast-evolving positions, (ii) recoding amino acids into functional categories, and (iii) using a site-heterogeneous mixture model (CAT) are three effective means of increasing the ratio of phylogenetic to nonphylogenetic signal. Finally, our results allow us to formulate guidelines for detecting and overcoming phylogenetic artefacts in genome-scale phylogenetic analyses.

  7. Full Wave Analysis of RF Signal Attenuation in a Lossy Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2004-12-06

    We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as a straight and lossy random rough wall. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.

  8. Local systematic differences in 2MASS positions

    NASA Astrophysics Data System (ADS)

    Bustos Fierro, I. H.; Calderón, J. H.

    2018-01-01

    We have found that positions in the 2MASS All-sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ˜ 5 to ˜ 8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, the mentioned systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The rectified 2MASS catalog with the proposed method can be regarded as an extension of UCAC4 for astrometry with accuracy ˜ 90 mas in its positions, with negligible systematic errors. Also we show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from original 2MASS positions.

  9. SU-E-CAMPUS-J-05: Quantitative Investigation of Random and Systematic Uncertainties From Hardware and Software Components in the Frameless 6DBrainLAB ExacTrac System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keeling, V; Jin, H; Hossain, S

    2014-06-15

    Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D-BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system where optimized plans were generated using a stereotactic reference frame based on the localizer. The patients were set up initially with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally-reconstructed radiographs, based on bony anatomy matching, to calculate 6D translational and rotational (Lateral, Longitudinal, Vertical, Pitch, Roll, Yaw) shifts. XC combines systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D-ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4 mm; −2.5 to 2.5 degree), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degree) was extracted from the standard deviation of XC. Combined systematic errors of the frame and localizer (0.32, −0.42, −1.21 mm; −0.27, 0.34, 0.26 degree) were extracted from the mean of means of the XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm; 0.39, 0.35, 0.30 degree). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors from the different hardware and software components of the 6D-ExacTrac system. Patients were treated with cumulative errors (<1 mm, <1 degree) with XV image guidance.

  10. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious, and less attention is paid to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
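
    The mechanism highlighted here, zero-mean CT noise producing a nonzero RSP bias at an angular point of the calibration curve, can be demonstrated with a toy piecewise-linear curve; all numbers below are illustrative, not the paper's calibration.

        import numpy as np

        # Toy piecewise-linear HU-to-RSP calibration with a kink at HU = 0.
        def hu_to_rsp(hu):
            return np.where(hu < 0.0, 1.0 + 1.0e-3 * hu, 1.0 + 2.0e-3 * hu)

        rng = np.random.default_rng(4)
        hu_true = 0.0                               # voxel sitting at the angular point
        noise = rng.normal(0.0, 20.0, 1_000_000)    # zero-mean stochastic CT noise (HU)
        rsp_mean = hu_to_rsp(hu_true + noise).mean()

        print(hu_to_rsp(hu_true))   # 1.000 (noise-free RSP)
        print(rsp_mean)             # ~1.008: a systematic shift despite zero-mean noise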

  11. Medication errors in the Middle East countries: a systematic review of the literature.

    PubMed

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20 %) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1 % to 90.5 % for prescribing and from 9.4 % to 80 % for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15 % to 34.8 % of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality. Educational programmes on drug therapy for doctors and nurses are urgently needed.

  12. On-board error correction improves IR earth sensor accuracy

    NASA Astrophysics Data System (ADS)

    Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.

    1989-10-01

    Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates are analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising out of noise from the detector and amplifiers, instability of alignment and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference with earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain an eightfold improvement in sensing accuracy, which will be comparable with ground-based post-facto attitude refinement.

  13. Patient disclosure of medical errors in paediatrics: A systematic literature review

    PubMed Central

    Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah

    2016-01-01

    Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings are provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified. PMID:27429578

  14. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high-resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it would be unlikely to distinguish the dominating error from these degraded reconstructions without any preknowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any preknowledge, which makes the FPM more pragmatic.

  15. Non-linear glasses and metaglasses for photonics, a review: Part II. Kerr nonlinearity and metaglasses of positive and negative refraction

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2008-01-01

    This is the second part of a paper on nonlinear properties of optical glasses and metaglasses. The subject of the paper is a review of the basic properties of several families of high-optical-quality glasses for photonics. The emphasis is put on the nonlinear properties of these glasses, including nonlinearities of higher order. Nonlinear effects are debated and systematized, and the interactions of high-power-density optical waves with glass are described. The glass parameters that increase the optical nonlinearities are categorized. Optical nonlinearities in glasses are grouped into the following categories: time and frequency domain, amplitude and phase, resonant and non-resonant, elastic and inelastic, lossy and lossless, reversible and irreversible, instant and slow, adiabatic and non-adiabatic, with virtual versus real excitation of glass, destroying and non-destroying, etc. Nonlinear effects in glasses are based on the following effects: optical, thermal, mechanical and/or acoustic, electrical, magnetic, density and refraction modulation, chemical, etc.

  16. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE PAGES

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...

    2016-06-01

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  17. Systematic error of diode thermometer.

    PubMed

    Iskrenovic, Predrag S

    2009-08-01

    Semiconductor diodes are often used for measuring temperature. The forward voltage across a diode decreases, approximately linearly, with increasing temperature. The method applied is usually the simplest one: a constant direct current flows through the diode, and the voltage is measured at the diode terminals. The direct current that flows through the diode, putting it into its operating mode, also heats it up. The increase in temperature of the diode sensor, i.e., the systematic error due to self-heating, depends predominantly on the intensity of the current, and also on other factors. The results of measurements of the systematic error due to heating by the forward-bias current are presented in this paper. The measurements were made on several diodes over a wide range of bias current intensity.
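
    The self-heating bias can be estimated from the dissipated power and the sensor's thermal resistance; a back-of-the-envelope sketch with illustrative numbers (the 0.6 V forward drop and 300 K/W thermal resistance are typical textbook values for a small-signal diode in still air, not values from the paper):

        # Self-heating systematic error of a diode temperature sensor.
        def self_heating_error(current_a, forward_voltage_v=0.6, r_th_k_per_w=300.0):
            power = current_a * forward_voltage_v          # dissipated power, W
            return power * r_th_k_per_w                    # temperature rise, K

        for i_ma in (0.1, 1.0, 10.0):
            dt = self_heating_error(i_ma * 1e-3)
            print(f"{i_ma} mA -> self-heating ~ {dt:.3f} K")
        # 1 mA gives ~0.18 K; the error scales roughly linearly with bias current.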

  18. Hadronic Contribution to Muon g-2 with Systematic Error Correlations

    NASA Astrophysics Data System (ADS)

    Brown, D. H.; Worstell, W. A.

    1996-05-01

    We have performed a new evaluation of the hadronic contribution to a_μ = (g-2)/2 of the muon with explicit correlations of systematic errors among the experimental data on σ(e^+e^- → hadrons). Our result for the lowest order hadronic vacuum polarization contribution is a_μ^hvp = 701.7(7.6)(13.4) × 10^-10, where the total systematic error contributions from below and above √s = 1.4 GeV are (12.5) × 10^-10 and (4.8) × 10^-10, respectively. Therefore new measurements of σ(e^+e^- → hadrons) below 1.4 GeV in Novosibirsk, Russia can significantly reduce the total error on a_μ^hvp. This contrasts with a previous evaluation which indicated that the dominant error is due to the energy region above 1.4 GeV. The latter analysis correlated systematic errors at each energy point separately but not across energy ranges as we have done. Combination with higher order hadronic contributions is required for a new measurement of a_μ at Brookhaven National Laboratory to be sensitive to electroweak and possibly supergravity and muon substructure effects. Our analysis may also be applied to calculations of hadronic contributions to the running of α(s) at √s = M_Z, the hyperfine structure of muonium, and the running of sin^2 θ_W in Møller scattering. The analysis of the new Novosibirsk data will also be given.
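
    The role of cross-range correlations can be made concrete with a toy combination: fully correlated systematic errors add linearly across energy regions rather than in quadrature, enlarging the total. The sketch below reuses the two systematic contributions quoted above, with made-up statistical errors; it illustrates the arithmetic only, not the paper's full covariance treatment.

        import numpy as np

        # Contributions from two energy regions (units of 1e-10; stat errors made up).
        stat = np.array([5.0, 3.0])
        syst = np.array([12.5, 4.8])

        # Uncorrelated systematics: everything adds in quadrature.
        err_uncorr = np.sqrt(np.sum(stat**2) + np.sum(syst**2))

        # Fully correlated systematics across regions: covariance = s_i * s_j.
        cov = np.diag(stat**2) + np.outer(syst, syst)
        err_corr = np.sqrt(np.ones(2) @ cov @ np.ones(2))

        print(err_uncorr, err_corr)   # the correlated case gives the larger total error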

  19. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study of the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even under conditions where the assumptions underlying the fitted function do not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  20. Variable Coupling Scheme for High Frequency Electron Spin Resonance Resonators Using Asymmetric Meshes

    PubMed Central

    Tipikin, D. S.; Earle, K. A.; Freed, J. H.

    2010-01-01

    The sensitivity of a high frequency electron spin resonance (ESR) spectrometer depends strongly on the structure used to couple the incident millimeter wave to the sample that generates the ESR signal. Subsequent coupling of the ESR signal to the detection arm of the spectrometer is also a crucial consideration for achieving high spectrometer sensitivity. In previous work, we found that a means for continuously varying the coupling was necessary for attaining high sensitivity reliably and reproducibly. We report here on a novel asymmetric mesh structure that achieves continuously variable coupling by rotating the mesh in its own plane about the millimeter wave transmission line optical axis. We quantify the performance of this device with nitroxide spin-label spectra in both a lossy aqueous solution and a low loss solid state system. These two systems have very different coupling requirements and are representative of the range of coupling achievable with this technique. Lossy systems in particular are a demanding test of the achievable sensitivity and allow us to assess the suitability of this approach for applying high frequency ESR to the study of biological systems at physiological conditions, for example. The variable coupling technique reported on here allows us to readily achieve a factor of ca. 7 improvement in signal to noise at 170 GHz and a factor of ca. 5 at 95 GHz over what has previously been reported for lossy samples. PMID:20458356

  1. Upper bounds on secret-key agreement over lossy thermal bosonic channels

    NASA Astrophysics Data System (ADS)

    Kaur, Eneet; Wilde, Mark M.

    2017-12-01

    Upper bounds on the secret-key-agreement capacity of a quantum channel serve as a way to assess the performance of practical quantum-key-distribution protocols conducted over that channel. In particular, if a protocol employs a quantum repeater, achieving secret-key rates exceeding these upper bounds is evidence of having a working quantum repeater. In this paper, we extend a recent advance [Liuzzo-Scorpo et al., Phys. Rev. Lett. 119, 120503 (2017), 10.1103/PhysRevLett.119.120503] in the theory of the teleportation simulation of single-mode phase-insensitive Gaussian channels such that it now applies to the relative entropy of entanglement measure. As a consequence of this extension, we find tighter upper bounds on the nonasymptotic secret-key-agreement capacity of the lossy thermal bosonic channel than were previously known. The lossy thermal bosonic channel serves as a more realistic model of communication than the pure-loss bosonic channel, because it can model the effects of eavesdropper tampering and imperfect detectors. An implication of our result is that the previously known upper bounds on the secret-key-agreement capacity of the thermal channel are too pessimistic for the practical finite-size regime in which the channel is used a finite number of times, and so it should now be somewhat easier to witness a working quantum repeater when using secret-key-agreement capacity upper bounds as a benchmark.
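
    For orientation, the simplest bound in this family is the pure-loss (PLOB) bound, K ≤ -log2(1 - η) secret bits per use of a channel with transmissivity η; the thermal-channel refinements discussed in the paper tighten this picture but are not reproduced here. The 0.2 dB/km fiber-loss figure below is a typical assumption, not a value from the paper.

        import math

        def plob_pure_loss_bound(eta):
            """Secret-key-agreement capacity upper bound (bits per channel use)
            for the pure-loss bosonic channel with transmissivity eta."""
            return -math.log2(1.0 - eta)

        for km in (10, 50, 100):
            eta = 10 ** (-0.02 * km)        # 0.2 dB/km fiber loss (typical assumption)
            print(km, "km:", plob_pure_loss_bound(eta), "bits/use")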

  2. Compression of high-density EMG signals for trapezius and gastrocnemius muscles.

    PubMed

    Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto

    2014-03-10

    New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. Conclusions: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.
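
    The lossless finding, that temporal differencing improves compressibility, is easy to reproduce with any general-purpose entropy coder; a sketch with synthetic data standing in for one EMG channel (not the study's dataset or codec):

        import zlib
        import numpy as np

        rng = np.random.default_rng(5)
        # Synthetic stand-in for one channel: a slowly varying component plus noise.
        t = np.arange(20000)
        signal = (2000 * np.sin(2 * np.pi * t / 500)
                  + rng.normal(0, 50, t.size)).astype(np.int16)

        raw = zlib.compress(signal.tobytes(), level=9)
        diff = np.diff(signal, prepend=signal[:1])          # differences in time
        delta = zlib.compress(diff.astype(np.int16).tobytes(), level=9)

        print(len(raw), len(delta))   # the differenced stream compresses smaller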

  3. Compression of high-density EMG signals for trapezius and gastrocnemius muscles

    PubMed Central

    2014-01-01

    Background: New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. Methods: HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Results: Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. Conclusions: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles. PMID:24612604

  4. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL).

    PubMed

    Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil

    2015-08-10

    In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, Wireless Sensor Networks (WSNs), which play an important role in the IoT, can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs, by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulations and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds.

  5. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL)

    PubMed Central

    Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil

    2015-01-01

    In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, Wireless Sensor Networks (WSNs), which play an important role in the IoT, can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs, by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulations and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds. PMID:26266411

  6. Suppressing lossy-film-induced angular mismatches between reflectance and transmittance extrema: optimum optical designs of interlayers and AR coating for maximum transmittance into active layers of CIGS solar cells.

    PubMed

    Chang, Yin-Jung

    2014-01-13

    The investigation of optimum optical designs of interlayers and antireflection (AR) coating for achieving maximum average transmittance (T(ave)) into the CuIn(1-x)Ga(x)Se2 (CIGS) absorber of a typical CIGS solar cell, through the suppression of lossy-film-induced angular mismatches, is described. A simulated-annealing algorithm incorporating a rigorous electromagnetic transmission-line network approach is applied, with criteria of minimum average reflectance (R(ave)) from the cell surface or maximum T(ave) into the CIGS absorber. In the presence of one MgF2 coating, the difference in R(ave) between optimum designs based upon the two distinct criteria is only 0.3% under broadband and nearly omnidirectional incidence; however, their corresponding T(ave) values can be up to 14.34% apart. Significant T(ave) improvements associated with the maximum-T(ave)-based design are found mainly at mid to longer wavelengths and are attributed to the largest suppression of lossy-film-induced angular mismatches over the entire CIGS absorption spectrum. Maximum-T(ave)-based designs with a MgF2 coating optimized under extreme deficiency of angular information are shown, as opposed to their minimum-R(ave)-based counterparts, to be highly robust to omnidirectional incidence.

  7. Adverse effects in dual-feed interferometry

    NASA Astrophysics Data System (ADS)

    Colavita, M. Mark

    2009-11-01

    Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.

  8. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors Sjödahl (1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005); and pyramid Bailey (2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. No algorithm is found to be best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bi-cubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography Sjödahl (1994); the technique is applied here to solar wavefront sensing. A large dynamic range and better measurement accuracy are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5 times improved image sampling is used, at the expense of twice the computational cost. With the 5 times improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increment of image sampling as a trade-off between the computational speed limitation and the targeted sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene). The results are planned to be submitted to the Optics Express journal.
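
    To make the two-step scheme concrete, the sketch below estimates an image shift by FFT cross-correlation on the original pixel grid and then refines it on a bicubically interpolated sub-pixel grid, as described above. It is a minimal illustration, not the authors' code; the synthetic Gaussian spot, the 5x upsampling factor and the +/-2 pixel search window are assumptions.

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def integer_shift(ref, img):
        # FFT cross-correlation; returns the integer (dy, dx) that best
        # aligns img to ref (peak location wrapped to signed shifts).
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    def subpixel_refine(ref, img, dy, dx, upsample=5, search=2):
        # Search a +/-search pixel window around the integer estimate on a
        # grid refined 'upsample' times, using bicubic interpolation (order=3).
        best, best_score = (dy, dx), -np.inf
        for ddy in np.arange(-search, search, 1.0 / upsample):
            for ddx in np.arange(-search, search, 1.0 / upsample):
                cand = nd_shift(img, (dy + ddy, dx + ddx), order=3)
                score = float(np.sum(ref * cand))
                if score > best_score:
                    best_score, best = score, (dy + ddy, dx + ddx)
        return best

    # Demo: a synthetic spot displaced by a known sub-pixel amount.
    y, x = np.mgrid[0:64, 0:64]
    ref = np.exp(-((y - 32.0)**2 + (x - 32.0)**2) / 18.0)
    img = nd_shift(ref, (-2.6, 1.4), order=3)     # aligning shift: (2.6, -1.4)
    dy, dx = integer_shift(ref, img)
    print(subpixel_refine(ref, img, dy, dx))      # approx (2.6, -1.4)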

  9. Scattering from a random layer of leaves in the physical optics limit

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Seker, S. S.; Le Vine, D. M.

    1982-01-01

    Backscatter of electromagnetic radiation from a layer of vegetation over flat lossy ground has been studied in collaborative research at the George Washington University and the Goddard Space Flight Center. In this work the vegetation is composed of leaves which are modeled by a random collection of lossy dielectric disks. Backscattering coefficients for the vegetation layer have been calculated in the case of disks whose diameter is large compared to the wavelength. These backscattering coefficients are obtained in terms of the scattering amplitude of an individual disk by employing the distorted Born procedure. The scattering amplitude for a disk which is large compared to the wavelength is then found by physical optics techniques. Computed results are interpreted in terms of dominant reflected and transmitted contributions from the disks and ground.

  10. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.
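
    The "lossy plus residual" idea above is easy to demonstrate: store a coarse quantized image plus the exact residual, entropy-code both, and the decoder reconstructs a numerically identical image. A hedged sketch; zlib's deflate merely stands in for the Huffman or arithmetic coders discussed, and the 8-level quantization step is an arbitrary choice.

    import numpy as np, zlib

    def encode(img, step=8):
        lossy = (img.astype(np.int16) // step) * step     # coarse approximation
        residual = img.astype(np.int16) - lossy           # exact remainder, in [0, step)
        return (zlib.compress(lossy.astype(np.uint8).tobytes()),
                zlib.compress(residual.astype(np.uint8).tobytes()))

    def decode(lossy_bytes, residual_bytes, shape, step=8):
        lossy = np.frombuffer(zlib.decompress(lossy_bytes), np.uint8)
        residual = np.frombuffer(zlib.decompress(residual_bytes), np.uint8)
        return (lossy.reshape(shape).astype(np.int16) +
                residual.reshape(shape)).astype(np.uint8)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    a, b = encode(img)
    assert np.array_equal(decode(a, b, img.shape), img)   # exact replica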

  11. 2D-RBUC for efficient parallel compression of residuals

    NASA Astrophysics Data System (ADS)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

    In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed a data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height-field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression-ratio benefit (measured up to 91%).
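
    The flavor of the approach can be suggested with a toy per-block coder: each 2x2 block of residuals is stored with just enough bits for its largest magnitude, so smooth regions cost almost nothing. This is only a loose illustration of variable bit-width coding of non-uniform data, not the published RBUC or 2D-RBUC algorithm.

    import numpy as np

    def block_widths(residuals, bs=2):
        # Per-block bit width: enough bits for the largest magnitude
        # in the block, plus one sign bit.
        h, w = residuals.shape
        widths = np.zeros((h // bs, w // bs), dtype=np.uint8)
        for i in range(0, h, bs):
            for j in range(0, w, bs):
                m = int(np.abs(residuals[i:i + bs, j:j + bs]).max())
                widths[i // bs, j // bs] = m.bit_length() + 1
        return widths

    res = np.array([[0, 1, -3, 2],
                    [1, 0, 4, -5],
                    [0, 0, 0, 1],
                    [0, -1, 2, 0]])
    w = block_widths(res)
    payload = int((w.astype(int) * 4).sum())      # 4 residuals per block
    headers = 8 * w.size                          # one byte of width per block
    print(w, payload + headers, 'bits')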

  12. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A.; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…

  13. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.

  14. Communications and information research: Improved space link performance via concatenated forward error correction coding

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.; Seetharaman, G.; Feng, G. L.

    1996-01-01

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management and communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: lossless techniques that guarantee full reconstruction of the data, and lossy techniques which generally give a higher data compaction ratio but incur some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. When transmitting the data obtained by any lossless data compression, it is very important to use some error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, it is necessary to compute the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach for data compression using dynamic programming.
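
    As a concrete reference point for the Rice algorithm mentioned above, the sketch below implements plain Golomb-Rice coding of non-negative integers: each value is split by a power-of-two divisor 2**k into a unary quotient and a k-bit remainder. The extended Rice algorithm adds prediction, mapping and adaptive selection of k on top of this; the string-of-bits representation here is purely for readability.

    def rice_encode(values, k):
        bits = []
        for n in values:
            q, r = n >> k, n & ((1 << k) - 1)
            bits.append('1' * q + '0')                  # quotient, unary
            bits.append(format(r, '0{}b'.format(k)))    # remainder, k bits
        return ''.join(bits)

    def rice_decode(bitstring, k, count):
        out, i = [], 0
        for _ in range(count):
            q = 0
            while bitstring[i] == '1':                  # read unary quotient
                q, i = q + 1, i + 1
            i += 1                                      # skip the terminating 0
            r = int(bitstring[i:i + k], 2); i += k
            out.append((q << k) | r)
        return out

    data = [0, 3, 5, 12, 2, 7]
    enc = rice_encode(data, k=2)
    assert rice_decode(enc, k=2, count=len(data)) == data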

  15. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that the use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.

  16. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE PAGES

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    2017-10-28

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that the use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.
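
    The forward problem described above is a lower-triangular Toeplitz system: the observed series is the source convolved with a causal impulse response. The numpy sketch below recovers a source through a ridge-regularized inverse using a transfer function that is deliberately wrong, so the residual recovery error is dominated by model error rather than noise; the kernels, the noise level and the size of the mismatch are illustrative assumptions.

    import numpy as np
    from scipy.linalg import toeplitz

    n = 200
    t = np.arange(n)
    source = np.exp(-0.5 * ((t - 60.0) / 12.0)**2)        # true source history

    def transfer_matrix(tau):
        h = np.exp(-t / tau) / tau                        # causal impulse response
        return toeplitz(h, np.zeros(n))                   # lower triangular Toeplitz

    G_true = transfer_matrix(tau=15.0)                    # reality
    G_model = transfer_matrix(tau=18.0)                   # approximately known model
    obs = G_true @ source + 1e-4 * np.random.default_rng(1).standard_normal(n)

    # Regularized (ridge) recovery with the *approximate* transfer function.
    lam = 1e-3
    rec = np.linalg.solve(G_model.T @ G_model + lam * np.eye(n), G_model.T @ obs)
    print('relative recovery error:',
          np.linalg.norm(rec - source) / np.linalg.norm(source))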

  17. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  18. System calibration method for Fourier ptychographic microscopy.

    PubMed

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it is unlikely that the dominant error could be distinguished from these degraded reconstructions without prior knowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and the sources cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any prior knowledge, which makes the FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
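
    A generic simulated-annealing loop of the kind such calibration schemes build on is sketched below: perturb the parameter vector, keep improvements, and accept occasional regressions with a temperature-controlled probability so the search can escape local minima. The quadratic stand-in metric is an assumption; SC-FPM's actual error metric, LED intensity correction and step-size adaptation are not reproduced here.

    import math, random

    def anneal(err, x0, step=0.1, t0=1.0, cooling=0.95, iters=500):
        x, e = list(x0), err(x0)
        temp = t0
        for _ in range(iters):
            cand = [xi + random.gauss(0.0, step) for xi in x]
            ec = err(cand)
            # Always keep improvements; sometimes accept regressions.
            if ec < e or random.random() < math.exp(-(ec - e) / temp):
                x, e = cand, ec
            temp *= cooling                              # cooling schedule
        return x, e

    # Stand-in metric: distance to a hidden 'true' parameter set.
    true = [0.3, -1.2, 2.0]
    metric = lambda p: sum((a - b)**2 for a, b in zip(p, true))
    print(anneal(metric, [0.0, 0.0, 0.0]))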

  19. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    NASA Astrophysics Data System (ADS)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  20. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
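
    The mechanism by which random recall error attenuates risk estimates can be sketched in a few lines: exposure is misreported by a random multiplicative factor, subjects are misclassified across the exposure cutoff, and the estimated odds ratio shrinks toward 1. All distributions, sample sizes and the true odds ratio below are illustrative assumptions, not INTERPHONE values.

    import numpy as np
    rng = np.random.default_rng(42)

    def simulate(true_or=1.5, n=20000, recall_sd=0.8):
        usage = rng.lognormal(mean=0.0, sigma=1.0, size=n)     # true exposure
        high = usage > np.median(usage)
        p0 = 0.05                                              # baseline risk
        odds = p0 / (1 - p0) * np.where(high, true_or, 1.0)
        case = rng.random(n) < odds / (1 + odds)
        # Reported exposure = true exposure times a random recall factor.
        reported = usage * rng.lognormal(0.0, recall_sd, size=n)
        high_rep = reported > np.median(reported)
        a = np.sum(case & high_rep); b = np.sum(case & ~high_rep)
        c = np.sum(~case & high_rep); d = np.sum(~case & ~high_rep)
        return (a * d) / (b * c)                               # estimated odds ratio

    print(simulate(recall_sd=0.0))   # close to the true odds ratio
    print(simulate(recall_sd=0.8))   # attenuated toward 1 by random error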

  1. Analyzing False Positives of Four Questions in the Force Concept Inventory

    ERIC Educational Resources Information Center

    Yasuda, Jun-ichro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki

    2018-01-01

    In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a…

  2. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  3. The Accuracy of GBM GRB Localizations

    NASA Astrophysics Data System (ADS)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on timescales of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
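
    The idea of constraining a systematic term in quadrature with known statistical errors can be written down directly. The sketch below evaluates, on a grid, the posterior for a single systematic error component given offsets between simulated locations and reference locations; the one-dimensional Gaussian model and all numbers are synthetic assumptions, far simpler than the analysis described.

    import numpy as np
    rng = np.random.default_rng(7)

    sys_true = 3.0                                        # deg, hidden truth
    stat = rng.uniform(1.0, 5.0, size=40)                 # per-burst statistical error
    offsets = rng.normal(0.0, np.hypot(stat, sys_true))   # GBM minus reference

    # Grid posterior for the systematic term (flat prior), with
    # total variance modeled as statistical plus systematic in quadrature.
    sys_grid = np.linspace(0.01, 10.0, 400)
    log_post = np.zeros_like(sys_grid)
    for i, s in enumerate(sys_grid):
        var = stat**2 + s**2
        log_post[i] = np.sum(-0.5 * offsets**2 / var - 0.5 * np.log(var))
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    print('posterior mean systematic error:',
          float(np.sum(sys_grid * post)), 'deg')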

  4. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  5. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.
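
    The multiplexing at the heart of such instruments is compact to demonstrate. In the sketch below a spectrum is "measured" through mask patterns given by rows of a Hadamard matrix and recovered by the inverse transform; a single noise spike in one measurement is spread as a small error over all recovered channels, which is the behavior the noise-handling procedures above must contend with. Real instruments use 0/1 S-matrix masks rather than the +/-1 Hadamard matrix assumed here.

    import numpy as np
    from scipy.linalg import hadamard

    n = 16
    H = hadamard(n).astype(float)          # +/-1 multiplexing patterns
    spectrum = np.exp(-0.5 * ((np.arange(n) - 6.0) / 2.0)**2)

    meas = H @ spectrum                    # one multiplexed reading per mask
    meas[3] += 5.0                         # a single noise spike in the data
    recovered = H.T @ meas / n             # inverse transform: H^T H = n I
    print(np.round(recovered - spectrum, 3))   # spike spread over all channels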

  6. Systematic reviews, systematic error and the acquisition of clinical knowledge

    PubMed Central

    2010-01-01

    Background Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types of knowledge sources. Discussion The various types of clinical knowledge sources are categorised on the basis of Kant's categories of knowledge acquisition, as being either 'analytic' or 'synthetic'. It is shown that these categories do not act in opposition but rather, depend upon each other. The unity of analysis and synthesis in knowledge acquisition is demonstrated during the process of systematic reviewing of clinical trials. Systematic reviews constitute comprehensive synthesis of clinical knowledge but depend upon plausible, analytical hypothesis development for the trials reviewed. The dangers of systematic error regarding the internal validity of acquired knowledge are highlighted on the basis of empirical evidence. It has been shown that the systematic review process reduces systematic error, thus ensuring high internal validity. It is argued that this process does not exclude other types of knowledge sources. Instead, amongst these other types it functions as an integrated element during the acquisition of clinical knowledge. Conclusions The acquisition of clinical knowledge is based on interaction between analysis and synthesis. Systematic reviews provide the highest form of synthetic knowledge acquisition in terms of achieving internal validity of results. In that capacity it informs the analytic knowledge of the clinician but does not replace it. PMID:20537172

  7. Embedded importance watermarking for image verification in radiology

    NASA Astrophysics Data System (ADS)

    Osborne, Dominic; Rogers, D.; Sorell, M.; Abbott, Derek

    2004-03-01

    Digital medical images used in radiology are quite different from everyday continuous tone images. Radiology images require that all detailed diagnostic information can be extracted, which traditionally constrains digital medical images to be of large size and stored without loss of information. In order to transmit diagnostic images over a narrowband wireless communication link for remote diagnosis, lossy compression schemes must be used. This involves discarding detailed information and compressing the data, making it more susceptible to error. The loss of image detail and incidental degradation occurring during transmission have potential legal accountability issues, especially in the case of a null diagnosis of a tumor. The work proposed here investigates techniques for verifying the veracity of medical images, in particular detailing the use of embedded watermarking as an objective means to ensure that important parts of the medical image can be verified. We present a result showing how embedded watermarking can be used to differentiate contextual from detailed information. The types of images used include spiral hairline fractures and small tumors, which contain essential diagnostic high-spatial-frequency information.

  8. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    NASA Astrophysics Data System (ADS)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  9. Phase-demodulation error of a fiber-optic Fabry-Perot sensor with complex reflection coefficients.

    PubMed

    Kilpatrick, J M; MacPherson, W N; Barton, J S; Jones, J D

    2000-03-20

    The influence of reflector losses attracts little discussion in standard treatments of the Fabry-Perot interferometer yet may be an important factor contributing to errors in phase-stepped demodulation of fiber optic Fabry-Perot (FFP) sensors. We describe a general transfer function for FFP sensors with complex reflection coefficients and estimate systematic phase errors that arise when the asymmetry of the reflected fringe system is neglected, as is common in the literature. The measured asymmetric response of higher-finesse metal-dielectric FFP constructions corroborates a model that predicts systematic phase errors of 0.06 rad in three-step demodulation of a low-finesse FFP sensor (R = 0.05) with internal reflector losses of 25%.
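
    A minimal sketch of the effect: the standard 120-degree three-step formula assumes pure cosinusoidal (two-beam) fringes, so feeding it non-cosinusoidal cavity fringes leaves a systematic phase error. The symmetric lossless Airy reflectance used below already shows the effect of fringe-shape distortion; the asymmetry introduced by the complex (lossy) reflection coefficients analyzed above adds a further error of the kind the authors quantify. The mirror reflectivities are arbitrary choices.

    import numpy as np

    def three_step_phase(i1, i2, i3):
        # Standard 120-degree three-step formula for cosinusoidal fringes.
        return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

    def airy_fringe(phi, r1=0.05, r2=0.30):
        # Reflected intensity of a two-mirror cavity (non-cosinusoidal fringes).
        num = (np.sqrt(r1) - np.sqrt(r2))**2 + 4 * np.sqrt(r1 * r2) * np.sin(phi / 2)**2
        den = (1 - np.sqrt(r1 * r2))**2 + 4 * np.sqrt(r1 * r2) * np.sin(phi / 2)**2
        return num / den

    phi = np.linspace(-np.pi, np.pi, 7)[:-1]
    steps = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
    demod = three_step_phase(*[airy_fringe(phi + s) for s in steps])
    ideal = three_step_phase(*[np.cos(phi + s) for s in steps])
    # Systematic error: demodulated phase versus the cosine-fringe answer.
    print(np.round(demod - ideal, 3))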

  10. Nature versus nurture: A systematic approach to elucidate gene-environment interactions in the development of myopic refractive errors.

    PubMed

    Miraldi Utz, Virginia

    2017-01-01

    Myopia is the most common eye disorder and major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Differential expression of refractive error was associated with time spent reading for those with low frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.

  11. Systematic errors in regional climate model RegCM over Europe and sensitivity to variations in PBL parameterizations

    NASA Astrophysics Data System (ADS)

    Güttler, I.

    2012-04-01

    Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave energy flux (SNS) are detected in simulations of RegCM at 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and the other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of albedo errors is primarily confined to north Africa, where e.g. the underestimation of albedo in JJA is consistent with associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of the PBL scheme and to various parameters in the PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs over Europe with mixed success when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over all of the domain. Nevertheless, improvements in T2m can be found in e.g. north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work in customizing the PBL scheme are suggested.

  12. Interventions to reduce medication errors in neonatal care: a systematic review

    PubMed Central

    Nguyen, Minh-Nha Rhylie; Mosel, Cassandra

    2017-01-01

    Background: Medication errors represent a significant but often preventable cause of morbidity and mortality in neonates. The objective of this systematic review was to determine the effectiveness of interventions to reduce neonatal medication errors. Methods: A systematic review was undertaken of all comparative and noncomparative studies published in any language, identified from searches of PubMed and EMBASE and reference-list checking. Eligible studies were those investigating the impact of any medication safety interventions aimed at reducing medication errors in neonates in the hospital setting. Results: A total of 102 studies were identified that met the inclusion criteria, including 86 comparative and 16 noncomparative studies. Medication safety interventions were classified into six themes: technology (n = 38; e.g. electronic prescribing), organizational (n = 16; e.g. guidelines, policies, and procedures), personnel (n = 13; e.g. staff education), pharmacy (n = 9; e.g. clinical pharmacy service), hazard and risk analysis (n = 8; e.g. error detection tools), and multifactorial (n = 18; e.g. any combination of previous interventions). Significant variability was evident across all included studies, with differences in intervention strategies, trial methods, types of medication errors evaluated, and how medication errors were identified and evaluated. Most studies demonstrated an appreciable risk of bias. The vast majority of studies (>90%) demonstrated a reduction in medication errors. A similar median reduction of 50–70% in medication errors was evident across studies included within each of the identified themes, but findings varied considerably from a 16% increase in medication errors to a 100% reduction in medication errors. Conclusion: While neonatal medication errors can be reduced through multiple interventions aimed at improving the medication use process, no single intervention appeared clearly superior. Further research is required to evaluate the relative cost-effectiveness of the various medication safety interventions to facilitate decisions regarding uptake and implementation into clinical practice. PMID:29387337

  13. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
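
    One of the simplest spectral decorrelation choices, a Karhunen-Loeve (PCA) transform across bands, is sketched below: for strongly correlated bands most of the variance lands in the first component, which is what lets a subsequent spatial wavelet coder spend bits where they matter. The six-band synthetic cube is an illustrative assumption; the paper's wavelet stage and distortion metrics are not reproduced.

    import numpy as np
    rng = np.random.default_rng(3)

    bands, h, w = 6, 32, 32
    base = rng.standard_normal((h, w))
    cube = np.stack([base * (i + 1) + 0.1 * rng.standard_normal((h, w))
                     for i in range(bands)])          # strongly correlated bands

    X = cube.reshape(bands, -1)
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]                        # band-to-band covariance
    evals, evecs = np.linalg.eigh(cov)                # ascending eigenvalue order
    components = evecs.T[::-1] @ X                    # decorrelated 'bands'
    var = components.var(axis=1)
    print('fraction of variance in first component:', var[0] / var.sum())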

  14. Graphene Oxide in Lossy Mode Resonance-Based Optical Fiber Sensors for Ethanol Detection.

    PubMed

    Hernaez, Miguel; Mayes, Andrew G; Melendi-Espina, Sonia

    2017-12-27

    The influence of graphene oxide (GO) over the features of an optical fiber ethanol sensor based on lossy mode resonances (LMR) has been studied in this work. Four different sensors were built with this aim, each comprising a multimode optical fiber core fragment coated with a SnO₂ thin film. Layer by layer (LbL) coatings made of 1, 2 and 4 bilayers of polyethyleneimine (PEI) and graphene oxide were deposited onto three of these devices and their behavior as aqueous ethanol sensors was characterized and compared with the sensor without GO. The sensors with GO showed much better performance with a maximum sensitivity enhancement of 176% with respect to the sensor without GO. To our knowledge, this is the first time that GO has been used to make an optical fiber sensor based on LMR.

  15. A Modified Alderman-Grant Coil makes possible an efficient cross-coil probe for high field solid-state NMR of lossy biological samples

    NASA Astrophysics Data System (ADS)

    Grant, Christopher V.; Yang, Yuan; Glibowicka, Mira; Wu, Chin H.; Park, Sang Ho; Deber, Charles M.; Opella, Stanley J.

    2009-11-01

    The design, construction, and performance of a cross-coil double-resonance probe for solid-state NMR experiments on lossy biological samples at high magnetic fields are described. The outer coil is a Modified Alderman-Grant Coil (MAGC) tuned to the 1H frequency. The inner coil consists of a multi-turn solenoid coil that produces a B 1 field orthogonal to that of the outer coil. This results in a compact nested cross-coil pair with the inner solenoid coil tuned to the low frequency detection channel. This design has several advantages over multiple-tuned solenoid coil probes, since RF heating from the 1H channel is substantially reduced, it can be tuned for samples with a wide range of dielectric constants, and the simplified circuit design and high inductance inner coil provides excellent sensitivity. The utility of this probe is demonstrated on two electrically lossy samples of membrane proteins in phospholipid bilayers (bicelles) that are particularly difficult for conventional NMR probes. The 72-residue polypeptide embedding the transmembrane helices 3 and 4 of the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) (residues 194-241) requires a high salt concentration in order to be successfully reconstituted in phospholipid bicelles. A second application is to paramagnetic relaxation enhancement applied to the membrane-bound form of Pf1 coat protein in phospholipid bicelles where the resistance to sample heating enables high duty cycle solid-state NMR experiments to be performed.

  16. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image using a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in a relatively small number of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
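
    The DPCM-then-entropy-code pipeline described above fits in a few lines: a previous-pixel predictor turns a smooth image into small residuals that a dictionary coder compresses far better than the raw pixels. In this sketch zlib's deflate (an LZ77 variant) stands in for LZW, and the synthetic correlated image is an assumption.

    import numpy as np, zlib
    rng = np.random.default_rng(9)

    # Smooth synthetic image: correlated pixels, as in pictorial images.
    walk = np.cumsum(rng.integers(-2, 3, size=(128, 128)), axis=1)
    img = np.clip(walk + 128, 0, 255).astype(np.uint8)

    # DPCM with a horizontal previous-pixel predictor; mod-256 wrapping
    # keeps the differential image reversible in a single byte per pixel.
    dpcm = np.empty_like(img, dtype=np.int16)
    dpcm[:, 0] = img[:, 0]
    dpcm[:, 1:] = img[:, 1:].astype(np.int16) - img[:, :-1]
    wrapped = (dpcm % 256).astype(np.uint8)

    raw = zlib.compress(img.tobytes(), 9)
    diff = zlib.compress(wrapped.tobytes(), 9)
    print('ratio raw:  %.2f:1' % (img.nbytes / len(raw)))
    print('ratio dpcm: %.2f:1' % (img.nbytes / len(diff)))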

  17. Exploiting Fractional Order PID Controller Methods in Improving the Performance of Integer Order PID Controllers: A GA Based Approach

    NASA Astrophysics Data System (ADS)

    Mukherjee, Bijoy K.; Metia, Santanu

    2009-10-01

    The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional order PID (PIλDμ) controllers and to Genetic Algorithm (GA). The second part first studies how the performance of an integer order PID controller deteriorates when its analog realization is implemented with lossy capacitors, and then shows that the lossy capacitors can be effectively modeled by fractional order terms. A novel GA-based method is then proposed to tune the controller parameters such that the original performance is retained even though realized with the same lossy capacitors. Simulation results are presented to validate the usefulness of the method. Some Ziegler-Nichols type tuning rules for the design of fractional order PID controllers have been proposed in the literature [11]. In the third part, a novel GA-based method is proposed which shows how equivalent integer order PID controllers can be obtained that give performance similar to that of the fractional order PID controllers, thereby removing the complexity involved in implementing the latter. Extensive simulation results show that the equivalent integer order PID controllers more or less retain the robustness and iso-damping properties of the original fractional order PID controllers, and that they are more robust than normal Ziegler-Nichols tuned PID controllers.

  18. Low-loss plasmon-assisted electro-optic modulator.

    PubMed

    Haffner, Christian; Chelladurai, Daniel; Fedoryshyn, Yuriy; Josten, Arne; Baeuerle, Benedikt; Heni, Wolfgang; Watanabe, Tatsuhiko; Cui, Tong; Cheng, Bojun; Saha, Soham; Elder, Delwin L; Dalton, Larry R; Boltasseva, Alexandra; Shalaev, Vladimir M; Kinsey, Nathaniel; Leuthold, Juerg

    2018-04-01

    For nearly two decades, researchers in the field of plasmonics, which studies the coupling of electromagnetic waves to the motion of free electrons near the surface of a metal, have sought to realize subwavelength optical devices for information technology, sensing, nonlinear optics, optical nanotweezers and biomedical applications. However, the electron motion generates heat through ohmic losses. Although this heat is desirable for some applications such as photo-thermal therapy, it is a disadvantage in plasmonic devices for sensing and information technology and has led to a widespread view that plasmonics is too lossy to be practical. Here we demonstrate that the ohmic losses can be bypassed by using 'resonant switching'. In the proposed approach, light is coupled to the lossy surface plasmon polaritons only in the device's off state (in resonance) in which attenuation is desired, to ensure large extinction ratios between the on and off states and allow subpicosecond switching. In the on state (out of resonance), destructive interference prevents the light from coupling to the lossy plasmonic section of a device. To validate the approach, we fabricated a plasmonic electro-optic ring modulator. The experiments confirm that low on-chip optical losses, operation at over 100 gigahertz, good energy efficiency, low thermal drift and a compact footprint can be combined in a single device. Our result illustrates that plasmonics has the potential to enable fast, compact on-chip sensing and communications technologies.

  19. Empirical Analysis of Systematic Communication Errors.

    DTIC Science & Technology

    1981-09-01

    ...human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links.) ...phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission...communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share

  20. Systematic errors in strong lens modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Traci L.; Sharon, Keren; Bayliss, Matthew B.

    We investigate how varying the number of multiple-image constraints and the available redshift information can influence the systematic errors of strong lens models, specifically the image predictability, mass distribution, and magnifications of background sources. This work will inform not only Frontier Fields science but also work on the growing collection of strong-lensing galaxy clusters, most of which are less massive and are capable of lensing only a handful of galaxies.

  1. Low-Energy Proton Testing Methodology

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Marshall, Paul W.; Heidel, David F.; Schwank, James R.; Shaneyfelt, Marty R.; Xapsos, M.A.; Ladbury, Raymond L.; LaBel, Kenneth A.; Berg, Melanie; Kim, Hak S.

    2009-01-01

    Use of low-energy protons and high-energy light ions is becoming necessary to investigate current-generation SEU thresholds. Systematic errors can dominate measurements made with low-energy protons, with range and energy straggling contributing to the systematic error. Low-energy proton testing is not a step-and-repeat process. Low-energy protons and high-energy light ions can be used to measure the SEU cross section of single sensitive features, which is important for simulation.

  2. Focusing cosmic telescopes: systematics of strong lens modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Traci Lin; Sharon, Keren

    2018-01-01

    The use of strong gravitational lensing by galaxy clusters has become a popular method for studying the high-redshift universe. While diverse in computational methods, lens modeling techniques have established means for determining statistical errors on cluster masses and magnifications. However, the systematic errors have yet to be quantified, arising from the number of constraints, the availability of spectroscopic redshifts, and the various types of image configurations. I will be presenting my dissertation work on quantifying systematic errors in parametric strong lensing techniques. I have participated in the Hubble Frontier Fields lens model comparison project, using simulated clusters to compare the accuracy of various modeling techniques. I have extended this project to understand how changing the quantity of constraints affects the mass and magnification. I will also present my recent work extending these studies to clusters in the Outer Rim Simulation. These clusters are typical of the clusters found in wide-field surveys in mass and lensing cross-section. They have fewer constraints than the HFF clusters and are thus more susceptible to systematic errors. With the wealth of strong lensing clusters discovered in surveys such as SDSS, SPT, DES, and in the future, LSST, this work will be influential in guiding lens modeling efforts and follow-up spectroscopic campaigns.

  3. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.

  4. Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise

    NASA Technical Reports Server (NTRS)

    Sedlak, J.; Hashmall, J.

    1997-01-01

    Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
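
    The exponentially-correlated noise model mentioned above is a first-order Gauss-Markov process, and augmenting the filter state with it is straightforward. The sketch below runs a two-state Kalman filter, a rate bias plus a correlated model-error term, on synthetic scalar measurements; the dynamics, correlation time and noise levels are illustrative assumptions, not the Compton Gamma Ray Observatory configuration.

    import numpy as np
    rng = np.random.default_rng(5)

    dt, tau = 1.0, 500.0            # time step and error correlation time (s)
    phi = np.exp(-dt / tau)         # Gauss-Markov transition factor
    q_gm = (1 - phi**2) * 1e-4      # drives a steady-state variance of 1e-4
    F = np.diag([1.0, phi])         # state: [rate bias, correlated model error]
    Q = np.diag([1e-12, q_gm])
    H = np.array([[1.0, 1.0]])      # measurement sees bias plus model error
    R = np.array([[1e-4]])

    x, P = np.zeros(2), np.eye(2) * 1e-2
    true_bias, gm = 3e-4, 0.0
    for _ in range(2000):
        gm = phi * gm + rng.normal(0.0, np.sqrt(q_gm))      # simulate truth
        z = true_bias + gm + rng.normal(0.0, 1e-2)
        x = F @ x                                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    print('estimated rate bias: %.2e (truth %.2e)' % (x[0], true_bias))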

  5. The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.

    PubMed

    Hutton, Kevin; Ding, Qian; Wellman, Gregory

    2017-02-24

    Adoption of bar-coding technology has risen drastically in U.S. health systems in the past decade. However, few studies have addressed its impact with strong prospective methodologies, and the research that exists covers both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors, and which types of medication errors may be prevented, in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and findings summarized for each retained study. A total of 2603 articles were initially identified, and 10 studies, all using a prospective before-and-after design, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining one was conducted in the United Kingdom. One article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly in preventing targeted wrong dose, wrong drug, wrong patient, unauthorized drug, and wrong route errors.

  6. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  7. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ∼1400 deg², will constrain the dark energy equation-of-state parameter with an error of Δw₀ ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 (+0.054 / −0.046).

  8. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  9. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.
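
    As a rough sketch of the correlation-tracking idea, the recursion below propagates an estimate of the encoder-decoder mismatch variance from frame to frame; the decay factor, loss probability, and concealment-error variance are illustrative assumptions, not the paper's analytic model:

    ```python
    # Minimal sketch: recursively track the variance of the mismatch between
    # the original frame and the (possibly erroneous) decoded frame.
    def update_mismatch_variance(prev_var, p_loss, var_concealment, leak=0.93):
        """One step: surviving error decays through the codec's spatial
        filtering ('leak'); newly lost packets inject concealment error."""
        propagated = (1.0 - p_loss) * leak * prev_var
        injected = p_loss * var_concealment
        return propagated + injected

    var = 0.0
    for frame in range(10):
        var = update_mismatch_variance(var, p_loss=0.05, var_concealment=25.0)
        print(f"frame {frame}: estimated mismatch variance = {var:.2f}")
    ```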

  10. An Analysis of Computational Errors in the Use of Division Algorithms by Fourth-Grade Students.

    ERIC Educational Resources Information Center

    Stefanich, Greg P.; Rokusek, Teri

    1992-01-01

    Presents a study that analyzed errors made by randomly chosen fourth grade students (25 of 57) while using the division algorithm and investigated the effect of remediation on identified systematic errors. Results affirm that error pattern diagnosis and directed remediation lead to new learning and long-term retention. (MDH)

  11. Increased errors and decreased performance at night: A systematic review of the evidence concerning shift work and quality.

    PubMed

    de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W

    2016-02-15

    Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how this poorer health may be related to employee work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated a positive relationship: night shift workers committed more errors and had decreased performance. Night shift workers have worse health, which may contribute to errors and decreased performance in the workplace.

  12. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: (1) an extended Kalman filter (EKF) augmented with Markov states, and (2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
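
    A minimal sketch of the first approach, using a plain linear Kalman filter over a scalar attitude error augmented with a first-order Gauss-Markov state standing in for the slowly varying ST systematic error; the dynamics and noise values are illustrative assumptions:

    ```python
    import numpy as np

    dt, tau = 1.0, 200.0        # sample time [s], Markov time constant [s]
    phi = np.exp(-dt / tau)     # Gauss-Markov state transition

    F = np.array([[1.0, 0.0],   # state: [attitude error, ST bias]
                  [0.0, phi]])
    H = np.array([[1.0, 1.0]])  # the ST measures attitude plus its own bias
    Q = np.diag([1e-6, 1e-5 * (1.0 - phi**2)])
    R = np.array([[1e-4]])

    x, P = np.zeros(2), np.eye(2)
    rng = np.random.default_rng(0)

    for k in range(100):
        x = F @ x                        # propagate state and covariance
        P = F @ P @ F.T + Q
        z = rng.normal(0.0, 0.01)        # simulated ST measurement
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    ```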

  13. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  14. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    NASA Astrophysics Data System (ADS)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral particle measurements performed by, e.g., IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can lead to a shift of the result of up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  15. Variable ratio beam splitter for laser applications

    NASA Technical Reports Server (NTRS)

    Brown, R. M.

    1971-01-01

    Beam splitter employing birefringent optics provides either widely different or precisely equal beam ratios, it can be used with laser light source systems for interferometry of lossy media, holography, scattering measurements, and precise beam ratio applications.

  16. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  17. Removal of batch effects using distribution-matching residual networks.

    PubMed

    Shaham, Uri; Stanton, Kelly P; Zhao, Jun; Li, Huamin; Raddassi, Khadir; Montgomery, Ruth; Kluger, Yuval

    2017-08-15

    Sources of variability in experimentally derived data include measurement error in addition to the physical phenomena of interest. This measurement error is a combination of systematic components originating from the measuring instrument, and random measurement errors. Several novel biological technologies, such as mass cytometry and single-cell RNA-seq (scRNA-seq), are plagued with systematic errors that may severely affect statistical analysis if the data are not properly calibrated. We propose a novel deep learning approach for removing systematic batch effects. Our method is based on a residual neural network, trained to minimize the Maximum Mean Discrepancy between the multivariate distributions of two replicates, measured in different batches. We apply our method to mass cytometry and scRNA-seq datasets, and demonstrate that it effectively attenuates batch effects. Our code and data are publicly available at https://github.com/ushaham/BatchEffectRemoval.git.
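
    As an illustration of the training objective, a biased estimator of the squared Maximum Mean Discrepancy (MMD) with a Gaussian kernel; the bandwidth, batch sizes, and toy data are assumptions, not values from the paper:

    ```python
    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma**2))

    def mmd2(x, y, sigma=1.0):
        """Biased estimate of squared MMD between two samples; the residual
        network is trained to minimize this between batches."""
        return (gaussian_kernel(x, x, sigma).mean()
                + gaussian_kernel(y, y, sigma).mean()
                - 2.0 * gaussian_kernel(x, y, sigma).mean())

    rng = np.random.default_rng(1)
    batch1 = rng.normal(0.0, 1.0, size=(200, 10))
    batch2 = rng.normal(0.3, 1.0, size=(200, 10))  # toy systematic shift
    print(f"MMD^2 before calibration: {mmd2(batch1, batch2):.4f}")
    ```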

  18. Improved methods for the measurement and analysis of stellar magnetic fields

    NASA Technical Reports Server (NTRS)

    Saar, Steven H.

    1988-01-01

    The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.

  19. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of the local blur phenomenon. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and the local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  20. Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors

    NASA Astrophysics Data System (ADS)

    Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.

    2013-03-01

    Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y; Fullerton, G; Goins, B

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors during the animal study.
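
    A minimal sketch of the volume computation and regression-slope comparison described above, with made-up paired measurements:

    ```python
    import numpy as np

    def ellipsoid_volume(a, b, c):
        """V = (pi/6)*a*b*c, with a, b, c the maximum diameters (mm) in
        three perpendicular dimensions."""
        return (np.pi / 6.0) * a * b * c

    # Made-up diameters (mm) for five tumors: reference (in-air micro-CT)
    # vs. one in-vivo modality with a toy 3% systematic oversizing.
    ref_abc = np.array([[2.0, 2.1, 1.9], [4.1, 3.8, 4.0], [7.2, 6.9, 7.0],
                        [10.3, 9.8, 10.1], [14.2, 13.7, 14.0]])
    img_abc = ref_abc * 1.03

    v_ref = ellipsoid_volume(*ref_abc.T)
    v_img = ellipsoid_volume(*img_abc.T)

    # Through-origin regression slope; a slope near 1 indicates small
    # systematic error for that modality.
    slope = np.sum(v_img * v_ref) / np.sum(v_ref**2)
    print(f"regression slope: {slope:.3f}")  # ~1.03**3 = 1.093
    ```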

  2. Atmospheric Dispersion Effects in Weak Lensing Measurements

    DOE PAGES

    Plazas, Andrés Alejandro; Bernstein, Gary

    2012-10-01

    The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but residuals remain as much as 5 times the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.
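
    A sketch of a correction linear in galaxy color, fit by least squares on simulated stand-in data (the color index, toy dispersion signal, and noise level are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    color = rng.uniform(0.2, 1.5, 500)           # e.g., a g-r color index
    dv = 0.8 * color + rng.normal(0, 0.05, 500)  # toy dispersion-induced shift

    coeffs = np.polyfit(color, dv, deg=1)        # fit slope and intercept
    dv_corrected = dv - np.polyval(coeffs, color)
    print(f"rms before: {dv.std():.3f}, after: {dv_corrected.std():.3f}")
    ```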

  3. Ground state properties of 3d metals from self-consistent GW approach

    DOE PAGES

    Kutepov, Andrey L.

    2017-10-06

    The self-consistent GW approach (scGW) has been applied to calculate the ground state properties (equilibrium Wigner–Seitz radius S_WZ and bulk modulus B) of the 3d transition metals Sc, Ti, V, Fe, Co, Ni, and Cu. The approach systematically underestimates S_WZ with an average relative deviation from the experimental data of about 1%, and it overestimates the calculated bulk modulus with a relative error of about 25%. We show that scGW is superior in accuracy as compared to the local density approximation but it is less accurate than the generalized gradient approach for the materials studied. If compared to the random phase approximation, scGW is slightly less accurate, but its error for 3d metals looks more systematic. Lastly, the systematic nature of the deviation from the experimental data suggests that the next order of the perturbation theory should allow one to reduce the error.

  4. Ground state properties of 3d metals from self-consistent GW approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutepov, Andrey L.

    The self-consistent GW approach (scGW) has been applied to calculate the ground state properties (equilibrium Wigner–Seitz radius S_WZ and bulk modulus B) of the 3d transition metals Sc, Ti, V, Fe, Co, Ni, and Cu. The approach systematically underestimates S_WZ with an average relative deviation from the experimental data of about 1%, and it overestimates the calculated bulk modulus with a relative error of about 25%. We show that scGW is superior in accuracy as compared to the local density approximation but it is less accurate than the generalized gradient approach for the materials studied. If compared to the random phase approximation, scGW is slightly less accurate, but its error for 3d metals looks more systematic. Lastly, the systematic nature of the deviation from the experimental data suggests that the next order of the perturbation theory should allow one to reduce the error.

  5. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  6. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE PAGES

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.; ...

    2017-08-01

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  7. The effect of the Earth's oblate spheroid shape on the accuracy of a time-of-arrival lightning ground strike locating system

    NASA Technical Reports Server (NTRS)

    Casper, Paul W.; Bent, Rodney B.

    1991-01-01

    The algorithm used in earlier time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example are presented.

  8. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    PubMed

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T1 determination using TAPIR, a Look-Locker-based fast T1 mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T1 determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself.
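
    For orientation, a generic three-parameter Look-Locker fit with the standard apparent-T1 correction; TAPIR's actual sampling scheme and inversion-efficiency mapping are more elaborate, and all values here are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def look_locker(t, A, B, T1_star):
        """Apparent relaxation during continuous Look-Locker sampling."""
        return A - B * np.exp(-t / T1_star)

    t = np.linspace(0.05, 3.0, 30)  # sampling times [s]
    rng = np.random.default_rng(3)
    signal = look_locker(t, 1.0, 1.9, 0.6) + rng.normal(0, 0.01, t.size)

    (A, B, T1_star), _ = curve_fit(look_locker, t, signal, p0=(1.0, 2.0, 0.5))
    T1 = T1_star * (B / A - 1.0)    # classic Look-Locker correction
    print(f"apparent T1* = {T1_star:.3f} s, corrected T1 = {T1:.3f} s")
    ```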

  9. SYSTEMATIC EFFECTS IN POLARIZING FOURIER TRANSFORM SPECTROMETERS FOR COSMIC MICROWAVE BACKGROUND OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagler, Peter C.; Tucker, Gregory S.; Fixsen, Dale J.

    The detection of the primordial B-mode polarization signal of the cosmic microwave background (CMB) would provide evidence for inflation. Yet as has become increasingly clear, the detection of such a faint signal requires an instrument with both wide frequency coverage to reject foregrounds and excellent control over instrumental systematic effects. Using a polarizing Fourier transform spectrometer (FTS) for CMB observations meets both of these requirements. In this work, we present an analysis of instrumental systematic effects in polarizing FTSs, using the Primordial Inflation Explorer (PIXIE) as a worked example. We analytically solve for the most important systematic effects inherent to the FTS: emissive optical components, misaligned optical components, sampling and phase errors, and spin-synchronous effects. We demonstrate that residual systematic error terms after corrections will all be at the sub-nK level, well below the predicted 100 nK B-mode signal.

  10. Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.

    PubMed

    Villa, E; Aja, B; de la Fuente, L; Artal, E

    2016-01-01

    This work is focused on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines operating at cryogenic temperature (15 K). The design analysis of a microwave detector, based on a planar gallium-arsenide low effective Schottky barrier height diode, is reported, aimed at achieving large input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity, as well as good return loss, in a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A sensitivity of 1000 mV/mW and input return loss better than 12 dB have been achieved when it works as a zero-bias Schottky diode detector at room temperature, with the sensitivity increasing to a minimum of 2200 mV/mW, with the need for a DC bias current, at cryogenic temperature.

  11. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    PubMed

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploring the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little computation increase, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
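
    A minimal sketch of applying a mode-dependent scan order to a 4x4 residual block; the template below is a made-up column-wise example, not one of the paper's MD-templates:

    ```python
    import numpy as np

    # A scan order is just a permutation of the 16 coefficient positions,
    # chosen per intra prediction mode so that likely-zero coefficients
    # cluster at the end of the scanned sequence.
    vertical_mode_scan = np.array([0, 4, 8, 12,
                                   1, 5, 9, 13,
                                   2, 6, 10, 14,
                                   3, 7, 11, 15])

    block = np.arange(16).reshape(4, 4)  # stand-in residual block
    scanned = block.ravel()[vertical_mode_scan]
    print(scanned)
    ```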

  12. SURGNET: An Integrated Surgical Data Transmission System for Telesurgery.

    PubMed

    Natarajan, Sriram; Ganz, Aura

    2009-01-01

    Remote surgery information requires quick and reliable transmission between the surgeon and the patient site. However, the networks that interconnect the surgeon and patient sites are usually time-varying and lossy, which can cause packet loss and delay jitter. In this paper we propose SURGNET, a telesurgery system for which we developed the architecture and algorithms and implemented it on a testbed. The algorithms include adaptive packet prediction and buffer time adjustment techniques which reduce the negative effects caused by the lossy and time-varying networks. To evaluate the proposed SURGNET system, at the therapist site we implemented a therapist panel which controls the force feedback device movements and provides image analysis functionality. At the patient site we controlled a virtual reality applet built in Matlab. The varying network conditions were emulated using the NISTNet emulator. Our results show that even for severe packet loss and variable delay jitter, the proposed integrated synchronization techniques significantly improve SURGNET performance.

  13. Anti-coalescence of bosons on a lossy beam splitter.

    PubMed

    Vest, Benjamin; Dheur, Marie-Christine; Devaux, Éloïse; Baron, Alexandre; Rousseau, Emmanuel; Hugonin, Jean-Paul; Greffet, Jean-Jacques; Messin, Gaétan; Marquier, François

    2017-06-30

    Two-boson interference, a fundamentally quantum effect, has been extensively studied with photons through the Hong-Ou-Mandel effect and observed with guided plasmons. Using two freely propagating surface plasmon polaritons (SPPs) interfering on a lossy beam splitter, we show that the presence of loss enables us to modify the reflection and transmission factors of the beam splitter, thus revealing quantum interference paths that do not exist in a lossless configuration. We investigate the two-plasmon interference on beam splitters with different sets of reflection and transmission factors. Through coincidence-detection measurements, we observe either coalescence or anti-coalescence of SPPs. The results show that losses can be viewed as a degree of freedom to control quantum processes.

  14. Lossy chaotic electromagnetic reverberation chambers: Universal statistical behavior of the vectorial field

    NASA Astrophysics Data System (ADS)

    Gros, J.-B.; Kuhl, U.; Legrand, O.; Mortessagne, F.

    2016-03-01

    The effective Hamiltonian formalism is extended to vectorial electromagnetic waves in order to describe statistical properties of the field in reverberation chambers. The latter are commonly used in electromagnetic compatibility tests. As a first step, the distribution of wave intensities in chaotic systems with varying opening in the weak coupling limit for scalar quantum waves is derived by means of random matrix theory. In this limit the only parameters are the modal overlap and the number of open channels. Using the extended effective Hamiltonian, we describe the intensity statistics of the vectorial electromagnetic eigenmodes of lossy reverberation chambers. Finally, the typical quantity of interest in such chambers, namely, the distribution of the electromagnetic response, is discussed. By determining the distribution of the phase rigidity, describing the coupling to the environment, using random matrix numerical data, we find good agreement between the theoretical prediction and numerical calculations of the response.

  15. Why GPS makes distances bigger than they are

    PubMed Central

    Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried

    2016-01-01

    Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is – on average – bigger than the true distance between these points. This systematic 'overestimation of distance' becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar error. This error cancels out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated and give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
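
    The bias is easy to reproduce numerically. The sketch below, with illustrative noise levels, shows that independent position errors inflate the mean measured distance, while fully correlated (strongly autocorrelated) errors cancel:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    p, q = np.array([0.0, 0.0]), np.array([10.0, 0.0])  # true points, 10 m apart
    sigma = 3.0                                         # position error std [m]

    # Independent errors at the two fixes: mean distance is biased high.
    ep = rng.normal(0, sigma, size=(100_000, 2))
    eq = rng.normal(0, sigma, size=(100_000, 2))
    d_indep = np.linalg.norm((q + eq) - (p + ep), axis=1)
    print(f"true: 10.0 m, mean measured (independent errors): {d_indep.mean():.2f} m")

    # Fully correlated errors (same offset at both fixes): the bias vanishes.
    e = rng.normal(0, sigma, size=(100_000, 2))
    d_corr = np.linalg.norm((q + e) - (p + e), axis=1)
    print(f"mean measured (fully correlated errors): {d_corr.mean():.2f} m")
    ```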

  16. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
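
    A minimal sketch of the estimation step, under stated assumptions (toy array shapes; increment defined as analysis minus 6-hr forecast):

    ```python
    import numpy as np

    # Toy stand-in for an archive of 6-hr analysis increments, [cases, lat, lon].
    rng = np.random.default_rng(5)
    increments = rng.normal(0.2, 0.5, size=(120, 64, 128))

    # Time-mean increment divided by 6 hr estimates the systematic error in
    # the model tendency (assuming linear short-term error growth).
    bias_correction = increments.mean(axis=0) / (6 * 3600.0)

    def corrected_tendency(model_tendency):
        """Add the empirical correction as a forcing term, online."""
        return model_tendency + bias_correction
    ```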

  17. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices.

    PubMed

    Marathe, A R; Taylor, D M

    2015-08-01

    Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  18. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2015-08-01

    Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  19. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes, maximum decimation matching, hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The property of the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than the Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images by the one with the local minimax distortion have a good perceptual fidelity among other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
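
    For reference, the one-dimensional LZ78-style incremental parsing that MDIP generalizes; the multidimensional decimation matching and dictionary patches are not shown:

    ```python
    def incremental_parse(s):
        """Parse s into phrases of the form (index of longest known prefix,
        next symbol), growing the dictionary by one phrase per step."""
        dictionary, phrases, current = {"": 0}, [], ""
        for ch in s:
            if current + ch in dictionary:
                current += ch               # extend the match
            else:
                phrases.append((dictionary[current], ch))
                dictionary[current + ch] = len(dictionary)
                current = ""
        if current:                         # flush a trailing partial phrase
            phrases.append((dictionary[current], ""))
        return phrases

    print(incremental_parse("abababcabab"))
    # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'c'), (3, 'a'), (2, '')]
    ```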

  20. Markovian Dynamics of Josephson Parametric Amplification

    NASA Astrophysics Data System (ADS)

    Kaiser, Waldemar; Haider, Michael; Russer, Johannes A.; Russer, Peter; Jirauschek, Christian

    2017-09-01

    In this work, we derive the dynamics of the lossy DC-pumped non-degenerate Josephson parametric amplifier (DCPJPA). The main element in a DCPJPA is the superconducting Josephson junction. The DC bias generates the AC Josephson current varying the nonlinear inductance of the junction. In this way, the Josephson junction acts as the pump oscillator as well as the time-varying reactance of the parametric amplifier. In quantum-limited amplification, losses and noise have an increased impact on the characteristics of an amplifier. We outline the classical model of the lossy DCPJPA and derive the available noise power spectral densities. A classical treatment is not capable of including properties like spontaneous emission, which is mandatory in the case of amplification at the quantum limit. Thus, we derive a quantum mechanical model of the lossy DCPJPA. Thermal losses are modeled by the quantum Langevin approach, by coupling the quantized system to a photon heat bath in thermodynamic equilibrium. The mode occupation in the bath follows the Bose-Einstein statistics. Based on the second quantization formalism, we derive the Heisenberg equations of motion of both resonator modes. We assume the dynamics of the system to follow the Markovian approximation, i.e., the system only depends on its current state and is memory-free. We explicitly compute the time evolution of the contributions to the signal mode energy and give numeric examples based on different damping and coupling constants. Our analytic results show that this model is capable of including thermal noise in the description of the DC-pumped non-degenerate Josephson parametric amplifier.

  1. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  2. The Observational Determination of the Primordial Helium Abundance: a Y2K Status Report

    NASA Astrophysics Data System (ADS)

    Skillman, Evan D.

    I review observational progress and assess the current state of the determination of the primordial helium abundance, Yp. At present there are two determinations with non-overlapping errors. My impression is that the errors have been under-estimated in both studies. I review recent work on errors assessment and give suggestions for decreasing systematic errors in future studies.

  3. Improved Quality in Aerospace Testing Through the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.

  4. Detecting Spatial Patterns in Biological Array Experiments

    PubMed Central

    ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.

    2005-01-01

    Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
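
    A small sketch of the approach on simulated 384-well (16 x 24) plate data with an alternating-row systematic error; the dominant nonzero Fourier component exposes the spatial pattern:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    plate = rng.normal(0.0, 1.0, size=(16, 24))  # random assay background
    plate[::2, :] += 1.5                         # every-other-row systematic error

    spectrum = np.abs(np.fft.fft2(plate - plate.mean()))
    spectrum[0, 0] = 0.0                         # ignore the DC component
    peak = np.unravel_index(spectrum.argmax(), spectrum.shape)
    print(f"dominant spatial frequency (row, col): {peak}")  # expect (8, 0)
    ```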

  5. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    PubMed

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated; the precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mGal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successful using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems.
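
    As a sketch of the regularization ingredient alone, downward continuation can be posed as an ill-conditioned linear system solved with Tikhonov (ridge) regularization to damp amplified random errors; the operator, noise level, and regularization parameter below are illustrative, and the semi-parametric term for systematic errors is not shown:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.normal(size=(100, 80)) / 10.0        # toy continuation operator
    x_true = rng.normal(size=80)                 # field at ground level
    y = A @ x_true + rng.normal(0, 0.05, 100)    # noisy airborne observations

    alpha = 0.1                                  # regularization parameter
    x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(80), A.T @ y)
    rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"relative reconstruction error: {rel_err:.3f}")
    ```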

  6. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    PubMed Central

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-01-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated, and the precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mgal. A major obstacle in using airborne gravimetry is the error introduced by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successful using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors; the systematic errors estimated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis shows that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems. PMID:28587086

  7. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    NASA Astrophysics Data System (ADS)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated, and the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error introduced by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successful using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors; the systematic errors estimated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis shows that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems.

  8. Calibration of the aerodynamic coefficient identification package measurements from the shuttle entry flights using inertial measurement unit data

    NASA Technical Reports Server (NTRS)

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The Aerodynamic Coefficient Identification Package (ACIP) is an instrument consisting of body mounted linear accelerometers, rate gyros, and angular accelerometers for measuring the Space Shuttle vehicular dynamics. The high rate recorded data are utilized for postflight aerodynamic coefficient extraction studies. Although consistent with pre-mission accuracies specified by the manufacturer, the ACIP data were found to contain detectable levels of systematic error, primarily bias, as well as scale factor, static misalignment, and temperature dependent errors. This paper summarizes the technique whereby the systematic ACIP error sources were detected, identified, and calibrated with the use of recorded dynamic data from the low rate, highly accurate Inertial Measurement Units.
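
    A minimal sketch of the kind of systematic-error estimation described, for a single sensor axis: the ACIP output is modelled as a scaled and biased version of the IMU-derived reference signal, and the bias and scale factor are recovered by least squares. The variable names and the single-axis restriction are illustrative assumptions; the actual calibration also handled misalignment and temperature-dependent terms.

        import numpy as np

        def fit_bias_and_scale(acip_measured, imu_reference):
            """Fit measured = (1 + scale_error) * reference + bias by least squares;
            returns (scale_error, bias) for one axis."""
            G = np.column_stack([imu_reference, np.ones_like(imu_reference)])
            (gain, bias), *_ = np.linalg.lstsq(G, acip_measured, rcond=None)
            return gain - 1.0, bias

        # Synthetic check: a 500 ppm scale-factor error and a 0.002 g bias.
        rng = np.random.default_rng(3)
        ref = rng.normal(scale=0.5, size=1000)
        meas = 1.0005 * ref + 0.002 + rng.normal(scale=1e-4, size=ref.size)
        print(fit_bias_and_scale(meas, ref))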

  9. Single molecule counting and assessment of random molecular tagging errors with transposable giga-scale error-correcting barcodes.

    PubMed

    Lau, Billy T; Ji, Hanlee P

    2017-09-21

    RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes commonly in the form of random nucleotides were recently introduced to improve gene expression measures by detecting amplification duplicates, but are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification especially at low input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We described the first study to use transposable molecular barcodes and its use for studying random-mer molecular barcode errors. Extensive errors found in random-mer molecular barcodes may warrant the use of error correcting barcodes for transcriptome analysis as input amounts decrease.
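
    The giga-scale transposable barcodes themselves are not reproduced here, but the general idea of an error-correcting barcode can be illustrated in a few lines: an observed tag is accepted only if it matches exactly one whitelisted codeword within a small Hamming distance, so isolated PCR/sequencing substitutions are corrected rather than counted as new molecules. The whitelist and the distance cutoff below are assumptions for illustration.

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def correct_barcode(observed, whitelist, max_dist=1):
            """Assign an observed barcode to a whitelist entry if exactly one
            codeword lies within max_dist substitutions; otherwise return None."""
            hits = [bc for bc in whitelist if hamming(observed, bc) <= max_dist]
            return hits[0] if len(hits) == 1 else None

        whitelist = ["AACCGGTT", "AATTCCGG", "GGCCAATT"]
        print(correct_barcode("AACCGGTA", whitelist))   # one substitution -> "AACCGGTT"
        print(correct_barcode("AATCCGGG", whitelist))   # no unique close match -> None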

  10. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  11. Seeing in the Dark: Weak Lensing from the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Huff, Eric Michael

    Statistical weak lensing by large-scale structure (cosmic shear) is a promising cosmological tool, which has motivated the design of several large upcoming astronomical surveys. This Thesis presents a measurement of cosmic shear using coadded Sloan Digital Sky Survey (SDSS) imaging in 168 square degrees of the equatorial region, with r < 23.5 and i < 22.5, a source number density of 2.2 per arcmin^2 and median redshift of z_med = 0.52. These coadds were generated using a new rounding kernel method that was intended to minimize systematic errors in the lensing measurement due to coherent PSF anisotropies that are otherwise prevalent in the SDSS imaging data. Measurements of cosmic shear out to angular separations of 2 degrees are presented, along with systematics tests of the catalog generation and shear measurement steps that demonstrate that these results are dominated by statistical rather than systematic errors. Assuming a cosmological model corresponding to WMAP7 (Komatsu et al., 2011) and allowing only the amplitude of matter fluctuations sigma_8 to vary, the best-fit value of the amplitude of matter fluctuations is sigma_8 = 0.636 +0.109/-0.154 (1 sigma); without systematic errors this would be sigma_8 = 0.636 +0.099/-0.137 (1 sigma). Assuming a flat ΛCDM model, the combined constraints with WMAP7 are sigma_8 = 0.784 +0.028/-0.026 (1 sigma). The 2-sigma error range is 14 percent smaller than WMAP7 alone. Aside from the intrinsic value of such cosmological constraints from the growth of structure, some important lessons are identified for upcoming surveys that may face similar issues when combining multi-epoch data to measure cosmic shear. Motivated by the challenges faced in the cosmic shear measurement, two new lensing probes are suggested for increasing the available weak lensing signal. Both use galaxy scaling relations to control for scatter in lensing observables. The first employs a version of the well-known fundamental plane relation for early type galaxies. This modified "photometric fundamental plane" replaces velocity dispersions with photometric galaxy properties, thus obviating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is comparable to that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may allow this method to equal or even surpass the signal-to-noise achievable with shear. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. The second outlines an idea for using the optical Tully-Fisher relation to dramatically improve the signal-to-noise and systematic error control for shear measurements. The expected error properties and potential advantages of such a measurement are proposed, and a pilot study is suggested in order to test the viability of Tully-Fisher weak lensing in the context of the forthcoming generation of large spectroscopic surveys.

  12. Barcode medication administration work-arounds: a systematic review and implications for nurse executives.

    PubMed

    Voshall, Barbara; Piscotty, Ronald; Lawrence, Jeanette; Targosz, Mary

    2013-10-01

    Safe medication administration is necessary to ensure quality healthcare. Barcode medication administration systems were developed to reduce drug administration errors and the related costs and improve patient safety. Work-arounds created by nurses in the execution of the required processes can lead to unintended consequences, including errors. This article provides a systematic review of the literature associated with barcoded medication administration and work-arounds and suggests interventions that should be adopted by nurse executives to ensure medication safety.

  13. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  14. Systematic errors in long baseline oscillation experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Deborah A. (Fermilab)

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  15. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N) P_s, where P_s represents the block error probability, holds for systematic encoding only. Systematic encoding also provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
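
    A small numerical illustration of the quoted high-SNR approximation P_b ≈ (d_H/N) P_s follows. The code parameters and block error probability below are made-up inputs chosen only to show the arithmetic, not results from the paper.

        def approx_bit_error_prob(d_H, N, P_s):
            """High-SNR approximation for systematic encoding: P_b ~ (d_H / N) * P_s."""
            return (d_H / N) * P_s

        # Hypothetical code of length N = 24 with d_H = 8 and block error
        # probability P_s = 1e-4:
        print(approx_bit_error_prob(8, 24, 1e-4))   # -> 3.33e-05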

  16. Global Warming Estimation from MSU

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, Robert, Jr.

    1999-01-01

    In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 +/- 0.06 K/decade during 1980-97.
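
    For context, the headline number is a linear trend with a 1-sigma uncertainty; a generic least-squares version of such a trend estimate is sketched below. This is only an illustration: the study's own uncertainty additionally folds in the 0.06 K/decade systematic term, which this sketch does not.

        import numpy as np

        def decadal_trend(years, temps):
            """Ordinary least-squares trend and its formal 1-sigma error, in K/decade."""
            A = np.column_stack([years - years.mean(), np.ones_like(years)])
            coef, resid, *_ = np.linalg.lstsq(A, temps, rcond=None)
            dof = max(len(years) - 2, 1)
            sigma2 = resid[0] / dof if resid.size else 0.0
            cov = sigma2 * np.linalg.inv(A.T @ A)
            return 10.0 * coef[0], 10.0 * np.sqrt(cov[0, 0])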

  17. Systematic ionospheric electron density tilts (SITs) at mid-latitudes and their associated HF bearing errors

    NASA Astrophysics Data System (ADS)

    Tedd, B. L.; Strangeways, H. J.; Jones, T. B.

    1985-11-01

    Systematic ionospheric tilts (SITs) at midlatitudes and the diurnal variation of bearing error for different transmission paths are examined. An explanation of diurnal variations of bearing error based on the dependence of ionospheric tilt on solar zenith angle and plasma transport processes is presented. The effect of vertical ion drift and the momentum transfer of neutral winds is investigated. During the daytime the transmission reflection heights are low and photochemical processes control SITs; at night, transmissions reflect at greater heights, and spatial and temporal variations of plasma transport processes influence SITs. An HF ray tracing technique which uses a three-dimensional ionospheric model based on predictions to simulate SIT-induced bearing errors is described; poor correlation with experimental data is observed and the causes for this are studied. A second model, based on measured vertical-sounder data, is proposed. This second model is applicable for predicting bearing error for a range of transmission paths and correlates well with experimental data.

  18. Errors in measurements by ultrasonic thickness gauges caused by the variation in ultrasonic velocity in constructional steels and metal alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinin, V.A.; Tarasenko, V.L.; Tselser, L.B.

    1988-09-01

    Numerical values of the variation in ultrasonic velocity in constructional metal alloys and the measurement errors related to them are systematized. The systematization is based on the measurement results of the group ultrasonic velocity made in the All-Union Scientific-Research Institute for Nondestructive Testing in 1983-1984 and also on the measurement results of the group velocity made by various authors. The variations in ultrasonic velocity were systematized for carbon, low-alloy, and medium-alloy constructional steels; high-alloy iron base alloys; nickel-base heat-resistant alloys; wrought aluminum constructional alloys; titanium alloys; and cast irons and copper alloys.

  19. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system is a core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes and thereby improve system accuracy, but the errors caused by misalignment angles and scale factor errors cannot be eliminated through dual-axis rotation modulation, and the discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error over one modulation period and presents a new systematic self-calibration method for dual-axis rotation-modulating RLG-INS, together with a procedure for performing the self-calibration. The results of the self-calibration simulation experiment show that this scheme can estimate all the errors in the calibration error model: the calibration precision for the inertial sensor scale factor errors is better than 1 ppm, and that for the misalignments is better than 5″. These results validate the systematic self-calibration method and demonstrate its importance for the accuracy improvement of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  20. The role of the basic state in the ENSO-monsoon relationship and implications for predictability

    NASA Astrophysics Data System (ADS)

    Turner, A. G.; Inness, P. M.; Slingo, J. M.

    2005-04-01

    The impact of systematic model errors on a coupled simulation of the Asian summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the general-circulation model. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the general-circulation model, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.

  1. A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series

    NASA Astrophysics Data System (ADS)

    Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team

    2011-01-01

    In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal, and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
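
    The closed-form Gaussian case described above amounts to a regularized least-squares fit: the cotrending coefficients are pulled toward a prior built from the quiet, highly correlated stars instead of being free to chase every feature of the light curve. The sketch below shows that closed form under assumed inputs (basis vectors, prior mean/covariance, noise variance); it is an illustration of the idea, not the PDC implementation.

        import numpy as np

        def map_cotrend(flux, basis, prior_mean, prior_cov, noise_var):
            """MAP coefficients for flux ~ N(basis @ c, noise_var * I) with
            prior c ~ N(prior_mean, prior_cov); returns (c, corrected flux)."""
            prior_prec = np.linalg.inv(prior_cov)
            post_prec = basis.T @ basis / noise_var + prior_prec
            rhs = basis.T @ flux / noise_var + prior_prec @ prior_mean
            c = np.linalg.solve(post_prec, rhs)
            return c, flux - basis @ c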

  2. Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump

    NASA Astrophysics Data System (ADS)

    Gontcharov, G. A.

    2017-08-01

    Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars and by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance R in kpc: 0.18 R^2 mas. Allowance for this error reduces significantly the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.

  3. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.

  4. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-12-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, and to estimate, along with these, weekly average range errors for each of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, their treatment, or both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
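
    A minimal sketch of how a source-color-dependent (chromatic) systematic arises: synthesize the magnitude of the same spectrum through the reference ("natural") passband and through a perturbed passband, and compare. The spectra, passbands, and wavelength grid below are toy inputs, not DES data; the point is only that the magnitude offset depends on the source color.

        import numpy as np

        def synthetic_mag(wavelength_nm, flux, throughput):
            """Synthetic magnitude (up to a constant zeropoint) from a
            photon-counting, throughput-weighted mean flux."""
            weight = throughput * wavelength_nm
            return -2.5 * np.log10(np.trapz(flux * weight, wavelength_nm)
                                   / np.trapz(weight, wavelength_nm))

        wl = np.linspace(400.0, 700.0, 601)
        nominal = np.exp(-0.5 * ((wl - 550.0) / 40.0) ** 2)   # reference bandpass
        shifted = np.exp(-0.5 * ((wl - 553.0) / 40.0) ** 2)   # perturbed bandpass
        for name, sed in [("blue", wl ** -2.0), ("red", wl ** 2.0)]:
            dm = synthetic_mag(wl, sed, shifted) - synthetic_mag(wl, sed, nominal)
            print(name, round(dm, 4))   # offsets differ with color -> chromatic error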

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  7. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
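
    A sketch of the mtanh pedestal fit itself, using one commonly used parameterization (a tanh step with a linear "core" slope) and synthetic profile data. The functional form, parameter names, and numbers below are illustrative assumptions, not the JET fitting tool, and the deconvolution against the instrument function is omitted.

        import numpy as np
        from scipy.optimize import curve_fit

        def mtanh(x, core_slope):
            """tanh-like step with a linear slope on the core side."""
            e_p, e_m = np.exp(x), np.exp(-x)
            return ((1.0 + core_slope * x) * e_p - e_m) / (e_p + e_m)

        def pedestal(r, height, offset, position, width, core_slope):
            """Profile falling from ~height (core) to ~offset (edge) over ~width."""
            x = (position - r) / (2.0 * width)
            return offset + 0.5 * (height - offset) * (mtanh(x, core_slope) + 1.0)

        # Fit a synthetic, noisy edge profile (toy numbers, not JET data).
        r = np.linspace(3.6, 3.9, 120)
        rng = np.random.default_rng(2)
        y = pedestal(r, 4.0, 0.2, 3.78, 0.012, 0.05) + 0.05 * rng.normal(size=r.size)
        popt, pcov = curve_fit(pedestal, r, y, p0=(3.5, 0.1, 3.77, 0.02, 0.0))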

  8. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: Quality-assurance implications for target volume and organ-at-risk margination using daily CT-on-rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.

    2016-01-01

    Larynx may alternatively serve as a target or organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all 6 points for all scans over the course of treatment was calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm, while the bootstrapped POI grand mean displacement was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required margin for CTV-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimate of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV and as an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter. PMID:25679151
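
    The quoted margins are consistent with the widely used margin recipes that combine systematic (Sigma) and random (sigma) setup errors, roughly 2.5 Sigma + 0.7 sigma for CTV-to-PTV and about 1.3 Sigma + 0.5 sigma for OAR-to-PRV. The check below is only an illustration of that arithmetic with the reported values; the abstract does not state which recipe the authors used.

        def ctv_to_ptv_margin(sigma_systematic_mm, sigma_random_mm):
            # van Herk-style CTV-to-PTV recipe: 2.5 * Sigma + 0.7 * sigma
            return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

        def oar_to_prv_margin(sigma_systematic_mm, sigma_random_mm):
            # McKenzie-style OAR-to-PRV recipe: 1.3 * Sigma + 0.5 * sigma
            return 1.3 * sigma_systematic_mm + 0.5 * sigma_random_mm

        print(ctv_to_ptv_margin(1.1, 2.63))   # ~4.6 mm, matching the reported CTV-PTV margin
        print(oar_to_prv_margin(1.1, 2.63))   # ~2.7 mm, matching the reported OAR-PRV margin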

  9. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: quality assurance implications for target volume and organs‐at‐risk margination using daily CT on‐rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul

    2014-01-01

    Larynx may alternatively serve as a target or organs at risk (OAR) in head and neck cancer (HNC) image‐guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population‐based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on‐rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior‐anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated postisocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random error and the necessary compensatory CTV‐to‐PTV and OAR‐to‐PRV margins were calculated, using both observational cohort data and a bootstrap‐resampled population estimator. The grand mean displacements for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while bootstrapped POIs grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. Required margin for CTV‐PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR‐to‐PRV expansion for the observed residual setup error was 2.7 mm and bootstrap estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D‐, 87.55.Qr

  10. Functional Independent Scaling Relation for ORR/OER Catalysts

    DOE PAGES

    Christensen, Rune; Hansen, Heine A.; Dickens, Colin F.; ...

    2016-10-11

    A widely used adsorption energy scaling relation between OH* and OOH* intermediates in the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) has previously been determined using density functional theory and shown to dictate a minimum thermodynamic overpotential for both reactions. Here, we show that the oxygen–oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange–correlation functional, is obtained and found to differ by 0.1 eV from the original. Lastly, this largely confirms that, although obtained with a method suffering from systematic errors, the previously obtained scaling relation is applicable for predictions of catalytic activity.
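
    For context, the link between such a scaling relation and a minimum overpotential can be written in one line: if OOH* is always offset from OH* by a fixed amount, then at best the two associated steps split that offset evenly, and the excess over the 1.23 V equilibrium potential per step is unavoidable. The commonly cited offset of about 3.2 eV is an assumption here (the abstract does not quote a number), used only to show the arithmetic and the effect of a 0.1 eV shift in the relation.

        def min_overpotential(offset_eV, equilibrium_potential_V=1.23):
            """Minimum thermodynamic overpotential implied by a fixed OOH*/OH* offset,
            assuming the best case where the offset is split evenly over two steps."""
            return offset_eV / 2.0 - equilibrium_potential_V

        print(min_overpotential(3.2))         # ~0.37 V for the commonly cited offset
        print(min_overpotential(3.2 - 0.1))   # effect of shifting the relation by 0.1 eV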

  11. Effects of waveform model systematics on the interpretation of GW150914

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. 
A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. 
J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than  ˜0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.
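    As a hedged illustration of the bias test described above, the sketch below compares the systematic offset of a recovered parameter against its statistical uncertainty using posterior samples from a mock-signal injection; the helper function, parameter values and 90% credible-interval convention are all assumptions, not the collaboration's actual pipeline.

```python
import numpy as np

def bias_vs_statistical_error(posterior_samples, true_value):
    """Compare the systematic offset of a recovered parameter to its
    statistical uncertainty (hypothetical helper, not the LALInference code).

    posterior_samples : 1-D array of posterior draws for one parameter
    true_value        : parameter value used to generate the mock signal
    """
    median = np.median(posterior_samples)
    # 90% credible-interval half-width as the statistical error scale
    lo, hi = np.percentile(posterior_samples, [5.0, 95.0])
    stat_error = 0.5 * (hi - lo)
    bias = median - true_value
    return bias / stat_error  # |ratio| << 1 suggests no significant bias

# toy usage: posterior for a chirp-mass-like parameter from an injection
rng = np.random.default_rng(0)
samples = rng.normal(loc=30.2, scale=1.5, size=10_000)  # assumed posterior
print(f"bias / stat. error = {bias_vs_statistical_error(samples, 30.0):+.2f}")
```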

  12. Prevalence of refractive errors in children in India: a systematic review.

    PubMed

    Sheeladevi, Sethu; Seelam, Bharani; Nukella, Phanindra B; Modi, Aditi; Ali, Rahul; Keay, Lisa

    2018-04-22

    Uncorrected refractive error is an avoidable cause of visual impairment which affects children in India. The objective of this review is to estimate the prevalence of refractive errors in children ≤ 15 years of age. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed in this review. A detailed literature search was performed to include all population and school-based studies published from India between January 1990 and January 2017, using the Cochrane Library, Medline and Embase. The quality of the included studies was assessed based on a critical appraisal tool developed for systematic reviews of prevalence studies. Four population-based studies and eight school-based studies were included. The overall prevalence of refractive error per 100 children was 8.0 (CI: 7.4-8.1) and in schools it was 10.8 (CI: 10.5-11.2). The population-based prevalence of myopia, hyperopia (≥ +2.00 D) and astigmatism was 5.3 per cent, 4.0 per cent and 5.4 per cent, respectively. Combined refractive error and myopia alone were higher in urban areas compared to rural areas (odds ratio [OR]: 2.27 [CI: 2.09-2.45]) and (OR: 2.12 [CI: 1.79-2.50]), respectively. The prevalence of combined refractive errors and myopia alone in schools was higher among girls than boys (OR: 1.2 [CI: 1.1-1.3] and OR: 1.1 [CI: 1.1-1.2]), respectively. However, hyperopia was more prevalent among boys than girls in schools (OR: 2.1 [CI: 1.8-2.4]). Refractive error in children in India is a major public health problem and requires concerted efforts from various stakeholders including the health care workforce, education professionals and parents, to manage this issue. © 2018 Optometry Australia.

  13. The Clinical Assessment in the Legal Field: An Empirical Study of Bias and Limitations in Forensic Expertise

    PubMed Central

    Iudici, Antonio; Salvini, Alessandro; Faccio, Elena; Castelnuovo, Gianluca

    2015-01-01

    According to the literature, psychological assessment in forensic contexts is one of the most controversial application areas for clinical psychology. This paper presents a review of systematic judgment errors in the forensic field. Forty-six psychological reports written by psychologists serving as court consultants were analyzed with content analysis to identify typical judgment errors related to the following areas: (a) distortions in the attribution of causality, (b) inferential errors, and (c) epistemological inconsistencies. Results indicated that systematic errors of judgment, of the kind usually attributed to “the man in the street,” are widely present in the forensic evaluations of specialist consultants. Clinical and practical implications are discussed. This article could lead to significant benefits for clinical psychologists who want to deal with this sensitive issue and are interested in improving the quality of their contribution to the justice system. PMID:26648892

  14. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    NASA Astrophysics Data System (ADS)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.
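    A minimal toy model of the effect discussed above: a scalar Kalman filter assimilating observations from a truth run while the forecast model carries an uncorrected systematic bias (all gains and variances below are invented for illustration). The mean analysis error stays offset from zero, which is the signature of bias leaking through state-estimation Data Assimilation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, bias = 200, 0.5          # assumed systematic model error
truth = np.zeros(n_steps)
x_a, P = 0.0, 1.0                 # analysis state and variance
Q, R = 0.05, 0.1                  # model and observation error variances
errors = []
for k in range(1, n_steps):
    truth[k] = 0.95 * truth[k - 1] + rng.normal(0, np.sqrt(Q))
    obs = truth[k] + rng.normal(0, np.sqrt(R))
    x_f = 0.95 * x_a + bias       # biased forecast model
    P_f = 0.95**2 * P + Q
    K = P_f / (P_f + R)           # Kalman gain
    x_a = x_f + K * (obs - x_f)   # analysis step with no bias handling
    P = (1 - K) * P_f
    errors.append(x_a - truth[k])
print(f"mean analysis error (would be ~0 if unbiased): {np.mean(errors):+.3f}")
```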

  15. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    NASA Technical Reports Server (NTRS)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparing to MOLA (Mars Orbiter Laser Altimeter), we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.
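    The 10-20% figure follows from simple hydrostatic scaling: in an exponential atmosphere a height error dz maps to a fractional density error of exp(dz/H) - 1. A quick check, assuming a Mars scale height of roughly 10 km:

```python
import numpy as np

H = 10.0  # assumed Mars atmospheric scale height in km
for dz in (1.0, 2.0):  # vertical trajectory error in km
    # hydrostatic atmosphere: rho(z) ~ exp(-z/H), so a height error dz
    # maps to a fractional density/pressure error of exp(dz/H) - 1
    print(f"dz = {dz:.0f} km -> density error ~ {100*(np.exp(dz/H)-1):.0f}%")
```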

  16. Solutions to decrease a systematic error related to AAPH addition in the fluorescence-based ORAC assay.

    PubMed

    Mellado-Ortega, Elena; Zabalgogeazcoa, Iñigo; Vázquez de Aldana, Beatriz R; Arellano, Juan B

    2017-02-15

    Oxygen radical absorbance capacity (ORAC) assay in 96-well multi-detection plate readers is a rapid method to determine total antioxidant capacity (TAC) in biological samples. A disadvantage of this method is that the antioxidant inhibition reaction does not start in all of the 96 wells at the same time due to technical limitations when dispensing the free radical-generating azo initiator 2,2'-azobis (2-methyl-propanimidamide) dihydrochloride (AAPH). The time delay between wells yields a systematic error that causes statistically significant differences in TAC determination of antioxidant solutions depending on their plate position. We propose two alternative solutions to avoid this AAPH-dependent error in ORAC assays. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    NASA Astrophysics Data System (ADS)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs etc.). Kalman’s concept of “Control and Observation” is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine’s error functions. A systematic error map of the machine’s workspace is produced based on the error function measurements. The error map feeds into an error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine’s workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
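    A sketch of the postprocessor idea, assuming the error map is available on a regular grid over the workspace: interpolate the mapped volumetric error at each commanded point and shift the target to cancel it. The grid, error values and correction sign convention below are illustrative.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical volumetric error map: positioning error (mm) measured by the
# laser interferometer on a coarse grid spanning the machine workspace.
x = y = z = np.linspace(0.0, 500.0, 6)            # mm
err_map = 0.002 * np.random.default_rng(2).normal(size=(6, 6, 6, 3))

interp = RegularGridInterpolator((x, y, z), err_map)

def corrected_target(p):
    """Postprocessor step: shift the commanded point to cancel the
    interpolated volumetric error at that location (sketch only)."""
    p = np.asarray(p, dtype=float)
    return p - interp(p)[0]

print(corrected_target([120.0, 250.0, 75.0]))
```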

  18. PML AND PSTD ALGORITHM FOR ARBITRARY LOSSY ANISOTROPIC MEDIA. (R825225)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  19. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
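    For a concrete (if minimal) picture of subbanding, the sketch below performs one level of 2-D Haar analysis; the Haar filters stand in for whatever filter bank an actual coder would use.

```python
import numpy as np

def haar_subbands(img):
    """One level of 2-D Haar subband analysis (a minimal stand-in for the
    filter banks discussed in the paper). Returns LL, LH, HL, HH."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # coarse approximation: useful for browsing
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.random.default_rng(3).integers(0, 256, size=(8, 8))
ll, lh, hl, hh = haar_subbands(img)
# progressive resolution: transmit LL first, refine with the detail bands;
# a lossy coder would quantize LH/HL/HH more aggressively than LL
```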

  20. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, D.; Kwatra, S. C.

    1992-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  1. [Improving blood safety: errors management in transfusion medicine].

    PubMed

    Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana

    2014-01-01

    The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with the systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with actual health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and in the reduction of the transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic and applicable.

  2. Computation of nonlinear ultrasound fields using a linearized contrast source method.

    PubMed

    Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A

    2013-08-01

    Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
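    The remedy described above, replacing Neumann iteration with a Krylov solver on the linearized system, can be illustrated on a toy discretized contrast-source equation; the dense random "Green's operator" and contrast below are stand-ins, not the INCS discretization.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

# Toy linearized contrast-source problem: solve (I - G*C) u = u_inc, where
# G is an (assumed) dense Green's operator and C a diagonal contrast. This
# only illustrates swapping Neumann iteration for BiCGSTAB.
n = 200
rng = np.random.default_rng(4)
G = 0.8 * rng.normal(size=(n, n)) / n        # assumed discrete Green's kernel
C = np.diag(rng.uniform(0.5, 2.0, size=n))   # contrast strength
A = LinearOperator((n, n), matvec=lambda u: u - G @ (C @ u))
u_inc = np.ones(n)

u, info = bicgstab(A, u_inc)
print("converged" if info == 0 else f"bicgstab info={info}",
      "| residual:", np.linalg.norm(u - G @ (C @ u) - u_inc))
```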

  3. Transmission of RF Signals Over Optical Fiber for Avionics Applications

    NASA Technical Reports Server (NTRS)

    Slaveski, Filip; Sluss, James, Jr.; Atiquzzaman, Mohammed; Hung, Nguyen; Ngo, Duc

    2002-01-01

    During flight, aircraft avionics transmit and receive RF signals to/from antennas over coaxial cables. As the density and complexity of onboard avionics increase, the electromagnetic interference (EMI) environment degrades proportionately, leading to decreasing signal-to-noise ratios (SNRs) and potential safety concerns. The coaxial cables are inherently lossy, limiting the RF signal bandwidth while adding considerable weight. To overcome these limitations, we have investigated a fiber optic communications link for aircraft that utilizes wavelength division multiplexing (WDM) to support the simultaneous transmission of multiple signals (including RF) over a single optical fiber. Optical fiber has many advantages over coaxial cable, particularly lower loss, greater bandwidth, and immunity to EMI. In this paper, we demonstrate that WDM can be successfully used to transmit multiple RF signals over a single optical fiber with no appreciable signal degradation. We investigate the transmission of FM and AM analog modulated signals, as well as FSK digital modulated signals, over a fiber optic link (FOL) employing WDM. We present measurements of power loss, delay, SNR, carrier-to-noise ratio (CNR), total harmonic distortion (THD), and bit error rate (BER). Our experimental results indicate that WDM is a fiber optic technology suitable for avionics applications.

  4. Medication errors in paediatric care: a systematic review of epidemiology and an evaluation of evidence supporting reduction strategy recommendations

    PubMed Central

    Miller, Marlene R; Robinson, Karen A; Lubomski, Lisa H; Rinke, Michael L; Pronovost, Peter J

    2007-01-01

    Background Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children. Objective To synthesise peer reviewed knowledge on children's medication errors and on recommendations to improve paediatric medication safety by a systematic literature review. Data sources PubMed, Embase and Cinahl from 1 January 2000 to 30 April 2005, and 11 national entities that have disseminated recommendations to improve medication safety. Study selection Inclusion criteria were peer reviewed original data in English language. Studies that did not separately report paediatric data were excluded. Data extraction Two reviewers screened articles for eligibility and for data extraction, and screened all national medication error reduction strategies for relevance to children. Data synthesis From 358 articles identified, 31 were included for data extraction. The definition of medication error was non‐uniform across the studies. Dispensing and administering errors were the most poorly and non‐uniformly evaluated. Overall, the distributional epidemiological estimates of the relative percentages of paediatric error types were: prescribing 3–37%, dispensing 5–58%, administering 72–75%, and documentation 17–21%. 26 unique recommendations for strategies to reduce medication errors were identified; none were based on paediatric evidence. Conclusions Medication errors occur across the entire spectrum of prescribing, dispensing, and administering, are common, and have a myriad of non‐evidence based potential reduction strategies. Further research in this area needs a firmer standardisation for items such as dose ranges and definitions of medication errors, broader scope beyond inpatient prescribing errors, and prioritisation of implementation of medication error reduction strategies. PMID:17403758

  5. Particle Tracking on the BNL Relativistic Heavy Ion Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell, G. F.

    1986-08-07

    Tracking studies including the effects of random multipole errors alone, as well as the combined effects of random and systematic multipole errors, have been made for RHIC. Initial results for operating at an off-diagonal working point are discussed.

  6. Efficacy and workload analysis of a fixed vertical couch position technique and a fixed‐action–level protocol in whole‐breast radiotherapy

    PubMed Central

    Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank

    2015-01-01

    Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no‐action–level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record‐and‐verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of the skin mobility on the anteroposterior patient setup reproducibility in whole‐breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position‐based patient setup in WBRT. The possibility of introducing a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior–inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed‐action–level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off‐line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior–inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off‐line setup correction compared to the NAL protocol. The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed‐action–level protocol with 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, was shown to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.‐s
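    A hedged sketch of how such population statistics feed into margins: estimate the systematic component Sigma (SD of per-patient mean errors) and the random component sigma (RMS of per-patient SDs), then apply the widely used van Herk recipe 2.5*Sigma + 0.7*sigma. The recipe is standard in the radiotherapy literature but is not quoted in this abstract, and the cohort below is simulated.

```python
import numpy as np

def setup_margins(errors_by_patient):
    """Systematic (Sigma) and random (sigma) setup error components from
    per-fraction residual setup errors (mm), one array per patient, plus
    the common van Herk margin 2.5*Sigma + 0.7*sigma (an assumption here,
    not taken from this abstract)."""
    means = [np.mean(e) for e in errors_by_patient]
    sds = [np.std(e, ddof=1) for e in errors_by_patient]
    Sigma = np.std(means, ddof=1)              # SD of per-patient means
    sigma = np.sqrt(np.mean(np.square(sds)))   # RMS of per-patient SDs
    return Sigma, sigma, 2.5 * Sigma + 0.7 * sigma

rng = np.random.default_rng(5)
cohort = [rng.normal(rng.normal(0, 1.8), 2.2, size=15) for _ in range(20)]
print("Sigma, sigma, margin (mm):", setup_margins(cohort))
```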

  7. The Experimental Probe of Inflationary Cosmology: A Mission Concept Study for NASA's Einstein Inflation Probe

    NASA Technical Reports Server (NTRS)

    2008-01-01

    When we began our study we sought to answer five fundamental implementation questions: 1) can foregrounds be measured and subtracted to a sufficiently low level?; 2) can systematic errors be controlled?; 3) can we develop optics with sufficiently large throughput, low polarization, and frequency coverage from 30 to 300 GHz?; 4) is there a technical path to realizing the sensitivity and systematic error requirements?; and 5) what are the specific mission architecture parameters, including cost? Detailed answers to these questions are contained in this report.

  8. MAX-DOAS measurements of HONO slant column densities during the MAD-CAT campaign: inter-comparison, sensitivity studies on spectral analysis settings, and error budget

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas

    2017-10-01

    In order to promote the development of the passive DOAS technique, the Multi Axis DOAS - Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference of the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. All the fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 10^15 molecules cm^-2 for an integration time of 1 min. The fit error for the mini-MAX-DOAS is around 0.7 × 10^15 molecules cm^-2. Although the HONO delta SCDs are normally smaller than 6 × 10^15 molecules cm^-2, consistent time series of HONO delta SCDs are retrieved from the measurements of different instruments. Both fits with a sequential Fraunhofer reference spectrum (FRS) and a daily noon FRS lead to similar consistency. Apart from the mini-MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 10^15 molecules cm^-2. The correlation coefficients are higher than 0.7 and the slopes of linear regressions deviate from unity by less than 16 % for the elevation angle of 1°. The correlations decrease with an increase in elevation angle. All the participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of HONO results from their respective fit programs. In general the errors are smaller than 0.3 × 10^15 molecules cm^-2, which is about half of the systematic difference between the real measurements. The differences of HONO delta SCDs retrieved in the selected three spectral ranges 335-361, 335-373 and 335-390 nm are considerable (up to 0.57 × 10^15 molecules cm^-2) for both real measurements and synthetic spectra. We performed sensitivity studies to quantify the dominant systematic error sources and to find a recommended DOAS setting in the three spectral ranges. The results show that water vapour absorption, temperature and wavelength dependence of O4 absorption, temperature dependence of the Ring spectrum, and polynomial and intensity offset correction all together dominate the systematic errors. We recommend a fit range of 335-373 nm for HONO retrievals. In this fit range the overall systematic uncertainty is about 0.87 × 10^15 molecules cm^-2, much smaller than those in the other two ranges. The typical random uncertainty is estimated to be about 0.16 × 10^15 molecules cm^-2, which is only 25 % of the total systematic uncertainty for most of the instruments in the MAD-CAT campaign. In summary, for most of the MAX-DOAS instruments at elevation angles below 5°, half of the daytime measurements (usually in the morning) of HONO delta SCD can be above the detection limit of 0.2 × 10^15 molecules cm^-2 with an uncertainty of ~0.9 × 10^15 molecules cm^-2.

  9. The Role of Supralexical Prosodic Units in Speech Production: Evidence from the Distribution of Speech Errors

    ERIC Educational Resources Information Center

    Choe, Wook Kyung

    2013-01-01

    The current dissertation represents one of the first systematic studies of the distribution of speech errors within supralexical prosodic units. Four experiments were conducted to gain insight into the specific role of these units in speech planning and production. The first experiment focused on errors in adult English. These were found to be…

  10. A geometric model for initial orientation errors in pigeon navigation.

    PubMed

    Postlethwaite, Claire M; Walker, Michael M

    2011-01-21

    All mobile animals respond to gradients in signals in their environment, such as light, sound, odours and magnetic and electric fields, but it remains controversial how they might use these signals to navigate over long distances. The Earth's surface is essentially two-dimensional, so two stimuli are needed to act as coordinates for navigation. However, no environmental fields are known to be simple enough to act as perpendicular coordinates on a two-dimensional grid. Here, we propose a model for navigation in which we assume that an animal has a simplified 'cognitive map' in which environmental stimuli act as perpendicular coordinates. We then investigate how systematic deviation of the contour lines of the environmental signals from a simple orthogonal arrangement can cause errors in position determination and lead to systematic patterns of directional errors in initial homing directions taken by pigeons. The model reproduces patterns of initial orientation errors seen in previously collected data from homing pigeons, predicts that errors should increase with distance from the loft, and provides a basis for efforts to identify further sources of orientation errors made by homing pigeons. Copyright © 2010 Elsevier Ltd. All rights reserved.
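    The core geometric idea can be reproduced in a few lines: if the two gradients actually intersect at (90 + skew) degrees but the bird's cognitive map treats the two signal values as orthogonal coordinates, the inferred homeward bearing rotates away from the true one. The linear, uniform gradients below are a simplifying assumption, not the paper's full model.

```python
import numpy as np

def initial_heading_error(pos, skew_deg):
    """Angular error in the initial homeward bearing when two environmental
    gradients, assumed perpendicular in the bird's cognitive map, actually
    intersect at (90 + skew) degrees (toy version of the paper's geometry)."""
    skew = np.radians(skew_deg)
    g1 = np.array([1.0, 0.0])                    # direction of gradient 1
    g2 = np.array([np.sin(skew), np.cos(skew)])  # gradient 2, tilted off 90 deg
    s1, s2 = g1 @ pos, g2 @ pos                  # signal values read at pos
    true_bearing = np.arctan2(-pos[1], -pos[0])  # true direction to the loft
    map_bearing = np.arctan2(-s2, -s1)           # bearing in the cognitive map
    err = map_bearing - true_bearing
    return np.degrees((err + np.pi) % (2 * np.pi) - np.pi)

pos = 100.0 * np.array([np.cos(0.6), np.sin(0.6)])   # hypothetical release site
for skew in (5.0, 10.0, 20.0):
    print(f"skew {skew:4.1f} deg -> initial orientation error "
          f"{initial_heading_error(pos, skew):+5.1f} deg")
```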

  11. Horizon sensors attitude errors simulation for the Brazilian Remote Sensing Satellite

    NASA Astrophysics Data System (ADS)

    Vicente de Brum, Antonio Gil; Ricci, Mario Cesar

    Remote sensing, meteorological and other types of satellites require increasingly better Earth-related positioning. From past experience it is well known that the thermal horizon in the 15 micrometer band provides conditions for determining the local vertical at any time. This detection is done by horizon sensors, which are accurate instruments for Earth-referred attitude sensing and control whose performance is limited by systematic and random errors amounting to about 0.5 deg. Using the computer programs OBLATE, SEASON, ELECTRO and MISALIGN, developed at INPE to simulate four distinct facets of conical scanning horizon sensors, attitude errors are obtained for the Brazilian Remote Sensing Satellite (the first one, SSR-1, is scheduled to fly in 1996). These errors are due to the oblate shape of the Earth, seasonal and latitudinal variations of the 15 micrometer infrared radiation, electronic processing time delay and misalignment of the sensor axis. The sensor-related attitude errors are thus properly quantified in this work and will, together with other systematic errors (for instance, ambient temperature variation), take part in the pre-launch analysis of the Brazilian Remote Sensing Satellite with respect to the horizon sensor performance.

  12. Application aware approach to compression and transmission of H.264 encoded video for automated and centralized transportation surveillance.

    DOT National Transportation Integrated Search

    2012-10-01

    In this report we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channe...

  13. Aggregating quantum repeaters for the quantum internet

    NASA Astrophysics Data System (ADS)

    Azuma, Koji; Kato, Go

    2017-09-01

    The quantum internet holds promise for accomplishing quantum teleportation and unconditionally secure communication freely between arbitrary clients all over the globe, as well as the simulation of quantum many-body systems. For such a quantum internet protocol, a general fundamental upper bound on the obtainable entanglement or secret key has been derived [K. Azuma, A. Mizutani, and H.-K. Lo, Nat. Commun. 7, 13523 (2016), 10.1038/ncomms13523]. Here we consider its converse problem. In particular, we present a universal protocol constructible from any given quantum network, which is based on running quantum repeater schemes in parallel over the network. For arbitrary lossy optical channel networks, our protocol has no scaling gap with the upper bound, even based on existing quantum repeater schemes. In an asymptotic limit, our protocol works as an optimal entanglement or secret-key distribution over any quantum network composed of practical channels such as erasure channels, dephasing channels, bosonic quantum amplifier channels, and lossy optical channels.

  14. A radial transmission line material measurement apparatus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warne, L.K.; Moyer, R.D.; Koontz, T.E.

    1993-05-01

    A radial transmission line material measurement sample apparatus (sample holder, offset short standards, measurement software, and instrumentation) is described which has been proposed, analyzed, designed, constructed, and tested. The purpose of the apparatus is to obtain accurate surface impedance measurements of lossy, possibly anisotropic, samples at low and intermediate frequencies (vhf and low uhf). The samples typically take the form of sections of the material coatings on conducting objects. Such measurements thus provide the key input data for predictive numerical scattering codes. Prediction of the sample surface impedance from the coaxial input impedance measurement is carried out by two techniques. The first is an analytical model for the coaxial-to-radial transmission line junction. The second is an empirical determination of the bilinear transformation model of the junction by the measurement of three full standards. The standards take the form of three offset shorts (and an additional lossy Salisbury load), which have also been constructed. The accuracy achievable with the device appears to be near one percent.
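    The second technique, determining a bilinear transformation of the junction from three measured standards, is the classic one-port error-box calibration and is easy to sketch; the error-box coefficients and standards below are made up for the demonstration.

```python
import numpy as np

def fit_bilinear(gamma_std, gamma_meas):
    """Fit the junction's bilinear (Mobius) model Gm = (a*G + b)/(c*G + 1)
    from three known standards (e.g., three offset shorts), as in one-port
    error-box calibration. Values are complex reflection coefficients."""
    G = np.asarray(gamma_std, complex)
    Gm = np.asarray(gamma_meas, complex)
    # a*G + b - c*G*Gm = Gm  ->  linear system in (a, b, c)
    M = np.column_stack([G, np.ones(3), -G * Gm])
    a, b, c = np.linalg.solve(M, Gm)
    return a, b, c

def de_embed(gamma_meas, a, b, c):
    """Invert the bilinear map to recover the sample's true reflection."""
    return (gamma_meas - b) / (a - c * gamma_meas)

# toy check with an assumed error box (a0, b0, c0 are invented)
a0, b0, c0 = 0.9 + 0.1j, 0.05 - 0.02j, 0.1 + 0.05j
stds = np.array([-1.0, -1j, 1j])          # three hypothetical offset shorts
meas = (a0 * stds + b0) / (c0 * stds + 1)
a, b, c = fit_bilinear(stds, meas)
print(de_embed((a0 * 0.3 + b0) / (c0 * 0.3 + 1), a, b, c))  # recovers ~0.3
```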

  15. Extrinsic and Intrinsic Frequency Dispersion of High-k Materials in Capacitance-Voltage Measurements

    PubMed Central

    Tao, J.; Zhao, C.Z.; Zhao, C.; Taechakumput, P.; Werner, M.; Taylor, S.; Chalker, P. R.

    2012-01-01

    In capacitance-voltage (C-V) measurements, frequency dispersion in high-k dielectrics is often observed. The frequency dependence of the dielectric constant (k-value), that is the intrinsic frequency dispersion, could not be assessed before suppressing the effects of extrinsic frequency dispersion, such as the effects of the lossy interfacial layer (between the high-k thin film and silicon substrate) and the parasitic effects. The effect of the lossy interfacial layer on frequency dispersion was investigated and modeled based on a dual frequency technique. The significance of parasitic effects (including series resistance and the back metal contact of the metal-oxide-semiconductor (MOS) capacitor) on frequency dispersion was also studied. The effect of surface roughness on frequency dispersion is also discussed. After taking extrinsic frequency dispersion into account, the relaxation behavior can be modeled using the Curie-von Schweidler (CS) law, the Kohlrausch-Williams-Watts (KWW) relationship and the Havriliak-Negami (HN) relationship. Dielectric relaxation mechanisms are also discussed. PMID:28817021

  16. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
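    One plausible reading of the correlation step, offered here only as a sketch: score each block of a frame by its Pearson correlation with a reference frame and flag low-correlation blocks as scientifically relevant motion. The block size and threshold are assumptions, not the paper's values.

```python
import numpy as np

def low_correlation_mask(frame, ref, block=8, thresh=0.9):
    """Flag blocks whose Pearson correlation with the reference frame drops
    below a threshold -- a toy version of spending bits only where the video
    departs from static background."""
    h, w = frame.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = frame[i:i+block, j:j+block].ravel().astype(float)
            b = ref[i:i+block, j:j+block].ravel().astype(float)
            r = np.corrcoef(a, b)[0, 1] if a.std() > 0 and b.std() > 0 else 1.0
            mask[i // block, j // block] = r < thresh  # likely moving content
    return mask

rng = np.random.default_rng(10)
ref = rng.integers(0, 50, size=(64, 64))
frame = ref.copy()
frame[16:24, 40:48] = rng.integers(0, 50, size=(8, 8))  # object moved here
print(np.argwhere(low_correlation_mask(frame, ref)))     # -> block (2, 5)
```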

  17. A Lossy Compression Technique Enabling Duplication-Aware Sequence Alignment

    PubMed Central

    Freschi, Valerio; Bogliolo, Alessandro

    2012-01-01

    In spite of the recognized importance of tandem duplications in genome evolution, commonly adopted sequence comparison algorithms do not take into account complex mutation events involving more than one residue at a time, since they are not compliant with the underlying assumption of statistical independence of adjacent residues. As a consequence, the presence of tandem repeats in sequences under comparison may impair the biological significance of the resulting alignment. Although solutions have been proposed, repeat-aware sequence alignment is still considered to be an open problem and new efficient and effective methods have been advocated. The present paper describes an alternative lossy compression scheme for genomic sequences which iteratively collapses repeats of increasing length. The resulting approximate representations do not contain tandem duplications, while retaining enough information for making their comparison even more significant than the edit distance between the original sequences. This allows us to exploit traditional alignment algorithms directly on the compressed sequences. Results confirm the validity of the proposed approach for the problem of duplication-aware sequence alignment. PMID:22518086
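    A minimal sketch of the collapsing idea, with the iteration order and maximum unit length as assumptions: repeatedly replace adjacent duplicate blocks of increasing length by a single copy.

```python
def collapse_repeats(seq, max_unit=3):
    """Iteratively remove tandem duplications of increasing unit length:
    first 'AA' -> 'A', then 'ABAB' -> 'AB', and so on (a simple reading of
    the paper's scheme, not its exact algorithm)."""
    for unit in range(1, max_unit + 1):
        changed = True
        while changed:
            changed = False
            i, out = 0, []
            while i < len(seq):
                if seq[i:i + unit] == seq[i + unit:i + 2 * unit] and i + 2 * unit <= len(seq):
                    out.append(seq[i:i + unit])  # keep one copy, drop the other
                    i += 2 * unit
                    changed = True
                else:
                    out.append(seq[i])
                    i += 1
            seq = "".join(out)
    return seq

print(collapse_repeats("GATTTTACACACAGG"))  # -> "GATACAG"
```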

  18. Detection of bondline delaminations in multilayer structures with lossy components

    NASA Technical Reports Server (NTRS)

    Madaras, Eric I.; Winfree, William P.; Smith, B. T.; Heyman, Joseph H.

    1988-01-01

    The detection of bondline delaminations in multilayer structures using ultrasonic reflection techniques is a generic problem in adhesively bonded composite structures such as the Space Shuttle's Solid Rocket Motors (SRM). Standard pulse echo ultrasonic techniques do not perform well for a composite resonator composed of a resonant layer combined with attenuating layers. Excessive ringing in the resonant layer tends to mask internal echoes emanating from the attenuating layers. The SRM is made up of a resonant steel layer backed by layers of adhesive, rubber, liner and fuel, which are ultrasonically attenuating. The structure's response is modeled as a lossy ultrasonic transmission line. The model predicts that the acoustic response of the system is sensitive to delaminations at the interior bondlines in a few narrow frequency bands. These predictions are verified by measurements on a fabricated system. Successful imaging of internal delaminations is sensitive to proper selection of the interrogating frequency. Images of fabricated bondline delaminations are presented based on these studies.
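    The lossy transmission-line picture can be sketched with the standard impedance recursion Zin = Z0 (ZL + Z0 tanh(gd)) / (Z0 + ZL tanh(gd)) applied layer by layer; scanning frequency and comparing bonded versus delaminated backings hints at the narrow sensitive bands the model predicts. All material values below are placeholders, not SRM data.

```python
import numpy as np

def input_impedance(layers, z_term, f):
    """Acoustic input impedance of a layered stack terminated by z_term,
    using the lossy transmission-line recursion. Each layer is
    (char. impedance Z0 [MRayl], speed c [m/s], loss alpha [Np/m], d [m])."""
    z = z_term
    for z0, c, alpha, d in reversed(layers):   # work from the back layer out
        g = alpha + 1j * 2 * np.pi * f / c     # lossy propagation constant
        t = np.tanh(g * d)
        z = z0 * (z + z0 * t) / (z0 + z * t)
    return z

layers = [  # invented stand-ins for steel / adhesive / rubber
    (45.0, 5900.0, 1.0, 0.012),
    (3.0, 2000.0, 60.0, 0.001),
    (1.7, 1500.0, 80.0, 0.003),
]
for f in np.linspace(0.2e6, 1.0e6, 5):
    zin = input_impedance(layers, z_term=1.6, f=f)            # bonded backing
    zin_delam = input_impedance(layers[:1], z_term=0.0, f=f)  # air gap behind steel
    print(f"{f/1e6:.2f} MHz: |Zin| bonded {abs(zin):6.2f}  "
          f"delaminated {abs(zin_delam):6.2f}")
```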

  19. Optical monitoring of thin film electro-polymerization on surface of ITO-coated lossy-mode resonance sensor

    NASA Astrophysics Data System (ADS)

    Sobaszek, Michał; Dominik, Magdalena; Burnat, Dariusz; Bogdanowicz, Robert; Stranak, Viteszlav; Sezemsky, Petr; Śmietana, Mateusz

    2017-04-01

    This work presents an optical fiber sensor based on the lossy-mode resonance (LMR) phenomenon supported by an indium tin oxide (ITO) thin overlay, applied to the investigation of the electro-polymerization effect on the ITO's surface. The ITO overlays were deposited on the core of polymer-clad silica (PCS) fibers using the reactive magnetron sputtering (RMS) method. Since ITO is electrically conductive and electrochemically active, it can be used as a working electrode in a 3-electrode cyclic voltammetry setup. For a fixed potential applied to the electrode, the current flow decreases with time, which corresponds to polymer layer formation on the ITO surface. Since the LMR phenomenon depends on the optical properties in the proximity of the ITO surface, polymer layer formation can be monitored optically in real time. The electrodeposition process has been performed with Isatin, which is a strong endogenous neurochemical regulator in humans as it is a metabolic derivative of adrenaline. It was found that optical detection of Isatin is possible in the proposed configuration.

  20. Enhancement of specific absorption rate in lossy dielectric objects using a slab of left-handed material.

    PubMed

    Zhao, Lei; Cui, Tie Jun

    2005-12-01

    An enhancement of the specific absorption rate (SAR) inside a lossy dielectric object has been investigated theoretically based on a slab of left-handed medium (LHM). In order to make an accurate analysis of the SAR distribution, a proper Green's function involving the LHM slab is proposed, from which an integral equation for the electric field inside the dielectric object is derived. Such an integral equation has been solved accurately and efficiently using the conjugate gradient method and the fast Fourier transform. We have carried out numerous numerical experiments on the SAR distributions inside the dielectric object excited by a line source with and without the LHM slab. These numerical experiments show that SAR can be enhanced tremendously when the LHM slab is involved, due to the proper usage of strong surface waves, which will be helpful in potential biomedical applications such as hyperthermia. The physical insight into this phenomenon has also been discussed.
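    For reference, the quantity being enhanced is commonly computed as SAR = sigma |E|^2 / (2 rho) for peak-amplitude field phasors; a one-line check with assumed tissue-like values:

```python
import numpy as np

# Standard SAR definition for peak-amplitude phasors (not specific to this
# paper): SAR = sigma * |E|^2 / (2 * rho).
sigma, rho = 0.8, 1000.0           # assumed conductivity (S/m), density (kg/m^3)
E = np.array([12.0, 25.0, 40.0])   # |E| in V/m at three points in the object
print("SAR (W/kg):", sigma * E**2 / (2 * rho))
```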

  1. Systematic Error in Leaf Water Potential Measurements with a Thermocouple Psychrometer.

    PubMed

    Rawlins, S L

    1964-10-30

    To allow for the error in measurement of water potentials in leaves, introduced by the presence of a water droplet in the chamber of the psychrometer, a correction must be made for the permeability of the leaf.

  2. Quantifying the burden of opioid medication errors in adult oncology and palliative care settings: A systematic review.

    PubMed

    Heneka, Nicole; Shaw, Tim; Rowett, Debra; Phillips, Jane L

    2016-06-01

    Opioids are the primary pharmacological treatment for cancer pain and, in the palliative care setting, are routinely used to manage symptoms at the end of life. Opioids are one of the most frequently reported drug classes in medication errors causing patient harm. Despite their widespread use, little is known about the incidence and impact of opioid medication errors in oncology and palliative care settings. To determine the incidence, types and impact of reported opioid medication errors in adult oncology and palliative care patient settings. A systematic review. Five electronic databases and the grey literature were searched from 1980 to August 2014. Empirical studies published in English, reporting data on opioid medication error incidence, types or patient impact, within adult oncology and/or palliative care services, were included. Popay's narrative synthesis approach was used to analyse data. Five empirical studies were included in this review. Opioid error incidence rate was difficult to ascertain as each study focussed on a single narrow area of error. The predominant error type related to deviation from opioid prescribing guidelines, such as incorrect dosing intervals. None of the included studies reported the degree of patient harm resulting from opioid errors. This review has highlighted the paucity of the literature examining opioid error incidence, types and patient impact in adult oncology and palliative care settings. Defining, identifying and quantifying error reporting practices for these populations should be an essential component of future oncology and palliative care quality and safety initiatives. © The Author(s) 2015.

  3. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive.

    PubMed

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the '3 or 3 rule'). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores.

  4. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchhoff, William H.

    2012-09-15

    The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516: 2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
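    As a hedged stand-in for the extended logistic of ASTM E1636 (whose exact parameterization is not reproduced here), the sketch below fits a Richards-type asymmetric logistic, with position, width and asymmetry parameters, to a synthetic depth profile and reports parameter confidence estimates; inspecting the residuals for structure is the systematic-error check the abstract describes.

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_logistic(x, y0, y1, x0, w, p):
    """Richards-type generalized logistic: position x0, width w,
    asymmetry exponent p (an assumed form, not the ASTM E1636 one)."""
    return y0 + (y1 - y0) / (1.0 + np.exp(-(x - x0) / w)) ** p

# synthetic depth profile with noise (all numbers invented)
rng = np.random.default_rng(6)
x = np.linspace(0, 100, 120)                       # sputter depth, nm
y = asym_logistic(x, 1.0, 9.0, 55.0, 4.0, 1.6) + rng.normal(0, 0.15, x.size)

popt, pcov = curve_fit(asym_logistic, x, y, p0=[y.min(), y.max(), 50.0, 5.0, 1.0])
perr = np.sqrt(np.diag(pcov))                      # 1-sigma confidence estimates
for name, v, e in zip(("y0", "y1", "x0", "w", "p"), popt, perr):
    print(f"{name} = {v:7.3f} +/- {e:.3f}")
# residuals y - asym_logistic(x, *popt) should be structureless when no
# systematic errors are present
```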

  5. The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.

    PubMed

    Stransky, D; Bares, V; Fatka, P

    2007-01-01

    Rainfall data are a crucial input for various tasks concerning the wet weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. Therefore, the general objective of the presented work was to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, the methodology of the tipping bucket rain gauge (TBR) was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to the previous calibration, which enables us to evaluate the ageing of TBRs. A propagation of calibration and other systematic errors through the rainfall-runoff model was performed on an experimental catchment. It was found that the TBR calibration is important mainly for tasks connected with the assessment of peak values and high flow durations. The omission of calibration leads to up to 30% underestimation, and the effect of other systematic errors can add a further 15%. The TBR calibration should be done every two years in order to keep pace with the ageing of the TBR mechanics. Further, the authors recommend adjusting the dynamic test duration proportionally to the generated rainfall intensity.

  6. Chiral extrapolation of the leading hadronic contribution to the muon anomalous magnetic moment

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Maltman, Kim; Peris, Santiago

    2017-04-01

    A lattice computation of the leading-order hadronic contribution to the muon anomalous magnetic moment can potentially help reduce the error on the Standard Model prediction for this quantity, if sufficient control of all systematic errors affecting such a computation can be achieved. One of these systematic errors is that associated with the extrapolation to the physical pion mass from values on the lattice larger than the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 200 to 400 MeV with the help of two-loop chiral perturbation theory, and we find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various tricks to improve the reliability of the chiral extrapolation employed in the literature are taken into account. In addition, while chiral perturbation theory also predicts the dependence on the pion mass of the leading-order hadronic contribution to the muon anomalous magnetic moment as the chiral limit is approached, this prediction turns out to be of no practical use because the physical pion mass is larger than the muon mass that sets the scale for the onset of this behavior.

  7. Quotation accuracy in medical journal articles-a systematic review and meta-analysis.

    PubMed

    Jergas, Hannah; Baethge, Christopher

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose (quotation errors) may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.
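    A generic sketch of how such error rates are pooled, assuming the usual DerSimonian-Laird random-effects model on the logit scale; the study counts below are invented, and this is not necessarily the authors' exact model.

```python
import numpy as np

def pool_error_rates(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale; returns the pooled rate with a 95% CI."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                   # logit-transformed rates
    v = 1 / events + 1 / (totals - events)    # approximate logit variances
    w = 1 / v
    y_fe = np.sum(w * y) / np.sum(w)          # fixed-effect mean
    Q = np.sum(w * (y - y_fe) ** 2)           # heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)                     # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    expit = lambda t: 1 / (1 + np.exp(-t))
    return expit(mu), expit(mu - 1.96 * se), expit(mu + 1.96 * se)

# made-up study counts for illustration only
print(pool_error_rates(events=[30, 55, 12, 80], totals=[200, 310, 150, 400]))
```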

  8. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    PubMed

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures were identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
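    The scoring heart of a failure mode and effects analysis is compact enough to sketch: each failure mode gets severity, occurrence and detectability scores whose product, the risk priority number, ranks targets for countermeasures. The modes and scores below are invented for illustration, not taken from the study.

```python
# Minimal FMEA scoring: risk priority number (RPN) =
# severity x occurrence x detectability, each scored 1-10.
failure_modes = [
    # (process step, failure mode, severity, occurrence, detection)
    ("draw-up", "syringe unlabeled", 7, 6, 7),
    ("draw-up", "wrong concentration drawn", 9, 4, 6),
    ("infusion", "pump programmed incorrectly", 9, 3, 5),
    ("handoff", "infusion not double-checked", 6, 7, 8),
]

scored = [(s * o * d, step, mode) for step, mode, s, o, d in failure_modes]
for rpn, step, mode in sorted(scored, reverse=True):
    print(f"RPN {rpn:4d}  [{step}] {mode}")
# the highest-RPN modes are the natural targets for countermeasures
```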

  9. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    NASA Astrophysics Data System (ADS)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Oftentimes, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
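    The two-stage view translates directly into code. Below is a hedged numpy sketch: standard DMD projects using X alone, while the de-biased total-DMD variant first projects both snapshot matrices onto the row space of the augmented matrix [X; Y], as in total least squares; the toy system and noise level are assumptions.

```python
import numpy as np

def dmd_eigs(X, Y, r):
    """Standard (projected) DMD eigenvalues from snapshot pairs X -> Y."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].conj().T
    A_tilde = U.conj().T @ Y @ V / s       # right-multiply by inv(Sigma)
    return np.linalg.eigvals(A_tilde)

def tdmd_eigs(X, Y, r):
    """De-biased total DMD: project BOTH X and Y onto the leading right-
    singular subspace of the augmented matrix [X; Y], so noise in all
    snapshots is treated symmetrically, then run standard DMD."""
    Z = np.vstack([X, Y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:r].conj().T @ Vt[:r]           # projector onto augmented row space
    return dmd_eigs(X @ P, Y @ P, r)

# toy data: two decaying oscillators seen through 40 noisy sensors
rng = np.random.default_rng(7)
t = np.arange(300)
decay = np.array([0.99, 0.99, 0.995, 0.995])[:, None] ** t
modes = np.array([np.cos(0.3*t), np.sin(0.3*t), np.cos(0.7*t), np.sin(0.7*t)])
D = rng.normal(size=(40, 4)) @ (decay * modes) + 0.1 * rng.normal(size=(40, t.size))
X, Y = D[:, :-1], D[:, 1:]
print("true |eigs|:", [0.99, 0.99, 0.995, 0.995])
print("DMD  |eigs|:", np.round(np.sort(np.abs(dmd_eigs(X, Y, 4))), 4))
print("TDMD |eigs|:", np.round(np.sort(np.abs(tdmd_eigs(X, Y, 4))), 4))
```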

  10. Specificity of reliable change models and review of the within-subjects standard deviation as an error term.

    PubMed

    Hinton-Bayre, Anton D

    2011-02-01

    There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
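    For contrast with the WSD-based variant the paper critiques, here is one of the traditional practice-adjusted reliable change formulations (Jacobson-Truax style, with a control group supplying the practice effect); all numbers below are invented.

```python
import numpy as np

def reliable_change(x1, x2, control_t1, control_t2, r_xx):
    """Practice-adjusted reliable change index: subtract the mean practice
    effect seen in controls and scale by the SE of the difference derived
    from test-retest reliability (a classic model, not the WSD variant)."""
    practice = np.mean(control_t2) - np.mean(control_t1)
    sem = np.std(control_t1, ddof=1) * np.sqrt(1.0 - r_xx)
    s_diff = np.sqrt(2.0) * sem
    return ((x2 - x1) - practice) / s_diff

rng = np.random.default_rng(8)
c1 = rng.normal(100, 15, 50)
c2 = c1 + 4 + rng.normal(0, 6, 50)     # controls improve ~4 points on retest
rc = reliable_change(x1=95.0, x2=112.0, control_t1=c1, control_t2=c2, r_xx=0.85)
print(f"RC = {rc:+.2f} -> "
      f"{'reliable change' if abs(rc) > 1.96 else 'no reliable change'}")
```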

  11. A proposed method to investigate reliability throughout a questionnaire.

    PubMed

    Wentzel-Larsen, Tore; Norekvål, Tone M; Ulvik, Bjørg; Nygård, Ottar; Pripp, Are H

    2011-10-05

    Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could reflect changed reliability of answers. A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure, for assessing whether respondents provide only a random answer or one based on a substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Even though assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales.

  12. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error, which can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to estimate this error. We find that accounting for it decreases the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry; this drift error arises from the diurnal cycle in temperature and from the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the entire error is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
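
    The overlap-based calibration step lends itself to a brief illustration. The sketch below estimates a constant inter-satellite calibration offset from an overlapping period and removes it before concatenating the records; the variable names are invented, and no claim is made that this reproduces the authors' actual procedure.

```python
import numpy as np

def merge_with_overlap(t_a, T_a, t_b, T_b):
    """Merge two satellite records: estimate satellite B's constant
    calibration offset relative to A from their overlapping months,
    subtract it, and concatenate. t_* are month indices, T_* temperatures."""
    overlap = np.intersect1d(t_a, t_b)
    bias = T_b[np.isin(t_b, overlap)].mean() - T_a[np.isin(t_a, overlap)].mean()
    keep = ~np.isin(t_b, t_a)                 # months not already covered by A
    t = np.concatenate([t_a, t_b[keep]])
    T = np.concatenate([T_a, (T_b - bias)[keep]])
    order = np.argsort(t)
    return t[order], T[order]
```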

  13. Moisture Forecast Bias Correction in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D.

    1999-01-01

    Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.

  14. Is the four-day rotation of Venus illusory?. [includes systematic error in radial velocities of solar lines reflected from Venus

    NASA Technical Reports Server (NTRS)

    Young, A. T.

    1974-01-01

    An overlooked systematic error exists in the apparent radial velocities of solar lines reflected from regions of Venus near the terminator, owing to a combination of the finite angular size of the Sun and its large (2 km/sec) equatorial velocity of rotation. This error produces an apparent, but fictitious, retrograde component of planetary rotation, typically on the order of 40 meters/sec. Spectroscopic, photometric, and radiometric evidence against a 4-day atmospheric rotation is also reviewed. The bulk of the somewhat contradictory evidence seems to favor slow motions, on the order of 5 m/sec, in the atmosphere of Venus; the 4-day rotation may be due to a traveling wave-like disturbance, not bulk motions, driven by the UV albedo differences.

  15. Total error shift patterns for daily CT on rails image-guided radiotherapy to the prostate bed

    PubMed Central

    2011-01-01

    Background To evaluate the daily total error shift patterns in post-prostatectomy patients undergoing image guided radiotherapy (IGRT) with a diagnostic-quality computed tomography (CT) on rails system. Methods A total of 17 consecutive post-prostatectomy patients receiving adjuvant or salvage IMRT using CT-on-rails IGRT were analyzed. The prostate bed's daily total error shifts were evaluated for a total of 661 CT scans. Results In the right-left, cranial-caudal, and posterior-anterior directions, 11.5%, 9.2%, and 6.5% of the 661 scans required no position adjustments; 75.3%, 66.1%, and 56.8% required a shift of 1-5 mm; 11.5%, 20.9%, and 31.2% required a shift of 6-10 mm; and 1.7%, 3.8%, and 5.5% required a shift of more than 10 mm, respectively. There was evidence of correlation between the x and y, x and z, and y and z axes in 3, 3, and 3 of 17 patients, respectively. Univariate (ANOVA) analysis showed that the total error pattern was random in the x, y, and z axes for 10, 5, and 2 of 17 patients, respectively, and systematic for the rest. Multivariate (MANOVA) analysis showed that the (x,y), (x,z), (y,z), and (x,y,z) total error patterns were random in 5, 1, 1, and 1 of 17 patients, respectively, and systematic for the rest. Conclusions The overall daily total error shift pattern for these 17 patients, simulated with an empty bladder and treated with CT-on-rails IGRT, was predominantly systematic. Despite this, the temporal vector trends showed complex behaviors and unpredictable changes in magnitude and direction. These findings highlight the importance of using daily IGRT in post-prostatectomy patients. PMID:22024279

  16. Systematic instruction for individuals with acquired brain injury: Results of a randomized controlled trial

    PubMed Central

    Powell, Laurie Ehlhardt; Glang, Ann; Ettel, Deborah; Todis, Bonnie; Sohlberg, McKay; Albin, Richard

    2012-01-01

    The goal of this study was to experimentally evaluate systematic instruction compared with trial-and-error learning (conventional instruction) applied to assistive technology for cognition (ATC), in a double-blind, pretest-posttest, randomized controlled trial. Twenty-nine persons with moderate-severe cognitive impairments due to acquired brain injury (15 in the systematic instruction group; 14 in conventional instruction) completed the study. Both groups received twelve 45-minute individual training sessions targeting selected skills on the Palm Tungsten E2 personal digital assistant (PDA). A criterion-based assessment of PDA skills was used to evaluate accuracy, fluency/efficiency, maintenance, and generalization of skills. There were no significant differences between groups at immediate posttest with regard to accuracy and fluency. However, significant differences emerged at 30-day follow-up in favor of systematic instruction. Furthermore, systematic instruction participants performed significantly better at immediate posttest at generalizing trained PDA skills when interacting with people other than the instructor. These results demonstrate that systematic instruction applied to ATC results in better skill maintenance and generalization than trial-and-error learning for individuals with moderate-severe cognitive impairments due to acquired brain injury. Implications, study limitations, and directions for future research are discussed. PMID:22264146

  17. Analyzing false positives of four questions in the Force Concept Inventory

    NASA Astrophysics Data System (ADS)

    Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki

    2018-06-01

    In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a given FCI question is a false positive using subquestions. In addition to the 30 original questions, subquestions were introduced for Q.5, Q.6, Q.7, and Q.16. This modified version of the FCI was administered to 1145 university students in Japan from 2015 to 2017. In this paper, we discuss our finding that the systematic errors of Q.6, Q.7, and Q.16 are much larger than that of Q.5 for students with mid-level FCI scores. Furthermore, we find that, averaged over the data sample, the sum of the false positives from Q.5, Q.6, Q.7, and Q.16 is about 10% of the FCI score of a midlevel student.

  18. Precision Møller Polarimetry

    NASA Astrophysics Data System (ADS)

    Henry, William; Jefferson Lab Hall A Collaboration

    2017-09-01

    Jefferson Lab's cutting-edge parity-violating electron scattering program has increasingly stringent requirements for systematic errors. Beam polarimetry is often one of the dominant systematic errors in these experiments. A new Møller polarimeter in Hall A of Jefferson Lab (JLab) was installed in 2015 and has taken first measurements for a polarized scattering experiment. Upcoming parity-violation experiments in Hall A include CREX, PREX-II, MOLLER, and SOLID, with the latter two requiring <0.5% precision on beam polarization measurements. The polarimeter measures the Møller scattering rates of the polarized electron beam incident upon an iron target placed in a saturating magnetic field. The spectrometer consists of four focusing quadrupoles and one momentum-selection dipole. The detector is designed to measure the scattered and knocked-out target electrons in coincidence. Beam polarization is extracted by constructing an asymmetry from the scattering rates when the incident electron spin is parallel and anti-parallel to the target electron spin. Initial data will be presented. Sources of systematic error include target magnetization, spectrometer acceptance, the Levchuk effect, and radiative corrections, which will be discussed. National Science Foundation.
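
    The polarization extraction described here follows the standard Møller-asymmetry relation; in conventional notation (symbols ours, with the acceptance-averaged analyzing power ⟨A_zz⟩ equal to −7/9 at 90° in the center of mass):

```latex
A_{\mathrm{meas}}
  = \frac{N_{\uparrow\uparrow} - N_{\uparrow\downarrow}}
         {N_{\uparrow\uparrow} + N_{\uparrow\downarrow}}
  = \langle A_{zz} \rangle \, P_{\mathrm{beam}} \, P_{\mathrm{target}}
\qquad\Longrightarrow\qquad
P_{\mathrm{beam}} = \frac{A_{\mathrm{meas}}}{\langle A_{zz} \rangle \, P_{\mathrm{target}}}
```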

  19. LANDSAT/coastal processes

    NASA Technical Reports Server (NTRS)

    James, W. P. (Principal Investigator); Hill, J. M.; Bright, J. B.

    1977-01-01

    The author has identified the following significant results. Correlations between the satellite radiance values and water color, Secchi disk visibility, turbidity, and attenuation coefficients were generally good. The residual error was due to several factors, including systematic errors in the remotely sensed data, small time and space variations in the water quality measurements, and errors caused by the experimental design. Satellite radiance values were closely correlated with the optical properties of the water.

  20. Drought Persistence in Models and Observations

    NASA Astrophysics Data System (ADS)

    Moon, Heewon; Gudmundsson, Lukas; Seneviratne, Sonia

    2017-04-01

    Many regions of the world experienced drought events in the 20th century that persisted for several years and caused substantial economic and ecological impacts. However, it remains unclear whether there are significant trends in the frequency or severity of these prolonged drought events. In particular, an important issue is linked to systematic biases in the representation of persistent drought events in climate models, which impedes analysis related to the detection and attribution of drought trends. This study assesses drought persistence errors in global climate model (GCM) simulations from the 5th phase of the Coupled Model Intercomparison Project (CMIP5) over the period 1901-2010. The model simulations are compared with five gridded observational data products. The analysis focuses on two aspects: the identification of systematic biases in the models and the partitioning of the spread of the drought-persistence error into four possible sources of uncertainty: model uncertainty, observation uncertainty, internal climate variability, and the estimation error of drought persistence. We use monthly and yearly dry-to-dry transition probabilities as estimates of drought persistence, with drought conditions defined as negative precipitation anomalies. For both time scales we find that most model simulations consistently underestimate drought persistence, except in a few regions such as India and Eastern South America. Partitioning the spread of the drought-persistence error shows that at the monthly time scale model uncertainty and observation uncertainty are dominant, while the contribution from internal variability plays only a minor role in most cases. At the yearly scale, the spread of the drought-persistence error is dominated by the estimation error, indicating that the partitioning is not statistically significant due to the limited number of time steps considered. These findings reveal systematic errors in the representation of drought persistence in current climate models and highlight the main contributors to the uncertainty of the drought-persistence error. Future analyses will focus on investigating the temporal propagation of drought persistence to better understand the causes of the identified errors in the representation of drought persistence in state-of-the-art climate models.
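
    Drought persistence as used here is a conditional transition probability. A compact estimator on synthetic data (definitions as in the abstract; the gamma-distributed precipitation is invented purely for illustration):

```python
import numpy as np

def dry_to_dry_probability(precip, clim_mean):
    """P(dry at t+1 | dry at t), with 'dry' = negative precipitation anomaly."""
    dry = precip < clim_mean
    n_dry = dry[:-1].sum()
    return (dry[:-1] & dry[1:]).sum() / n_dry if n_dry else np.nan

rng = np.random.default_rng(0)
monthly_precip = rng.gamma(2.0, 50.0, size=1320)   # 110 synthetic years
print(dry_to_dry_probability(monthly_precip, monthly_precip.mean()))
```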

  1. SU-G-BRB-11: On the Sensitivity of An EPID-Based 3D Dose Verification System to Detect Delivery Errors in VMAT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, P; Olaciregui-Ruiz, I; Mijnheer, B

    2016-06-15

    Purpose: To investigate the sensitivity of an EPID-based 3D dose verification system to detect delivery errors in VMAT treatments. Methods: For this study, 41 EPID-reconstructed 3D in vivo dose distributions of 15 different VMAT plans (H&N, lung, prostate, and rectum) were selected. To simulate the effect of delivery errors, their TPS plans were modified by: 1) scaling of the monitor units by ±3% and ±6% and 2) systematic shifting of leaf bank positions by ±1 mm, ±2 mm, and ±5 mm. The 3D in vivo dose distributions were then compared to the unmodified and modified treatment plans. To determine the detectability of the various delivery errors, we made use of a receiver operating characteristic (ROC) methodology. True positive and false positive rates were calculated as a function of the γ-parameters γmean, γ1% (near-maximum γ) and the PTV dose parameter ΔD50 (i.e., D50(EPID) − D50(TPS)). The ROC curve is constructed by plotting the true positive rate vs. the false positive rate. The area under the ROC curve (AUC) then serves as a measure of the performance of the EPID dosimetry system in detecting a particular error; an ideal system has AUC=1. Results: The AUC ranges for the machine output errors and systematic leaf position errors were [0.64 - 0.93] and [0.48 - 0.92], respectively, using γmean; [0.57 - 0.79] and [0.46 - 0.85] using γ1%; and [0.61 - 0.77] and [0.48 - 0.62] using ΔD50. Conclusion: For the verification of VMAT deliveries, the parameter γmean is the best discriminator for the detection of systematic leaf position errors and monitor unit scaling errors. Compared to γmean and γ1%, the parameter ΔD50 performs worse as a discriminator in all cases.
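
    The AUC can be computed directly from the two groups of scores via the rank-sum identity, without tracing the curve point by point. A short sketch (our own implementation, ignoring ties; the γ-mean scores are invented):

```python
import numpy as np

def roc_auc(scores_error, scores_clean):
    """AUC = probability that a randomly chosen error case scores higher
    than a randomly chosen error-free case (Mann-Whitney identity)."""
    s = np.concatenate([scores_error, scores_clean])
    ranks = np.argsort(np.argsort(s)) + 1.0        # 1-based ranks, no tie handling
    n1, n0 = len(scores_error), len(scores_clean)
    u = ranks[:n1].sum() - n1 * (n1 + 1) / 2.0
    return u / (n1 * n0)

print(roc_auc(np.array([0.62, 0.71, 0.58, 0.80]),    # deliveries with MLC shifts
              np.array([0.45, 0.52, 0.50, 0.61])))   # unmodified deliveries
```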

  2. Strategic planning to reduce medical errors: Part I--diagnosis.

    PubMed

    Waldman, J Deane; Smith, Howard L

    2012-01-01

    Despite extensive dialogue and a continuing stream of proposed medical practice revisions, medical errors and adverse impacts persist. Connectivity of vital elements is often underestimated or not fully understood. This paper analyzes medical errors from a systems dynamics viewpoint (Part I). Our analysis suggests in Part II that the most fruitful strategies for dissolving medical errors include facilitating physician learning, educating patients about appropriate expectations surrounding treatment regimens, and creating "systematic" patient protections rather than depending on (nonexistent) perfect providers.

  3. Quality Assurance of Chemical Measurements.

    ERIC Educational Resources Information Center

    Taylor, John K.

    1981-01-01

    Reviews aspects of quality control (methods to control errors) and quality assessment (verification that systems are operating within acceptable limits) including an analytical measurement system, quality control by inspection, control charts, systematic errors, and use of SRMs, materials for which properties are certified by the National Bureau…

  4. Rational-Emotive Therapy versus Systematic Desensitization: A Comment on Moleski and Tosi.

    ERIC Educational Resources Information Center

    Atkinson, Leslie

    1983-01-01

    Questioned the statistical analyses of the Moleski and Tosi investigation of rational-emotive therapy versus systematic desensitization. Suggested means for lowering the error rate through a more efficient experimental design. Recommended a reanalysis of the original data. (LLL)

  5. ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers

    PubMed Central

    Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.

    2009-01-01

    Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211

  6. Topological analysis of polymeric melts: chain-length effects and fast-converging estimators for entanglement length.

    PubMed

    Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin

    2009-09-01

    Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead-spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_e that operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and on the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_e from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.

  7. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency, and economy of the snail survey, a 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling, and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error, and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling, and stratified random sampling methods were 300, 300, and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024, and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach for the snail survey, offering lower cost and higher precision.
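
    The three designs are easy to contrast on synthetic data. The toy sketch below uses an invented snail-density field and sample sizes mirroring the 300/300/225 figures above, comparing each estimator's absolute error against the known whole-field mean:

```python
import numpy as np

rng = np.random.default_rng(1)
field = rng.poisson(2.0, size=(50, 50)).astype(float)   # snails per survey frame
true_mean = field.mean()

def simple_random(n=300):
    return rng.choice(field.ravel(), size=n, replace=False).mean()

def systematic(step=3):                                  # every 3rd frame: ~289 frames
    return field[::step, ::step].mean()

def stratified(n_strata=5, n_per=45):                    # rows stand in for altitude bands
    bands = np.array_split(field, n_strata, axis=0)
    return np.mean([rng.choice(b.ravel(), size=n_per, replace=False).mean()
                    for b in bands])

for name, est in [("simple random", simple_random()),
                  ("systematic", systematic()),
                  ("stratified", stratified())]:
    print(name, round(abs(est - true_mean), 4))
```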

  8. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
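
    Stated as a rule, the two requirements can be checked mechanically. A small sketch in our own notation (the ~12% figure above is the paper's derived consequence and is not re-derived here):

```python
import numpy as np

def meets_quality_goals(s_analytical, s_intra, detectable_bias):
    """Requirement 1: stable systematic error ~ 0 (assumed handled upstream).
    Requirement 2: a bias the control program detects with 90% probability
    must not exceed half the combined analytical + intra-individual SD.
    Consequence under common control rules: s_analytical <= 0.15 * s_intra."""
    s_combined = np.hypot(s_analytical, s_intra)
    return detectable_bias <= 0.5 * s_combined and s_analytical <= 0.15 * s_intra

print(meets_quality_goals(s_analytical=0.12, s_intra=1.0, detectable_bias=0.45))
```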

  9. Systematic evaluation of NASA precipitation radar estimates using NOAA/NSSL National Mosaic QPE products

    NASA Astrophysics Data System (ADS)

    Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.

    2011-12-01

    Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for their use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been carried out to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.

  10. Arbitrary Control of Polarization and Intensity Profiles of Diffraction-Attenuation-Resistant Beams along the Propagation Direction

    NASA Astrophysics Data System (ADS)

    Corato-Zanarella, Mateus; Dorrah, Ahmed H.; Zamboni-Rached, Michel; Mojahedi, Mo

    2018-02-01

    We report on the theory and experimental generation of a class of diffraction-attenuation-resistant beams with state of polarization (SOP) and intensity that can be controlled on demand along the propagation direction. This control is achieved by a suitable superposition of Bessel beams, whose parameters are systematically chosen based on closed-form analytic expressions provided by the frozen waves method. Using an amplitude-only spatial light modulator, we experimentally demonstrate three scenarios. In the first, the SOP of a horizontally polarized beam evolves to radial polarization and is then changed to vertical polarization, with the beam intensity held constant. In the second, we simultaneously control the SOP and the longitudinal intensity profile, which is chosen such that the beam's central ring can be switched off over predefined space regions, thus generating multiple foci with different SOPs and at different intensity levels along the propagation. Finally, the ability to control the SOP while overcoming attenuation inside lossy fluids is shown experimentally. We envision our proposed method to be of great interest for many applications, such as optical tweezers, atom guiding, material processing, microscopy, and optical communications.

  11. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors.

    PubMed

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-07-01

    Patient-specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient-specific checks. Nine head-and-neck (H&N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H&N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. None of the techniques or criteria tested is sufficiently sensitive, for this population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient-specific QC cannot, therefore, substitute for routine QC of the MLC itself.
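
    The gamma index underlying these comparisons combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D global-gamma sketch (our own simplified implementation, not the commercial software used in the study):

```python
import numpy as np

def gamma_1d(dose_eval, dose_ref, x, dd=0.03, dta=2.0):
    """Global gamma in 1D: for each reference point, minimize over evaluated
    points the combined dose-difference / distance metric.
    dd: fractional dose criterion (of max reference dose); dta: in mm."""
    d_max = dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dose_term = (dose_eval - di) / (dd * d_max)
        dist_term = (x - xi) / dta
        gam[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gam   # points pass where gamma <= 1

x = np.linspace(0.0, 100.0, 201)               # mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)        # reference profile
ev = np.exp(-((x - 50.5) / 20.0) ** 2)         # 0.5 mm shift, e.g. a leaf-bank offset
g = gamma_1d(ev, ref, x)
print(g.mean(), (g <= 1.0).mean())             # mean gamma and passing rate
```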

  12. Pion mass dependence of the HVP contribution to muon g - 2

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Maltman, Kim; Peris, Santiago

    2018-03-01

    One of the systematic errors in some of the current lattice computations of the HVP contribution to the muon anomalous magnetic moment g - 2 is that associated with the extrapolation to the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 220 to 440 MeV with the help of two-loop chiral perturbation theory, and find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various proposed tricks to improve the chiral extrapolation are taken into account.

  13. Comparison of different source calculations in two-nucleon channel at large quark mass

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu

    2018-03-01

    We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations with the exponential and wall sources. Since it is hard to obtain a clear signal of the wall source correlation function in a plateau region, we employ a large quark mass, corresponding to a pion mass of 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system, and the volume dependence of the energy shift.

  14. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic error, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  15. Production and detection of atomic hexadecapole at Earth's magnetic field.

    PubMed

    Acosta, V M; Auzinsh, M; Gawlik, W; Grisins, P; Higbie, J M; Jackson Kimball, D F; Krzemien, L; Ledbetter, M P; Pustelny, S; Rochester, S M; Yashchuk, V V; Budker, D

    2008-07-21

    Optical magnetometers measure magnetic fields with extremely high precision and without cryogenics. However, at geomagnetic fields, important for applications from landmine removal to archaeology, they suffer from nonlinear Zeeman splitting, leading to systematic dependence on sensor orientation. We present experimental results on a method of eliminating this systematic error, using the hexadecapole atomic polarization moment. In particular, we demonstrate selective production of the atomic hexadecapole moment at Earth's magnetic field and verify its immunity to nonlinear Zeeman splitting. This technique promises to eliminate directional errors in all-optical atomic magnetometers, potentially improving their measurement accuracy by several orders of magnitude.

  16. The study design elements employed by researchers in preclinical animal experiments from two research domains and implications for automation of systematic reviews.

    PubMed

    O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B

    2018-01-01

    Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. Similarly, use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.

  17. Phylogenomics of Lophotrochozoa with Consideration of Systematic Error.

    PubMed

    Kocot, Kevin M; Struck, Torsten H; Merkel, Julia; Waits, Damien S; Todt, Christiane; Brannock, Pamela M; Weese, David A; Cannon, Johanna T; Moroz, Leonid L; Lieb, Bernhard; Halanych, Kenneth M

    2017-03-01

    Phylogenomic studies have improved understanding of deep metazoan phylogeny and show promise for resolving incongruences among analyses based on limited numbers of loci. One region of the animal tree that has been especially difficult to resolve, even with phylogenomic approaches, is relationships within Lophotrochozoa (the animal clade that includes molluscs, annelids, and flatworms among others). Lack of resolution in phylogenomic analyses could be due to insufficient phylogenetic signal, limitations in taxon and/or gene sampling, or systematic error. Here, we investigated why lophotrochozoan phylogeny has been such a difficult question to answer by identifying and reducing sources of systematic error. We supplemented existing data with 32 new transcriptomes spanning the diversity of Lophotrochozoa and constructed a new set of Lophotrochozoa-specific core orthologs. Of these, 638 orthologous groups (OGs) passed strict screening for paralogy using a tree-based approach. In order to reduce possible sources of systematic error, we calculated branch-length heterogeneity, evolutionary rate, percent missing data, compositional bias, and saturation for each OG and analyzed increasingly stricter subsets of only the most stringent (best) OGs for these five variables. Principal component analysis of the values for each factor examined for each OG revealed that compositional heterogeneity and average patristic distance contributed most to the variance observed along the first principal component while branch-length heterogeneity and, to a lesser extent, saturation contributed most to the variance observed along the second. Missing data did not strongly contribute to either. Additional sensitivity analyses examined effects of removing taxa with heterogeneous branch lengths, large amounts of missing data, and compositional heterogeneity. Although our analyses do not unambiguously resolve lophotrochozoan phylogeny, we advance the field by reducing the list of viable hypotheses. Moreover, our systematic approach for dissection of phylogenomic data can be applied to explore sources of incongruence and poor support in any phylogenomic data set. [Annelida; Brachiopoda; Bryozoa; Entoprocta; Mollusca; Nemertea; Phoronida; Platyzoa; Polyzoa; Spiralia; Trochozoa.]

  18. Quotation accuracy in medical journal articles—a systematic review and meta-analysis

    PubMed Central

    Jergas, Hannah

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6]; 11.5%, 95% CI [8.3, 15.7]; and 25.4%, 95% CI [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420

  19. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1999-01-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  1. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  2. The Relationships Among Perceived Patients' Safety Culture, Intention to Report Errors, and Leader Coaching Behavior of Nurses in Korea: A Pilot Study.

    PubMed

    Ko, YuKyung; Yu, Soyoung

    2017-09-01

    This study was undertaken to explore the correlations among nurses' perceptions of patient safety culture, their intention to report errors, and leader coaching behaviors. The participants (N = 289) were nurses from 5 Korean hospitals with approximately 300 to 500 beds each. Sociodemographic variables, patient safety culture, intention to report errors, and coaching behavior were measured using self-report instruments. Data were analyzed using descriptive statistics, Pearson correlation coefficient, the t test, and the Mann-Whitney U test. Nurses' perceptions of patient safety culture and their intention to report errors showed significant differences between groups of nurses who rated their leaders as high-performing or low-performing coaches. Perceived coaching behavior showed a significant, positive correlation with patient safety culture and intention to report errors, i.e., as nurses' perceptions of coaching behaviors increased, so did their ratings of patient safety culture and error reporting. There is a need in health care settings for coaching by nurse managers to provide quality nursing care and thus improve patient safety. Programs that are systematically developed and implemented to enhance the coaching behaviors of nurse managers are crucial to the improvement of patient safety and nursing care. Moreover, a systematic analysis of the causes of malpractice, as opposed to a focus on the punitive consequences of errors, could increase error reporting and therefore promote a culture in which a higher level of patient safety can thrive.

  3. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

    An error analysis study was conducted in order to assess the current accuracies and the future anticipated improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated, was studied for the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages, in case observations on the Earth's surface only are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for 30 deg station separation.

  4. Influence of ECG measurement accuracy on ECG diagnostic statements.

    PubMed

    Zywietz, C; Celikag, D; Joseph, G

    1996-01-01

    Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements, since they cause a shift of the working point on the receiver operating characteristic curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and, therefore, usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristic curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses using the proposed International Electrotechnical Commission interval measurement tolerance limits. These limits appear too large, because more than 30% false positive atrial conduction defect statements and 10-18% false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure the long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.
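
    The working-point argument can be made concrete with normal distributions. The sketch below shows how an offset shifts the working point and how dispersion broadens the overlap; thresholds and SDs are invented for illustration and are not the IEC limits:

```python
import numpy as np
from math import erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

def false_positive_rate(threshold, mu_healthy, sd_bio, offset=0.0, sd_meas=0.0):
    """FPR for a 'value > threshold is abnormal' criterion: a systematic
    offset shifts the working point, random error widens the distribution."""
    sd_total = np.hypot(sd_bio, sd_meas)
    return 1.0 - norm_cdf((threshold - (mu_healthy + offset)) / sd_total)

print(false_positive_rate(120.0, 100.0, 10.0))                            # ~0.023
print(false_positive_rate(120.0, 100.0, 10.0, offset=6.0, sd_meas=8.0))   # ~0.14
```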

  5. Running coupling constant from lattice studies of gluon and ghost propagators

    NASA Astrophysics Data System (ADS)

    Cucchieri, A.; Mendes, T.

    2004-12-01

    We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical errors and hyper-cubic effects are very small. Our present result is Λ_MS-bar = 200 (+60/−40) MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.

  6. Simplified model of pinhole imaging for quantifying systematic errors in image shape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, Laura Robin; Izumi, N.; Khan, S. F.

    In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that the fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.
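
    For contrast, the baseline the paper improves upon treats geometric and diffractive blur as independent and adds them in quadrature. A sketch of that naive model at the object plane (textbook-style formulas and invented numbers, not the paper's improved treatment):

```python
import numpy as np

def resolution_quadrature(d, M, lam, L):
    """Naive total resolution: geometric blur d*(1 + 1/M) added in quadrature
    with an Airy-disk diffraction blur ~2.44*lam*L/d, both referred to the
    object plane. d: pinhole diameter, M: magnification, lam: wavelength,
    L: source-to-pinhole distance (consistent units, e.g. microns)."""
    geometric = d * (1.0 + 1.0 / M)
    diffractive = 2.44 * lam * L / d
    return np.hypot(geometric, diffractive)

# 10-um pinhole, 4x magnification, ~1 keV x rays (lam ~ 1.24e-3 um), L = 5 cm
print(resolution_quadrature(10.0, 4.0, 1.24e-3, 5.0e4))   # ~20 um
```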

  7. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Systematic error analysis and correction

    NASA Astrophysics Data System (ADS)

    Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng

    2016-12-01

    Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system on the basis of a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiencies of the bi-prism and distortion of the BTL. In this study, in-depth evaluations of these errors and their effects on the measurement results are performed experimentally. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed into the field of view of the system; this process is conducted after the measurement of the tested specimen. A modified mathematical model is proposed, which takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen are performed. The results of the experiments reveal that ignoring the bi-prism deficiency will induce an attitude error in the retrieved morphology, and the BTL distortion can lead to pseudo out-of-plane deformation. Correcting these problems can further improve the measurement accuracy of the bi-prism-based single-lens 3D DIC system.

  8. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive

    PubMed Central

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Background Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Methods Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Results Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. Conclusions There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the ‘3 or 3 rule’). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores. PMID:23983828

  9. Correction of energy-dependent systematic errors in dual-energy X-ray CT using a basis material coefficients transformation method

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Liew, S. C.; Hasegawa, B. H.

    1997-12-01

    Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high-atomic-number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when the basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients are first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients are then numerically transformed to those for a more desirable set of materials. The transformation is done at the energies of the low- and high-energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as the desired basis set, computer simulation results showed that an accuracy of better than 2% could be achieved even when iodine was present in the body at a concentration as high as 10% by mass. Simulation work was also carried out on a more inhomogeneous 2D thorax phantom derived from the 3D MCAT phantom, and the resulting accuracy of quantitation is presented here.
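
    The transformation itself amounts to matching the synthesized attenuation at the two effective window energies and solving a 2×2 system in the new basis. A sketch with hypothetical attenuation values (not taken from the paper):

```python
import numpy as np

def transform_basis(a_old, M_old, M_new):
    """Convert basis-material coefficients between basis sets.

    a_old : (2,) coefficients in the old basis (e.g. acrylic, aluminum)
    M_old, M_new : 2x2 matrices of basis-material attenuation values,
                   rows = low/high energy window, columns = basis materials.
    The synthesized attenuation mu = M_old @ a_old is re-expressed in the
    new basis (e.g. acrylic, iodine-water) by solving M_new @ a_new = mu."""
    mu = M_old @ a_old
    return np.linalg.solve(M_new, mu)

# Hypothetical attenuation values at the two effective energies.
M_acrylic_al = np.array([[0.25, 0.55],
                         [0.21, 0.31]])
M_acrylic_iw = np.array([[0.25, 1.80],
                         [0.21, 0.90]])
print(transform_basis(np.array([0.9, 0.1]), M_acrylic_al, M_acrylic_iw))
```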

  10. Parametric decadal climate forecast recalibration (DeFoReSt 1.0)

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Bhend, Jonas; Liniger, Mark A.; Rust, Henning W.; Müller, Wolfgang A.; Ulbrich, Uwe

    2018-01-01

    Near-term climate predictions such as decadal climate forecasts are increasingly being used to guide adaptation measures. For near-term probabilistic predictions to be useful, systematic errors of the forecasting systems have to be corrected. While methods for the calibration of probabilistic forecasts are readily available, these have to be adapted to the specifics of decadal climate forecasts, including their long time horizon, lead-time-dependent systematic errors (drift), and errors in the representation of long-term changes and variability. These features are compounded by small ensemble sizes for describing forecast uncertainty and a relatively short period for which pairs of reforecasts and observations are typically available to estimate calibration parameters. We introduce the Decadal Climate Forecast Recalibration Strategy (DeFoReSt), a parametric approach to recalibrating decadal ensemble forecasts that takes the above specifics into account. DeFoReSt optimizes forecast quality as measured by the continuous ranked probability score (CRPS). Using a toy model to generate synthetic forecast-observation pairs, we demonstrate the positive effect on forecast quality in situations with both pronounced and limited predictability. Finally, we apply DeFoReSt to decadal surface temperature forecasts from the MiKlip prototype system and find consistent, and sometimes considerable, improvements in forecast quality compared with a simple calibration of the lead-time-dependent systematic errors.
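
    Of the ingredients listed, the lead-time-dependent drift is the simplest to illustrate. The sketch below removes only a polynomial mean bias as a function of lead time; it is not the full CRPS-optimizing DeFoReSt procedure, which also adjusts conditional bias and ensemble spread:

```python
import numpy as np

def drift_correct(fcst, obs, lead, deg=3):
    """Fit a polynomial lead-time-dependent mean bias ('drift') on the
    reforecast period and subtract it from the forecasts.

    fcst, obs : matched 1-D arrays of ensemble-mean reforecasts and observations
    lead      : lead time (years) of each forecast-observation pair"""
    coeffs = np.polyfit(lead, fcst - obs, deg)
    return fcst - np.polyval(coeffs, lead)

# e.g. 50 start dates with lead years 1..10, flattened:
# lead = np.tile(np.arange(1, 11), 50)
```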

  12. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    PubMed

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some of the studies evaluated. Overall, the review of these studies, involving 158 participants, revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except ½ Tr (CV > 20%), for which measurement error indexes were also high. In conclusion, this study indicates that three of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.

  13. New main reflector, subreflector and dual chamber concepts for compact range applications

    NASA Technical Reports Server (NTRS)

    Pistorius, C. W. I.; Burnside, W. D.

    1987-01-01

    A compact range is a facility used for the measurement of antenna radiation and target scattering. Most presently available parabolic reflectors do not produce ideal uniform plane waves in the target zone. Design improvements are suggested to reduce the amplitude taper, ripple, and cross polarization errors. The ripple caused by diffractions from the reflector edges can be reduced by adding blended rolled edges and shaping the edge contour. Since the reflector surface continues smoothly from the parabola onto the rolled edge, rather than being abruptly terminated, the discontinuity in the reflected field is reduced, which results in weaker diffracted fields. This is done by blending the rolled edges from the parabola into an ellipse. An algorithm which enables one to design optimum blended rolled edges was developed, based on an analysis of the continuity of the surface radius of curvature and its derivatives across the junction. Furthermore, a concave edge contour results in a divergent diffracted ray pattern and hence less stray energy in the target zone. Design equations for three-dimensional reflectors are given. Various examples were analyzed using a new physical optics method which eliminates the effects of the false scattering centers on the incident shadow boundaries. A Gregorian subreflector system, in which both the subreflector and feed axes are tilted, results in a substantial reduction in the amplitude taper and cross polarization errors. A dual chamber configuration is proposed to eliminate the effects of diffraction from the subreflector and spillover from the feed. A computationally efficient technique, based on ray tracing and aperture integration, was developed to analyze the scattering from a lossy dielectric slab with a wedge termination.

  14. A proposed method to investigate reliability throughout a questionnaire

    PubMed Central

    2011-01-01

    Background Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could assess changed reliability of answers. Methods A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. Results The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure, for assessing whether respondents provide only random answers or ones based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Conclusions Even though assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales. PMID:21974842
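
    The screening idea can be mimicked in a few lines: simulate a questionnaire whose random error grows with item position, form a per-item reliability proxy, and take its slope across items. A real analysis would classify respondents by cluster analysis and estimate ICCs; the variance ratio below is a simplified stand-in, and all numbers are synthetic:

      import numpy as np

      # Simulate a long questionnaire whose random error grows with item
      # position; a per-item reliability proxy should then decline.
      rng = np.random.default_rng(7)
      n_subj, n_items = 753, 40
      true_score = rng.normal(size=(n_subj, 1))        # stable trait
      noise_sd = np.linspace(0.5, 2.0, n_items)        # growing random error
      answers = true_score + rng.normal(size=(n_subj, n_items)) * noise_sd

      # Reliability proxy per item: trait variance over total variance
      # (a simplified stand-in for the ICC of CA-classified subjects).
      reliability = np.array([np.var(true_score[:, 0]) / np.var(answers[:, j])
                              for j in range(n_items)])

      slope = np.polyfit(np.arange(n_items), reliability, 1)[0]
      print(slope)   # a negative slope flags declining answer reliability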

  15. Assessing systematic errors in GOSAT CO2 retrievals by comparing assimilated fields to independent CO2 data

    NASA Astrophysics Data System (ADS)

    Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.

    2012-12-01

    Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior); the corresponding optimized CO2 concentration fields are then compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (versions 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find that the GOSAT data improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare the flux estimates given by assimilating the ACOS GOSAT retrievals to similar ones given by NIES GOSAT column retrievals, bias-corrected in a similar manner. Finally, we have found systematic differences on the order of half a ppm between column CO2 integrals from 18 TCCON sites and those given by assimilating NOAA in situ data (both surface and aircraft profile) in this approach. We assess how these differences change when switching to a newer version of the TCCON retrieval software.
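
    The bias-correction step described above amounts to regressing the retrieval-minus-prior differences on the suspect physical variables and subtracting the fitted part. A hedged sketch with synthetic data and illustrative variable names:

      import numpy as np

      # Regress retrieval-minus-prior differences on physical variables
      # and subtract the fit; all data and names are synthetic stand-ins.
      rng = np.random.default_rng(3)
      n = 2000
      aerosol = rng.uniform(0.0, 1.0, n)
      albedo = rng.uniform(0.05, 0.6, n)
      dpsurf = rng.normal(0.0, 5.0, n)            # column-mass correction
      co2_signal = rng.normal(0.0, 0.8, n)        # real CO2 variability (ppm)
      diff = co2_signal + 1.5 * aerosol - 2.0 * albedo + 0.1 * dpsurf

      X = np.column_stack([np.ones(n), aerosol, albedo, dpsurf])
      coef, *_ = np.linalg.lstsq(X, diff, rcond=None)
      corrected = diff - X @ coef                 # bias-corrected differences
      print(coef)                                 # fitted bias coefficients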

  16. Determination of the number of ψ' events at BESIII

    NASA Astrophysics Data System (ADS)

    Ablikim, M.; Achasov, M. N.; Albayrak, O.; Ambrose, D. J.; An, F. F.; An, Q.; Bai, J. Z.; Ban, Y.; Becker, J.; Bennett, J. V.; et al. (BESIII Collaboration)

    2013-06-01

    The number of ψ' events accumulated by the BESIII experiment from March 3 through April 14, 2009, is determined by counting inclusive hadronic events. The result is 106.41 × (1.00 ± 0.81%) × 10^6. The uncertainty is dominated by systematic effects; the statistical error is negligible.

  17. Improving Student Results in the Crystal Violet Chemical Kinetics Experiment

    ERIC Educational Resources Information Center

    Kazmierczak, Nathanael; Vander Griend, Douglas A.

    2017-01-01

    Despite widespread use in general chemistry laboratories, the crystal violet chemical kinetics experiment frequently suffers from erroneous student results. Student calculations for the reaction order in hydroxide often contain large asymmetric errors, pointing to the presence of systematic error. Through a combination of "in silico"…

  18. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  19. Error sources in passive and active microwave satellite soil moisture over Australia

    USDA-ARS?s Scientific Manuscript database

    Development of a long-term climate record of soil moisture (SM) involves combining historic and present satellite-retrieved SM data sets. This in turn requires a consistent characterization and deep understanding of the systematic differences and errors in the individual data sets, which vary due to...

  20. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated; that is, the measured current is lower than the true value by this amount. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
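
    The W-value determination can be reconstructed in outline: protons counted via the Faraday cup charge (after undoing the -0.83 percent systematic underestimate) deposit a known energy in the helium chamber, and dividing by the number of ion pairs collected gives the mean energy per ion pair. All input numbers below are hypothetical placeholders:

      # All numbers hypothetical; only the arithmetic is the point.
      e = 1.602e-19                    # elementary charge (C)
      q_cup = 2.0e-9                   # integrated Faraday cup charge (C)
      q_ion = 1.5e-7                   # integrated ion chamber charge (C)
      de_per_proton = 2.29e3           # energy deposited in He (eV), assumed

      n_protons = (q_cup / e) / (1 - 0.0083)   # undo the -0.83% underestimate
      n_pairs = q_ion / e
      w_helium = n_protons * de_per_proton / n_pairs
      print(w_helium)                  # mean energy per ion pair, ~30.8 eV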

  1. First Year Wilkinson Microwave Anisotropy Probe(WMAP) Observations: Data Processing Methods and Systematic Errors Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  2. MERLIN: a Franco-German LIDAR space mission for atmospheric methane

    NASA Astrophysics Data System (ADS)

    Bousquet, P.; Ehret, G.; Pierangelo, C.; Marshall, J.; Bacour, C.; Chevallier, F.; Gibert, F.; Armante, R.; Crevoisier, C. D.; Edouart, D.; Esteve, F.; Julien, E.; Kiemle, C.; Alpers, M.; Millet, B.

    2017-12-01

    The Methane Remote Sensing Lidar Mission (MERLIN), currently in phase C, is a joint cooperation between France and Germany on the development, launch and operation of a space LIDAR dedicated to the retrieval of total weighted methane (CH4) atmospheric columns. Atmospheric methane is the second most potent anthropogenic greenhouse gas, contributing 20% to climate radiative forcing, but also plays an important role in atmospheric chemistry as a precursor of tropospheric ozone and low-stratosphere water vapour. Its short lifetime (~9 years) and the nature and variety of its anthropogenic sources also offer interesting mitigation options with regard to the 2 °C objective of the Paris agreement. For the first time, measurements of atmospheric composition will be performed from space with an IPDA (Integrated Path Differential Absorption) LIDAR (Light Detecting And Ranging), with a precision (target ±27 ppb for a 50 km aggregation along the track) and accuracy (target <3.7 ppb at 68%) sufficient to significantly reduce the uncertainties on methane emissions. The very low systematic error target is particularly ambitious compared to current passive methane space missions. It is achievable because of the differential active measurements of MERLIN, which guarantee almost no contamination by aerosols or water vapour cross-sensitivity. As an active mission, MERLIN will deliver global methane weighted columns (XCH4) for all seasons and all latitudes, day and night. Here, we recall the MERLIN objectives and mission characteristics. We also propose an end-to-end error analysis, from the causes of random and systematic errors of the instrument, the platform and the data treatment, to the error on methane emissions. To do so, we propose an OSSE (observing system simulation experiment) analysis to estimate the uncertainty reduction on methane emissions brought by MERLIN XCH4. The originality of our inversion system is to transfer both random and systematic errors from the observation space to the flux space, thus providing more realistic error reductions than OSSEs that use only the random part of the errors. Uncertainty reductions are presented using two different atmospheric transport models, TM3 and LMDZ, and compared with the error reduction achieved with the GOSAT passive mission.
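
    The core IPDA retrieval is compact enough to sketch: the differential absorption optical depth (DAOD) follows from the ratio of on- and off-line echoes normalized by the transmitted energies, and dividing by an integrated weighting function (IWF) yields the column-averaged mixing ratio. Names and numbers below are illustrative, not MERLIN processing code:

      import numpy as np

      def xch4_ipda(p_on, p_off, e_on, e_off, iwf):
          """DAOD and column-averaged CH4 from an IPDA echo pair.
          p_*: received pulse energies, e_*: transmitted pulse energies,
          iwf: integrated weighting function (per ppb); all hypothetical."""
          daod = 0.5 * np.log((p_off * e_on) / (p_on * e_off))
          return daod / iwf   # column-averaged CH4 mixing ratio (ppb)

      print(xch4_ipda(p_on=0.62, p_off=1.00, e_on=1.0, e_off=1.0,
                      iwf=1.3e-4))    # roughly 1800 ppb for these inputs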

  3. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  4. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  5. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have approximately 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and obtain a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and infer this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the error ec on the global temperature trend. In one path the entire error ec is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
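
    The overlap-based removal of an inter-satellite calibration offset such as eo can be sketched as follows, with synthetic records standing in for the MSU data:

      import numpy as np

      # Estimate the inter-satellite calibration offset from an overlap
      # period, then merge the two records. Synthetic data only.
      rng = np.random.default_rng(0)
      t = np.arange(120)                      # months
      truth = 0.002 * t                       # underlying trend (K/month)
      sat_a = truth[:80] + rng.normal(0, 0.05, 80)          # satellite A
      sat_b = truth[60:] + 0.3 + rng.normal(0, 0.05, 60)    # B offset by eo

      e_o = np.mean(sat_b[:20] - sat_a[60:80])  # mean difference over overlap
      merged = np.concatenate([sat_a, sat_b[20:] - e_o])
      trend = np.polyfit(np.arange(merged.size), merged, 1)[0] * 120
      print(e_o, trend)                         # offset (K), trend (K/decade)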

  6. Nature of the refractive errors in rhesus monkeys (Macaca mulatta) with experimentally induced ametropias.

    PubMed

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L

    2010-08-23

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.

  7. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  8. Electromagnetic Wave Propagation through Circular Waveguides Containing Radially Inhomogeneous Lossy Media

    DTIC Science & Technology

    1989-08-01

    where the slope becomes infinite. This point could represent a cutoff frequency of the two coupled complex modes as a/b increases and the cutin ...a good example of attenuation near the cutin for each mode. A common characteristic throughout these plots, and also very similar to the cc versus a/b

  9. Solution of the lossy nonlinear Tricomi equation with application to sonic boom focusing

    NASA Astrophysics Data System (ADS)

    Salamone, Joseph A., III

    Sonic boom focusing theory has been augmented with new terms that account for mean flow effects in the direction of propagation and for atmospheric absorption/dispersion due to molecular relaxation of oxygen and nitrogen. The newly derived model equation was numerically implemented in a computer code. The computer code was numerically validated using a spectral solution for nonlinear propagation of a sinusoid through a lossy homogeneous medium. An additional numerical check was performed to verify the linear diffraction component of the code calculations. The computer code was experimentally validated using measured sonic boom focusing data from the NASA-sponsored Superboom Caustic Analysis and Measurement Program (SCAMP) flight test. The computer code was in good agreement with both the numerical and experimental validation data. The newly developed code was applied to examine the focusing of a NASA low-boom demonstration vehicle concept. The resulting pressure field was calculated for several supersonic climb profiles. The shaping efforts designed into the signatures were still somewhat evident despite the effects of sonic boom focusing.

  10. Wavelet-based compression of M-FISH images.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R

    2005-05-01

    Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
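
    For readers unfamiliar with critically sampled integer wavelet transforms, a minimal reversible example is the LeGall 5/3 lifting step, shown below for one 1-D level on an even-length signal; this is an illustration of the transform class, not the EMIC codebase, and the edge handling is a simplification:

      import numpy as np

      def lifting_53_forward(x):
          """One level of the reversible integer 5/3 lifting transform."""
          x = np.asarray(x, dtype=np.int64)
          even, odd = x[0::2].copy(), x[1::2].copy()
          # Predict: detail = odd - floor((left + right) / 2).
          right = np.roll(even, -1)
          right[-1] = even[-1]                 # simple boundary extension
          d = odd - ((even + right) >> 1)
          # Update: approximation = even + floor((d_left + d + 2) / 4).
          d_left = np.roll(d, 1)
          d_left[0] = d[0]
          s = even + ((d_left + d + 2) >> 2)
          return s, d

      def lifting_53_inverse(s, d):
          d_left = np.roll(d, 1)
          d_left[0] = d[0]
          even = s - ((d_left + d + 2) >> 2)
          right = np.roll(even, -1)
          right[-1] = even[-1]
          odd = d + ((even + right) >> 1)
          x = np.empty(even.size + odd.size, dtype=np.int64)
          x[0::2], x[1::2] = even, odd
          return x

      x = np.array([3, 7, 1, 8, 2, 9, 4, 6])
      s, d = lifting_53_forward(x)
      assert np.array_equal(lifting_53_inverse(s, d), x)  # lossless round trip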

  11. Passive states as optimal inputs for single-jump lossy quantum channels

    NASA Astrophysics Data System (ADS)

    De Palma, Giacomo; Mari, Andrea; Lloyd, Seth; Giovannetti, Vittorio

    2016-06-01

    The passive states of a quantum system minimize the average energy among all the states with a given spectrum. We prove that passive states are the optimal inputs of single-jump lossy quantum channels. These channels arise from a weak interaction of the quantum system of interest with a large Markovian bath in its ground state, such that the interaction Hamiltonian couples only consecutive energy eigenstates of the system. We prove that the output generated by any input state ρ majorizes the output generated by the passive input state ρ0 with the same spectrum as ρ. The output generated by ρ can then be obtained by applying a random unitary operation to the output generated by ρ0. This is an extension of De Palma et al. [IEEE Trans. Inf. Theory 62, 2895 (2016)], 10.1109/TIT.2016.2547426, where the same result is proved for one-mode bosonic Gaussian channels. We also prove that at finite temperature this optimality property can fail already in a two-level system, where the best input is a coherent superposition of the two energy eigenstates.
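
    The majorization relation at the heart of the result is easy to check numerically for given output spectra; the spectra below are hypothetical:

      import numpy as np

      def majorizes(p, q, tol=1e-12):
          """True if spectrum p majorizes q (partial sums of the
          decreasingly sorted p dominate those of q; equal totals)."""
          p, q = np.sort(p)[::-1], np.sort(q)[::-1]
          return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))

      out_rho = np.array([0.5, 0.3, 0.2])        # output of a generic input
      out_passive = np.array([0.4, 0.35, 0.25])  # output of the passive input
      print(majorizes(out_rho, out_passive))     # True for these spectra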

  12. Improving GPR image resolution in lossy ground using dispersive migration

    USGS Publications Warehouse

    Oden, C.P.; Powers, M.H.; Wright, D.L.; Olhoeft, G.R.

    2007-01-01

    As a compact wave packet travels through a dispersive medium, it becomes dilated and distorted. As a result, ground-penetrating radar (GPR) surveys over conductive and/or lossy soils often result in poor image resolution. A dispersive migration method is presented that combines an inverse dispersion filter with frequency-domain migration. The method requires a fully characterized GPR system including the antenna response, which is a function of the local soil properties for ground-coupled antennas. The GPR system response spectrum is used to stabilize the inverse dispersion filter. Dispersive migration restores attenuated spectral components when the signal-to-noise ratio is adequate. Applying the algorithm to simulated data shows that the improved spatial resolution is significant when data are acquired with a GPR system having 120 dB or more of dynamic range, and when the medium has a loss tangent of 0.3 or more. Results also show that dispersive migration provides no significant advantage over conventional migration when the loss tangent is less than 0.3, or when using a GPR system with a small dynamic range. © 2007 IEEE.
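
    The flavor of a stabilized inverse dispersion filter can be sketched as a Wiener-style deconvolution in which the system response spectrum keeps the inverse bounded where the signal-to-noise ratio is poor. Everything below is synthetic and only indicative of the approach:

      import numpy as np

      # Wiener-style stabilized inverse of a dispersive response H(f),
      # weighted by an assumed GPR system spectrum S(f). Synthetic only.
      n, dt = 512, 1e-9
      f = np.fft.rfftfreq(n, dt)
      trace = np.random.default_rng(1).normal(size=n)   # stand-in trace
      H = np.exp(-1e-9 * f - 1j * 2e-9 * f)             # assumed lossy medium
      S = np.exp(-((f - 5e8) / 4e8) ** 2)               # assumed system response
      eps = 1e-2 * S.max() ** 2                         # stabilization level

      D = np.fft.rfft(trace) * H                        # dispersed data
      inv = np.conj(H) * S**2 / (np.abs(H) ** 2 * S**2 + eps)
      restored = np.fft.irfft(D * inv, n=n)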

  13. A dissipative quantum mechanical beam-splitter.

    PubMed

    Ramakrishna, S A; Bandyopadhyay, A; Rai, J

    1998-01-19

    A dissipative beam-splitter (BS) has been analyzed by modeling the losses in the BS as due to the excitation of optical phonons. The losses are obtained in terms of the BS medium properties. The model simplifies the picture by treating the loss mechanism as a perturbation on the photon modes in a linear, non-lossy medium in the limit of small losses, instead of using the full field quantization in lossy, dispersive media. The model uses second-order perturbation in the Markoff approximation and yields Beer's law for absorption in the first approximation, thus providing a microscopic description of the absorption coefficient. It is shown that the fluctuations in the modes are increased because of the losses. We show the existence of quantum interferences due to phase correlations between the input beams, and it is shown that these correlations can result in loss quenching. Hence, in spite of having such a dissipative medium, it is possible to design a lossless 50-50 BS at normal incidence, which may have potential applications in laser optics and dielectric-coated mirrors.

  14. The effects of lossy compression on diagnostically relevant seizure information in EEG signals.

    PubMed

    Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E

    2013-01-01

    This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. A "real-time EEG analysis for event detection" automated seizure detection system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.

  15. Tailoring properties of lossy-mode resonance optical fiber sensors with atomic layer deposition technique

    NASA Astrophysics Data System (ADS)

    Kosiel, Kamil; Koba, Marcin; Masiewicz, Marcin; Śmietana, Mateusz

    2018-06-01

    The paper shows the application of the atomic layer deposition (ALD) technique as a tool for tailoring the sensing properties of lossy-mode-resonance (LMR)-based optical fiber sensors. Hafnium dioxide (HfO2), zirconium dioxide (ZrO2), and tantalum oxide (TaxOy), high-refractive-index dielectrics that are particularly convenient for LMR-sensor fabrication, were deposited by low-temperature (100 °C) ALD, ensuring safe conditions for thermally vulnerable fibers. The applicability of HfO2 and ZrO2 overlays, deposited with the atomic-level thickness accuracy characteristic of ALD, to the fabrication of LMR sensors with controlled sensing properties is presented. Additionally, for the first time to the best of our knowledge, a double-layer overlay composed of two different materials, silicon nitride (SixNy) and TaxOy, is presented for LMR fiber sensors. The thin films of this overlay were deposited by two different techniques: PECVD (the SixNy) and ALD (the TaxOy). Such an approach ensures fast overlay fabrication and, at the same time, the ability to tune the resonant wavelength, yielding devices with satisfactory sensing properties.

  16. Three-dimensional imaging of buried objects in very lossy earth by inversion of VETEM data

    USGS Publications Warehouse

    Cui, T.J.; Aydiner, A.A.; Chew, W.C.; Wright, D.L.; Smith, D.V.

    2003-01-01

    The very early time electromagnetic system (VETEM) is an efficient tool for the detection of buried objects in very lossy earth, allowing deeper penetration than ground-penetrating radar. In this paper, the inversion of VETEM data is investigated using three-dimensional (3-D) inverse scattering techniques, where multiple frequencies are applied in the range from 0 to 5 MHz. For small and moderately sized problems, the Born approximation and/or the Born iterative method have been used with the aid of the singular value decomposition and/or the conjugate gradient method in solving the linearized integral equations. For large-scale problems, a localized 3-D inversion method based on the Born approximation has been proposed for the inversion of VETEM data over a large measurement domain. Ways to process and calibrate the experimental VETEM data are discussed so as to capture the real physics of buried objects. Reconstruction examples using synthesized VETEM data and real-world VETEM data are given to test the validity and efficiency of the proposed approach.

  17. Approximate analytical solutions to the double-stance dynamics of the lossy spring-loaded inverted pendulum.

    PubMed

    Shahbazi, Mohammad; Saranlı, Uluç; Babuška, Robert; Lopes, Gabriel A D

    2016-12-05

    This paper introduces approximate time-domain solutions to the otherwise non-integrable double-stance dynamics of the 'bipedal' spring-loaded inverted pendulum (B-SLIP) in the presence of non-negligible damping. We first introduce an auxiliary system whose behavior under certain conditions is approximately equivalent to the B-SLIP in double-stance. Then, we derive approximate solutions to the dynamics of the new system following two different methods: (i) updated-momentum approach that can deal with both the lossy and lossless B-SLIP models, and (ii) perturbation-based approach following which we only derive a solution to the lossless case. The prediction performance of each method is characterized via a comprehensive numerical analysis. The derived representations are computationally very efficient compared to numerical integrations, and, hence, are suitable for online planning, increasing the autonomy of walking robots. Two application examples of walking gait control are presented. The proposed solutions can serve as instrumental tools in various fields such as control in legged robotics and human motion understanding in biomechanics.

  18. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
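
    The systematic error of the centroid method under an asymmetric FBG spectrum is easy to reproduce; the sketch below compares a thresholded centroid with a log-parabola (Gaussian) fit on a simulated spectrum with an artificial side lobe:

      import numpy as np

      # Simulated FBG spectrum with an artificial asymmetric side lobe.
      wl = np.linspace(1549.0, 1551.0, 501)      # wavelength grid (nm)
      peak = 1550.2
      spec = (np.exp(-((wl - peak) / 0.08) ** 2)
              + 0.3 * np.exp(-((wl - peak - 0.25) / 0.05) ** 2))

      mask = spec > 0.1 * spec.max()             # threshold window
      centroid = np.sum(wl[mask] * spec[mask]) / np.sum(spec[mask])

      xm = wl[mask] - 1550.0                     # centered for conditioning
      a2, a1, _ = np.polyfit(xm, np.log(spec[mask]), 2)
      gauss_peak = 1550.0 - a1 / (2 * a2)        # vertex of the log-parabola

      print(centroid - peak, gauss_peak - peak)  # systematic offsets (nm)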

  19. A computational study of the discretization error in the solution of the Spencer-Lewis equation by doubling applied to the upwind finite-difference approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, P.; Seth, D.L.; Ray, A.K.

    A detailed and systematic study of the nature of the discretization error associated with the upwind finite-difference method is presented. A basic model problem has been identified and based upon the results for this problem, a basic hypothesis regarding the accuracy of the computational solution of the Spencer-Lewis equation is formulated. The basic hypothesis is then tested under various systematic single complexifications of the basic model problem. The results of these tests provide the framework of the refined hypothesis presented in the concluding comments. 27 refs., 3 figs., 14 tabs.
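
    The first-order character of upwind differencing is easily demonstrated on a simpler transport problem; the sketch below applies it to the 1-D advection equation as a stand-in for the streaming term (the Spencer-Lewis equation itself is not implemented):

      import numpy as np

      # Upwind differencing for u_t + a u_x = 0 on a periodic domain.
      a, L, n, cfl = 1.0, 1.0, 200, 0.5
      dx = L / n
      dt = cfl * dx / a
      x = np.linspace(0.0, L, n, endpoint=False)
      u = np.exp(-200 * (x - 0.3) ** 2)              # smooth initial profile

      for _ in range(int(round(0.4 / dt))):          # advect to t = 0.4
          u = u - a * dt / dx * (u - np.roll(u, 1))  # backward difference

      shift = ((x - 0.3 - 0.4 * a + 0.5) % L) - 0.5  # periodic distance
      exact = np.exp(-200 * shift ** 2)
      print(np.max(np.abs(u - exact)))   # O(dx) error, seen as smearing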

  20. Modeling longitudinal data, I: principles of multivariate analysis.

    PubMed

    Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick

    2009-01-01

    Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error element of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
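
    The decomposition into a systematic component (coefficients) and an error component (residual variance feeding confidence intervals) can be made concrete with ordinary least squares on synthetic data:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100
      exposure = rng.normal(size=n)
      outcome = 2.0 + 0.5 * exposure + rng.normal(scale=1.0, size=n)

      X = np.column_stack([np.ones(n), exposure])
      beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)  # systematic part
      resid = outcome - X @ beta                          # error part
      s2 = resid @ resid / (n - 2)                        # error variance
      se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
      ci = [(b - 1.96 * s, b + 1.96 * s) for b, s in zip(beta, se)]
      print(beta, ci)   # point estimates and ~95% confidence intervals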

  1. Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles

    PubMed Central

    Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin

    2014-01-01

    In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models. PMID:24811075

  2. Bundle block adjustment of airborne three-line array imagery based on rotation angles.

    PubMed

    Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin

    2014-05-07

    In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models.
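
    The parameterization in question can be sketched directly: compose the exterior-orientation rotation matrix from three angles so that a constant angle bias, once estimated in the adjustment, is removed before composing the matrix. The (phi, omega, kappa) convention below is one common photogrammetric choice, assumed here for illustration:

      import numpy as np

      def rotation_matrix(phi, omega, kappa):
          """Exterior-orientation rotation from three angles (radians)."""
          Rphi = np.array([[np.cos(phi), 0, -np.sin(phi)],
                           [0, 1, 0],
                           [np.sin(phi), 0, np.cos(phi)]])
          Romega = np.array([[1, 0, 0],
                             [0, np.cos(omega), -np.sin(omega)],
                             [0, np.sin(omega), np.cos(omega)]])
          Rkappa = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                             [np.sin(kappa), np.cos(kappa), 0],
                             [0, 0, 1]])
          return Rphi @ Romega @ Rkappa

      # A constant angle bias (hypothetical systematic POS error) is
      # subtracted from the observed angles before composing the matrix:
      bias = np.radians([0.01, -0.02, 0.005])
      R = rotation_matrix(*(np.radians([1.0, 2.0, 30.0]) - bias))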

  3. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes needs to be measured to check the system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually measured with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and the technological challenges of producing the large ACF. Subaperture testing with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF, and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope during the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore lateral positioning error of the subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which differs markedly from common practice. Finally, measurement noise can never be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.

  4. pyAmpli: an amplicon-based variant filter pipeline for targeted resequencing data.

    PubMed

    Beyens, Matthias; Boeckx, Nele; Van Camp, Guy; Op de Beeck, Ken; Vandeweyer, Geert

    2017-12-14

    Haloplex targeted resequencing is a popular method to analyze both germline and somatic variants in gene panels. However, the wet-lab procedures involved may introduce false positives that need to be considered in subsequent data analysis. To our knowledge, no variant-filtering rationale addressing amplicon-enrichment-related systematic errors exists in the form of an all-in-one package. We present pyAmpli, a platform-independent, parallelized Python package that implements an amplicon-based germline and somatic variant filtering strategy for Haloplex data. pyAmpli can filter variants for systematic errors by user-predefined criteria. We show that pyAmpli significantly increases specificity, without reducing sensitivity, which is essential for reporting true positive clinically relevant mutations in gene panel data. pyAmpli is an easy-to-use software tool which increases the true positive variant call rate in targeted resequencing data. It specifically reduces errors related to PCR-based enrichment of targeted regions.

  5. Accuracy and Landmark Error Calculation Using Cone-Beam Computed Tomography–Generated Cephalograms

    PubMed Central

    Grauer, Dan; Cevidanes, Lucia S. H.; Styner, Martin A.; Heulfe, Inam; Harmon, Eric T.; Zhu, Hongtu; Proffit, William R.

    2010-01-01

    Objective To evaluate systematic differences in landmark position between cone-beam computed tomography (CBCT)–generated cephalograms and conventional digital cephalograms and to estimate how much variability should be taken into account when both modalities are used within the same longitudinal study. Materials and Methods Landmarks on homologous cone-beam computed tomographic–generated cephalograms and conventional digital cephalograms of 46 patients were digitized, registered, and compared via the Hotelling T2 test. Results There were no systematic differences between modalities in the position of most landmarks. Three landmarks showed statistically significant differences but did not reach clinical significance. A method for error calculation while combining both modalities in the same individual is presented. Conclusion In a longitudinal follow-up for assessment of treatment outcomes and growth of one individual, the error due to the combination of the two modalities might be larger than previously estimated. PMID:19905853

  6. Verification of quantum entanglement of two-mode squeezed light source towards quantum radar and imaging

    NASA Astrophysics Data System (ADS)

    Masada, Genta

    2017-08-01

    Two-mode squeezed light is an effective resource for quantum entanglement and shows a non-classical correlation between the two optical modes. We are developing a two-mode squeezed light source to explore the possibility of quantum radar based on the quantum illumination theory, in which the error probability for discriminating target presence or absence is expected to be improved even in a lossy and noisy environment. We also expect to apply the two-mode squeezed light source to quantum imaging. In this work we generated two-mode squeezed light and verified its quantum entanglement property with a view towards quantum radar and imaging. First, we generated two independent single-mode squeezed light beams utilizing two sub-threshold optical parametric oscillators containing periodically-poled potassium titanyl phosphate crystals for the second-order nonlinear interaction. The two single-mode squeezed light beams are combined on a half mirror with a relative optical phase of 90° between the optical fields, generating entangled two-mode squeezed light beams. We observed the correlation variances between quadrature phase amplitudes in the entangled two-mode fields by balanced homodyne measurement. Finally, we verified the quantum entanglement property of the two-mode squeezed light source based on the Duan and Simon inseparability criteria.
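
    The Duan criterion check itself is a one-liner once the correlation variances are measured: with vacuum quadrature variance 1/2 (hbar = 1), any separable two-mode state satisfies Var(x1 - x2) + Var(p1 + p2) >= 2, so a smaller sum witnesses entanglement. Illustrative values:

      import numpy as np

      r = 0.6                            # assumed squeezing parameter
      var_x_minus = np.exp(-2 * r)       # Var(x1 - x2), ideal TMSV
      var_p_plus = np.exp(-2 * r)        # Var(p1 + p2), ideal TMSV

      duan_sum = var_x_minus + var_p_plus
      print(duan_sum, "entangled" if duan_sum < 2 else "inconclusive")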

  7. The detection of problem analytes in a single proficiency test challenge in the absence of the Health Care Financing Administration rule violations.

    PubMed

    Cembrowski, G S; Hackney, J R; Carey, N

    1993-04-01

    The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices, having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable, as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron, was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, i.e., graphs of the probability of error detection versus the magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same ±1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the ±3 SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same ±1 SDI limit). Random error can also be signaled by one observation exceeding the ±1.5 SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
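
    The Monte Carlo logic is straightforward to reproduce for one rule; the sketch below estimates the power of the screening rule (two or more of five results beyond the same ±1 SDI limit) against an injected systematic shift, with illustrative parameter values:

      import numpy as np

      # Simulate 5-specimen PT events in SDI units for a laboratory with
      # intra-instrument SD s_i relative to the peer-group SD s_g, inject
      # a systematic shift, and estimate the detection probability of the
      # screening rule. All parameter values are illustrative.
      rng = np.random.default_rng(11)
      s_i_over_s_g = 1.0 / 1.2
      shift_sdi = 1.5               # simulated systematic error (SDI units)
      n_events, n_specimens = 100_000, 5

      results = shift_sdi + rng.normal(0, s_i_over_s_g,
                                       size=(n_events, n_specimens))
      flagged = (((results > 1).sum(axis=1) >= 2)
                 | ((results < -1).sum(axis=1) >= 2))
      print(flagged.mean())         # estimated power of the screening rule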

  8. Reducing the complexity of the CCSDS standard for image compression decreasing the DWT filter order

    NASA Astrophysics Data System (ADS)

    Ito, Leandro H.; Pinho, Marcelo S.

    2014-10-01

    The goal of this work is to evaluate the impact of utilizing shorter wavelet filters in the CCSDS standard for lossy and lossless image compression. Another constraint considered was the existence of symmetry in the filters, an approach desired to maintain the symmetric-extension compatibility of the filter banks. Even though this strategy works well for float wavelets, it is not always the case for their integer approximations. The periodic extension was utilized whenever symmetric extension was not applicable. Even though the latter outperforms the former, for fair comparison the symmetric-extension-compatible integer-to-integer wavelet approximations were evaluated under both extensions. The evaluation methods adopted were bit rate (bpp), PSNR and the number of operations required by each wavelet transform. All these results were compared against the ones obtained utilizing the standard CCSDS with 9/7 filter banks, for lossy and lossless compression. The tests were performed over tiles (512x512) of raw remote sensing images from CBERS-2B (China-Brazil Earth Resources Satellite) captured by its high-resolution CCD camera. These images were cordially made available by INPE (National Institute for Space Research) in Brazil. For the CCSDS implementation, we utilized the source code developed by Hongqiang Wang of the Electrical Department at the University of Nebraska-Lincoln, applying the appropriate changes to the wavelet transform. For lossy compression, the results show that the filter bank built from the Deslauriers-Dubuc scaling function, with 2 and 4 vanishing moments on the synthesis and analysis banks respectively, presented not only a 21% reduction in the number of operations required, but also performance on par with the 9/7 filter bank. In the lossless case, the biorthogonal Cohen-Daubechies-Feauveau wavelet with 2 vanishing moments presented performance close to the 9/7 integer approximation of the CCSDS, with the number of operations reduced by one third.
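
    As an illustration of the kind of short, symmetric integer-to-integer filter discussed above, here is a minimal sketch of one level of the reversible CDF 5/3 lifting transform (the 2-vanishing-moment biorthogonal wavelet used in JPEG2000); this is our own sketch with simple whole-sample symmetric extension, not the authors' CCSDS code:

        def cdf53_forward(x):
            """One lifting level of the reversible CDF 5/3 integer wavelet
            transform, with whole-sample symmetric extension at the edges.
            Returns (approximation, detail) lists; input length assumed even."""
            n = len(x)
            assert n % 2 == 0 and n >= 2
            def ext(i):  # whole-sample symmetric extension
                if i < 0:
                    return x[-i]
                if i >= n:
                    return x[2 * (n - 1) - i]
                return x[i]
            half = n // 2
            # Predict: detail = odd sample minus floor-average of even neighbours.
            d = [x[2*i + 1] - ((ext(2*i) + ext(2*i + 2)) >> 1) for i in range(half)]
            # Update: approximation = even sample plus rounded detail contribution.
            s = [x[2*i] + ((d[i - 1 if i > 0 else 0] + d[i] + 2) >> 2) for i in range(half)]
            return s, d

    Because both lifting steps use only integer shifts and additions, the transform is exactly invertible, which is what makes this family attractive for the lossless mode discussed in the record.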

  9. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in Radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (multislice CT and digital mammography, for example), as well as the implementation and performance of PACS and teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While the former involves no loss of information, the latter presents concerns since information is lost. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive about the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. A survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard addressing image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.

  10. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  11. The accuracy of self-reported pregnancy-related weight: a systematic review.

    PubMed

    Headen, I; Cohen, A K; Mujahid, M; Abrams, B

    2017-03-01

    Self-reported maternal weight is error-prone, and the context of pregnancy may impact error distributions. This systematic review summarizes error in self-reported weight across pregnancy and assesses implications for bias in associations between pregnancy-related weight and birth outcomes. We searched PubMed and Google Scholar through November 2015 for peer-reviewed articles reporting accuracy of self-reported, pregnancy-related weight at four time points: prepregnancy, delivery, over gestation and postpartum. Included studies compared maternal self-report to anthropometric measurement or medical report of weights. Sixty-two studies met inclusion criteria. We extracted data on magnitude of error and misclassification. We assessed impact of reporting error on bias in associations between pregnancy-related weight and birth outcomes. Women underreported prepregnancy (PPW: -2.94 to -0.29 kg) and delivery weight (DW: -1.28 to 0.07 kg), and over-reported gestational weight gain (GWG: 0.33 to 3 kg). Magnitude of error was small, ranged widely, and varied by prepregnancy weight class and race/ethnicity. Misclassification was moderate (PPW: 0-48.3%; DW: 39.0-49.0%; GWG: 16.7-59.1%), and overestimated some estimates of population prevalence. However, reporting error did not largely bias associations between pregnancy-related weight and birth outcomes. Although measured weight is preferable, self-report is a cost-effective and practical measurement approach. Future researchers should develop bias correction techniques for self-reported pregnancy-related weight. © 2017 World Obesity Federation.

  12. A constrained-gradient method to control divergence errors in numerical MHD

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
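
    To make the projection idea concrete, here is a minimal sketch of the textbook Helmholtz projection on a 2D periodic grid; this illustrates the generic projection step referred to above, not the paper's constrained-gradient reconstruction (names and grid assumptions are ours):

        import numpy as np

        def project_divergence_free(bx, by, dx=1.0):
            """Remove the curl-free part of a periodic 2D field B so div B = 0.

            Solves laplacian(phi) = div(B) spectrally, then returns
            B - grad(phi). Textbook projection, not the paper's CG scheme.
            """
            ny, nx = bx.shape
            kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
            KX, KY = np.meshgrid(kx, ky)
            div_hat = KX * np.fft.fft2(bx) + KY * np.fft.fft2(by)
            k2 = KX**2 + KY**2
            k2[0, 0] = 1.0          # avoid 0/0; the mean of phi is arbitrary
            phi_hat = div_hat / k2
            phi_hat[0, 0] = 0.0
            bx_clean = bx - np.real(np.fft.ifft2(KX * phi_hat))
            by_clean = by - np.real(np.fft.ifft2(KY * phi_hat))
            return bx_clean, by_clean

    By construction the cleaned field has zero divergence in every Fourier mode, which is the global least-squares property the CG scheme approximates iteratively on unstructured points.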

  13. Comment on 3PL IRT Adjustment for Guessing

    ERIC Educational Resources Information Center

    Chiu, Ting-Wei; Camilli, Gregory

    2013-01-01

    Guessing behavior is an issue discussed widely with regard to multiple choice tests. Its primary effect is on number-correct scores for examinees at lower levels of proficiency. This is a systematic error or bias, which increases observed test scores. Guessing also can inflate random error variance. Correction or adjustment for guessing formulas…
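
    For context, the three-parameter logistic (3PL) model at issue gives the probability of a correct response to item i as (standard IRT notation, not quoted from the record):

        P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp[-a_i(\theta - b_i)]}

    where a_i is the item discrimination, b_i the difficulty, and c_i the pseudo-guessing lower asymptote; at low proficiency θ the c_i floor is what inflates number-correct scores.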

  14. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    NASA Astrophysics Data System (ADS)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  15. An investigation of condition mapping and plot proportion calculation issues

    Treesearch

    Demetrios Gatziolis

    2007-01-01

    A systematic examination of Forest Inventory and Analysis condition data collected under the annual inventory protocol in the Pacific Northwest region between 2000 and 2004 revealed the presence of errors both in condition topology and plot proportion computations. When plots were compiled to generate population estimates, proportion errors were found to cause...

  16. Mitigating Errors of Representation: A Practical Case Study of the University Experience Survey

    ERIC Educational Resources Information Center

    Whiteley, Sonia

    2014-01-01

    The Total Survey Error (TSE) paradigm provides a framework that supports the effective planning of research, guides decision making about data collection and contextualises the interpretation and dissemination of findings. TSE also allows researchers to systematically evaluate and improve the design and execution of ongoing survey programs and…

  17. Sampling methods for titica vine (Heteropsis spp.) inventory in a tropical forest

    Treesearch

    Carine Klauberg; Edson Vidal; Carlos Alberto Silva; Michelliny de M. Bentes; Andrew Thomas Hudak

    2016-01-01

    Titica vine provides useful raw fiber material. Using sampling schemes that reduce sampling error can provide direction for sustainable forest management of this vine. Sampling systematically with rectangular plots (10 × 25 m) promoted lower error and greater accuracy in the inventory of titica vines in tropical rainforest.

  18. Dealing with systematic laser scanner errors due to misalignment at area-based deformation analyses

    NASA Astrophysics Data System (ADS)

    Holst, Christoph; Medić, Tomislav; Kuhlmann, Heiner

    2018-04-01

    The ability to acquire rapid, dense and high-quality 3D data has made terrestrial laser scanners (TLS) a desirable instrument for tasks demanding high geometrical accuracy, such as geodetic deformation analyses. However, TLS measurements are influenced by systematic errors due to internal misalignments of the instrument, and the resulting errors in the point cloud might exceed the magnitude of random errors. Hence, it is important to ensure that the deformation analysis is not biased by these influences. In this study, we propose and evaluate several strategies for reducing the effect of TLS misalignments on deformation analyses. The strategies are based on bundled in-situ self-calibration and on the exploitation of two-face measurements. The strategies are verified by analyzing the deformation of the main reflector of the Onsala Space Observatory's radio telescope. It is demonstrated that both two-face measurements and in-situ calibration of the laser scanner in a bundle adjustment improve the results of deformation analysis. The best solution is obtained by combining both strategies.
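
    A minimal sketch of why two-face measurements help, under the simple assumed model that an axis misalignment adds +c to the face-one reading and -c to the face-two reading after the nominal 180° offset (the function, example values, and error model are ours, not the authors'):

        import numpy as np

        def two_face_mean(face1_deg, face2_deg):
            """Combine two-face angle readings so that sign-reversing
            misalignment terms cancel in the mean."""
            face1 = np.asarray(face1_deg, dtype=float)
            reduced2 = (np.asarray(face2_deg, dtype=float) - 180.0) % 360.0
            # Average on the circle to avoid wrap-around near 0/360 deg.
            a1, a2 = np.deg2rad(face1), np.deg2rad(reduced2)
            mean = np.arctan2((np.sin(a1) + np.sin(a2)) / 2.0,
                              (np.cos(a1) + np.cos(a2)) / 2.0)
            return np.rad2deg(mean) % 360.0

        # Example: a 0.01 deg collimation-type error cancels in the mean.
        true, c = 123.456, 0.01
        print(two_face_mean(true + c, (true - c + 180.0) % 360.0))  # ~123.456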

  19. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10²² cm⁻³, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.
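
    The transmission-to-opacity inference described here follows the usual Beer-Lambert relation (standard form; the symbols are ours): for a sample of mass density ρ and path length L,

        T_\nu = e^{-\kappa_\nu \rho L}
        \qquad\Longrightarrow\qquad
        \kappa_\nu = -\frac{\ln T_\nu}{\rho L}

    which is why self-emission, tamper attenuation, and gradients that perturb the measured T_ν propagate directly into the inferred opacity κ_ν.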

  20. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE PAGES

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.; ...

    2017-06-26

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10²² cm⁻³, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.

  1. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our aim was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm, and human evaluation confirmed the finding. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.

  2. Multiple Flux Footprints, Flux Divergences and Boundary Layer Mixing Ratios: Studies of Ecosystem-Atmosphere CO2 Exchange Using the WLEF Tall Tower.

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Bakwin, P. S.; Yi, C.; Cook, B. D.; Wang, W.; Denning, A. S.; Teclaw, R.; Isebrands, J. G.

    2001-05-01

    Long-term, tower-based measurements using the eddy-covariance method have revealed a wealth of detail about the temporal dynamics of net ecosystem-atmosphere exchange (NEE) of CO2. The data also provide a measure of the annual net CO2 exchange. The area represented by these flux measurements, however, is limited, and doubts remain about possible systematic errors that may bias the annual net exchange measurements. Flux and mixing ratio measurements conducted at the WLEF tall tower as part of the Chequamegon Ecosystem-Atmosphere Study (ChEAS) allow for a unique assessment of the uncertainties in NEE of CO2. The synergy between flux and mixing ratio observations shows the potential for comparing inverse and eddy-covariance methods of estimating NEE of CO2. Such comparisons may strengthen confidence in both results and begin to bridge the huge gap in spatial scales (at least 3 orders of magnitude) between continental or hemispheric scale inverse studies and kilometer-scale eddy covariance flux measurements. Data from WLEF and Willow Creek, another ChEAS tower, are used to estimate random and systematic errors in NEE of CO2. Random uncertainty in seasonal exchange rates and the annual integrated NEE, including both turbulent sampling errors and variability in environmental conditions, is small. Systematic errors are identified by examining changes in flux as a function of atmospheric stability and wind direction, and by comparing the multiple-level flux measurements on the WLEF tower. Nighttime drainage is modest but evident. Systematic horizontal advection occurs during the morning turbulence transition. The potential total systematic error appears to be larger than the random uncertainty, but still modest; the total systematic error, however, is difficult to assess. It appears that the WLEF region ecosystems were a small net sink of CO2 in 1997. It is clear that the summer uptake rate at WLEF is much smaller than that at most deciduous forest sites, including the nearby Willow Creek site. The WLEF tower also allows us to study the potential for monitoring continental CO2 mixing ratios from tower sites. Despite concerns about the proximity to ecosystem sources and sinks, it is clear that boundary layer CO2 mixing ratios can be monitored using typical surface layer towers. Seasonal and annual land-ocean mixing ratio gradients are readily detectable, providing the motivation for a flux-tower-based mixing ratio observation network that could greatly improve the accuracy of inversion-based estimates of NEE of CO2 and enable inversions to be applied on smaller temporal and spatial scales. Results from the WLEF tower illustrate the degree to which local flux measurements represent interannual, seasonal and synoptic CO2 mixing ratio trends. This coherence between fluxes and mixing ratios serves to "regionalize" the eddy-covariance-based local NEE observations.

  3. Qualitative fusion technique based on information poor system and its application to factor analysis for vibration of rolling bearings

    NASA Astrophysics Data System (ADS)

    Xia, Xintao; Wang, Zhongyu

    2008-10-01

    For some methods of stability analysis of a system using statistics, it is difficult to resolve the problems of unknown probability distribution and small sample size. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the upper bound of the system using fuzzy-set theory. Then the empirical distribution function is investigated to ensure a confidence level above 95%, and the degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear systematic errors and periodic systematic errors, and some mixed systems. The proposed method of analysis for systematic stability is thereby validated.

  4. Haptic spatial matching in near peripersonal space.

    PubMed

    Kaas, Amanda L; Mier, Hanneke I van

    2006-04-01

    Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balderson, Michael, E-mail: michael.balderson@rmp.uhn.ca; Brown, Derek; Johnson, Patricia

    The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experience a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from the values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant of geometric miss errors than VMAT.
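
    For orientation, one standard Poisson linear-quadratic TCP form consistent with the description above is (textbook expression; the record does not give the authors' exact parameterization):

        \mathrm{TCP} = \exp\!\left[-N_0\, e^{-(\alpha D + \beta d D)}\right]

    with N_0 the initial clonogen number, D the total dose received by the (shifted) target, and d the dose per fraction; Monte Carlo draws of the radiosensitivity α across the patient population then give the reported mean TCP.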

  6. Influence of Familiarization and Competitive Level on the Reliability of Countermovement Vertical Jump Kinetic and Kinematic Variables.

    PubMed

    Nibali, Maria L; Tombleson, Tom; Brady, Philip H; Wagner, Phillip

    2015-10-01

    Understanding typical variation of vertical jump (VJ) performance and confounding sources of its typical variability (i.e., familiarization and competitive level) is pertinent in the routine monitoring of athletes. We evaluated the presence of systematic error (learning effect) and nonuniformity of error (heteroscedasticity) across VJ performances of athletes that differ in competitive level and quantified the reliability of VJ kinetic and kinematic variables relative to the smallest worthwhile change (SWC). One hundred thirteen high school athletes, 30 college athletes, and 35 professional athletes completed repeat VJ trials. Average eccentric rate of force development (RFD), average concentric (CON) force, CON impulse, and jump height measurements were obtained from vertical ground reaction force (VGRF) data. Systematic error was assessed by evaluating changes in the mean of repeat trials. Heteroscedasticity was evaluated by plotting the difference score (trial 2 - trial 1) against the mean of the trials. Variability of jump variables was calculated as the typical error (TE) and coefficient of variation (%CV). No substantial systematic error (effect size range: -0.07 to 0.11) or heteroscedasticity was present for any of the VJ variables. Vertical jump can be performed without the need for familiarization trials, and the variability can be conveyed as either the raw TE or the %CV. Assessment of VGRF variables is an effective and reliable means of assessing VJ performance. Average CON force and CON impulse are highly reliable (%CV: 2.7% ×/÷ 1.10), although jump height was the only variable to display a %CV ≤SWC. Eccentric RFD is highly variable yet should not be discounted from VJ assessments on this factor alone because it may be sensitive to changes in response to training or fatigue that exceed the TE.
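
    A minimal sketch of the reliability statistics quoted above, assuming paired repeat-trial data (function names are ours; the log-transform route is what yields the multiplicative ×/÷ factor notation used in the record):

        import numpy as np

        def typical_error(trial1, trial2):
            """Typical error of measurement from repeat trials:
            TE = SD(trial2 - trial1) / sqrt(2)."""
            diff = np.asarray(trial2, float) - np.asarray(trial1, float)
            return np.std(diff, ddof=1) / np.sqrt(2)

        def cv_percent(trial1, trial2):
            """Coefficient of variation (%) via log-transformed trials:
            factor = exp(TE_log); %CV ~= 100 * (factor - 1)."""
            log_te = typical_error(np.log(trial1), np.log(trial2))
            return 100.0 * (np.exp(log_te) - 1.0)

    Comparing the %CV of each variable against the smallest worthwhile change, as the authors do, then shows which variables (here, jump height) are precise enough to detect meaningful change.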

  7. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  8. Initial Steps Toward Next-Generation, Waveform-Based, Three-Dimensional Models and Metrics to Improve Nuclear Explosion Monitoring in the Middle East

    DTIC Science & Technology

    2008-09-30

    ...propagation effects by splitting apart the longer period surface waves from the shorter period, depth-sensitive Pnl waves. Problematic, or high-error, stations and paths were further analyzed to identify systematic errors with unknown sensor responses and ... frequency Pnl components and slower, longer period surface waves. All cut windows are fit simultaneously, allowing equal weighting of phases that may be ...

  9. Error Sources in Proccessing LIDAR Based Bridge Inspection

    NASA Astrophysics Data System (ADS)

    Bian, H.; Chen, S. E.; Liu, W.

    2017-09-01

    Bridge inspection is a critical task in infrastructure management and is facing unprecedented challenges after a series of bridge failures. Prevailing visual inspection has been insufficient for providing reliable and quantitative bridge information, even though a systematic quality management framework was built to ensure visual inspection data quality and to minimize errors during the inspection process. LiDAR-based remote sensing is recommended as an effective tool for overcoming some of the disadvantages of visual inspection. In order to evaluate the potential of applying this technology to bridge inspection, some of the error sources in LiDAR-based bridge inspection are analysed. Scanning-angle variance during field data collection and differences in algorithm design during scan data processing were found to introduce errors into inspection results. Beyond studying error sources, further consideration should be given to improving inspection data quality, and statistical analysis might be employed in the future to evaluate an inspection process that contains a series of uncertain factors. Overall, the development of a reliable bridge inspection system requires not only the improvement of data processing algorithms, but also systematic measures to mitigate possible errors in the entire inspection workflow. If LiDAR or some other technology is accepted as a supplement to visual inspection, the current quality management framework will need to be modified or redesigned, and this is as urgent as the refinement of inspection techniques.

  10. Drought Persistence Errors in Global Climate Models

    NASA Astrophysics Data System (ADS)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
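
    A minimal sketch of the persistence estimator described above, assuming a monthly (or annual) precipitation-anomaly series where "dry" means a negative anomaly (the function name is ours):

        import numpy as np

        def dry_to_dry_probability(precip_anomaly):
            """Estimate drought persistence as the probability that a dry
            step (negative precipitation anomaly) is followed by another
            dry step."""
            dry = np.asarray(precip_anomaly) < 0.0
            prev, nxt = dry[:-1], dry[1:]
            n_dry = prev.sum()
            return np.nan if n_dry == 0 else (prev & nxt).sum() / n_dry

    Comparing this statistic between model output and observation-based series gives the persistence bias the study analyzes.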

  11. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.

  12. Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes

    NASA Astrophysics Data System (ADS)

    Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.

    2015-12-01

    In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One of the implications is that uncertainties in the future projections of time-mean changes to AAM rainfall may not have been reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, progress in monsoon modeling is rather slow. This leads us to wonder: has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is to better understand the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We will argue that a systematic and coordinated approach to identifying the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.

  13. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  14. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  15. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    NASA Astrophysics Data System (ADS)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical approaches through an Observing System Simulation Experiment (OSSE) on a global scale. By changing the size of the random and systematic errors in the OSSE, we can determine the corresponding spatial and temporal resolutions at which useful flux signals could be detected from the OCO-2 data.

  16. On the Quality of Point-Clouds Derived from Sfm-Photogrammetry Applied to UAS Imagery

    NASA Astrophysics Data System (ADS)

    Carbonneau, P.; James, T.

    2014-12-01

    Structure from Motion photogrammetry (SfM-photogrammetry) recently appeared in environmental sciences as an impressive tool allowing for the creation of topographic data from unstructured imagery. Several authors have tested the performance of SfM-photogrammetry against that of TLS or dGPS. Whilst the initial results were very promising, there is currently a growing awareness that systematic deformations occur in DEMs and point-clouds derived from SfM-photogrammetry. Notably, some authors have identified a systematic doming manifest as an increasing error with distance from the model centre. Simulation studies have confirmed that this error is due to errors in the calibration of camera distortions. This work aims to further investigate these effects in the presence of real data. We start with a dataset of 220 images acquired from a sUAS. After obtaining an initial self-calibration of the camera lens with Agisoft Photoscan, our method consists in applying systematic perturbations to two key lens parameters: focal length and the k1 distortion parameter. For each perturbation, a point-cloud was produced and compared to LiDAR data. After deriving the mean and standard deviation of the error residuals (ɛ), a 2nd-order polynomial surface was fitted to the errors point-cloud and the peak ɛ defined as the mathematical extremum of this surface. The results are presented in figure 1, which shows that lens perturbations can induce a range of errors with systematic behaviours. Peak ɛ is primarily controlled by k1, with a secondary control exerted by the focal length. These results allow us to state that to limit the peak ɛ to 10 cm, the k1 parameter must be calibrated to within 0.00025 and the focal length to within 2.5 pixels (≈10 µm). This level of calibration accuracy can only be achieved with proper design of image acquisition and control network geometry. Our main point is therefore that SfM is not a bypass to a rigorous and well-informed photogrammetric approach. Users of SfM-photogrammetry will still require basic training and knowledge in the fundamentals of photogrammetry. This is especially true for applications where very small topographic changes need to be detected or where gradient-sensitive processes need to be modelled.
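
    A minimal sketch of the surface-fitting step described above, assuming error residuals eps sampled at planimetric positions (x, y) (the function name and least-squares route are ours):

        import numpy as np

        def peak_error(x, y, eps):
            """Fit eps ~ a*x^2 + b*y^2 + c*x*y + d*x + e*y + f by least
            squares and return the fitted surface value at its stationary
            point: the 'peak error' of a dome-shaped residual field."""
            A = np.column_stack([x**2, y**2, x*y, x, y, np.ones_like(x)])
            coef, *_ = np.linalg.lstsq(A, eps, rcond=None)
            a, b, c, d, e, f = coef
            # Stationary point: grad = 0  ->  [2a c; c 2b][x; y] = -[d; e]
            H = np.array([[2*a, c], [c, 2*b]])
            x0, y0 = np.linalg.solve(H, [-d, -e])
            return a*x0**2 + b*y0**2 + c*x0*y0 + d*x0 + e*y0 + f

    Repeating this for each perturbed (focal length, k1) pair traces out how peak ɛ responds to calibration error.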

  17. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
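
    Sloppiness is commonly diagnosed from the eigenvalue spectrum of the (Gauss-Newton) Fisher information spanning many decades; a minimal sketch under that standard definition, with a finite-difference Jacobian (not the authors' code):

        import numpy as np

        def fim_spectrum(model, theta, eps=1e-6):
            """Eigenvalues of J^T J, where J is the finite-difference
            Jacobian of model predictions with respect to parameters.
            Eigenvalues spread over many decades are the usual signature
            of a sloppy model."""
            theta = np.asarray(theta, float)
            y0 = np.asarray(model(theta), float)
            J = np.empty((y0.size, theta.size))
            for j in range(theta.size):
                tp = theta.copy()
                tp[j] += eps
                J[:, j] = (np.asarray(model(tp), float) - y0) / eps
            return np.linalg.eigvalsh(J.T @ J)[::-1]  # descending order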

  18. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060

  19. Preparatory studies for the WFIRST supernova cosmology measurements

    NASA Astrophysics Data System (ADS)

    Perlmutter, Saul

    In the context of the WFIRST-AFTA Science Definition Team we developed a first version of a supernova program, described in the WFIRST-AFTA SDT report. This program uses the imager to discover supernova candidates and an Integral Field Spectrograph (IFS) to obtain spectrophotometric light curves and higher signal-to-noise spectra of the supernovae near peak, to better characterize the supernovae and thus minimize systematic errors. While this program was judged a robust one, and the estimates of the sensitivity to the cosmological parameters were felt to be reliable, time limitations meant the analysis was limited in depth on a number of issues. The goal of this proposal is to further develop this program and refine the estimates of the sensitivities to the cosmological parameters using more sophisticated systematic uncertainty models and covariance error matrices that fold in more realistic data concerning observed populations of SNe Ia as well as more realistic instrument models. We propose to develop the analysis algorithms and approaches needed to build, optimize, and refine the WFIRST instrument and program requirements to accomplish the best supernova cosmology measurements possible. We plan to address the following:
    a) Use realistic supernova populations, subclasses and population drift. One bothersome uncertainty with the supernova technique is the possibility of population drift with redshift. We are in a unique position to characterize and mitigate such effects using the spectrophotometric time series of real Type Ia supernovae from the Nearby Supernova Factory (SNfactory). Each supernova in this sample has global galaxy measurements as well as additional local environment information derived from the IFS spectroscopy. We plan to develop methods of coping with this issue, e.g., by selecting similar subsamples of supernovae and allowing additional model flexibility, in order to reduce systematic uncertainties. These studies will allow us to tune details, like the wavelength coverage and S/N requirements, of the WFIRST IFS to capitalize on these systematic error reduction methods.
    b) Supernova extraction and host galaxy subtraction. The underlying light of the host galaxy must be subtracted from the supernova images making up the light curves. Using the IFS to provide the light-curve points via spectrophotometry requires the subtraction of a reference spectrum of the galaxy taken after the supernova light has faded to a negligible level. We plan to apply the expertise obtained from the SNfactory to develop galaxy background procedures that minimize the systematic errors introduced by this step in the analysis.
    c) Instrument calibration and ground-to-space cross calibration. Calibrating the entire supernova sample will be a challenge, as no standard stars exist that span the range of magnitudes and wavelengths relevant to the WFIRST survey. Linking the supernova measurements to the relatively brighter standards will require several links. WFIRST will produce the high-redshift sample, but the nearby supernovae that anchor the Hubble diagram will have to come from ground-based observations. Developing algorithms to carry out the cross calibration of these two samples to the required one percent level will be an important goal of our proposal. An integral part of this calibration will be to remove all instrumental signatures and to develop unbiased measurement techniques starting at the pixel level.
We then plan to pull the above studies together in a synthesis to produce a correlated error matrix. We plan to develop a Fisher-matrix-based model to evaluate the correlated error matrix due to the various systematic errors discussed above. A realistic error model will allow us to carry out more reliable estimates of the eventual errors on the measurement of the cosmological parameters, as well as serve as a means of optimizing and fine-tuning the requirements for the instruments and survey strategies.
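
    A minimal sketch of the Fisher-matrix forecasting step described above, assuming a Jacobian of observables with respect to cosmological parameters and separate statistical and systematic covariance blocks (the function name and inputs are ours, not the proposal's pipeline):

        import numpy as np

        def fisher_forecast(jacobian, cov_stat, cov_sys):
            """Fisher matrix F = J^T C^{-1} J with C = C_stat + C_sys, and
            marginalized 1-sigma parameter errors sqrt(diag(F^{-1})).
            Adding a correlated systematic covariance C_sys is one standard
            way to fold systematic-error models into the forecast."""
            C = cov_stat + cov_sys
            Cinv_J = np.linalg.solve(C, jacobian)
            F = jacobian.T @ Cinv_J
            sigma = np.sqrt(np.diag(np.linalg.inv(F)))
            return F, sigma

    Tightening or loosening C_sys then shows directly how each modeled systematic propagates into the cosmological-parameter errors, which is the optimization loop the proposal describes.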

  20. Constituent quarks and systematic errors in mid-rapidity charged multiplicity (dNch/dη) distributions

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Michael

    2017-01-01

    Although it was demonstrated more than 13 years ago that the increase in midrapidity dNch/dη with increasing centrality of Au+Au collisions at RHIC was linearly proportional to the number of constituent quark participants (or "wounded quarks", QW) in the collision, it was only in the last few years that generating the spatial positions of the three quarks in a nucleon according to the Fourier transform of the measured electric charge form factor of the proton could be used to connect dNch/dη/QW as a function of centrality in p(d)+A and A+A collisions with the same value of dNch/dη/QW determined in p+p collisions. One calculation, which compared its calculated dNch/dη/QW in p+p at √s_NN = 200 GeV only to the least central of 12 centrality-bin measurements in Au+Au by PHENIX, claimed that the p+p value was higher by "about 30%" than the band of measurements vs. centrality. However, the clearly quoted systematic errors were ignored: a 1-standard-deviation systematic shift would move all 12 Au+Au data points to within 1.3 standard deviations of the p+p value, or, if the statistical and systematic errors are added in quadrature, to a difference of 35 ± 21%. Research supported by U.S. Department of Energy, Contract No. DE-SC0012704.
