Sample records for frequency surface errors

  1. Experimental power spectral density analysis for mid- to high-spatial frequency surface error control.

    PubMed

    Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook

    2017-06-20

    The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. Large-scale surface figure errors are controlled by a guided removal process, such as computer-controlled optical surfacing, while smaller-scale surface errors are controlled by the polishing process parameters. Surface errors with periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast in large optical systems. Conventionally, microsurface roughness is given as a root-mean-square value over a high spatial frequency range, evaluated on a 0.5×0.5 mm local surface map of 500×500 pixels. This surface specification is not adequate to fully describe the characteristics of advanced optical systems. The process for controlling and minimizing mid- to high-spatial frequency surface errors with periods of up to ∼2-3 mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. The surface PSD was then systematically related to various fabrication process parameters, such as the grinding method, polishing interface material, and polishing compound. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
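
    The PSD referred to above is the standard spectral description of surface height errors. The sketch below is a minimal, hedged example of computing a one-sided 1-D PSD from a sampled surface profile; the sampling step, units, and the synthetic trace are illustrative assumptions, not data from the paper.

```python
# Minimal sketch (not the paper's code): one-sided 1-D PSD of a surface
# height profile. Heights in nm, sampling step dx in mm -> PSD in nm^2*mm.
import numpy as np

def profile_psd(height, dx):
    """Return spatial frequencies (cycles/mm) and a one-sided PSD estimate."""
    n = height.size
    h = height - height.mean()          # remove piston
    w = np.hanning(n)                   # taper to reduce spectral leakage
    H = np.fft.rfft(h * w)
    # Window-power-compensated normalization: integrating the PSD over
    # frequency approximately recovers the profile's mean-square height.
    psd = (dx / (n * (w**2).mean())) * np.abs(H)**2
    psd[1:-1] *= 2.0                    # fold negative frequencies (one-sided)
    return np.fft.rfftfreq(n, d=dx), psd

# Example: a synthetic 500-pixel roughness trace over a 0.5 mm window
rng = np.random.default_rng(0)
trace = rng.normal(scale=1.0, size=500)          # heights in nm (synthetic)
freq, psd = profile_psd(trace, dx=0.5 / 500)
```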

  2. A new polishing process for large-aperture and high-precision aspheric surface

    NASA Astrophysics Data System (ADS)

    Nie, Xuqing; Li, Shengyi; Dai, Yifan; Song, Ci

    2013-07-01

    High-precision aspheric surfaces are hard to achieve because of the mid-spatial frequency (MSF) error introduced in the finishing step. The influence of the MSF error is studied through simulations and experiments. In this paper, a new polishing process based on magnetorheological finishing (MRF), smooth polishing (SP), and ion beam figuring (IBF) is proposed, and a 400 mm aperture parabolic surface is polished with it. SP is applied after rough machining to control the MSF error. In the finishing step, most of the low-spatial frequency error is removed rapidly by MRF, the MSF error is then restricted by SP, and finally IBF is used to finish the surface. The surface accuracy is improved from the initial 37.691 nm (rms, 95% aperture) to a final 4.195 nm. The results show that the new polishing process is effective for manufacturing large-aperture, high-precision aspheric surfaces.

  3. Evaluate error correction ability of magnetorheological finishing by smoothing spectral function

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin

    2014-08-01

    Power spectral density (PSD) is well established in optics design and manufacturing as a characterization of mid-to-high spatial frequency (MHSF) errors. The smoothing spectral function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer-controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. The SSF is employed to study the correction ability of the MRF process at different spatial frequencies. The surface figures and PSD curves of workpieces machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for errors at each spatial frequency is expressed as a normalized numerical value.
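
    The abstract does not reproduce the paper's exact definition of the SSF, so the sketch below makes a loose assumption: that a smoothing/correction spectrum can be formed as the frequency-by-frequency ratio of the PSD after processing to the PSD before processing (1 means no correction, values near 0 mean strong correction). Names and normalization are illustrative only.

```python
# Hedged sketch, not the paper's definition: a normalized PSD ratio used as a
# stand-in for a smoothing spectral function.
import numpy as np

def psd_1d(h, dx):
    h = h - h.mean()
    H = np.fft.rfft(h * np.hanning(h.size))
    return np.fft.rfftfreq(h.size, d=dx), np.abs(H)**2

def smoothing_spectrum(before, after, dx, eps=1e-30):
    """Ratio of post-process to pre-process PSD at each spatial frequency."""
    f, p_before = psd_1d(before, dx)
    _, p_after = psd_1d(after, dx)
    return f, p_after / (p_before + eps)
```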

  4. Image defects from surface and alignment errors in grazing incidence telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expansion correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of the mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximate first-order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes to rigid body motions and surface deformations. The rms spot diameters calculated from this theory and from the OSAC ray-tracing code agree very well. The theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.

  5. Optimized method for manufacturing large aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui

    2007-12-01

    Aspheric optics are being used more and more widely in modern optical systems because of their ability to correct aberrations, enhance image quality, enlarge the field of view, and extend the range of effect while reducing the weight and volume of the system. With the development of optical technology, there is a pressing need for large-aperture, high-precision aspheric surfaces. The original computer-controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has received much attention from researchers. To address the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward in which subsurface damage (SSD), full-aperture errors, and the full band of frequency errors are all controlled. A smaller SSD depth can be obtained by using a low-hardness tool and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended material-removal-function model. For full-band frequency error control, low-frequency errors are corrected with the optimized material removal function, while mid- to high-frequency errors are reduced by a uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror reached 0.055 waves rms (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.

  6. Path planning and parameter optimization of uniform removal in active feed polishing

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Wang, Shaozhi; Zhang, Chunlei; Zhang, Linghua; Chen, Huanan

    2015-06-01

    A high-quality ultrasmooth surface is demanded in short-wave optical systems. However, existing polishing methods have difficulty meeting this requirement on spherical or aspheric surfaces. As a new kind of small-tool polishing method, active feed polishing (AFP) can attain a surface roughness of less than 0.3 nm (RMS) on spherical elements, although AFP may magnify the residual figure error or mid-frequency error. The purpose of this work is to propose an effective algorithm to realize uniform removal of the surface during processing. First, the principle of AFP and the mechanism of the polishing machine are introduced. To maintain the processed figure error, a variable-pitch spiral path planning algorithm and a dwell-time-solving model are proposed. To suppress possible mid-frequency error, the uniformity of the synthesized tool path, generated by an arbitrary point at the bottom of the polishing tool, is analyzed and evaluated, and the ratio of the tool's spinning angular velocity to its revolution angular velocity is optimized. Finally, an experiment is conducted on a convex spherical surface and an ultrasmooth surface is acquired. In conclusion, a high-quality ultrasmooth surface can be obtained with little degradation of the figure and mid-frequency errors by the proposed algorithm.
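
    The paper's specific variable-pitch spiral and dwell-time model are not given in the abstract. As a loose illustration of spiral path planning for uniform removal, the sketch below generates a constant-pitch (Archimedean) spiral sampled at roughly constant arc-length steps; the pitch, step size, and radius are assumed values, and a variable-pitch version would additionally modulate the pitch with the dwell-time solution.

```python
# Illustrative sketch only: Archimedean spiral way-points with ~constant
# spacing along the path, a common baseline for polishing tool paths.
import numpy as np

def spiral_path(r_max, pitch, step):
    """Way-points (x, y) of r = a*theta with radial distance `pitch`
    between turns and approximately `step` spacing along the arc."""
    a = pitch / (2.0 * np.pi)
    theta, pts = 0.0, []
    while a * theta <= r_max:
        r = a * theta
        pts.append((r * np.cos(theta), r * np.sin(theta)))
        # arc-length element for r = a*theta is ds = a*sqrt(theta^2 + 1)*dtheta
        theta += step / (a * np.sqrt(theta**2 + 1.0))
    return np.array(pts)

path = spiral_path(r_max=50.0, pitch=2.0, step=0.5)   # units: mm (assumed)
```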

  7. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    Full-field heterodyne interferometry is becoming easier to apply by employing low-frequency heterodyne acousto-optic modulators instead of complex electro-mechanical scanning devices. The optical element surface can be acquired directly by synchronously detecting the signal phase at each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in the heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors. Frequency mixing error and beat frequency error are two unavoidable kinds of heterodyne frequency error. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and these errors is calculated. The tolerances on the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light are given. The phase extraction error of the Fourier analysis caused by beat frequency shifting is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window is used to correct the phase extraction of the heterodyne signal. Simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
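
    As a hedged illustration of the Fourier (single-frequency) phase extraction mentioned above, the sketch below demodulates each pixel's intensity time series at a known beat frequency with a Hanning taper. The frame rate, beat frequency, and frame count are assumed values; the paper's spectrum-correction algorithm is not reproduced here.

```python
# Illustrative sketch: per-pixel phase extraction at a known beat frequency
# by windowed complex demodulation (a single-frequency Fourier analysis).
import numpy as np

def pixel_phase(frames, frame_rate, beat_freq):
    """frames: (T, H, W) intensity stack. Returns (H, W) phase map (rad)."""
    t = np.arange(frames.shape[0]) / frame_rate
    w = np.hanning(frames.shape[0])                  # reduce spectral leakage
    ref = w * np.exp(-2j * np.pi * beat_freq * t)    # complex demodulator
    spec = np.tensordot(ref, frames, axes=([0], [0]))
    return np.angle(spec)

# Synthetic example: 256 frames at 1 kHz with a 25 Hz beat note
rng = np.random.default_rng(1)
true_phase = rng.uniform(-np.pi, np.pi, size=(64, 64))
t = np.arange(256) / 1000.0
stack = 1.0 + np.cos(2 * np.pi * 25.0 * t[:, None, None] + true_phase)
estimate = pixel_phase(stack, frame_rate=1000.0, beat_freq=25.0)  # ~ true_phase
```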

  8. Deterministic ion beam material adding technology for high-precision optical surfaces.

    PubMed

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as its limited capability for correcting mid-to-high spatial frequency surface errors and its low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve these problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize uniform convergence of surface errors, and the particle transfer effect generated in the IBA process can effectively correct mid-to-high spatial frequency errors. In addition, IBA can rapidly correct pit defects on the surface and greatly improve the machining efficiency of the figuring process. Verification experiments were carried out on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect was figured by IBA. Through two iterations within only 47.5 min, this highly steep pit was effectively corrected, and the surface error was improved from the original 24.69 nm root mean square (RMS) to a final 3.68 nm RMS. A further experiment was carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be improved simultaneously.

  9. Combined fabrication technique for high-precision aspheric optical windows

    NASA Astrophysics Data System (ADS)

    Hu, Hao; Song, Ci; Xie, Xuhui

    2016-07-01

    The specifications placed on optical components are becoming more and more stringent as the performance of modern optical systems improves. These strict requirements involve not only low spatial frequency surface accuracy and mid-and-high spatial frequency surface errors, but also surface smoothness. This presentation mainly focuses on a fabrication process for a square aspheric window that combines accurate grinding, magnetorheological finishing (MRF), and smoothing polishing (SP). To remove the low spatial frequency surface errors and subsurface defects left after accurate grinding, the deterministic polishing method MRF, with its high convergence and stable material removal rate, is applied. The SP technology with a pseudo-random path is then adopted to eliminate the mid-and-high spatial frequency surface ripples and high slope errors that are a weakness of MRF. Additionally, coordinate measurement and interferometry are combined in different phases of the process. An acid-etching method and ion beam figuring (IBF) are also investigated for observing and reducing subsurface defects. The actual fabrication result indicates that the combined fabrication technique leads to high machining efficiency in manufacturing high-precision, high-quality aspheric optical windows.

  10. Do We Really Need Sinusoidal Surface Temperatures to Apply Heat Tracing Techniques to Estimate Streambed Fluid Fluxes?

    NASA Astrophysics Data System (ADS)

    Luce, C. H.; Tonina, D.; Applebee, R.; DeWeese, T.

    2017-12-01

    Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes, thermal conductivity, or bed surface elevation from temperature time series in streambeds are that the solution assumes that 1) the surface boundary condition is a sine wave or nearly so, and 2) there is no gradient in mean temperature with depth. Concerns on these subjects are phrased in various ways, including non-stationarity in frequency, amplitude, or phase. Although the mathematical posing of the original solution to the problem might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we re-derive the inverse solution of the 1-D advection-diffusion equation starting with an arbitrary surface boundary condition for temperature. In doing so, we demonstrate the frequency-independence of the solution, meaning any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes, gradients in the mean temperature with depth, or `non-stationary' amplitude and frequency (or phase) do not actually represent violations of assumptions, and they should not cause errors in estimates when using one of the suite of existing solution methods derived based on a single frequency. Misattribution of errors to these issues constrains progress on solving real sources of error. Numerical and physical experiments are used to verify this conclusion and consider the utility of information at `non-standard' frequencies and multiple frequencies to augment the information derived from time series of temperature.
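
    The practical content of the argument above is that the frequency-domain quantities used by heat-tracing solutions, namely the amplitude ratio and phase (time) lag between two depths at one chosen frequency, can be extracted from an arbitrary, non-sinusoidal forcing. The sketch below shows one hedged way to do that by windowed demodulation; the sampling interval, frequency, and record names are assumptions, and it stops short of applying any particular published flux solution.

```python
# Illustrative sketch: amplitude ratio and time lag between two streambed
# temperature records at one chosen frequency (e.g., the diurnal frequency).
import numpy as np

def amp_and_phase(series, dt, freq):
    """Amplitude and phase of `series` at `freq` (cycles per unit of dt)."""
    x = np.asarray(series, float) - np.mean(series)
    t = np.arange(x.size) * dt
    w = np.hanning(x.size)
    c = np.sum(w * x * np.exp(-2j * np.pi * freq * t)) / np.sum(w)
    return 2.0 * np.abs(c), np.angle(c)

def ratio_and_lag(shallow, deep, dt, freq):
    a1, p1 = amp_and_phase(shallow, dt, freq)
    a2, p2 = amp_and_phase(deep, dt, freq)
    lag = (p1 - p2) / (2.0 * np.pi * freq)   # time by which the deep record lags
    return a2 / a1, lag                       # inputs to 1-D heat-tracing solutions
```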

  11. Partial null astigmatism-compensated interferometry for a concave freeform Zernike mirror

    NASA Astrophysics Data System (ADS)

    Dou, Yimeng; Yuan, Qun; Gao, Zhishan; Yin, Huimin; Chen, Lu; Yao, Yanxia; Cheng, Jinlong

    2018-06-01

    Partial null interferometry without using any null optics is proposed to measure a concave freeform Zernike mirror. Oblique incidence on the freeform mirror is used to compensate for astigmatism as the main component in its figure, and to constrain the divergence of the test beam as well. The phase demodulated from the partial nulled interferograms is divided into low-frequency phase and high-frequency phase by Zernike polynomial fitting. The low-frequency surface figure error of the freeform mirror represented by the coefficients of Zernike polynomials is reconstructed from the low-frequency phase, applying the reverse optimization reconstruction technology in the accurate model of the interferometric system. The high-frequency surface figure error of the freeform mirror is retrieved from the high-frequency phase adopting back propagating technology, according to the updated model in which the low-frequency surface figure error has been superimposed on the sag of the freeform mirror. Simulations verified that this method is capable of testing a wide variety of astigmatism-dominated freeform mirrors due to the high dynamic range. The experimental result using our proposed method for a concave freeform Zernike mirror is consistent with the null test result employing the computer-generated hologram.

  12. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  13. Evaluation and testing of image quality of the Space Solar Extreme Ultraviolet Telescope

    NASA Astrophysics Data System (ADS)

    Peng, Jilong; Yi, Zhong; Zhou, Shuhong; Yu, Qian; Hou, Yinlong; Wang, Shanshan

    2018-01-01

    For the space solar extreme ultraviolet telescope, the star point test cannot be performed in the 19.5 nm band because no sufficiently bright light source is available. In this paper, the point spread function (PSF) of the optical system is calculated to evaluate the imaging performance of the telescope. Combined with actual processing surface errors, such as those from small grinding-head processing and magnetorheological processing, the optical design software Zemax and the data analysis software Matlab are used to directly calculate the system PSF of the space solar extreme ultraviolet telescope. Matlab code is written to generate the required surface-error grid data. These surface-error data are loaded onto the specified surface of the telescope system using Dynamic Data Exchange (DDE), which connects Zemax and Matlab. Because different processing methods lead to surface errors of different size, distribution, and spatial frequency, their impact on image quality also differs. Therefore, the characteristics of the surface errors produced by different machining methods are studied. Combining a surface's position in the optical system with a simulation of its influence on image quality is of great significance for a reasonable choice of processing technology. Additionally, we analyze the relationship between the surface error and the image quality evaluation. To ensure that the final processed mirror meets the image quality requirements, one or several evaluation methods should be chosen according to the spatial frequency characteristics of the surface error.
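
    The original work generates its surface-error grids in Matlab and passes them to Zemax over DDE; the Python sketch below only illustrates the first part, synthesizing a band-limited random surface-error grid with a target RMS, the kind of grid data that would then be applied to a surface in the optical model. Grid size, band edges, and RMS are assumed values.

```python
# Illustrative sketch (the paper uses Matlab + Zemax DDE): a random surface
# error map whose content lies in a chosen spatial-frequency band.
import numpy as np

def band_limited_surface(n, dx, f_lo, f_hi, rms, seed=0):
    """n x n error map (units of `rms`), band-limited to [f_lo, f_hi] cycles
    per unit of dx."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)
    fr = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    spec = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    spec[(fr < f_lo) | (fr > f_hi)] = 0.0           # keep only the chosen band
    surf = np.fft.ifft2(spec).real
    surf -= surf.mean()
    return surf * (rms / surf.std())                # scale to the target RMS

# e.g. 256x256 map, 1 mm sampling, 0.05-0.5 cycles/mm band, 5 nm RMS
error_map = band_limited_surface(256, 1.0, 0.05, 0.5, 5.0)
```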

  14. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data form the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the planning and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed by examining error frequency and by applying the analysis-of-variance method from mathematical statistics. The accuracy of the measured data and the difficulty of measuring particular parts of the human body are determined, the causes of data errors are studied further, and the key points for minimizing errors are summarized. By analyzing the measured data on the basis of error frequency, the paper provides reference material for the development of the garment industry.

  15. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

    Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors, and quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect in correcting mid-spatial-frequency errors: some used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model is introduced and improved to describe and compare the smoothing efficiency for different spinning speeds and tools. From the experimental results, the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the spin-motion process, and the number of smoothing passes can be estimated by the model before processing. The method was also applied to smooth an aspherical component that had obvious mid-spatial-frequency errors after magnetorheological finishing. As a result, a high-precision aspheric optical component was obtained with PV=0.1λ and RMS=0.01λ.

  16. Extreme Universe Space Observatory (EUSO) Optics Module

    NASA Technical Reports Server (NTRS)

    Young, Roy; Christl, Mark

    2008-01-01

    A demonstration part will be manufactured in Japan on one of the large Toshiba machines with a diameter of 2.5 meters. This will be a flat PMMA disk that is cut between 0.5 and 1.25 meters radius. The cut should demonstrate manufacturing the most difficult parts of the 2.5 meter Fresnel pattern and the blazed grating on the diffractive surface. Optical simulations, validated with the subscale prototype, will be used to determine the limits on manufacturing errors (tolerances) that will result in optics that meet EUSO's requirements. There will be limits on surface roughness (or errors at high spatial frequency); radial and azimuthal slope errors (at lower spatial frequencies); and plunge cut depth errors in the blazed grating. The demonstration part will be measured to determine whether it was made within the allowable tolerances.

  17. Every photon counts: improving low, mid, and high-spatial frequency errors on astronomical optics and materials with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul

    2016-07-01

    Many astronomical sensing applications operate in low-light conditions, and for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss; by improving these two frequency regimes, higher-contrast images can be collected with improved efficiency. Classically, magnetorheological finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as the quilting/print-through errors left in light-weighted optics by conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors, and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5 to 2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS when careful metrology setups are used. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid, and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including recent examples on silicon carbide and RSA905 aluminum. MRF also has the ability to produce `perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. Furthermore, a novel MRF fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low-removal-rate fluid required for fine figure correction of mid-spatial frequency errors. This fluid is able to achieve <4 Å RMS roughness on nickel-plated aluminum and even <1.5 Å RMS roughness on silicon, fused silica, and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors while smoothing surface roughness 'for free' in one step. In this paper we discuss recent advancements in MRF technology, its ability to meet requirements for precision optics in the low, mid, and high spatial frequency regimes, and how improved MRF performance addresses the tight specifications required for astronomical optics.

  18. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  19. Analysis of the convergence rules of full-range PSD surface error of magnetorheological figuring KDP crystal.

    PubMed

    Chen, Shaoshan; He, Deyu; Wu, Yi; Chen, Huangfei; Zhang, Zaijing; Chen, Yunlei

    2016-10-01

    A new non-aqueous, abrasive-free magnetorheological finishing (MRF) method is adopted for processing potassium dihydrogen phosphate (KDP) crystal because of the crystal's low hardness, high brittleness, temperature sensitivity, and water solubility. This paper investigates the convergence rules of the surface error of an initially single-point diamond turning (SPDT)-finished KDP crystal after MRF polishing. Current SPDT processes include spiral cutting and fly cutting; the main difference between them lies in the morphology of the intermediate-frequency turning marks on the surface, which affects the convergence rules. The turning marks after spiral cutting are a series of concentric circles, while the turning marks after fly cutting are a series of parallel large arcs. The polishing results indicate that MRF polishing can only improve the low-frequency errors (L > 10 mm) of a spiral-cut KDP crystal, whereas it can improve the full-range surface errors (L > 0.01 mm) of a fly-cut KDP crystal provided the polishing process is applied no more than two times to a single surface. We conclude that a fly-cut KDP crystal will achieve better optical performance after MRF figuring than a spiral-cut KDP crystal with a similar initial surface.

  20. Diffraction analysis of sidelobe characteristics of optical elements with ripple error

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Luo, Yupeng; Bai, Jian; Zhou, Xiangdong; Du, Juan; Liu, Qun; Luo, Yujie

    2018-03-01

    Ripple errors on a lens can lead to optical damage in high-energy laser systems. Analysis of the sidelobe at the focal plane caused by ripple error provides a reference for evaluating the error and the imaging quality. In this paper, we analyze the diffraction characteristics of the sidelobe produced by optical elements with ripple errors. First, we analyze the characteristics of ripple error and establish the relationship between ripple error and sidelobe: the sidelobe results from diffraction by the ripple error, which tends to be periodic on the optical surface because of the fabrication method. Simulated experiments are carried out with the angular spectrum method, characterizing the ripple error as rotationally symmetric periodic structures. The influence of the two major ripple parameters, spatial frequency and peak-to-valley value, on the sidelobe is discussed. The results indicate that spatial frequency and peak-to-valley value both affect the sidelobe at the image plane: the peak-to-valley value is the major factor governing the energy fraction in the sidelobe, while the spatial frequency is the major factor governing the distribution of the sidelobe at the image plane.
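
    As a hedged illustration of the angular spectrum method named above, the sketch below propagates a beam carrying a weak periodic phase ripple; the far-field (focal-plane) pattern then shows sidelobes at an angular offset set by the ripple's spatial frequency. The wavelength, grid, ripple period, and amplitude are assumed values, and the ripple here is one-dimensional rather than the rotationally symmetric structure used in the paper.

```python
# Illustrative sketch: angular-spectrum propagation of a field with a weak
# periodic phase ripple (parameters are assumptions, not the paper's values).
import numpy as np

def angular_spectrum(field, dx, wavelength, z):
    """Propagate a sampled complex field a distance z (same length unit as dx)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * fxx)**2 - (wavelength * fyy)**2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)            # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, dx, wl = 512, 20e-6, 633e-9                      # 20 um sampling, 633 nm
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
aperture = (np.hypot(X, Y) < 4e-3).astype(float)    # 8 mm diameter beam
ripple_phase = 0.3 * np.cos(2 * np.pi * X / 2e-3)   # weak ripple, 2 mm period (rad)
field = aperture * np.exp(1j * ripple_phase)

out_near = angular_spectrum(field, dx, wl, z=0.1)   # field after 0.1 m of free space
focal = np.fft.fftshift(np.fft.fft2(field))         # far-field/focal-plane pattern:
                                                    # sidelobes appear at +/- wl/period
```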

  1. Initial Steps Toward Next-Generation, Waveform-Based, Three-Dimensional Models and Metrics to Improve Nuclear Explosion Monitoring in the Middle East

    DTIC Science & Technology

    2008-09-30

    propagation effects by splitting apart the longer period surface waves from the shorter period, depth-sensitive Pnl waves. Problematic, or high-error, stations and paths were further analyzed to identify systematic errors with unknown sensor responses and...frequency Pnl components and slower, longer period surface waves. All cut windows are fit simultaneously, allowing equal weighting of phases that may be

  2. Computer Controlled Optical Surfacing With Orbital Tool Motion

    NASA Astrophysics Data System (ADS)

    Jones, Robert A.

    1985-10-01

    Asymmetric aspheric optical surfaces are very difficult to fabricate using classical techniques and laps the same size as the workpiece. Opticians can produce such surfaces by grinding and polishing with small laps driven in an orbital tool motion. However, hand correction is a time-consuming process unsuitable for large optical elements. Itek has developed Computer Controlled Optical Surfacing (CCOS) for fabricating such aspheric optics. Automated equipment moves a nonrotating, orbiting tool slowly over the workpiece surface. The process corrects low frequency surface errors by figuring: the velocity of the tool assembly over the workpiece surface is purposely varied, and since the amount of material removal is proportional to the polishing or grinding time, accurate control over material removal is achieved. The removal of middle and high frequency surface errors is accomplished by pad smoothing. A soft pad compresses to fit the workpiece surface, producing greater pressure and more removal at the high areas of the surface, while a harder pad rides only on the high regions, resulting in removal only at those locations.
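
    The figuring step above relies on removal being proportional to dwell time, which is Preston's relation in its simplest form: predicted removal is the convolution of the tool influence function with the dwell-time map. The sketch below is a minimal illustration of that bookkeeping; the influence function shape, grid, and values are assumptions, not Itek process data.

```python
# Illustrative sketch: removal = (tool influence function) convolved with the
# dwell-time map, computed as a 2-D convolution.
import numpy as np
from scipy.signal import fftconvolve

def predicted_removal(dwell_time, influence):
    """dwell_time: seconds per grid cell; influence: removal depth per second
    produced by the tool centred on a cell. Returns removal depth per cell."""
    return fftconvolve(dwell_time, influence, mode="same")

y, x = np.mgrid[-15:16, -15:16]
tif = 1e-3 * np.exp(-(x**2 + y**2) / (2 * 6.0**2))   # Gaussian-like influence function
dwell = np.ones((200, 200))      # uniform dwell; a figuring run would vary this map
removal = predicted_removal(dwell, tif)
```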

  3. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce the aliasing and interpolation errors produced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. This new method also automatically eliminates measurement noise and other measurement errors such as artificial discontinuities. The development cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specifications on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristics of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains the mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
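
    The description above separates the map into an analytically fitted low-order part (Zernike coefficients) and a statistically modeled mid/high-frequency part (a PSD model). The sketch below is a heavily simplified, hedged imitation of that idea: a small Zernike-like polynomial basis carries the low orders to the new grid, and the residual is re-synthesized only from its RMS, standing in for the PSD model of the original software. The basis, grid sizes, and names are illustrative.

```python
# Simplified illustration only (not the NASA software): low orders carried by a
# polynomial fit, residual re-synthesized statistically on the new grid.
import numpy as np

def low_order_basis(x, y):
    r2 = x**2 + y**2
    # piston, tilts, defocus, astigmatisms: a small Zernike-like set
    return np.stack([np.ones_like(x), x, y, 2*r2 - 1, x**2 - y**2, 2*x*y], axis=-1)

def resample_map(surface, n_out, seed=0):
    n_in = surface.shape[0]
    xi = np.linspace(-1.0, 1.0, n_in)
    Xi, Yi = np.meshgrid(xi, xi, indexing="ij")
    A = low_order_basis(Xi, Yi).reshape(-1, 6)
    coeff, *_ = np.linalg.lstsq(A, surface.ravel(), rcond=None)
    residual_rms = np.std(surface.ravel() - A @ coeff)

    xo = np.linspace(-1.0, 1.0, n_out)
    Xo, Yo = np.meshgrid(xo, xo, indexing="ij")
    low = low_order_basis(Xo, Yo) @ coeff
    rng = np.random.default_rng(seed)
    high = rng.normal(scale=residual_rms, size=(n_out, n_out))  # PSD-model stand-in
    return low + high
```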

  4. Non-Destructive Evaluation of Depth of Surface Cracks Using Ultrasonic Frequency Analysis

    PubMed Central

    Her, Shiuh-Chuan; Lin, Sheng-Tung

    2014-01-01

    Ultrasonic testing is one of the most common non-destructive evaluation methods for crack detection and characterization. The effectiveness of an acoustic-ultrasound structural health monitoring (SHM) technique for determining the depth of surface cracks is presented. A method for ultrasonic sizing of surface cracks that combines time-domain and frequency-spectrum information was adopted, with the ultrasonic frequency spectrum obtained by the Fourier transform. A series of test specimens with surface crack depths ranging from 1 mm to 8 mm was fabricated, and the depth of each surface crack was evaluated using the pulse-echo technique. In this work, three longitudinal waves with frequencies of 2.25 MHz, 5 MHz, and 10 MHz were employed to investigate the effect of frequency on the sizing of surface cracks. Reasonable accuracy was achieved, with measurement errors of less than 7%. PMID:25225875

  5. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
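
    A minimal, hedged illustration of the "log after averaging" point is given below: with shot-to-shot surface return variation and detection noise, averaging the on- and off-line returns before taking the logarithm remains well defined and nearly unbiased, whereas per-shot logarithms fail or bias the estimate when individual returns dip toward zero. The signal model, the exaggerated noise level, and the omission of constant factors in the DAOD definition are all assumptions for illustration.

```python
# Illustrative, synthetic comparison (not the paper's data or exact DAOD
# definition; constant factors omitted).
import numpy as np

rng = np.random.default_rng(2)
n_shots, true_daod, noise = 2000, 1.0, 0.3
refl = rng.lognormal(mean=0.0, sigma=0.5, size=n_shots)        # surface return variation
p_off = refl + rng.normal(scale=noise, size=n_shots)           # off-line returns
p_on = refl * np.exp(-true_daod) + rng.normal(scale=noise, size=n_shots)

log_after_avg = np.log(p_off.mean() / p_on.mean())             # ~ true_daod

with np.errstate(invalid="ignore", divide="ignore"):
    per_shot = np.log(p_off / p_on)                            # nan/inf for bad shots
avg_of_logs = np.mean(per_shot[np.isfinite(per_shot)])         # biased by shot selection/noise
```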

  6. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.

  7. Regional climate modeling over the Maritime Continent: Assessment of RegCM3-BATS1e and RegCM3-IBIS

    NASA Astrophysics Data System (ADS)

    Gianotti, R. L.; Zhang, D.; Eltahir, E. A.

    2010-12-01

    Despite its importance to global rainfall and circulation processes, the Maritime Continent remains a region that is poorly simulated by climate models. Relatively few studies have been undertaken using a model with fine enough resolution to capture the small-scale spatial heterogeneity of this region and associated land-atmosphere interactions. These studies have shown that even regional climate models (RCMs) struggle to reproduce the climate of this region, particularly the diurnal cycle of rainfall. This study builds on previous work by undertaking a more thorough evaluation of RCM performance in simulating the timing and intensity of rainfall over the Maritime Continent, with identification of major sources of error. An assessment was conducted of the Regional Climate Model Version 3 (RegCM3) used in a coupled system with two land surface schemes: Biosphere Atmosphere Transfer System Version 1e (BATS1e) and Integrated Biosphere Simulator (IBIS). The model’s performance in simulating precipitation was evaluated against the 3-hourly TRMM 3B42 product, with some validation provided of this TRMM product against ground station meteorological data. It is found that the model suffers from three major errors in the rainfall histogram: underestimation of the frequency of dry periods, overestimation of the frequency of low intensity rainfall, and underestimation of the frequency of high intensity rainfall. Additionally, the model shows error in the timing of the diurnal rainfall peak, particularly over land surfaces. These four errors were largely insensitive to the choice of boundary conditions, convective parameterization scheme or land surface scheme. The presence of a wet or dry bias in the simulated volumes of rainfall was, however, dependent on the choice of convection scheme and boundary conditions. This study also showed that the coupled model system has significant error in overestimation of latent heat flux and evapotranspiration from the land surface, and specifically overestimation of interception loss with concurrent underestimation of transpiration, irrespective of the land surface scheme used. Discussion of the origin of these errors is provided, with some suggestions for improvement.

  8. Active stabilization of error field penetration via control field and bifurcation of its stable frequency range

    NASA Astrophysics Data System (ADS)

    Inoue, S.; Shiraishi, J.; Takechi, M.; Matsunaga, G.; Isayama, A.; Hayashi, N.; Ide, S.

    2017-11-01

    The active stabilization effect of a rotating control field against error field penetration is studied numerically. We have developed a resistive magnetohydrodynamic code, ‘AEOLUS-IT’, which can simulate plasma responses to rotating/static external magnetic fields. Adopting a non-uniform flux coordinate system, the AEOLUS-IT simulation can employ the high magnetic Reynolds number conditions relevant to present tokamaks. With AEOLUS-IT, we successfully clarified the stabilization mechanism of the control field against error field penetration. The physical process of driving plasma rotation via the control field is demonstrated by the nonlinear simulation, which reveals that the rotation amplitude at a resonant surface is not a monotonic function of the control field frequency but has an extremum. Consequently, two ‘bifurcated’ frequency ranges of the control field are found for the stabilization of error field penetration.

  9. Improving the surface metrology accuracy of optical profilers by using multiple measurements

    NASA Astrophysics Data System (ADS)

    Xu, Xudong; Huang, Qiushi; Shen, Zhengxiang; Wang, Zhanshan

    2016-10-01

    The performance of high-resolution optical systems is affected by small-angle scattering from mid-spatial-frequency irregularities of the optical surface, so characterizing these irregularities is important. However, surface measurements obtained with optical profilers are influenced by additive white noise, as indicated by the heavy-tail effect observable in their power spectral density (PSD). A multiple-measurement method is used to reduce the effects of white noise by averaging individual measurements. The intensity of the white noise is determined using a model based on the theoretical PSD of fractal surface measurements with additive white noise, and it decreases as the number of averaged measurements increases. Using multiple measurements also increases the highest observed spatial frequency; this increase is derived and calculated. Additionally, the accuracy obtained using multiple measurements is carefully studied, with analysis of both the residual reference error after calibration and the random errors appearing in the range of measured spatial frequencies. The resulting insights into the effects of white noise in optical profiler measurements, and the methods to mitigate them, may prove invaluable for improving the quality of surface metrology with optical profilers.
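
    The averaging effect described above can be illustrated with a minimal synthetic sketch: repeated maps of the same surface share the surface signal but carry independent white noise, so averaging N of them lowers the white-noise floor of the measured PSD by roughly a factor of N. The surface model, noise level, and N below are assumptions.

```python
# Synthetic illustration: averaging N repeated profiles lowers the additive
# white-noise floor of the PSD by ~1/N while leaving the surface unchanged.
import numpy as np

def one_sided_psd(profile, dx):
    h = profile - profile.mean()
    H = np.fft.rfft(h)
    return np.fft.rfftfreq(h.size, d=dx), (dx / h.size) * np.abs(H)**2

rng = np.random.default_rng(3)
n, dx, n_avg = 1024, 1e-3, 16
surface = np.cumsum(rng.normal(size=n)) * 0.05        # smooth synthetic profile
scans = [surface + rng.normal(scale=0.5, size=n) for _ in range(n_avg)]

f, psd_single = one_sided_psd(scans[0], dx)
_, psd_avg = one_sided_psd(np.mean(scans, axis=0), dx)
# At high spatial frequencies psd_avg sits roughly n_avg times below psd_single.
```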

  10. Correction of electrode modelling errors in multi-frequency EIT imaging.

    PubMed

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.

  11. Further investigations on fixed abrasive diamond pellets used for diminishing mid-spatial frequency errors of optical mirrors.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-01-20

    As a further investigation of the application of fixed abrasive diamond pellets (FADPs), this work demonstrates their potential for diminishing mid-spatial frequency errors (MSFEs, i.e., periodic small-scale structure) on optical surfaces. Benefiting from its high surface rigidity, the FADP tool has a natural smoothing effect on periodic small-scale errors. Compared with the previous design, the proposed new tool conforms better to aspherical surfaces because the pellets are mutually separated and bonded to a steel plate with an elastic backing of silica rubber adhesive. Moreover, a unicursal Peano-like path is presented for improving MSFEs, which enhances the multidirectionality and uniformity of the tool's motion. Experiments were conducted to validate the effectiveness of FADPs for diminishing MSFEs. In the lapping of a Φ=420 mm Zerodur paraboloid workpiece, the grinding ripples were quickly diminished (in 210 min), as shown by visual inspection, profile metrology, and power spectral density (PSD) analysis; the RMS was reduced from 4.35 to 0.55 μm. In the smoothing of a Φ=101 mm fused silica workpiece, the MSFEs were clearly improved, as seen in the surface form maps, interferometric fringe patterns, and PSD analysis; the mid-spatial frequency RMS was diminished from 0.017λ to 0.014λ (λ=632.8 nm).

  12. Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2012-01-01

    Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  13. Advanced Microwave Radiometer (AMR) for SWOT mission

    NASA Astrophysics Data System (ADS)

    Chae, C. S.

    2015-12-01

    The objective of the SWOT (Surface Water & Ocean Topography) satellite mission is to measure wide-swath, high-resolution ocean topography and terrestrial surface waters. Since the main payload radar will use interferometric SAR technology, a conventional microwave radiometer system with a single nadir-looking antenna beam (e.g., the OSTM/Jason-2 AMR) is not ideally suited to the mission's wet-tropospheric delay correction. The SWOT AMR therefore incorporates two antenna beams in the cross-track direction. In addition to the cross-track design of the radiometer, the wet-tropospheric error requirement is expressed in the spatial frequency domain (in cycles/km), in other words as a power spectral density (PSD). Thus, instrument error allocation and design are carried out in terms of PSD, which is not the conventional approach to microwave radiometer requirement allocation and design. Novel analyses include: (1) the effect of antenna beam size on PSD error and land/ocean contamination; (2) receiver error allocation and the contributions of radiometric count averaging, NEDT, gain variation, etc.; and (3) the effect of the thermal design in the frequency domain. In the presentation, the detailed AMR design and analysis results will be discussed.

  14. Further evaluation of the constrained least squares electromagnetic compensation method

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1991-01-01

    Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers means for mitigation of surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15 meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.

  15. Assessment of radar altimetry correction slopes for marine gravity recovery: A case study of Jason-1 GM data

    NASA Astrophysics Data System (ADS)

    Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu

    2018-04-01

    Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slopes of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in current sea surface slope models. (1) The small, and thus negligible, corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide, and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction, and sea state bias. The radiometer measurements are preferred over the model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias are better not added directly to the range observations when deriving sea surface slopes, since their inherent errors may produce abnormal slopes; along-track smoothing with a uniform weight over a suitable window is an effective strategy for avoiding the introduction of extra noise. The slopes calculated from the radiometer wet tropospheric corrections and from the along-track-smoothed dual-frequency ionospheric corrections and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on the derived sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes mostly occur over marginal and island-surrounding seas, and additional tidal models with better precision or with an extending process (e.g., Got-e) are strongly recommended for updating the corrections in the geophysical data records.

  16. Method and apparatus for non-destructive evaluation of composite materials with cloth surface impressions

    NASA Technical Reports Server (NTRS)

    Madras, Eric I. (Inventor)

    1995-01-01

    A method and related apparatus are presented for nondestructive evaluation of composite materials by determination of the quantity known as integrated polar backscatter. The method avoids errors caused by the surface texture left by cloth impressions by identifying the frequency ranges associated with peaks in the power spectrum of the backscattered signal and removing those frequency ranges from the calculation of integrated polar backscatter for all scan sites on the composite material.

  17. Full-band error control and crack-free surface fabrication techniques for ultra-precision fly cutting of large-aperture KDP crystals

    NASA Astrophysics Data System (ADS)

    Zhang, F. H.; Wang, S. F.; An, C. H.; Wang, J.; Xu, Q.

    2017-06-01

    Large-aperture potassium dihydrogen phosphate (KDP) crystals are widely used in the laser path of inertial confinement fusion (ICF) systems. The most common method of manufacturing half-meter KDP crystals is ultra-precision fly cutting. When processing KDP crystals by ultra-precision fly cutting, the dynamic characteristics of the fly cutting machine and fluctuations in the fly cutting environment are translated into surface errors at different spatial frequency bands. These machining errors should be suppressed effectively to guarantee that KDP crystals meet the full-band machining accuracy specified in the evaluation index. In this study, the anisotropic machinability of KDP crystals and the causes of typical surface errors in ultra-precision fly cutting of the material are investigated. The structures of the fly cutting machine and existing processing parameters are optimized to improve the machined surface quality. The findings are theoretically and practically important in the development of high-energy laser systems in China.

  18. Combined fabrication process for high-precision aspheric surface based on smoothing polishing and magnetorheological finishing

    NASA Astrophysics Data System (ADS)

    Nie, Xuqing; Li, Shengyi; Song, Ci; Hu, Hao

    2014-08-01

    Because its curvature differs everywhere, an aspheric surface is hard to finish to high accuracy with the traditional polishing process; controlling the mid-spatial frequency errors (MSFR), in particular, is especially difficult. In this paper, a combined fabrication process based on smoothing polishing (SP) and magnetorheological finishing (MRF) is proposed. The pressure distributions of a rigid polishing lap and a semi-flexible polishing lap are calculated, and their shape-preserving capacity and smoothing effect are compared. The feasibility of smoothing an aspheric surface with the semi-flexible polishing lap is verified, and the key technologies in the SP process are discussed. A K4 parabolic surface with a diameter of 500 mm is then fabricated with the combined process: a Φ150 mm semi-flexible lap is used in the SP step to control the MSFR, and the deterministic MRF process is applied to figure the surface error. The root mean square (RMS) error of the aspheric surface converges from 0.083λ (λ=632.8 nm) to 0.008λ. The power spectral density (PSD) result shows that the MSFR are well restrained while the surface error shows strong convergence.

  19. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.

    PubMed

    Li, Jielin; Hassebrook, Laurence G; Guan, Chun

    2003-01-01

    Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
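
    The failure mode described above, where temporal noise triggers ambiguous unwrapping if the second frequency is too high, can be made concrete with a small sketch of the standard two-frequency unwrapping step: the coarse phase fixes the fringe order of the fine phase, and an order error occurs when the scaled noise on the coarse phase exceeds roughly pi. The frequency ratio and noise level below are illustrative assumptions, not the paper's optimized values.

```python
# Illustrative two-frequency unwrapping step (parameters are assumptions).
import numpy as np

def unwrap_two_freq(phi_low, phi_high, ratio):
    """phi_low: nonambiguous phase from the low-frequency patterns.
    phi_high: wrapped phase from patterns `ratio` times finer.
    Returns the unwrapped high-frequency phase."""
    order = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * order   # a wrong `order` is a one-fringe depth error

# Example: noisy measurements of a phase ramp at both frequencies
rng = np.random.default_rng(4)
true_phase = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
ratio, sigma = 16, 0.02
phi_low = true_phase + rng.normal(scale=sigma, size=true_phase.size)
phi_high = np.mod(ratio * true_phase + rng.normal(scale=sigma, size=true_phase.size),
                  2 * np.pi)
unwrapped = unwrap_two_freq(phi_low, phi_high, ratio)   # ~ ratio * true_phase
```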

  20. Fabrication and testing of Wolter type-I mirrors for soft x-ray microscopes

    NASA Astrophysics Data System (ADS)

    Hoshino, Masato; Aoki, Sadao; Watanabe, Norio; Hirai, Shinichiro

    2004-10-01

    The development of a small Wolter type-I mirror, used mainly as an objective for X-ray microscopes, is described. Small Wolter mirrors for X-ray microscopes are fabricated by the vacuum replication method because of their long aspherical shape. The master mandrel is ground and polished on an ultra-precision NC lathe. Tungsten carbide was selected as the mandrel material because its thermal expansion coefficient is slightly larger than that of the replica glass. It was ground by the ELID (electrolytic in-process dressing) grinding technique, which is well suited to efficient mirror-surface grinding. After ultra-precision grinding, the figure error of the master mandrel was better than 0.5 μm except at the boundary between the hyperboloid and the ellipsoid. Before vacuum replication, the mandrel was coated with Au (thickness 50 nm) as the parting layer. Pyrex glass was empirically selected as the mirror material. The master mandrel was inserted into the Pyrex glass tube and heated to 675°C in an electric furnace. Although vacuum replication is a suitable technique because of its high replication accuracy, the high-spatial-frequency surface roughness of the mandrel was replicated less accurately than the low-spatial-frequency figure error. This indicates that the surface roughness depends on the glass surface, whereas the figure error depends on the figure error of the master mandrel. A fabricated mirror was evaluated by its imaging performance with a laser plasma X-ray source (λ=3.2 nm).

  1. Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2013-01-01

    Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both sites throughout the study period, the differences are largely attributed to systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1% to 4% (3-12 K) over desert and from 1% to 7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors of up to 10-17 K under the most severe conditions.
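
    The bracketed kelvin values above follow from simple brightness-temperature arithmetic: for an emissivity difference Δε and a surface temperature near 300 K, the brightness-temperature difference is roughly Δε·Ts. A minimal illustration (the 300 K surface temperature is an assumed round number, not a value taken from the record):

```python
# Rough mapping from emissivity differences to brightness-temperature
# differences, delta_TB ~ delta_eps * T_s (atmospheric and reflected-sky
# contributions neglected). T_s = 300 K is an illustrative value only.
t_surface = 300.0  # K
for delta_eps in (0.01, 0.04, 0.07):
    print(f"{delta_eps:.0%} emissivity difference ~ {delta_eps * t_surface:.0f} K")
```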

  2. Active Optics: stress polishing of toric mirrors for the VLT SPHERE adaptive optics system.

    PubMed

    Hugot, Emmanuel; Ferrari, Marc; El Hadi, Kacem; Vola, Pascal; Gimenez, Jean Luc; Lemaitre, Gérard R; Rabou, Patrick; Dohlen, Kjetil; Puget, Pascal; Beuzit, Jean Luc; Hubin, Norbert

    2009-05-20

    The manufacturing of toric mirrors for the Very Large Telescope-Spectro-Polarimetric High-Contrast Exoplanet Research instrument (SPHERE) is based on active optics and stress polishing. This figuring technique minimizes mid- and high-spatial-frequency errors on an aspherical surface by using spherical polishing with full-size tools. In order to reach the tight precision required, the manufacturing error budget is described so that each parameter can be optimized. Analytical calculations based on elasticity theory and finite element analysis lead to the mechanical design of the Zerodur blank to be warped during the stress-polishing phase. Results on the larger (366 mm diameter) toric mirror are evaluated by interferometry. We obtain, as expected, a toric surface within specification in the low-, mid-, and high-spatial-frequency ranges.

  3. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    PubMed

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated, and the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method combined with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that a numerical experiment using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors is also successful. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitude and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective for such modelling problems.

  4. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    PubMed Central

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-01-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated, and the precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method combined with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that a numerical experiment using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors is also successful. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitude and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective for such modelling problems. PMID:28587086

  5. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    NASA Astrophysics Data System (ADS)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated, and the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error-minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method combined with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that a numerical experiment using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors is also successful. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitude and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective for such modelling problems.
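
    As a rough sense of the regularization component described in the three preceding records, the following is a minimal Tikhonov-regularized least-squares sketch for an ill-posed downward-continuation problem. The operator, the flight-level data vector, and the regularization parameter are hypothetical placeholders; the semi-parametric treatment of systematic errors in the actual papers is not reproduced.

```python
import numpy as np

def downward_continue_tikhonov(upward_operator, g_flight, reg_lambda):
    """Tikhonov-regularized solution of A @ g_surface ~= g_flight.

    upward_operator: discretized upward-continuation matrix A (hypothetical);
    g_flight: gravity disturbances observed at flight altitude;
    reg_lambda: regularization parameter damping the random-error amplification.
    """
    a = np.asarray(upward_operator, dtype=float)
    normal = a.T @ a + reg_lambda * np.eye(a.shape[1])
    return np.linalg.solve(normal, a.T @ np.asarray(g_flight, dtype=float))
```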

  6. The effects of sampling frequency on the climate statistics of the European Centre for Medium-Range Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Phillips, Thomas J.; Gates, W. Lawrence; Arpe, Klaus

    1992-12-01

    The effects of sampling frequency on the first- and second-moment statistics of selected European Centre for Medium-Range Weather Forecasts (ECMWF) model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and runoff, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature, and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over oceans. An appropriate sampling frequency for each model variable is obtained by comparing the estimates of first- and second-moment statistics determined at intervals ranging from 2 to 24 hours with the "best" estimates obtained from hourly sampling. Relatively accurate estimation of first- and second-moment climate statistics (10% errors in means, 20% errors in variances) can be achieved by sampling a model variable at intervals that usually are longer than the bandwidth of its time series but that often are shorter than its characteristic time scale. For the surface variables, sampling at intervals that are nonintegral divisors of a 24-hour day yields relatively more accurate time-mean statistics because of a reduction in errors associated with aliasing of the diurnal cycle and higher-frequency harmonics. The superior estimates of first-moment statistics are accompanied by inferior estimates of the variance of the daily means due to the presence of systematic biases, but these probably can be avoided by defining a different measure of low-frequency variability. Estimates of the intradiurnal variance of accumulated precipitation and surface runoff also are strongly impacted by the length of the storage interval. In light of these results, several alternative strategies for storage of the ECMWF model variables are recommended.
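
    The point about nonintegral divisors of the day can be illustrated with a toy time series: a harmonic whose period divides the sampling interval evenly is always sampled at the same phase and biases the time mean, whereas sampling at a nonintegral divisor of 24 h lets that phase drift so the bias largely averages out. A hedged sketch with made-up amplitudes, not ECMWF output:

```python
import numpy as np

def toy_signal(t_hours):
    # Diurnal cycle plus a 6-hourly harmonic with an arbitrary phase offset.
    return (1.0 + 0.5 * np.sin(2 * np.pi * t_hours / 24.0)
                + 0.2 * np.sin(2 * np.pi * t_hours / 6.0 + 1.0))

days = 30
true_mean = toy_signal(np.arange(0.0, days * 24.0, 0.1)).mean()

# 6-h sampling hits the 6-h harmonic at a fixed phase (aliased bias);
# 7-h sampling is a nonintegral divisor of 24 h, so the phase drifts.
for interval in (6.0, 7.0):
    samples = toy_signal(np.arange(0.0, days * 24.0, interval))
    print(f"{interval:4.1f} h sampling: mean error = {samples.mean() - true_mean:+.4f}")
```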

  7. SU-G-JeP3-02: Comparison of Magnitude and Frequency of Patient Positioning Errors in Breast Irradiation Using AlignRT 3D Optical Surface Imaging and Skin Mark Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, R; Chisela, W; Dorbu, G

    2016-06-15

    Purpose: To evaluate clinical usefulness of AlignRT (Vision RT Ltd., London, UK) in reducing patient positioning errors in breast irradiation. Methods: 60 patients undergoing whole breast irradiation were selected for this study. Patients were treated to the left or right breast lying on Qfix Access breast board (Qfix, Avondale, PA) in supine position for 28 fractions using tangential fields. 30 patients were aligned using AlignRT by aligning a breast surface region of interest (ROI) to the same area from a reference surface image extracted from planning CT. When the patient’s surface image deviated from the reference by more than 3 mm on one or more translational and rotational directions, a new reference was acquired using AlignRT in-room cameras. The other 30 patients were aligned to the skin marks with room lasers. On-Board MV portal images of medial field were taken daily and matched to the DRRs. The magnitude and frequency of positioning errors were determined from measured translational shifts. Kolmogorov-Smirnov test was used to evaluate statistical differences of positional accuracy and precision between AlignRT and non-AlignRT patients. Results: The percentage of port images with no shift required was 46.5% and 27.0% in vertical, 49.8% and 25.8% in longitudinal, 47.6% and 28.5% in lateral for AlignRT and non-AlignRT patients, respectively. The percentage of port images requiring more than 3 mm shifts was 18.1% and 35.1% in vertical, 28.6% and 50.8% in longitudinal, 11.3% and 24.2% in lateral for AlignRT and non-AlignRT patients, respectively. Kolmogorov-Smirnov test showed that there were significant differences between the frequency distributions of AlignRT and non-AlignRT in vertical, longitudinal, and lateral shifts. Conclusion: As confirmed by port images, AlignRT-assisted patient positioning can significantly reduce the frequency and magnitude of patient setup errors in breast irradiation compared to the use of lasers and skin marks.
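
    For readers unfamiliar with the statistical comparison used above, a two-sample Kolmogorov-Smirnov test on the two cohorts' shift distributions can be run as in the sketch below; the shift values are randomly generated stand-ins, not data from the record.

```python
import numpy as np
from scipy import stats

# Stand-in daily couch-shift magnitudes (mm) for the two cohorts; the spreads
# are arbitrary and only mimic "AlignRT shifts smaller than skin-mark shifts".
rng = np.random.default_rng(0)
shifts_alignrt = np.abs(rng.normal(0.0, 2.0, size=30 * 28))
shifts_skinmark = np.abs(rng.normal(0.0, 4.0, size=30 * 28))

# Two-sample Kolmogorov-Smirnov test on the empirical shift distributions.
statistic, p_value = stats.ks_2samp(shifts_alignrt, shifts_skinmark)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.2e}")
```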

  8. Was That Assumption Necessary? Reconsidering Boundary Conditions for Analytical Solutions to Estimate Streambed Fluxes

    NASA Astrophysics Data System (ADS)

    Luce, Charles H.; Tonina, Daniele; Applebee, Ralph; DeWeese, Timothy

    2017-11-01

    Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes and thermal conductivity from temperature time series in streambeds are that the solution assumes that (1) the surface boundary condition is a sine wave or nearly so, and (2) there is no gradient in mean temperature with depth. Although the mathematical posing of the problem in the original solution to the problem might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we develop a mathematical proof demonstrating the equivalence of the solution as developed based on an arbitrary (Fourier integral) surface temperature forcing when evaluated at a single given frequency versus that derived considering a single frequency from the beginning. The implication is that any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes or gradients in the mean temperature with depth are not actually assumptions, and deviations from them should not cause errors in estimates. Given this clarification, we further explore the potential for using information at multiple frequencies to augment the information derived from time series of temperature.
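
    The practical implication of the record above is that the amplitude and phase at any single frequency of an arbitrary streambed temperature forcing can feed the standard frequency-domain flux equations. A minimal sketch of extracting that single-frequency component from an evenly sampled temperature series (it assumes the record spans an integer number of periods; the flux and diffusivity formulas themselves are not shown):

```python
import numpy as np

def amp_phase_at_frequency(temps, dt_hours, period_hours=24.0):
    """Amplitude and phase of one Fourier component of a temperature series.

    temps: evenly sampled temperatures; dt_hours: sample spacing in hours.
    Any single frequency (the diurnal one by default) can be used, even when
    the forcing is asymmetric or carries a mean gradient with depth.
    """
    t = np.arange(len(temps)) * dt_hours
    w = 2.0 * np.pi / period_hours
    # Least-squares projection onto cos/sin at the chosen frequency
    # (exact when the record covers an integer number of periods).
    a = 2.0 / len(temps) * np.sum(temps * np.cos(w * t))
    b = 2.0 / len(temps) * np.sum(temps * np.sin(w * t))
    return np.hypot(a, b), np.arctan2(b, a)

# The amplitude ratio and phase lag between a shallow and a deep sensor would
# then enter the usual frequency-domain thermal-diffusivity and flux estimates.
```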

  9. Optimizing a remote sensing instrument to measure atmospheric surface pressure

    NASA Technical Reports Server (NTRS)

    Peckham, G. E.; Gatley, C.; Flower, D. A.

    1983-01-01

    Atmospheric surface pressure can be remotely sensed from a satellite by an active instrument which measures return echoes from the ocean at frequencies near the 60 GHz oxygen absorption band. The instrument is optimized by selecting its frequencies of operation, transmitter powers, and antenna size through a new procedure based on numerical simulation which maximizes the retrieval accuracy. The predicted standard deviation error in the retrieved surface pressure is 1 mb. In addition, the measurements can be used to retrieve water vapor, cloud liquid water, and sea state, which is related to wind speed.

  10. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    NASA Astrophysics Data System (ADS)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process, which is used to generate the 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW Gray code to reduce the indirect illumination effects of the specular surface. We also analyze the errors when comparing the maximum min-SW Gray code and the conventional Gray code, which confirms that the maximum min-SW Gray code is significantly better at reducing indirect illumination effects. To achieve sub-pixel accuracy, we additionally project high-frequency sinusoidal patterns onto the scene. For specular surfaces, however, the high-frequency patterns are susceptible to decoding errors, and incorrect decoding of high-frequency patterns results in a loss of depth resolution. Our method to resolve this problem combines the low-frequency maximum min-SW Gray code and the high-frequency phase-shifting code, which achieves dense 3D reconstruction of specular surfaces. Our contributions include: (i) a complete setup of the structured-light-based 3D scanning system; (ii) a novel combination of the maximum min-SW Gray code and phase-shifting code, in which phase-shifting decoding first provides sub-pixel accuracy and the maximum min-SW Gray code then resolves the phase ambiguity. According to the experimental results and data analysis, our structured-light-based 3D scanning system enables high-quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to demonstrate the advantages of our new combined coding method.
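
    A hedged sketch of the general Gray-code-plus-phase-shifting combination described above: generic N-step phase-shifting decoding followed by fringe-order unwrapping. The maximum min-SW code construction and the boundary handling between adjacent code bands are not reproduced here, and the array names are assumptions.

```python
import numpy as np

def decode_phase_shift(images):
    """Wrapped phase from N equally shifted sinusoidal patterns.

    images: array of shape (N, H, W), intensities of patterns
    I_n = A + B*cos(phi + 2*pi*n/N), N >= 3. Returns phase in [-pi, pi).
    """
    n = images.shape[0]
    deltas = 2.0 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(deltas), images, axes=(0, 0))
    den = np.tensordot(np.cos(deltas), images, axes=(0, 0))
    return -np.arctan2(num, den)

def combine_gray_and_phase(fringe_order, wrapped_phase):
    """Absolute phase: the Gray-code-decoded integer fringe order removes the
    2*pi ambiguity of the phase-shift result (boundary handling omitted)."""
    return wrapped_phase + 2.0 * np.pi * fringe_order
```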

  11. A fast multigrid-based electromagnetic eigensolver for curved metal boundaries on the Yee mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Carl A., E-mail: carl.bauer@colorado.edu; Werner, Gregory R.; Cary, John R.

    For embedded boundary electromagnetics using the Dey–Mittra (Dey and Mittra, 1997) [1] algorithm, a special grad–div matrix constructed in this work allows use of multigrid methods for efficient inversion of Maxwell’s curl–curl matrix. Efficient curl–curl inversions are demonstrated within a shift-and-invert Krylov-subspace eigensolver (open-sourced at https://github.com/bauerca/maxwell) on the spherical cavity and the 9-cell TESLA superconducting accelerator cavity. The accuracy of the Dey–Mittra algorithm is also examined: frequencies converge with second-order error, and surface fields are found to converge with nearly second-order error. In agreement with previous work (Nieter et al., 2009) [2], neglecting some boundary-cut cell faces (as is required in the time domain for numerical stability) reduces frequency convergence to first-order and surface-field convergence to zeroth-order (i.e. surface fields do not converge). Additionally and importantly, neglecting faces can reduce accuracy by an order of magnitude at low resolutions.

  12. Adaptive x-ray optics development at AOA-Xinetics

    NASA Astrophysics Data System (ADS)

    Lillie, Charles F.; Cavaco, Jeff L.; Brooks, Audrey D.; Ezzo, Kevin; Pearson, David D.; Wellman, John A.

    2013-05-01

    Grazing-incidence optics for X-ray applications require extremely smooth surfaces with precise mirror figures to provide well focused beams and small image spot sizes for astronomical telescopes and laboratory test facilities. The required precision has traditionally been achieved by time-consuming grinding and polishing of thick substrates with frequent pauses for precise metrology to check the mirror figure. More recently, substrates with high quality surface finish and figures have become available at reasonable cost, and techniques have been developed to mechanically adjust the figure of these traditionally polished substrates for ground-based applications. The beam-bending techniques currently in use are mechanically complex, however, with little control over mid-spatial frequency errors. AOA-Xinetics has been developing techniques for shaping grazing-incidence optics with surface-normal and surface-parallel electrostrictive lead magnesium niobate (PMN) actuators bonded to mirror substrates for several years. These actuators are highly reliable; exhibit little to no hysteresis, aging or creep; and can be closely spaced to correct low and mid-spatial frequency errors in a compact package. In this paper we discuss recent development of adaptive x-ray optics at AOA-Xinetics.

  13. Adaptive x-ray optics development at AOA-Xinetics

    NASA Astrophysics Data System (ADS)

    Lillie, Charles F.; Pearson, David D.; Cavaco, Jeffrey L.; Plinta, Audrey D.; Wellman, John A.

    2012-10-01

    Grazing-incidence optics for X-ray applications require extremely smooth surfaces with precise mirror figures to provide well focused beams and small image spot sizes for astronomical telescopes and laboratory test facilities. The required precision has traditionally been achieved by time-consuming grinding and polishing of thick substrates with frequent pauses for precise metrology to check the mirror figure. More recently, substrates with high quality surface finish and figures have become available at reasonable cost, and techniques have been developed to mechanically adjust the figure of these traditionally polished substrates for ground-based applications. The beam-bending techniques currently in use are mechanically complex, however, with little control over mid-spatial frequency errors. AOA-Xinetics has been developing techniques for shaping grazing-incidence optics with surface-normal and surface-parallel electrostrictive lead magnesium niobate (PMN) actuators bonded to mirror substrates for several years. These actuators are highly reliable; exhibit little to no hysteresis, aging or creep; and can be closely spaced to correct low and mid-spatial frequency errors in a compact package. In this paper we discuss recent development of adaptive x-ray optics at AOA-Xinetics.

  14. Land surface dynamics monitoring using microwave passive satellite sensors

    NASA Astrophysics Data System (ADS)

    Guijarro, Lizbeth Noemi

    Soil moisture, surface temperature and vegetation are variables that play an important role in our environment. There is growing demand for accurate estimates of these geophysical parameters for research on global climate models (GCMs) and on weather, hydrological, and flooding models, and for applications such as agricultural assessment, land cover change, and a wide variety of other environmental studies. The different studies covered in this dissertation evaluate the capabilities and limitations of passive microwave sensors for monitoring land surface dynamics. The first study evaluates the 19 GHz channel of the SSM/I instrument with a radiative transfer model and in situ datasets from the Illinois stations and the Oklahoma Mesonet to retrieve land surface temperature and surface soil moisture. The surface temperatures were retrieved with an average error of 5 K and the soil moisture with an average error of 6%. The results show that the 19 GHz channel can be used to qualitatively predict the spatial and temporal variability of surface soil moisture and surface temperature at regional scales. In the second study, in situ observations were compared with sensor observations to evaluate aspects of low and high spatial resolution at multiple frequencies with data collected from the Southern Great Plains Experiment (SGP99). The results showed that the sensitivity to soil moisture at each frequency is a function of wavelength and amount of vegetation. The results confirmed that L-band is best suited for soil moisture retrieval, but each sensor can provide soil moisture information if the vegetation water content is low. The spatial variability of the emissivities reveals that resolution suffers considerably at higher frequencies. The third study evaluates the C- and X-bands of the AMSR-E instrument. In situ datasets from the Soil Moisture Experiments (SMEX03) in South Central Georgia were utilized to validate the AMSR-E soil moisture product and to derive surface soil moisture with a radiative transfer model. The soil moisture was retrieved with an average error of 2.7% at X-band and 6.7% at C-band. The AMSR-E demonstrated its ability to successfully infer soil moisture during the SMEX03 experiment.

  15. Satellite Estimation of Daily Land Surface Water Vapor Pressure Deficit from AMSR- E

    NASA Astrophysics Data System (ADS)

    Jones, L. A.; Kimball, J. S.; McDonald, K. C.; Chan, S. K.; Njoku, E. G.; Oechel, W. C.

    2007-12-01

    Vapor pressure deficit (VPD) is a key variable for monitoring land surface water and energy exchanges, and estimating plant water stress. Multi-frequency day/night brightness temperatures from the Advanced Microwave Scanning Radiometer on EOS Aqua (AMSR-E) were used to estimate daily minimum and average near surface (2 m) air temperatures across a North American boreal-Arctic transect. A simple method for determining daily mean VPD (Pa) from AMSR-E air temperature retrievals was developed and validated against observations across a regional network of eight study sites ranging from boreal grassland and forest to arctic tundra. The method assumes that the dew point and minimum daily air temperatures tend to equilibrate in areas with low night time temperatures and relatively moist conditions. This assumption was tested by comparing the VPD algorithm results derived from site daily temperature observations against results derived from AMSR-E retrieved temperatures alone. An error analysis was conducted to determine the amount of error introduced in VPD estimates given known levels of error in satellite retrieved temperatures. Results indicate that the assumption generally holds for the high latitude study sites except for arid locations in mid-summer. VPD estimates using the method with AMSR-E retrieved temperatures compare favorably with site observations. The method can be applied to land surface temperature retrievals from any sensor with day and night surface or near-surface thermal measurements and shows potential for inferring near-surface wetness conditions where dense vegetation may hinder surface soil moisture retrievals from low-frequency microwave sensors. This work was carried out at The University of Montana, at San Diego State University, and at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
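
    A minimal sketch of the VPD estimate implied by the record above, using the Tetens/Magnus saturation-vapor-pressure approximation and the stated assumption that the dew point approximately equals the daily minimum air temperature. The constants and example temperatures are illustrative, and the assumption is noted above to break down at arid sites in mid-summer.

```python
import numpy as np

def saturation_vapor_pressure(t_celsius):
    """Tetens/Magnus approximation for saturation vapor pressure, in pascals."""
    return 610.78 * np.exp(17.27 * t_celsius / (t_celsius + 237.3))

def daily_mean_vpd(t_avg_c, t_min_c):
    """Daily mean VPD (Pa), assuming dew point ~= daily minimum temperature."""
    return saturation_vapor_pressure(t_avg_c) - saturation_vapor_pressure(t_min_c)

print(daily_mean_vpd(t_avg_c=15.0, t_min_c=5.0))  # roughly 830 Pa
```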

  16. Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Casper, Jay H.

    2005-01-01

    The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not been obtained between noise predictions using the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.

  17. Ion beam figuring of Φ520mm convex hyperbolic secondary mirror

    NASA Astrophysics Data System (ADS)

    Meng, Xiaohui; Wang, Yonggang; Li, Ang; Li, Wenqing

    2016-10-01

    The secondary mirror is a Φ520 mm lightweight convex hyperbolic mirror made of Zerodur. Conventional methods such as CCOS and stressed-lap polishing are typically used to manufacture such a secondary mirror. Nevertheless, the required surface accuracy cannot be achieved with conventional polishing methods because of the unpredictable behavior of the polishing tools, which leads to an unstable removal rate. Ion beam figuring is an optical fabrication method that provides highly controlled correction of previously polished surfaces by using a directed, inert, neutralized ion beam to physically sputter material from the optic surface. Several iterations with different ion beam sizes are selected and optimized to suit different stages of surface figure error and different spatial frequency components. Before ion beam figuring, the surface figure error of the secondary mirror is 2.5λ p-v, 0.23λ rms, and it is improved to 0.12λ p-v, 0.014λ rms over several process iterations. The demonstration clearly shows that ion beam figuring can be used not only for the final correction of aspheric surfaces but also for polishing the coarse surfaces of large, complex mirrors.

  18. Experimental study of an adaptive CFRC reflector for high order wave-front error correction

    NASA Astrophysics Data System (ADS)

    Lan, Lan; Fang, Houfei; Wu, Ke; Jiang, Shuidong; Zhou, Yang

    2018-03-01

    Recent developments in radio-frequency communication systems are generating a need for lightweight, high-precision space antennas. Carbon fiber reinforced composite (CFRC) materials have been used to manufacture high-precision reflectors. Wave-front errors caused by fabrication and on-orbit distortion are inevitable, so adaptive CFRC reflectors have received much attention for wave-front error correction. Because of the uneven stress distribution introduced by the actuation forces and by fabrication, high-order wave-front errors such as print-through are found on the reflector surface. However, an adaptive CFRC reflector with PZT actuators has essentially no control authority over these high-order wave-front errors. A new design architecture with secondary ribs assembled at the weak triangular surfaces is presented in this paper. A virtual experimental study of the new adaptive CFRC reflector has been conducted, and the controllability of the original adaptive CFRC reflector and of the new adaptive CFRC reflector with secondary ribs is investigated. The virtual experimental investigation shows that the new adaptive CFRC reflector is feasible and effective in diminishing the high-order wave-front errors.

  19. Surface inspection system for carriage parts

    NASA Astrophysics Data System (ADS)

    Denkena, Berend; Acker, Wolfram

    2006-04-01

    Quality standards are very high in carriage manufacturing because the visual impression of quality strongly influences the customer's purchase decision. On carriage parts, even very small dents can become visible on the varnished and polished surface when reflections are observed. The industrial demand is to detect these form errors on the unvarnished part. To meet this requirement, a stripe-projection system for automatic recognition of waviness and form errors is introduced. It is based on a modified stripe-projection method using a high-resolution line-scan camera. Particular emphasis is put on achieving a short measuring time and a high resolution in depth, aiming at reliable automatic recognition of dents and waviness of 10 μm on large curved surfaces of approximately 1 m width. The resulting point cloud needs to be filtered in order to detect dents, and a spatial filtering technique is used for this purpose. This works well on smoothly curved surfaces if the frequency parameters are well defined. On more complex parts such as mudguards, the method is restricted by the fact that frequencies near the defined dent frequencies also occur within the surface itself. To allow analysis of complex parts, the system is currently being extended to include 3D CAD models in the inspection process. For smoothly curved surfaces, the measuring speed of the prototype is mainly limited by the amount of light produced by the stripe projector. For complex surfaces, the measuring speed is limited by the time-consuming matching process. Currently, the development focuses on improving the measuring speed.
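
    A hedged sketch of the kind of spatial filtering mentioned above: the long-wave part shape is removed with a Gaussian low-pass and the residual is thresholded to flag dents. The function name, cutoff, and threshold are illustrative choices, not the system's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dent_map(height_um, pixel_mm, cutoff_mm=20.0, threshold_um=10.0):
    """Flag dent/waviness candidates in a regridded height map.

    height_um: 2-D surface heights in micrometres from the stripe-projection
    point cloud. The Gaussian low-pass removes the long-wave part geometry;
    the cutoff must be tuned so genuine shape features are not flagged, the
    limitation noted above for complex parts such as mudguards.
    """
    sigma_px = cutoff_mm / pixel_mm
    residual = height_um - gaussian_filter(height_um, sigma=sigma_px)
    return np.abs(residual) > threshold_um, residual
```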

  20. Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback

    PubMed Central

    Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching

    2017-01-01

    Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658
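
    One of the discharge-variability measures referred to above, the coefficient of variation of inter-spike intervals, is straightforward to compute; a minimal sketch, assuming spike times in seconds for a single decomposed motor unit:

```python
import numpy as np

def isi_coefficient_of_variation(spike_times_s):
    """Coefficient of variation of inter-spike intervals for one motor unit:
    std(ISI) / mean(ISI), where ISI are successive discharge intervals."""
    isi = np.diff(np.sort(np.asarray(spike_times_s, dtype=float)))
    return float(np.std(isi) / np.mean(isi))
```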

  1. Generation of a pseudo-2D shear-wave velocity section by inversion of a series of 1D dispersion curves

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Liu, J.; Xu, Y.; Liu, Q.

    2008-01-01

    Multichannel Analysis of Surface Waves utilizes a multichannel recording system to estimate near-surface shear (S)-wave velocities from high-frequency Rayleigh waves. A pseudo-2D S-wave velocity (vS) section is constructed by aligning 1D models at the midpoint of each receiver spread and using a spatial interpolation scheme. The horizontal resolution of the section is therefore most influenced by the receiver spread length and the source interval. The receiver spread length sets the theoretical lower limit and any vS structure with its lateral dimension smaller than this length will not be properly resolved in the final vS section. A source interval smaller than the spread length will not improve the horizontal resolution because spatial smearing has already been introduced by the receiver spread. In this paper, we first analyze the horizontal resolution of a pair of synthetic traces. Resolution analysis shows that (1) a pair of traces with a smaller receiver spacing achieves higher horizontal resolution of inverted S-wave velocities but results in a larger relative error; (2) the relative error of the phase velocity at a high frequency is smaller than at a low frequency; and (3) a relative error of the inverted S-wave velocity is affected by the signal-to-noise ratio of data. These results provide us with a guideline to balance the trade-off between receiver spacing (horizontal resolution) and accuracy of the inverted S-wave velocity. We then present a scheme to generate a pseudo-2D S-wave velocity section with high horizontal resolution using multichannel records by inverting high-frequency surface-wave dispersion curves calculated through cross-correlation combined with a phase-shift scanning method. This method chooses only a pair of consecutive traces within a shot gather to calculate a dispersion curve. We finally invert surface-wave dispersion curves of synthetic and real-world data. Inversion results of both synthetic and real-world data demonstrate that inverting high-frequency surface-wave dispersion curves - by a pair of traces through cross-correlation with phase-shift scanning method and with the damped least-square method and the singular-value decomposition technique - can feasibly achieve a reliable pseudo-2D S-wave velocity section with relatively high horizontal resolution. © 2008 Elsevier B.V. All rights reserved.
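
    A hedged, simplified sketch of measuring a dispersion curve from a single pair of traces, as in the scheme above, using the phase of the cross-power spectrum rather than the paper's explicit cross-correlation plus phase-shift scanning. The array names, geometry, and sign convention (the second trace is recorded farther from the source) are assumptions.

```python
import numpy as np

def phase_velocity_from_pair(trace_near, trace_far, offset_m, dt_s):
    """Rayleigh-wave phase velocity versus frequency from two traces.

    trace_near, trace_far: records at two receivers separated by offset_m,
    with trace_far the more distant one. The cross-power spectrum phase gives
    the frequency-dependent travel-time difference across the offset, so
    c(f) = 2*pi*f*offset / dphi(f).
    """
    spec_near = np.fft.rfft(trace_near)
    spec_far = np.fft.rfft(trace_far)
    freqs = np.fft.rfftfreq(len(trace_near), d=dt_s)
    dphi = np.unwrap(np.angle(spec_near * np.conj(spec_far)))
    with np.errstate(divide="ignore", invalid="ignore"):
        velocity = 2.0 * np.pi * freqs * offset_m / dphi
    return freqs[1:], velocity[1:]  # drop the zero-frequency bin
```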

  2. Neural Flight Control System

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2003-01-01

    The Neural Flight Control System (NFCS) was developed to address the need for control systems that can be produced and tested at lower cost, easily adapted to prototype vehicles and for flight systems that can accommodate damaged control surfaces or changes to aircraft stability and control characteristics resulting from failures or accidents. NFCS utilizes a neural network-based flight control algorithm which automatically compensates for a broad spectrum of unanticipated damage or failures of an aircraft in flight. Pilot stick and rudder pedal inputs are fed into a reference model which produces pitch, roll and yaw rate commands. The reference model frequencies and gains can be set to provide handling quality characteristics suitable for the aircraft of interest. The rate commands are used in conjunction with estimates of the aircraft's stability and control (S&C) derivatives by a simplified Dynamic Inverse controller to produce virtual elevator, aileron and rudder commands. These virtual surface deflection commands are optimally distributed across the aircraft's available control surfaces using linear programming theory. Sensor data is compared with the reference model rate commands to produce an error signal. A Proportional/Integral (PI) error controller "winds up" on the error signal and adds an augmented command to the reference model output with the effect of zeroing the error signal. In order to provide more consistent handling qualities for the pilot, neural networks learn the behavior of the error controller and add in the augmented command before the integrator winds up. In the case of damage sufficient to affect the handling qualities of the aircraft, an Adaptive Critic is utilized to reduce the reference model frequencies and gains to stay within a flyable envelope of the aircraft.

  3. Generation of Rayleigh waves into mortar and concrete samples.

    PubMed

    Piwakowski, B; Fnine, Abdelilah; Goueygou, M; Buyle-Bodin, F

    2004-04-01

    The paper deals with a non-destructive method for characterizing the degraded cover of concrete structures using high-frequency ultrasound. In a preliminary study, the authors emphasized the interest of using higher-frequency Rayleigh waves (within the 0.2-1 MHz frequency band) for on-site inspection of concrete structures with subsurface damage. The present study is a continuation of that work and aims at optimizing the generation and reception of Rayleigh waves in mortar and concrete by means of wedge transducers. This is performed experimentally by checking the influence of the wedge material and coupling agent on the surface-wave parameters. The selection of the best wedge/coupling combination is performed by searching separately for the best wedge material and the best coupling material. Three wedge materials and five coupling agents were tested. For each setup, the five parameters obtained from the surface-wave measurement, i.e., the frequency band, the maximal available central frequency, the group-velocity error and its standard deviation, and the error in the velocity dispersion characteristic, were investigated and classified as a function of the wedge material and the coupling agent. The selection criteria were chosen so as to minimize the absorption of both materials, the randomness of the measurements, and the systematic errors of the group velocity and of the dispersion characteristic. Among the three tested wedge materials, Teflon was found to be the best. The investigation of the coupling agents shows that gel-type materials are the best choice; the "thick", higher-viscosity materials were found to be the worst. The results also show that the use of a thin plastic film combined with the coupling agent even increases the bandwidth and decreases the measurement uncertainty.

  4. Optical Testing of Diamond Machined, Aspheric Mirrors for Groundbased, Near-IR Astronomy

    NASA Technical Reports Server (NTRS)

    Chambers, V. John; Mink, Ronald G.; Ohl, Raymond G.; Connelly, Joseph A.; Mentzell, J. Eric; Arnold, Steven M.; Greenhouse, Matthew A.; Winsor, Robert S.; MacKenty, John W.

    2002-01-01

    The Infrared Multi-Object Spectrometer (IRMOS) is a facility-class instrument for the Kitt Peak National Observatory 4 and 2.1 meter telescopes. IRMOS is a near-IR (0.8-2.5 micron) spectrometer and operates at approximately 80 K. The 6061-T651 aluminum bench and mirrors constitute an athermal design. The instrument produces simultaneous spectra at low- to mid-resolving power (R = λ/Δλ = 300-3000) of approximately 100 objects in its 2.8 x 2.0 arcmin field. We describe ambient and cryogenic optical testing of the IRMOS mirrors across a broad range in spatial frequency (figure error, mid-frequency error, and microroughness). The mirrors include three rotationally symmetric, off-axis conic sections, one off-axis biconic, and several flat fold mirrors. The symmetric mirrors include convex and concave prolate and oblate ellipsoids. They range in aperture from 94x86 mm to 286x269 mm and in f-number from 0.9 to 2.4. The biconic mirror is concave and has a 94x76 mm aperture, R_x = 377 mm, k_x = 0.0778, R_y = 407 mm, and k_y = 0.1265 and is decentered by -2 mm in X and 227 mm in Y. All of the mirrors have an aspect ratio of approximately 6:1. The surface error fabrication tolerances are less than 10 nm RMS microroughness, 'best effort' for mid-frequency error, and less than 63.3 nm RMS figure error. Ambient temperature (approximately 293 K) testing is performed for each of the three surface error regimes, and figure testing is also performed at approximately 80 K. Operation of the ADE Phaseshift MicroXAM white light interferometer (micro-roughness) and the Bauer Model 200 profilometer (mid-frequency error) is described. Both the sag and conic values of the aspheric mirrors make these tests challenging. Figure testing is performed using a Zygo GPI interferometer, custom computer generated holograms (CGH), and optomechanical alignment fiducials. Cryogenic CGH null testing is discussed in detail. We discuss complications such as the change in prescription with temperature and thermal gradients. Correction for the effect of the dewar window is also covered. We discuss the error budget for the optical test and alignment procedure. Data reduction is accomplished using commercial optical design and data analysis software packages. Results from CGH testing at cryogenic temperatures are encouraging thus far.

  5. Power Spectral Density Specification and Analysis of Large Optical Surfaces

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2009-01-01

    The 2-dimensional Power Spectral Density (PSD) can be used to characterize the mid- and the high-spatial frequency components of the surface height errors of an optical surface. We found it necessary to have a complete, easy-to-use approach for specifying and evaluating the PSD characteristics of large optical surfaces, an approach that allows one to specify the surface quality of a large optical surface based on simulated results using a PSD function and to evaluate the measured surface profile data of the same optic in comparison with those predicted by the simulations during the specification-derivation process. This paper provides a complete mathematical description of PSD error and proposes a new approach in which a 2-dimensional (2D) PSD is converted into a 1-dimensional (1D) one by azimuthally averaging the 2D PSD. The 1D PSD calculated this way has the same unit and the same profile as the original PSD function, thus allowing one to compare the two directly.
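
    A minimal sketch of the azimuthal averaging step described above, assuming a square surface-height PSD map with uniform sample spacing and the DC term at the center of the map; the binning scheme and function name are illustrative.

```python
import numpy as np

def azimuthally_average_psd(psd2d, dx):
    """Collapse a 2-D PSD (square, fftshifted, pixel spacing dx) onto a 1-D
    profile by averaging over annuli of constant radial spatial frequency.
    The result keeps the units of the input PSD, as described above."""
    n = psd2d.shape[0]
    f1d = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    fx, fy = np.meshgrid(f1d, f1d)
    f_radial = np.hypot(fx, fy).ravel()
    df = f1d[1] - f1d[0]
    edges = np.arange(0.0, f_radial.max() + df, df)
    which_bin = np.digitize(f_radial, edges)
    psd_flat = psd2d.ravel()
    psd1d = np.full(len(edges) - 1, np.nan)
    for i in range(1, len(edges)):
        in_bin = which_bin == i
        if in_bin.any():
            psd1d[i - 1] = psd_flat[in_bin].mean()
    return edges[1:], psd1d  # bin upper-edge frequencies and averaged PSD
```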

  6. Registration of human skull computed tomography data to an ultrasound treatment space using a sparse high frequency ultrasound hemispherical array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Reilly, Meaghan A., E-mail: moreilly@sri.utoront

    Purpose: Transcranial focused ultrasound (FUS) shows great promise for a range of therapeutic applications in the brain. Current clinical investigations rely on the use of magnetic resonance imaging (MRI) to monitor treatments and for the registration of preoperative computed tomography (CT)-data to the MR images at the time of treatment to correct the sound aberrations caused by the skull. For some applications, MRI is not an appropriate choice for therapy monitoring and its cost may limit the accessibility of these treatments. An alternative approach, using high frequency ultrasound measurements to localize the skull surface and register CT data to the ultrasound treatment space, for the purposes of skull-related phase aberration correction and treatment targeting, has been developed. Methods: A prototype high frequency, hemispherical sparse array was fabricated. Pulse-echo measurements of the surface of five ex vivo human skulls were made, and the CT datasets of each skull were obtained. The acoustic data were used to rigidly register the CT-derived skull surface to the treatment space. The ultrasound-based registrations of the CT datasets were compared to the gold-standard landmark-based registrations. Results: The results show on an average sub-millimeter (0.9 ± 0.2 mm) displacement and subdegree (0.8° ± 0.4°) rotation registration errors. Numerical simulations predict that registration errors on this scale will result in a mean targeting error of 1.0 ± 0.2 mm and reduction in focal pressure of 1.0% ± 0.6% when targeting a midbrain structure (e.g., hippocampus) using a commercially available low-frequency brain prototype device (InSightec, 230 kHz brain system). Conclusions: If combined with ultrasound-based treatment monitoring techniques, this registration method could allow for the development of a low-cost transcranial FUS treatment platform to make this technology more widely available.

  7. Registration of human skull computed tomography data to an ultrasound treatment space using a sparse high frequency ultrasound hemispherical array.

    PubMed

    O'Reilly, Meaghan A; Jones, Ryan M; Birman, Gabriel; Hynynen, Kullervo

    2016-09-01

    Transcranial focused ultrasound (FUS) shows great promise for a range of therapeutic applications in the brain. Current clinical investigations rely on the use of magnetic resonance imaging (MRI) to monitor treatments and for the registration of preoperative computed tomography (CT)-data to the MR images at the time of treatment to correct the sound aberrations caused by the skull. For some applications, MRI is not an appropriate choice for therapy monitoring and its cost may limit the accessibility of these treatments. An alternative approach, using high frequency ultrasound measurements to localize the skull surface and register CT data to the ultrasound treatment space, for the purposes of skull-related phase aberration correction and treatment targeting, has been developed. A prototype high frequency, hemispherical sparse array was fabricated. Pulse-echo measurements of the surface of five ex vivo human skulls were made, and the CT datasets of each skull were obtained. The acoustic data were used to rigidly register the CT-derived skull surface to the treatment space. The ultrasound-based registrations of the CT datasets were compared to the gold-standard landmark-based registrations. The results show on an average sub-millimeter (0.9 ± 0.2 mm) displacement and subdegree (0.8° ± 0.4°) rotation registration errors. Numerical simulations predict that registration errors on this scale will result in a mean targeting error of 1.0 ± 0.2 mm and reduction in focal pressure of 1.0% ± 0.6% when targeting a midbrain structure (e.g., hippocampus) using a commercially available low-frequency brain prototype device (InSightec, 230 kHz brain system). If combined with ultrasound-based treatment monitoring techniques, this registration method could allow for the development of a low-cost transcranial FUS treatment platform to make this technology more widely available.

  8. Registration of human skull computed tomography data to an ultrasound treatment space using a sparse high frequency ultrasound hemispherical array

    PubMed Central

    O’Reilly, Meaghan A.; Jones, Ryan M.; Birman, Gabriel; Hynynen, Kullervo

    2016-01-01

    Purpose: Transcranial focused ultrasound (FUS) shows great promise for a range of therapeutic applications in the brain. Current clinical investigations rely on the use of magnetic resonance imaging (MRI) to monitor treatments and for the registration of preoperative computed tomography (CT)-data to the MR images at the time of treatment to correct the sound aberrations caused by the skull. For some applications, MRI is not an appropriate choice for therapy monitoring and its cost may limit the accessibility of these treatments. An alternative approach, using high frequency ultrasound measurements to localize the skull surface and register CT data to the ultrasound treatment space, for the purposes of skull-related phase aberration correction and treatment targeting, has been developed. Methods: A prototype high frequency, hemispherical sparse array was fabricated. Pulse-echo measurements of the surface of five ex vivo human skulls were made, and the CT datasets of each skull were obtained. The acoustic data were used to rigidly register the CT-derived skull surface to the treatment space. The ultrasound-based registrations of the CT datasets were compared to the gold-standard landmark-based registrations. Results: The results show on an average sub-millimeter (0.9 ± 0.2 mm) displacement and subdegree (0.8° ± 0.4°) rotation registration errors. Numerical simulations predict that registration errors on this scale will result in a mean targeting error of 1.0 ± 0.2 mm and reduction in focal pressure of 1.0% ± 0.6% when targeting a midbrain structure (e.g., hippocampus) using a commercially available low-frequency brain prototype device (InSightec, 230 kHz brain system). Conclusions: If combined with ultrasound-based treatment monitoring techniques, this registration method could allow for the development of a low-cost transcranial FUS treatment platform to make this technology more widely available. PMID:27587036
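
    The rigid registration step described in the three preceding records (fitting the CT-derived skull surface to ultrasound-localized surface points) can be sketched, once point correspondences are available, with the classic SVD-based least-squares fit below. This is a generic Kabsch-style sketch, not the authors' implementation, and the correspondence search (e.g., ICP iterations) is omitted.

```python
import numpy as np

def rigid_register(source_pts, target_pts):
    """Least-squares rigid fit (rotation + translation) of matched 3-D points.

    source_pts, target_pts: (N, 3) arrays of corresponding points, e.g. the
    CT-derived skull surface and the ultrasound-localized surface points.
    Returns R, t such that target ~= (R @ source.T).T + t.
    """
    src_mean = source_pts.mean(axis=0)
    tgt_mean = target_pts.mean(axis=0)
    src_c = source_pts - src_mean
    tgt_c = target_pts - tgt_mean
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)          # covariance H = U S V^T
    d = np.sign(np.linalg.det(vt.T @ u.T))             # guard against reflection
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    translation = tgt_mean - rotation @ src_mean
    return rotation, translation
```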

  9. Fringe-period selection for a multifrequency fringe-projection phase unwrapping method

    NASA Astrophysics Data System (ADS)

    Zhang, Chunwei; Zhao, Hong; Jiang, Kejian

    2016-08-01

    The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage of being capable of correctly accomplishing phase unwrapping even in the presence of surface discontinuities. If the fringe frequency ratio of the MFPPUM is too large, fringe order error (FOE) may be triggered. FOE will result in phase unwrapping error. It is preferable for the phase unwrapping to be kept correct while the fewest sets of lower frequency fringe patterns are used. To achieve this goal, in this paper a parameter called fringe order inaccuracy (FOI) is defined, dominant factors which may induce FOE are theoretically analyzed, a method to optimally select the fringe periods for the MFPPUM is proposed with the aid of FOI, and experiments are conducted to research the impact of the dominant factors in phase unwrapping and demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and detect the phase unwrapping error for the MFPPUM.

  10. Phase-slope and phase measurements of tunable CW-THz radiation with terahertz comb for wide-dynamic-range, high-resolution, distance measurement of optically rough object.

    PubMed

    Yasui, Takeshi; Fujio, Makoto; Yokoyama, Shuko; Araki, Tsutomu

    2014-07-14

    Phase measurement of continuous-wave terahertz (CW-THz) radiation is a potential tool for direct distance and imaging measurement of optically rough objects due to its high robustness to optically rough surfaces. However, the 2π phase ambiguity in the phase measurement of single-frequency CW-THz radiation limits the dynamic range of the measured distance to the order of the wavelength used. In this article, phase-slope measurement of tunable CW-THz radiation with a THz frequency comb was effectively used to extend the dynamic range up to 1.834 m while maintaining an error of a few tens of µm in the distance measurement of an optically rough object. Furthermore, a combination of phase-slope measurement of tunable CW-THz radiation and phase measurement of single-frequency CW-THz radiation reduced the distance error to a few µm within the dynamic range of 1.834 m without any influence from the 2π phase ambiguity. The proposed method will be a powerful tool for the construction and maintenance of large-scale structures covered with optically rough surfaces.
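
    The phase-slope principle can be summarized compactly. Assuming a round-trip (reflection) geometry in which the measured phase varies with frequency as phi = 4*pi*f*d/c, a linear fit of unwrapped phase against frequency gives the distance free of the 2π ambiguity; the Python sketch below is an illustration of that relation only, not the authors' processing chain.

        import numpy as np

        C = 299_792_458.0  # speed of light in vacuum, m/s

        def distance_from_phase_slope(freqs_hz, phases_rad):
            # phi(f) = 4*pi*f*d/c  =>  d = (c / (4*pi)) * dphi/df
            slope = np.polyfit(freqs_hz, np.unwrap(phases_rad), 1)[0]
            return C * slope / (4.0 * np.pi)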

  11. Acoustic measurement of the surface tension of levitated drops

    NASA Technical Reports Server (NTRS)

    Trinh, E. H.; Marston, P. L.; Robey, J. L.

    1988-01-01

    The measurement of the frequency of the fundamental mode of shape oscillation of acoustically levitated drops has been carried out to determine the surface tension of the drop material. Sound fields of about 20 kHz in frequency allow the suspension of drops a few millimeters in size, as well as the necessary drive for oscillations. The surface tension of water, hexadecane, silicone oil, and aqueous solutions of glycerin levitated in air has been measured, and the results have been compared with those obtained with standard ring tensiometry. The two sets of data are in good agreement, the largest discrepancy being about 10 percent. Uncertainties in the effects of the nonspherical static shape of drops levitated in the earth's gravitational field and the rotation state of the sample are the major contributors to the experimental error. A decrease of the resonance frequency of the fundamental mode indicates a soft nonlinearity as the oscillation amplitude increases.
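
    The relation underlying this measurement is the classical Rayleigh result for the fundamental (l = 2) shape oscillation of a free, spherical, inviscid drop; the sketch below applies that textbook formula and, as noted above, neglects the static nonspherical deformation and rotation that dominate the experimental uncertainty.

        import math

        def surface_tension_from_fundamental(freq_hz, drop_mass_kg):
            # Rayleigh l = 2 mode of a free drop: omega^2 = 32*pi*sigma / (3*m),
            # which rearranges to sigma = 3*pi*m*f^2 / 8.
            return 3.0 * math.pi * drop_mass_kg * freq_hz ** 2 / 8.0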

  12. Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    Van Hooidonk, R. J.

    2011-12-01

    Future widespread coral bleaching and subsequent mortality has been projected with sea surface temperature (SST) data from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends and their propagation in predictions. To analyze the relative importance of various types of model errors and biases on coral reef bleaching predictive skill, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from GCMs 20th century simulations to be included in the Intergovernmental Panel on Climate Change (IPCC) 5th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate skill using an objective measure of forecast quality, the Peirce Skill Score (PSS). This methodology will identify frequency bands that are important to predicting coral bleaching and it will highlight deficiencies in these bands in models. The methodology we describe can be used to improve future climate model derived predictions of coral reef bleaching and it can be used to better characterize the errors and uncertainty in predictions.
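
    The Peirce Skill Score used as the measure of forecast quality is computed from a 2 x 2 contingency table of predicted versus observed bleaching events; a minimal Python version (argument names are illustrative) is shown below. A perfect forecast scores 1, while a random or constant forecast scores 0.

        def peirce_skill_score(hits, misses, false_alarms, correct_negatives):
            # PSS (Hanssen-Kuipers discriminant) = hit rate - false alarm rate
            hit_rate = hits / (hits + misses)
            false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
            return hit_rate - false_alarm_rate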

  13. Study of and proposals for the correction of errors in a radar ranging device designed to facilitate docking of a teleoperator maneuvering system

    NASA Technical Reports Server (NTRS)

    Mcdonald, M. W.

    1982-01-01

    A frequency modulated continuous wave radar system was developed. The system operates in the 35 gigahertz frequency range and provides millimeter accuracy range and range rate measurements. This level of range resolution allows soft docking for the proposed teleoperator maneuvering system (TMS) or other autonomous or robotic space vehicles. Sources of error in the operation of the system which tend to limit its range resolution capabilities are identified. Alternative signal processing techniques are explored with emphasis on determination of the effects of inserting various signal filtering circuits in the system. The identification and elimination of an extraneous low frequency signal component created as a result of zero range immediate reflection of radar energy from the surface of the antenna dish back into the mixer of the system is described.
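
    In a linear FMCW radar of this kind, range follows from the beat frequency between the transmitted and received chirps. The sketch below states only the ideal relation (the 35 gigahertz system's actual sweep parameters are not given above, so the names are placeholders); the zero-range reflection from the antenna dish described above appears as an extra beat component near 0 Hz.

        C = 299_792_458.0  # speed of light, m/s

        def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s):
            # Ideal linear chirp: f_beat = 2*B*R / (c*T)  =>  R = c*T*f_beat / (2*B)
            return C * sweep_time_s * beat_freq_hz / (2.0 * sweep_bandwidth_hz)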

  14. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented in real time on board an aircraft.
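
    The recursive Fourier transform at the heart of this frequency-domain equation-error method updates each analysis frequency with a single multiply-add per new sample, which is what keeps the computational cost low. The Python sketch below shows that update under the usual assumption of a fixed sample interval; names are illustrative, not the flight code.

        import numpy as np

        def recursive_dft_update(X_prev, x_new, t_new, omegas_rad_s, dt):
            # Running finite Fourier transform:
            # X_k(t_i) = X_k(t_{i-1}) + x(t_i) * exp(-j * w_k * t_i) * dt
            return X_prev + x_new * np.exp(-1j * np.asarray(omegas_rad_s) * t_new) * dt

    At each step the updated transforms of the measured states, inputs, and state derivatives feed a least-squares equation-error fit for the model parameters.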

  15. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)

  16. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models.

  17. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    The future X-ray observatory missions, such as International X-ray Observatory, require grazing incidence replicated optics of extremely large collecting area (3 m2) in combination with angular resolution of less than 5 arcsec half-power diameter. The resolution of a mirror shell depends ultimately on the quality of the cylindrical mandrels from which they are being replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation studies have been performed to optimize the operational parameters as well as the polishing lap configuration. Furthermore, depending upon the surface error profile, a model for localized polishing based on dwell time approach is developed. Using the inputs from the mathematical model, a mandrel, having conical approximated Wolter-1 geometry, has been polished on a newly developed computer-controlled cylindrical polishing machine. We report our first experimental results and discuss plans for further improvements in the polishing process.
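
    The dwell-time approach mentioned above rests on the linear removal model in which material removal is the convolution of the tool influence function with the dwell-time map. The one-dimensional Python sketch below illustrates the forward model only; the real problem is two-dimensional over the cylindrical surface, and solving for the dwell time requires a deconvolution or optimization step.

        import numpy as np

        def predicted_removal(dwell_time, influence_function):
            # Removal depth = influence function (removal per unit dwell time)
            # convolved with the dwell-time distribution along the part.
            return np.convolve(dwell_time, influence_function, mode="same")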

  18. Skill assessment of Korea operational oceanographic system (KOOS)

    NASA Astrophysics Data System (ADS)

    Kim, J.; Park, K.

    2016-02-01

    The Korea operational oceanographic system (KOOS), the ocean forecast system for Korea, has been developed and pre-operated since 2009 by the Korea institute of ocean science and technology (KIOST), funded by the Korean government. KOOS provides real-time information and forecasts of marine environmental conditions in order to support all kinds of activities at sea. A further important purpose of the KOOS information is to support the response to maritime problems and accidents such as oil spills, red tides, shipwrecks, extraordinary waves, and coastal inundation. Accordingly, it is essential to evaluate prediction accuracy and to work to improve it. The forecast accuracy should meet or exceed target benchmarks before its products are approved for release to the public. In this paper, we quantify forecast errors using skill assessment techniques to judge the KOOS performance. The skill assessment statistics include measures of errors and correlations such as root-mean-square error (RMSE), mean bias (MB), correlation coefficient (R), and index of agreement (IOA), as well as the frequency with which errors lie within specified limits, termed the central frequency (CF). KOOS provides 72-hour daily forecast data such as air pressure, wind, water elevation, currents, waves, water temperature, and salinity, produced by the meteorological and hydrodynamic numerical models WRF, ROMS, MOM5, WAM, WW3, and MOHID. The skill assessment has been performed by comparing model results with in-situ observation data (Figure 1) for the period from 1 July 2010 to 31 March 2015 (Table 1), and model errors have been quantified with skill scores and CF determined by acceptance criteria depending on the predicted variable (Table 2). Moreover, we conducted a quantitative evaluation of the spatio-temporal pattern correlation between the numerical models and observation data such as sea surface temperature (SST) and sea surface currents produced by satellite ocean sensors and high frequency (HF) radar, respectively. These quantified errors allow an objective assessment of the KOOS performance and can reveal different aspects of model deficiency. Based on these results, various model components are tested and developed in order to improve forecast accuracy.
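
    The listed skill-assessment statistics have standard definitions; the Python sketch below computes RMSE, mean bias, correlation coefficient, Willmott's index of agreement, and the central frequency (the percentage of errors within an acceptance limit). The function signature and the single scalar acceptance limit are simplifying assumptions.

        import numpy as np

        def skill_stats(model, obs, acceptance_limit):
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            err = model - obs
            rmse = np.sqrt(np.mean(err ** 2))                      # root-mean-square error
            mb = np.mean(err)                                      # mean bias
            r = np.corrcoef(model, obs)[0, 1]                      # correlation coefficient
            ioa = 1.0 - np.sum(err ** 2) / np.sum(
                (np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2)  # index of agreement
            cf = 100.0 * np.mean(np.abs(err) <= acceptance_limit)  # central frequency, %
            return {"RMSE": rmse, "MB": mb, "R": r, "IOA": ioa, "CF": cf}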

  19. A Multi-Channel Method for Retrieving Surface Temperature for High-Emissivity Surfaces from Hyperspectral Thermal Infrared Images

    PubMed Central

    Zhong, Xinke; Labed, Jelila; Zhou, Guoqing; Shao, Kun; Li, Zhao-Liang

    2015-01-01

    The surface temperature (ST) of high-emissivity surfaces is an important parameter in climate systems. The empirical methods for retrieving ST for high-emissivity surfaces from hyperspectral thermal infrared (HypTIR) images require spectrally continuous channel data. This paper aims to develop a multi-channel method for retrieving ST for high-emissivity surfaces from space-borne HypTIR data. With an assumption of land surface emissivity (LSE) of 1, ST is proposed as a function of 10 brightness temperatures measured at the top of atmosphere by a radiometer having a spectral interval of 800–1200 cm−1 and a spectral sampling frequency of 0.25 cm−1. We have analyzed the sensitivity of the proposed method to spectral sampling frequency and instrumental noise, and evaluated the proposed method using satellite data. The results indicated that the parameters in the developed function are dependent on the spectral sampling frequency and that ST of high-emissivity surfaces can be accurately retrieved by the proposed method if appropriate values are used for each spectral sampling frequency. The results also showed that the accuracy of the retrieved ST is of the order of magnitude of the instrumental noise and that the root mean square error (RMSE) of the ST retrieved from satellite data is 0.43 K in comparison with the AVHRR SST product. PMID:26061199

  20. Caution: Precision Error in Blade Alignment Results in Faulty Unsteady CFD Simulation

    NASA Astrophysics Data System (ADS)

    Lewis, Bryan; Cimbala, John; Wouden, Alex

    2012-11-01

    Turbomachinery components experience unsteady loads at several frequencies. The rotor frequency corresponds to the time for one rotor blade to rotate between two stator vanes, and is normally dominant for rotor torque oscillations. The guide vane frequency corresponds to the time for two rotor blades to pass by one guide vane. The machine frequency corresponds to the machine RPM. Oscillations at the machine frequency are always present due to minor blade misalignments and imperfections resulting from manufacturing defects. However, machine frequency oscillations should not be present in CFD simulations if the mesh is free of both blade misalignment and surface imperfections. The flow through a Francis hydroturbine was modeled with unsteady Reynolds-Averaged Navier-Stokes (URANS) CFD simulations and a dynamic rotating grid. Spectral analysis of the unsteady torque on the rotor blades revealed a large component at the machine frequency. Close examination showed that one blade was displaced by 0.0001° due to round-off errors during mesh generation. A second mesh without blade misalignment was then created. Subsequently, large machine frequency oscillations were not observed for this mesh. These results highlight the effect of minor geometry imperfections on CFD solutions. This research was supported by a grant from the DoE and a National Defense Science and Engineering Graduate Fellowship.

  1. An Initial Assessment of the Surface Reference Technique Applied to Data from the Dual-Frequency Precipitation Radar (DPR) on the GPM Satellite

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung; Liao, Liang; Jones, Jeffrey A.; Kwiatkowski, John M.

    2015-01-01

    It has long been recognized that path-integrated attenuation (PIA) can be used to improve precipitation estimates from high-frequency weather radar data. One approach that provides an estimate of this quantity from airborne or spaceborne radar data is the surface reference technique (SRT), which uses measurements of the surface cross section in the presence and absence of precipitation. Measurements from the dual-frequency precipitation radar (DPR) on the Global Precipitation Measurement (GPM) satellite afford the first opportunity to test the method for spaceborne radar data at Ka band as well as for the Ku-band-Ka-band combination. The study begins by reviewing the basis of the single- and dual-frequency SRT. As the performance of the method is closely tied to the behavior of the normalized radar cross section (NRCS or sigma(0)) of the surface, the statistics of sigma(0) derived from DPR measurements are given as a function of incidence angle and frequency for ocean and land backgrounds over a 1-month period. Several independent estimates of the PIA, formed by means of different surface reference datasets, can be used to test the consistency of the method since, in the absence of error, the estimates should be identical. Along with theoretical considerations, the comparisons provide an initial assessment of the performance of the single- and dual-frequency SRT for the DPR. The study finds that the dual-frequency SRT can provide improvement in the accuracy of path attenuation estimates relative to the single-frequency method, particularly at Ku band.

  2. Methods for estimating the magnitude and frequency of peak streamflows at ungaged sites in and near the Oklahoma Panhandle

    USGS Publications Warehouse

    Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.

    2015-09-28

    Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentage of annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
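
    With contributing drainage area as the single explanatory variable, each regional equation takes the familiar log-log form Q = 10^b0 * A^b1. The Python sketch below fits that form by ordinary least squares purely for illustration; the report itself used generalized least squares, and the coefficient names are assumptions.

        import numpy as np

        def fit_peak_flow_regression(drainage_area_sqmi, peak_flow_cfs):
            # Fit log10(Q) = b0 + b1 * log10(A); returns (b0, b1).
            b1, b0 = np.polyfit(np.log10(drainage_area_sqmi), np.log10(peak_flow_cfs), 1)
            return b0, b1

        def predict_peak_flow(b0, b1, drainage_area_sqmi):
            return 10.0 ** (b0 + b1 * np.log10(drainage_area_sqmi))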

  3. Estimation of sensible and latent heat flux from natural sparse vegetation surfaces using surface renewal

    NASA Astrophysics Data System (ADS)

    Zapata, N.; Martínez-Cob, A.

    2001-12-01

    This paper reports a study undertaken to evaluate the feasibility of the surface renewal method to accurately estimate long-term evaporation from the playa and margins of an endorheic salty lagoon (Gallocanta lagoon, Spain) under semiarid conditions. High-frequency temperature readings were taken for two time lags (r) and three measurement heights (z) in order to obtain surface renewal sensible heat flux (HSR) values. These values were compared against eddy covariance sensible heat flux (HEC) values for a calibration period (25-30 July 2000). Error analysis statistics (index of agreement, IA; root mean square error, RMSE; and systematic mean square error, MSEs) showed that the agreement between HSR and HEC improved as measurement height decreased and time lag increased. Calibration factors α were obtained for all analyzed cases. The best results were obtained for the z = 0.9 m (r = 0.75 s) case, for which α = 1.0 was observed. In this case, uncertainty was about 10% in terms of relative error (RE). Latent heat flux values were obtained by solving the energy balance equation for both the surface renewal (LESR) and the eddy covariance (LEEC) methods, using HSR and HEC, respectively, and measurements of net radiation and soil heat flux. For the calibration period, error analysis statistics for LESR were quite similar to those for HSR, although errors were mostly random. LESR uncertainty was less than 9%. Calibration factors were applied to a validation data subset (30 July-4 August 2000) for which meteorological conditions were somewhat different (higher temperatures and wind speed and lower solar and net radiation). Error analysis statistics for both HSR and LESR were quite good for all cases, showing the goodness of the calibration factors. Nevertheless, the results obtained for the z = 0.9 m (r = 0.75 s) case were still the best ones.

  4. The study about forming high-precision optical lens minimalized sinuous error structures for designed surface

    NASA Astrophysics Data System (ADS)

    Katahira, Yu; Fukuta, Masahiko; Katsuki, Masahide; Momochi, Takeshi; Yamamoto, Yoshihiro

    2016-09-01

    Recently, higher quality has been demanded of the aspherical lenses mounted in camera units. Optical lenses in high-volume production are generally made by a molding process using cemented carbide or Ni-P coated steel molds, selected according to the lens material (glass or plastic). Developments in mold production technologies now yield high-quality cut or ground mold surfaces, with form errors below 100 nm PV and surface roughness below 1 nm Ra. However, still higher quality is required, covering not only form error (PV) and surface roughness (Ra) but also other surface characteristics. For instance, middle-spatial-frequency undulations on the lens surface can distort the image. In this study, we focused on several types of sinuous structures, which can be classified as form errors with respect to the designed surface and which degrade optical system performance, and established mold production processes that minimize such undulations on the surface. The report describes an analysis process using the PSD to evaluate micro-undulations on the machined surface quantitatively. It also shows that a grinding process with circumferential velocity control is effective for fabricating large-aperture lenses and can minimize undulations appearing in the outer area of the machined surface, and describes an optical glass lens molding process using a high-precision press machine.
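
    The PSD analysis referred to above reduces to a one-dimensional periodogram of the measured height profile; a minimal Python sketch is given below. Windowing and normalization conventions differ between metrology packages, so the scaling here is illustrative rather than a specific instrument's definition.

        import numpy as np

        def profile_psd(heights, dx):
            # One-sided periodogram PSD of a surface height profile sampled every dx.
            heights = np.asarray(heights, float) - np.mean(heights)
            n = heights.size
            window = np.hanning(n)
            spectrum = np.fft.rfft(heights * window)
            psd = 2.0 * np.abs(spectrum) ** 2 * dx / (n * np.mean(window ** 2))
            freqs = np.fft.rfftfreq(n, d=dx)  # spatial frequency, cycles per unit length
            return freqs, psd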

  5. DNA assembly with error correction on a droplet digital microfluidics platform.

    PubMed

    Khilko, Yuliya; Weyman, Philip D; Glass, John I; Adams, Mark D; McNeil, Melanie A; Griffin, Peter B

    2018-06-01

    Custom synthesized DNA is in high demand for synthetic biology applications. However, current technologies to produce these sequences using assembly from DNA oligonucleotides are costly and labor-intensive. The automation and reduced sample volumes afforded by microfluidic technologies could significantly decrease materials and labor costs associated with DNA synthesis. The purpose of this study was to develop a gene assembly protocol utilizing a digital microfluidic device. Toward this goal, we adapted bench-scale oligonucleotide assembly methods followed by enzymatic error correction to the Mondrian™ digital microfluidic platform. We optimized Gibson assembly, polymerase chain reaction (PCR), and enzymatic error correction reactions in a single protocol to assemble 12 oligonucleotides into a 339-bp double-stranded DNA sequence encoding part of the human influenza virus hemagglutinin (HA) gene. The reactions were scaled down to 0.6-1.2 μL. Initial microfluidic assembly methods were successful and had an error frequency of approximately 4 errors/kb with errors originating from the original oligonucleotide synthesis. Relative to conventional benchtop procedures, PCR optimization required additional amounts of MgCl2, Phusion polymerase, and PEG 8000 to achieve amplification of the assembly and error correction products. After one round of error correction, error frequency was reduced to an average of 1.8 errors/kb. We demonstrated that DNA assembly from oligonucleotides and error correction could be completely automated on a digital microfluidic (DMF) platform. The results demonstrate that enzymatic reactions in droplets show a strong dependence on surface interactions, and successful on-chip implementation required supplementation with surfactants, molecular crowding agents, and an excess of enzyme. Enzymatic error correction of assembled fragments improved sequence fidelity by 2-fold, which was a significant improvement but somewhat lower than expected compared to bench-top assays, suggesting an additional capacity for optimization.

  6. Evolutionary Model and Oscillation Frequencies for α Ursae Majoris: A Comparison with Observations

    NASA Astrophysics Data System (ADS)

    Guenther, D. B.; Demarque, P.; Buzasi, D.; Catanzarite, J.; Laher, R.; Conrow, T.; Kreidl, T.

    2000-02-01

    Inspired by the observations of low-amplitude oscillations of α Ursae Majoris A by Buzasi et al. using the WIRE satellite, a grid of stellar evolutionary tracks has been constructed to derive physically consistent interior models for the nearby red giant. The pulsation properties of these models were then calculated and compared with the observations. It is found that, by adopting the correct metallicity and for a normal helium abundance, only models in the mass range of 4.0-4.5 Msolar fall within the observational error box for α UMa A. This mass range is compatible, within the uncertainties, with the mass derived from the astrometric mass function. Analysis of the pulsation spectra of the models indicates that the observed α UMa oscillations can be most simply interpreted as radial (i.e., l=0) p-mode oscillations of low radial order n. The lowest frequencies observed by Buzasi et al. are compatible, within the observational errors, with model frequencies of radial orders n=0, 1, and 2 for models in the mass range of 4.0-4.5 Msolar. The higher frequencies observed can also be tentatively interpreted as higher n-valued radial p-modes, if we allow that some n-values are not presently observed. The theoretical l=1, 2, and 3 modes in the observed frequency range are g-modes with a mixed mode character, that is, with p-mode-like characteristics near the surface and g-mode-like characteristics in the interior. The calculated radial p-mode frequencies are nearly equally spaced, separated by 2-3 μHz. The nonradial modes are very densely packed throughout the observed frequency range and, even if excited to significant amplitudes at the surface, are unlikely to be resolved by the present observations.

  7. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope frequency distributions and two slope frequency distributions from actual lunar data was examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.

  8. Automated Figuring and Polishing of Replication Mandrels for X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Krebs, Carolyn (Technical Monitor); Content, David; Fleetwood, Charles; Wright, Geraldine; Arsenovic, Petar; Collela, David; Kolos, Linette

    2003-01-01

    In support of the Constellation X mission, the Optics Branch at Goddard Space Flight Center is developing technology for precision figuring and polishing of mandrels used to produce replicated mirrors that will be used in X-Ray telescopes. Employing a specially built machine controlled in 2 axes by a computer, we are doing automated polishing/figuring of 15 cm long, 20 cm diameter cylindrical, conical and Wolter mandrels. A battery of tests allows us to fully characterize all important aspects of the mandrels, including surface figure and finish, mid-frequency errors, diameters and cone angle. Parts are currently being produced with surface roughnesses at the 0.5 nm RMS level, and half-power diameter slope error less than 2 arcseconds.

  9. Manufacturing of super-polished large aspheric/freeform optics

    NASA Astrophysics Data System (ADS)

    Kim, Dae Wook; Oh, Chang-jin; Lowman, Andrew; Smith, Greg A.; Aftab, Maham; Burge, James H.

    2016-07-01

    Several next-generation astronomical telescopes and large optical systems utilize aspheric/freeform optics to create segmented optical systems. Multiple mirrors can be combined to form a larger optical surface or used as a single surface to avoid obscurations. In this paper, we demonstrate a specific case, the Daniel K. Inouye Solar Telescope (DKIST). This optic, a 4.2 m diameter off-axis primary mirror on a thin ZERODUR substrate, was successfully completed in the Optical Engineering and Fabrication Facility (OEFF) at the University of Arizona in 2016. Because the telescope looks at the brightest object in the sky, our own Sun, the primary mirror surface quality must meet extreme specifications covering a wide range of spatial frequency errors. In manufacturing the DKIST mirror, metrology systems were studied, developed, and applied to measure low-to-mid-to-high spatial frequency surface shape information on the 4.2 m super-polished optical surface. Measurements from these systems are converted to Power Spectral Density (PSD) plots and combined in the spatial frequency domain. The results cover five orders of magnitude in spatial frequency and meet or exceed the specifications for this large aspheric mirror. Precision manufacturing of the super-polished DKIST mirror enables a new level of solar science.

  10. Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions

    NASA Astrophysics Data System (ADS)

    van Hooidonk, R.; Huber, M.

    2012-03-01

    Future widespread coral bleaching and subsequent mortality has been projected using sea surface temperature (SST) data derived from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. Such weaknesses most likely reduce the accuracy of predicting coral bleaching, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases in predicting coral bleaching, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from 24 GCMs 20th century simulations included in the Intergovernmental Panel on Climate Change (IPCC) 4th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate accuracy using an objective measure of forecast quality, the Peirce skill score (PSS). Major findings are that: (1) predictions are most sensitive to the seasonal cycle and inter-annual variability in the ENSO 24-60 months frequency band and (2) because models tend to understate the seasonal cycle at reef locations, they systematically underestimate future bleaching. The methodology we describe can be used to improve the accuracy of bleaching predictions by characterizing the errors and uncertainties involved in the predictions.

  11. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    PubMed

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
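
    The frequency-difference idea can be sketched compactly: for each receiver, the product of the complex field at two signal frequencies carries phase that evolves at the much lower difference frequency, and a conventional Bartlett processor is then evaluated at that difference frequency. The snippet below shows only the formation of that product; the array layout and names are assumptions, and the full formulation is in the cited Abadi, Song, and Dowling reference.

        import numpy as np

        def frequency_difference_product(receiver_spectra, idx_f_hi, idx_f_lo):
            # receiver_spectra: complex array, shape (num_receivers, num_frequencies).
            # P(f_hi) * conj(P(f_lo)) has phase governed by f_hi - f_lo.
            return receiver_spectra[:, idx_f_hi] * np.conj(receiver_spectra[:, idx_f_lo])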

  12. Note: Focus error detection device for thermal expansion-recovery microscopy (ThERM).

    PubMed

    Domené, E A; Martínez, O E

    2013-01-01

    An innovative focus error detection method is presented that is only sensitive to surface curvature variations, canceling both thermoreflectance and photodeflection effects. The detection scheme consists of an astigmatic probe laser and a four-quadrant detector. Nonlinear curve fitting of the defocusing signal allows the retrieval of a cutoff frequency, which only depends on the thermal diffusivity of the sample and the pump beam size. Therefore, a straightforward retrieval of the thermal diffusivity of the sample is possible with microscopic lateral resolution and high axial resolution (~100 pm).

  13. High-frequency fluctuations of surface temperatures in an urban environment

    NASA Astrophysics Data System (ADS)

    Christen, Andreas; Meier, Fred; Scherer, Dieter

    2012-04-01

    This study presents an attempt to resolve fluctuations in surface temperatures at scales of a few seconds to several minutes using time-sequential thermography (TST) from a ground-based platform. A scheme is presented to decompose a TST dataset into fluctuating, high-frequency, and long-term mean parts. To demonstrate the scheme's application, a set of four TST runs (day/night, leaves-on/leaves-off) recorded from a 125-m-high platform above a complex urban environment in Berlin, Germany is used. Fluctuations in surface temperatures of different urban facets are measured and related to surface properties (material and form) and possible error sources. A number of relationships were found: (1) Surfaces with surface temperatures that were significantly different from air temperature experienced the highest fluctuations. (2) With increasing surface temperature above (below) air temperature, surface temperature fluctuations experienced a stronger negative (positive) skewness. (3) Surface materials with lower thermal admittance (lawns, leaves) showed higher fluctuations than surfaces with high thermal admittance (walls, roads). (4) Surface temperatures of emerged leaves fluctuate more compared to trees in a leaves-off situation. (5) In many cases, observed fluctuations were coherent across several neighboring pixels. The evidence from (1) to (5) suggests that atmospheric turbulence is a significant contributor to fluctuations. The study underlines the potential of using high-frequency thermal remote sensing in energy balance and turbulence studies at complex land-atmosphere interfaces.

  14. Differential Laser Doppler based Non-Contact Sensor for Dimensional Inspection with Error Propagation Evaluation

    PubMed Central

    Mekid, Samir; Vacharanukul, Ketsaya

    2006-01-01

    To achieve dynamic error compensation in CNC machine tools, a non-contact laser probe capable of dimensional measurement of a workpiece while it is being machined has been developed and is presented in this paper. The measurements are automatically fed back to the machine controller for intelligent error compensation. Based on a well resolved laser Doppler technique and real time data acquisition, the probe delivers a very promising dimensional accuracy of a few microns over a range of 100 mm. The developed optical measuring apparatus employs a differential laser Doppler arrangement allowing acquisition of information from the workpiece surface. In addition, the measurements are traceable to standards of frequency, allowing higher precision.

  15. Two dimensional wavefront retrieval using lateral shearing interferometry

    NASA Astrophysics Data System (ADS)

    Mancilla-Escobar, B.; Malacara-Hernández, Z.; Malacara-Hernández, D.

    2018-06-01

    A new zonal two-dimensional method for wavefront retrieval from a surface under test using lateral shearing interferometry is presented. A modified Saunders method and phase shifting techniques are combined to generate a method for wavefront reconstruction. The result is a wavefront with an error below 0.7 λ and without any global high frequency filtering. A zonal analysis over square cells along the surfaces is made, obtaining a polynomial expression for the wavefront deformations over each cell. The main advantage of this method over previously published methods is that a global filtering of high spatial frequencies is not present. Thus, a global smoothing of the wavefront deformations is avoided, allowing the detection of deformations with relatively small extensions, that is, with high spatial frequencies. Additionally, local curvature and low order aberration coefficients are obtained in each cell.

  16. SMOS: a satellite mission to measure ocean surface salinity

    NASA Astrophysics Data System (ADS)

    Font, Jordi; Kerr, Yann H.; Srokosz, Meric A.; Etcheto, Jacqueline; Lagerloef, Gary S.; Camps, Adriano; Waldteufel, Philippe

    2001-01-01

    ESA's SMOS (Soil Moisture and Ocean Salinity) Earth Explorer Opportunity Mission will be launched by 2005. Its baseline payload is a microwave L-band (21 cm, 1.4 GHz) 2D interferometric radiometer, Y shaped, with three arms 4.5 m long. This frequency allows the measurement of brightness temperature (Tb) under the best conditions to retrieve soil moisture and sea surface salinity (SSS). Unlike other oceanographic variables, until now it has not been possible to measure salinity from space. However, large ocean areas lack significant salinity measurements. The 2D interferometer will measure Tb at large and different incidence angles, for two polarizations. It is possible to obtain SSS from L-band passive microwave measurements if the other factors influencing Tb (SST, surface roughness, foam, sun glint, rain, ionospheric effects and galactic/cosmic background radiation) can be accounted for. Since the radiometric sensitivity is low, SSS cannot be recovered to the required accuracy from a single measurement, as the error is about 1-2 psu. If the errors contributing to the uncertainty in Tb are random, averaging the independent data and views along the track over a 200 km square allows the error to be reduced to 0.1-0.2 psu, assuming all ancillary errors are budgeted.
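
    The error-reduction argument in the last sentence is the familiar 1/sqrt(N) scaling of independent random errors; the two-line check below (with illustrative numbers) shows that averaging on the order of a hundred independent looks brings a 1-2 psu single-measurement error down to the quoted 0.1-0.2 psu.

        import math

        def averaged_error(single_measurement_error_psu, n_independent_samples):
            # Independent, zero-mean errors average down as 1/sqrt(N).
            return single_measurement_error_psu / math.sqrt(n_independent_samples)

        # e.g. averaged_error(1.5, 100) -> 0.15 psu, within the 0.1-0.2 psu target above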

  17. Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System

    NASA Technical Reports Server (NTRS)

    Pfenninger, W. Matthew; Papen, George C.

    1992-01-01

    Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of temperature and wind measurements can be performed. These error bounds determine the confidence in the calculated temperatures and wind velocities.

  18. Computer-Controlled Cylindrical Polishing Process for Large X-Ray Mirror Mandrels

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    We are developing high-energy grazing incidence shell optics for hard-x-ray telescopes. The resolution of a mirror shell depends on the quality of the cylindrical mandrel from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation software is developed to model the residual surface figure errors of a mandrel due to the polishing process parameters and the tools used, as well as to compute the optical performance of the optics. The study carried out using the developed software was focused on establishing a relationship between the polishing process parameters and the mid-spatial-frequency error generation. The process parameters modeled are the speeds of the lap and the mandrel, the tool's influence function, the contour path (dwell) of the tools, their shape, and the distribution of the tools on the polishing lap. Using the inputs from the mathematical model, a mandrel, having conical approximated Wolter-1 geometry, has been polished on a newly developed computer-controlled cylindrical polishing machine. The preliminary results of a series of polishing experiments demonstrate a qualitative agreement with the developed model. We report our first experimental results and discuss plans for further improvements in the polishing process. The ability to simulate the polishing process is critical to optimizing the process, improving the mandrel quality, and significantly reducing the cost of mandrel production.

  19. Analysis of error type and frequency in apraxia of speech among Portuguese speakers.

    PubMed

    Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo

    2010-01-01

    Most studies characterizing errors in the speech of patients with apraxia involve the English language. The aim was to analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. 20 adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and their frequencies compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis, in descending order of frequency. Omission errors were among the most common, whereas addition errors were infrequent. These findings differed from those reported for English-speaking patients, probably owing to differences in the methodologies used for classifying error types; the inclusion of speakers with apraxia secondary to aphasia; and the structural differences between Portuguese and English in terms of syllable onset complexity and its effect on motor control. The frequency of omission and addition errors observed differed from the frequency reported for speakers of English.

  20. An Assessment of State-of-the-Art Mean Sea Surface and Geoid Models of the Arctic Ocean: Implications for Sea Ice Freeboard Retrieval

    NASA Astrophysics Data System (ADS)

    Skourup, Henriette; Farrell, Sinéad Louise; Hendricks, Stefan; Ricker, Robert; Armitage, Thomas W. K.; Ridout, Andy; Andersen, Ole Baltazar; Haas, Christian; Baker, Steven

    2017-11-01

    State-of-the-art Arctic Ocean mean sea surface (MSS) models and global geoid models (GGMs) are used to support sea ice freeboard estimation from satellite altimeters, as well as in oceanographic studies such as mapping sea level anomalies and mean dynamic ocean topography. However, errors in a given model in the high-frequency domain, primarily due to unresolved gravity features, can result in errors in the estimated along-track freeboard. These errors are exacerbated in areas with a sparse lead distribution in consolidated ice pack conditions. Additionally model errors can impact ocean geostrophic currents, derived from satellite altimeter data, while remaining biases in these models may impact longer-term, multisensor oceanographic time series of sea level change in the Arctic. This study focuses on an assessment of five state-of-the-art Arctic MSS models (UCL13/04 and DTU15/13/10) and a commonly used GGM (EGM2008). We describe errors due to unresolved gravity features, intersatellite biases, and remaining satellite orbit errors, and their impact on the derivation of sea ice freeboard. The latest MSS models, incorporating CryoSat-2 sea surface height measurements, show improved definition of gravity features, such as the Gakkel Ridge. The standard deviation between models ranges 0.03-0.25 m. The impact of remaining MSS/GGM errors on freeboard retrieval can reach several decimeters in parts of the Arctic. While the maximum observed freeboard difference found in the central Arctic was 0.59 m (UCL13 MSS minus EGM2008 GGM), the standard deviation in freeboard differences is 0.03-0.06 m.

  1. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important to improve the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase error in dual frequency double differenced carrier phase measurement according to the error difference between two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and has a good performance on multiple large errors compared with previous research. The core of the proposed algorithm is removing the geographical distance from the dual frequency carrier phase measurement, then the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the different frequency carrier phase measurements contain the same geometrical distance. Then, we propose the DDGF detection to detect the large carrier phase error difference between two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a manmade multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The result shows that the proposed DDGF detection is able to detect large error in dual frequency carrier phase measurement by checking the error difference between two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
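
    The core observable can be written down directly: expressing both double-differenced carrier phases in meters and differencing them cancels the common geometric range, leaving ambiguity terms plus frequency-dependent errors, which is what makes a large error on either frequency detectable. The Python sketch below is a schematic of that idea with an assumed fixed threshold, not the detection statistic derived in the paper.

        def ddgf_residual(dd_phase_l1_m, dd_phase_l2_m, reference_m=0.0):
            # Double-differenced geometry-free combination (both phases in meters):
            # the geometric range cancels; subtracting a reference value (e.g. the one
            # implied by the resolved integer ambiguities) exposes the error difference.
            return (dd_phase_l1_m - dd_phase_l2_m) - reference_m

        def flag_large_error(dd_phase_l1_m, dd_phase_l2_m, reference_m, threshold_m):
            return abs(ddgf_residual(dd_phase_l1_m, dd_phase_l2_m, reference_m)) > threshold_m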

  2. Novel deformable mirror design for possible wavefront correction in CO2 laser fusion system

    NASA Astrophysics Data System (ADS)

    Gunn, S. V.; Heinz, T. A.; Henderson, W. D.; Massie, N. A.; Viswanathan, V. K.

    1980-11-01

    Analysis at Los Alamos and elsewhere has resulted in the conclusion that deformable mirrors can substantially improve the optical performance of laser fusion systems, as the errors are mostly static or quasi-static with mainly low spatial frequencies across the aperture resulting in low order Seidel aberrations in the beam. A novel deformable mirror assembly (Fig. 1) has been fabricated with 19 actuators capable of surface deflection of ±20 microns. The mirror surface deflections are produced by a unique differential ball screw that acts as both a force and position actuator. The screw is driven by a stepper motor giving a surface positioning resolution of 0.025 micron. No holding voltage potential is required, and a piezoceramic element in series with each ball screw provides a ±1 micron amplitude high-frequency surface dither to aid the correction process. Mirror performance in terms of individual actuator influence function, cross-coupling, figure attainment, long-term surface stability as well as optical performance characteristics will be discussed.

  3. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    Low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee geometric quality, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. First, we introduce the overall flow of on-orbit low frequency error analysis and calibration, which includes detection of star sensor optical axis angle variation, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze how the low frequency error varies. Third, we use relative calibration and information fusion among star sensors to unify the attitude datum and obtain high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a representative satellite are used. Test results demonstrate that the calibration model describes the behavior of the low frequency error well. The uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is clearly improved after the step-wise calibration.

  4. Improving patient safety through quality assurance.

    PubMed

    Raab, Stephen S

    2006-05-01

    Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.

  5. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.

  6. Comparison of different spatial transformations applied to EEG data: A case study of error processing.

    PubMed

    Cohen, Michael X

    2015-09-01

    The purpose of this paper is to compare the effects of different spatial transformations applied to the same scalp-recorded EEG data. The spatial transformations applied are two referencing schemes (average and linked earlobes), the surface Laplacian, and beamforming (a distributed source localization procedure). EEG data were collected during a speeded reaction time task that provided a comparison of activity between error vs. correct responses. Analyses focused on time-frequency power, frequency band-specific inter-electrode connectivity, and within-subject cross-trial correlations between EEG activity and reaction time. Time-frequency power analyses showed similar patterns of midfrontal delta-theta power for errors compared to correct responses across all spatial transformations. Beamforming additionally revealed error-related anterior and lateral prefrontal beta-band activity. Within-subject brain-behavior correlations showed similar patterns of results across the spatial transformations, with the correlations being the weakest after beamforming. The most striking difference among the spatial transformations was seen in connectivity analyses: linked earlobe reference produced weak inter-site connectivity that was attributable to volume conduction (zero phase lag), while the average reference and Laplacian produced more interpretable connectivity results. Beamforming did not reveal any significant condition modulations of connectivity. Overall, these analyses show that some findings are robust to spatial transformations, while other findings, particularly those involving cross-trial analyses or connectivity, are more sensitive and may depend on the use of appropriate spatial transformations. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Modeling and characterization of multipath in global navigation satellite system ranging signals

    NASA Astrophysics Data System (ADS)

    Weiss, Jan Peter

    The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere on or near the earth, in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
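
    The link between platform geometry, dynamics, and multipath error frequency can be illustrated with a single specular reflector a height h below the antenna (a simplification, not the dissertation's ray-traced model): the excess path is 2h·sin(elevation), and the carrier-phase multipath oscillates at a rate set by how fast that path changes. All numbers below are illustrative.

```python
import numpy as np

c = 299_792_458.0                 # m/s
f_L1 = 1_575.42e6                 # GPS L1 carrier frequency, Hz
wavelength = c / f_L1

h = 2.0                           # antenna height above the reflecting surface, m
elev = np.radians(30.0)           # satellite elevation angle
elev_rate = np.radians(0.5)       # elevation rate, rad/s (fast, e.g. a maneuvering aircraft)

excess_path = 2 * h * np.sin(elev)                             # geometric delay, m
fading_freq = (2 * h * np.cos(elev) / wavelength) * elev_rate  # Hz, multipath oscillation rate

print(f"excess path: {excess_path:.2f} m")
print(f"multipath fading frequency: {fading_freq:.2f} Hz")
```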

  8. Conversion of radius of curvature to power (and vice versa)

    NASA Astrophysics Data System (ADS)

    Wickenhagen, Sven; Endo, Kazumasa; Fuchs, Ulrike; Youngworth, Richard N.; Kiontke, Sven R.

    2015-09-01

    Manufacturing optical components relies on good measurements and specifications. One of the most precise measurements routinely required is the form accuracy. In practice, form deviation from the ideal surface is effectively low frequency errors, where the form error most often accounts for no more than a few undulations across a surface. These types of errors are measured in a variety of ways including interferometry and tactile methods like profilometry, with the latter often being employed for aspheres and general surface shapes such as freeforms. This paper provides a basis for a correct description of power and radius of curvature tolerances, including best practices and calculating the power value with respect to the radius deviation (and vice versa) of the surface form. A consistent definition of the sagitta is presented, along with different cases in manufacturing that are of interest to fabricators and designers. The results make clear how the definitions and results should be documented, for all measurement setups. Relationships between power and radius of curvature are shown that allow specifying the preferred metric based on final accuracy and measurement method. Results shown include all necessary equations for conversion to give optical designers and manufacturers a consistent and robust basis for decision-making. The paper also gives guidance on preferred methods for different scenarios for surface types, accuracy required, and metrology methods employed.
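
    One common convention, assumed here for illustration rather than taken from the paper, expresses power as the change in sagitta at the edge of the clear aperture produced by a radius error; the sketch below converts a radius deviation to that sag change and inverts the first-order relation to go back.

```python
import numpy as np

def sag(R, r):
    """Sagitta of a sphere of radius R at semi-aperture r."""
    return R - np.sqrt(R**2 - r**2)

def power_from_radius_error(R, dR, semi_aperture):
    """Edge sag change (same length units as R) caused by a radius error dR."""
    return sag(R + dR, semi_aperture) - sag(R, semi_aperture)

def radius_error_from_power(R, power, semi_aperture):
    """First-order inversion: dS ~ -r^2 dR / (2 R^2)  =>  dR ~ -2 R^2 dS / r^2."""
    return -2.0 * R**2 * power / semi_aperture**2

R = 500.0            # nominal radius of curvature, mm
r = 25.0             # semi-aperture, mm
dR = 0.2             # radius error, mm

p = power_from_radius_error(R, dR, r)
print(f"sag change (power): {p * 1e6:.1f} nm")
print(f"recovered radius error: {radius_error_from_power(R, p, r):.3f} mm")
```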

  9. On Combining Thermal-Infrared and Radio-Occultation Data of Saturn's Atmosphere

    NASA Technical Reports Server (NTRS)

    Flasar, F. M.; Schinder, P. J.; Conrath, B. J.

    2008-01-01

    Radio-occultation and thermal-infrared measurements are complementary investigations for sounding planetary atmospheres. The vertical resolution afforded by radio occultations is typically approximately 1 km or better, whereas that from infrared sounding is often comparable to a scale height. On the other hand, an instrument like CIRS can easily generate global maps of temperature and composition, whereas occultation soundings are usually distributed more sparsely. The starting point for radio-occultation inversions is determining the residual Doppler-shifted frequency, that is the shift in frequency from what it would be in the absence of the atmosphere. Hence the positions and relative velocities of the spacecraft, target atmosphere, and DSN receiving station must be known to high accuracy. It is not surprising that the inversions can be susceptible to sources of systematic errors. Stratospheric temperature profiles on Titan retrieved from Cassini radio occultations were found to be very susceptible to errors in the reconstructed spacecraft velocities (approximately equal to 1 mm/s). Here the ability to adjust the spacecraft ephemeris so that the profiles matched those retrieved from CIRS limb sounding proved to be critical in mitigating this error. A similar procedure can be used for Saturn, although the sensitivity of its retrieved profiles to this type of error seems to be smaller. One issue that has appeared in inverting the Cassini occultations by Saturn is the uncertainty in its equatorial bulge, that is, the shape in its iso-density surfaces at low latitudes. Typically one approximates that surface as a geopotential surface by assuming a barotropic atmosphere. However, the recent controversy in the equatorial winds, i.e., whether they changed between the Voyager (1981) era and later (after 1996) epochs of Cassini and some Hubble observations, has made it difficult to know the exact shape of the surface, and it leads to uncertainties in the retrieved temperature profiles of one to a few kelvins. This propagates into errors in the retrieved helium abundance, which makes use of thermal-infrared spectra and synthetic spectra computed with retrieved radio-occultation temperature profiles. The highest abundances are retrieved with the faster Voyager-era winds, but even these abundances are somewhat smaller than those retrieved from the thermal-infrared data alone (albeit with larger formal errors). The helium abundance determination is most sensitive to temperatures in the upper troposphere. Further progress may include matching the radio-occultation profiles with those from CIRS limb sounding in the upper stratosphere.

  10. The potential for geostationary remote sensing of NO2 to improve weather prediction

    NASA Astrophysics Data System (ADS)

    Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.

    2017-12-01

    Observations of surface winds remain sparse, making it challenging to simulate and predict the weather in circumstances of light winds that are most important for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the application of data assimilation of NO2 columns as will be observed from geostationary orbit to improve predictions and retrospective analysis of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run (NR)" regarded as the true atmosphere. Then NO2 observations are assimilated using EAKF methods into a "control run (CR)" which differs from the NR in the wind field. Wind errors are generated by (1) introducing errors in the initial conditions, (2) creating a model error by using two different formulations for the planetary boundary layer, and (3) combining both of these effects. Assimilation of NO2 column observations succeeds in reducing wind errors, indicating the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We find that due to the temporal heterogeneity of wind errors, the success of this application favors chemical observations of high frequency, such as those from a geostationary platform. We also show the potential to improve the soil moisture field by assimilating NO2 columns.

  11. Flight Investigation of Prescribed Simultaneous Independent Surface Excitations for Real-Time Parameter Identification

    NASA Technical Reports Server (NTRS)

    Moes, Timothy R.; Smith, Mark S.; Morelli, Eugene A.

    2003-01-01

    Near real-time stability and control derivative extraction is required to support flight demonstration of Intelligent Flight Control System (IFCS) concepts being developed by NASA, academia, and industry. Traditionally, flight maneuvers would be designed and flown to obtain stability and control derivative estimates using a postflight analysis technique. The goal of the IFCS concept is to be able to modify the control laws in real time for an aircraft that has been damaged in flight. In some IFCS implementations, real-time parameter identification (PID) of the stability and control derivatives of the damaged aircraft is necessary for successfully reconfiguring the control system. This report investigates the usefulness of Prescribed Simultaneous Independent Surface Excitations (PreSISE) to provide data for rapidly obtaining estimates of the stability and control derivatives. Flight test data were analyzed using both equation-error and output-error PID techniques. The equation-error PID technique is known as Fourier Transform Regression (FTR) and is a frequency-domain real-time implementation. Selected results were compared with a time-domain output-error technique. The real-time equation-error technique combined with the PreSISE maneuvers provided excellent derivative estimation in the longitudinal axis. However, the PreSISE maneuvers as presently defined were not adequate for accurate estimation of the lateral-directional derivatives.
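
    A stripped-down sketch of equation-error identification in the frequency domain, in the spirit of Fourier Transform Regression: a simulated first-order model (a stand-in for a stability-derivative equation, not flight data) is transformed at a set of analysis frequencies and its parameters are recovered by linear least squares. Boundary terms of the finite Fourier transform are neglected, which assumes the record starts and ends near trim; all signals and values are illustrative.

```python
import numpy as np

# --- simulate a simple first-order "aircraft" response to a control input ---
dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
a_true, b_true = -1.2, 0.8
u = np.sin(0.5 * 2 * np.pi * t) + 0.5 * np.sin(1.3 * 2 * np.pi * t)   # excitation
x = np.zeros_like(t)
for k in range(1, t.size):                      # forward-Euler propagation of x_dot = a*x + b*u
    x[k] = x[k - 1] + dt * (a_true * x[k - 1] + b_true * u[k - 1])

# --- finite Fourier transforms at selected analysis frequencies ---
freqs = np.arange(0.1, 2.0, 0.1)                # Hz, a typical rigid-body band
omega = 2 * np.pi * freqs
E = np.exp(-1j * np.outer(omega, t))            # Fourier kernels
X = E @ x * dt
U = E @ u * dt

# --- equation error: j*omega*X = a*X + b*U, solved as a real least-squares problem ---
A = np.column_stack([X, U])
y = 1j * omega * X
A_ri = np.vstack([A.real, A.imag])
y_ri = np.concatenate([y.real, y.imag])
a_hat, b_hat = np.linalg.lstsq(A_ri, y_ri, rcond=None)[0]
print(f"a: true {a_true:+.2f}, estimated {a_hat:+.2f}")
print(f"b: true {b_true:+.2f}, estimated {b_hat:+.2f}")
```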

  12. Optimising 4-D surface change detection: an approach for capturing rockfall magnitude-frequency

    NASA Astrophysics Data System (ADS)

    Williams, Jack G.; Rosser, Nick J.; Hardy, Richard J.; Brain, Matthew J.; Afana, Ashraf A.

    2018-02-01

    We present a monitoring technique tailored to analysing change from near-continuously collected, high-resolution 3-D data. Our aim is to fully characterise geomorphological change typified by an event magnitude-frequency relationship that adheres to an inverse power law or similar. While recent advances in monitoring have enabled changes in volume across more than 7 orders of magnitude to be captured, event frequency is commonly assumed to be interchangeable with the time-averaged event numbers between successive surveys. Where events coincide, or coalesce, or where the mechanisms driving change are not spatially independent, apparent event frequency must be partially determined by survey interval. The data reported have been obtained from a permanently installed terrestrial laser scanner, which permits an increased frequency of surveys. Surveying from a single position raises challenges, given the single viewpoint onto a complex surface and the need for computational efficiency associated with handling a large time series of 3-D data. A workflow is presented that optimises the detection of change by filtering and aligning scans to improve repeatability. An adaptation of the M3C2 algorithm is used to detect 3-D change to overcome data inconsistencies between scans. Individual rockfall geometries are then extracted and the associated volumetric errors modelled. The utility of this approach is demonstrated using a dataset of ~9 × 10³ surveys acquired at ~1 h intervals over 10 months. The magnitude-frequency distribution of rockfall volumes generated is shown to be sensitive to monitoring frequency. Using a 1 h interval between surveys, rather than 30 days, the volume contribution from small (< 0.1 m³) rockfalls increases from 67 to 98 % of the total, and the number of individual rockfalls observed increases by over 3 orders of magnitude. High-frequency monitoring therefore holds considerable implications for magnitude-frequency derivatives, such as hazard return intervals and erosion rates. As such, while high-frequency monitoring has potential to describe short-term controls on geomorphological change and more realistic magnitude-frequency relationships, the assessment of longer-term erosion rates may be more suited to less-frequent data collection with lower cumulative errors.
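
    A toy sketch of the magnitude-frequency fitting involved, using synthetic Pareto-distributed volumes rather than the monitoring dataset: the exponent of an inverse power-law exceedance relation N(>V) ∝ V^(−β) is estimated by maximum likelihood, the quantity whose apparent value shifts with survey interval.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true, v_min = 1.1, 1e-3                   # power-law exponent and minimum volume, m^3
# Pareto-distributed event volumes: V = v_min * U**(-1/beta) with U uniform on (0, 1]
volumes = v_min * (1.0 - rng.random(5000)) ** (-1.0 / beta_true)

# Maximum-likelihood estimate of the exponent of N(>V) ~ V**(-beta) above v_min
beta_hat = volumes.size / np.sum(np.log(volumes / v_min))
print(f"true exponent {beta_true:.2f}, estimated {beta_hat:.2f}")
print(f"events larger than 0.1 m^3: {np.count_nonzero(volumes > 0.1)} of {volumes.size}")
```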

  13. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  14. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  15. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  16. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  17. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  18. Variations in Static Force Control and Motor Unit Behavior with Error Amplification Feedback in the Elderly.

    PubMed

    Chen, Yi-Ching; Lin, Linda L; Lin, Yen-Ting; Hu, Chia-Ling; Hwang, Ing-Shiou

    2017-01-01

    Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and 14 older adults performed low-level static isometric force-tracking with visual guidance of typical visual feedback and EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback than with typical visual feedback, but not young adults. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations ⟨ΔFc²⟩, short-term effective diffusion coefficients (Ds), and short-term exponent scaling only for the older adults. Moreover, in older adults, EA added to the size of discharge variability of motor units and discharge regularity of cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13-35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced the motor unit discharge variability and undermined the force steadiness in the older group, concerning declines in physiological complexity in the neurobehavioral system and the common drive to the motoneuronal pool against force destabilization.

  19. Variations in Static Force Control and Motor Unit Behavior with Error Amplification Feedback in the Elderly

    PubMed Central

    Chen, Yi-Ching; Lin, Linda L.; Lin, Yen-Ting; Hu, Chia-Ling; Hwang, Ing-Shiou

    2017-01-01

    Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and 14 older adults performed low-level static isometric force-tracking with visual guidance of typical visual feedback and EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback than with typical visual feedback, but not young adults. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations <ΔFc2>, short-term effective diffusion coefficients (Ds), and short-term exponent scaling only for the older adults. Moreover, in older adults, EA added to the size of discharge variability of motor units and discharge regularity of cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13–35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced the motor unit discharge variability and undermined the force steadiness in the older group, concerning declines in physiological complexity in the neurobehavioral system and the common drive to the motoneuronal pool against force destabilization. PMID:29167637

  20. Reverberant acoustic energy in auditoria that comprise systems of coupled rooms

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.

    2003-11-01

    A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical acoustics predictions based on beam-axis tracing with scale model measurements indicate errors resulting from tail-correction assuming constant quadratic growth of reflection density. Using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is theoretically and experimentally shown to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High frequency predictions of statistical acoustics and geometrical acoustics models and predictions of coupling apertures all agree with measurements.

  1. Multi-spacecraft coherent Doppler and ranging for interplanetary-navigation

    NASA Technical Reports Server (NTRS)

    Pollmeier, Vincent M.

    1995-01-01

    Future plans for planetary exploration currently include using multiple spacecraft to simultaneously explore one planet. This never-before-encountered situation places new demands on tracking systems used to support navigation. One possible solution to the problem of heavy ground resource conflicts is the use of multispacecraft coherent radio metric data, also known as bent-pipe data. Analysis of the information content of these data types shows that the information content of multi-spacecraft Doppler is dependent only on the frequency of the final downlink leg and is independent of the frequencies used on other legs. Numerical analysis shows that coherent bent-pipe data can provide significantly better capability to estimate the location of a lander on the surface of Mars than can direct lander-to-Earth radio metric data. However, this is complicated by difficulties in separating the effect of a lander position error from that of an orbiter position error for single passes of data.

  2. Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)

    NASA Technical Reports Server (NTRS)

    Basinger, Scott A.

    2012-01-01

    This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.

  3. Evaluation of different models to estimate the global solar radiation on inclined surface

    NASA Astrophysics Data System (ADS)

    Demain, C.; Journée, M.; Bertrand, C.

    2012-04-01

    Global and diffuse solar radiation intensities are, in general, measured on horizontal surfaces, whereas stationary solar conversion systems (both flat plate solar collector and solar photovoltaic) are mounted on inclined surfaces to maximize the amount of solar radiation incident on the collector surface. Consequently, the solar radiation incident on a tilted surface has to be determined by converting the solar radiation measured on a horizontal surface to the tilted surface of interest. This study evaluates the performance of 14 models transposing 10-min, hourly and daily diffuse solar irradiation from horizontal to inclined surfaces. Solar radiation data from 8 months (April to November 2011), which include diverse atmospheric conditions and solar altitudes, measured on the roof of the radiation tower of the Royal Meteorological Institute of Belgium in Uccle (Longitude 4.35°, Latitude 50.79°), were used for validation purposes. The individual model performance is assessed by an inter-comparison between the calculated and measured solar global radiation on the south-oriented surface tilted at 50.79° using statistical methods. The relative performance of the different models under different sky conditions has been studied. Comparison of the statistical errors between the different radiation models as a function of the clearness index shows that some models perform better under one type of sky condition. Combining different models acting under different sky conditions can reduce the statistical error between the measured and estimated global solar radiation. As the models described in this paper were developed for hourly data inputs, the statistical error indexes are lowest for hourly data and increase for 10-min and daily data.
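
    For concreteness, the sketch below implements the simplest class of transposition model typically included in such comparisons (the isotropic-sky model) together with the usual error metrics; the model choice and all input values are illustrative assumptions, not the 14 models or the Uccle data of the study.

```python
import numpy as np

def isotropic_tilted_irradiance(ghi, dhi, cos_zenith, cos_incidence,
                                tilt_deg, albedo=0.2):
    """Global irradiance on a tilted plane from horizontal measurements (W/m^2)."""
    beam_horizontal = ghi - dhi
    rb = np.clip(cos_incidence, 0, None) / np.clip(cos_zenith, 0.087, None)  # avoid low-sun blow-up
    tilt = np.radians(tilt_deg)
    sky_view = (1 + np.cos(tilt)) / 2
    ground_view = (1 - np.cos(tilt)) / 2
    return beam_horizontal * rb + dhi * sky_view + albedo * ghi * ground_view

def error_metrics(modelled, measured):
    diff = modelled - measured
    return np.sqrt(np.mean(diff**2)), np.mean(diff)      # RMSE, mean bias error

# Hypothetical hourly inputs for a single hour (placeholder values):
ghi, dhi = np.array([450.0]), np.array([120.0])
cos_z, cos_i = np.array([0.55]), np.array([0.92])
g_tilt = isotropic_tilted_irradiance(ghi, dhi, cos_z, cos_i, tilt_deg=50.79)
rmse, mbe = error_metrics(g_tilt, measured=np.array([640.0]))
print(f"modelled tilted irradiance: {g_tilt[0]:.0f} W/m^2, RMSE {rmse:.1f}, MBE {mbe:.1f}")
```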

  4. Correlation methods in optical metrology with state-of-the-art x-ray mirrors

    NASA Astrophysics Data System (ADS)

    Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.

    2018-01-01

    The development of fully coherent free electron lasers and diffraction limited storage ring x-ray sources has brought into focus the need for higher performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements characterized by a residual slope error of <100 nrad (root-mean-square) and height error of <1-2 nm (peak-to-valley). These are for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into the modern deterministic surface figuring processes. The major problems of current surface metrology relate to the inherent instrumental temporal drifts, systematic errors, and/or an unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss the experimental methods and approaches, based on correlation analysis, for the acquisition and processing of metrology data developed at the ALS X-Ray Optical Laboratory (XROL). Using an example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of the instrumental low frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique increases the measurement accuracy, as well as the capacity of ex situ metrology, by a factor of about four. The developed method is general and applicable to a broad spectrum of high accuracy measurements.

  5. One-dimensional soil temperature assimilation experiment based on unscented particle filter and Common Land Model

    NASA Astrophysics Data System (ADS)

    Fu, Xiao Lei; Jin, Bao Ming; Jiang, Xiao Lei; Chen, Cheng

    2018-06-01

    Data assimilation is an efficient way to improve simulation/prediction accuracy in many fields of the geosciences, especially in meteorological and hydrological applications. This study takes the unscented particle filter (UPF) as an example and tests its performance with two different probability distributions for the observation error (Gaussian and uniform) and two different assimilation frequency experiments: (1) assimilating hourly in situ soil surface temperature, and (2) assimilating the original Moderate Resolution Imaging Spectroradiometer (MODIS) Land Surface Temperature (LST) once per day. The numerical experiment results show that the filter performs better when the assimilation frequency is increased. In addition, the UPF is efficient for improving the simulation/prediction accuracy of soil variables (e.g., soil temperature), though it is not sensitive to the probability distribution assumed for the observation error in soil temperature assimilation.

  6. A Feasibility Study for Simultaneous Measurements of Water Vapor and Precipitation Parameters using a Three-frequency Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Liao, L.; Tian, L.

    2005-01-01

    The radar return powers from a three-frequency radar, with center frequency at 22.235 GHz and upper and lower frequencies chosen with equal water vapor absorption coefficients, can be used to estimate water vapor density and parameters of the precipitation. A linear combination of differential measurements between the center and lower frequencies on one hand and the upper and lower frequencies on the other provide an estimate of differential water vapor absorption. The coupling between the precipitation and water vapor estimates is generally weak but increases with bandwidth and the amount of non-Rayleigh scattering of the hydrometeors. The coupling leads to biases in the estimates of water vapor absorption that are related primarily to the phase state and the median mass diameter of the hydrometeors. For a down-looking radar, path-averaged estimates of water vapor absorption are possible under rain-free as well as raining conditions by using the surface returns at the three frequencies. Simulations of the water vapor attenuation retrieval show that the largest source of error typically arises from the variance in the measured radar return powers. Although the error can be mitigated by a combination of a high pulse repetition frequency, pulse compression, and averaging in range and time, the radar receiver must be stable over the averaging period. For fractional bandwidths of 20% or less, the potential exists for simultaneous measurements at the three frequencies with a single antenna and transceiver, thereby significantly reducing the cost and mass of the system.
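
    The cancellation idea can be sketched with made-up two-way attenuations: if the wing frequencies have equal vapor absorption and the hydrometeor contribution varies approximately linearly with frequency across the band (both assumptions made for this sketch, with illustrative numbers), a weighted difference of the two differential measurements isolates the differential vapor absorption at the center frequency.

```python
f_l, f_c, f_u = 21.0, 22.235, 23.5        # GHz (illustrative wing frequencies)

# Hypothetical two-way path attenuations in dB: vapor peaks at f_c, equal at the wings;
# the hydrometeor (precipitation) term is assumed linear in frequency across the band.
vapor = {f_l: 1.0, f_c: 1.8, f_u: 1.0}
precip_slope = 0.4                         # dB per GHz, assumed linear hydrometeor term
precip = {f: precip_slope * (f - f_l) for f in (f_l, f_c, f_u)}

# "Measured" differential attenuations between frequency pairs
d_cl = (vapor[f_c] + precip[f_c]) - (vapor[f_l] + precip[f_l])
d_ul = (vapor[f_u] + precip[f_u]) - (vapor[f_l] + precip[f_l])

# Linear combination that removes the linearly varying precipitation term
w = (f_c - f_l) / (f_u - f_l)
diff_vapor_absorption = d_cl - w * d_ul
print(f"recovered differential vapor absorption: {diff_vapor_absorption:.2f} dB")
# expected: vapor[f_c] - vapor[f_l] = 0.80 dB
```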

  7. Effect of phase errors in stepped-frequency radar systems

    NASA Astrophysics Data System (ADS)

    Vanbrundt, H. E.

    1988-04-01

    Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used for an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and in signal-to-phase-noise ratio.

  8. Symmetry based frequency domain processing to remove harmonic noise from surface nuclear magnetic resonance measurements

    NASA Astrophysics Data System (ADS)

    Hein, Annette; Larsen, Jakob Juul; Parsekian, Andrew D.

    2017-02-01

    Surface nuclear magnetic resonance (NMR) is a unique geophysical method due to its direct sensitivity to water. A key limitation to overcome is the difficulty of making surface NMR measurements in environments with anthropogenic electromagnetic noise, particularly constant frequency sources such as powerlines. Here we present a method of removing harmonic noise by utilizing frequency domain symmetry of surface NMR signals to reconstruct portions of the spectrum corrupted by frequency-domain noise peaks. This method supplements the existing NMR processing workflow and is applicable after despiking, coherent noise cancellation, and stacking. The symmetry based correction is simple, grounded in mathematical theory describing NMR signals, does not introduce errors into the data set, and requires no prior knowledge about the harmonics. Modelling and field examples show that symmetry based noise removal reduces the effects of harmonics. In one modelling example, symmetry based noise removal improved signal-to-noise ratio in the data by 10 per cent. This improvement had noticeable effects on inversion parameters including water content and the decay constant T2*. Within water content profiles, aquifer boundaries and water content are more accurate after harmonics are removed. Fewer spurious water content spikes appear within aquifers, which is especially useful for resolving multilayered structures. Within T2* profiles, estimates are more accurate after harmonics are removed, especially in the lower half of profiles.
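
    A toy illustration of the general idea (not the authors' published algorithm): for a decaying signal with a real envelope, the spectrum on either side of the Larmor frequency is conjugate-symmetric up to a constant phase factor, so bins corrupted by a harmonic spike can be rebuilt from their mirror bins. All signal parameters below are synthetic assumptions.

```python
import numpy as np

fs = 8000.0                          # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
f_larmor, t2star, phi = 2100.0, 0.15, 0.7
signal = np.exp(-t / t2star) * np.cos(2 * np.pi * f_larmor * t + phi)

f_harm = 2150.0                      # powerline harmonic inside the band of interest
noisy = signal + 0.5 * np.cos(2 * np.pi * f_harm * t + 1.3)

spec = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
center = int(np.argmin(np.abs(freqs - f_larmor)))     # bin of the Larmor/transmit frequency
bad = np.where(np.abs(freqs - f_harm) < 3.0)[0]       # bins corrupted by the harmonic (assumed known)

# Estimate the constant phase factor p in S(fL + d) ~ p * conj(S(fL - d))
# from clean bin pairs near the NMR peak, then rebuild the corrupted bins.
offsets = np.arange(5, 40)
good = np.setdiff1d(center + offsets, bad)
p = np.mean(spec[good] / np.conj(spec[2 * center - good]))

spec_fixed = spec.copy()
spec_fixed[bad] = p * np.conj(spec[2 * center - bad])
cleaned = np.fft.irfft(spec_fixed, n=t.size)

def rms(x):
    return np.sqrt(np.mean(x**2))

print(f"rms error before: {rms(noisy - signal):.3e}")
print(f"rms error after:  {rms(cleaned - signal):.3e}")
```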

  9. Effect of DM Actuator Errors on the WFIRST/AFTA Coronagraph Contrast Performance

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Shi, Fang

    2015-01-01

    The WFIRST/AFTA 2.4 m space telescope currently under study includes a stellar coronagraph for the imaging and the spectral characterization of extrasolar planets. The coronagraph employs two sequential deformable mirrors (DMs) to compensate for phase and amplitude errors in creating dark holes. DMs are critical elements in high contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Working with a low-order wavefront sensor, the DM that is conjugate to a pupil can also be used to correct low-order wavefront drift during a scientific observation. However, not all actuators in a DM have the same gain. When using such a DM in a low-order wavefront sensing and control subsystem, the actuator gain errors introduce high-spatial-frequency errors to the DM surface and thus worsen the contrast performance of the coronagraph. We have investigated the effects of actuator gain errors and the actuator command digitization errors on the contrast performance of the coronagraph through modeling and simulations, and will present our results in this paper.

  10. Reduction of Surface Errors over a Wide Range of Spatial Frequencies Using a Combination of Electrolytic In-Process Dressing Grinding and Magnetorheological Finishing

    NASA Astrophysics Data System (ADS)

    Kunimura, Shinsuke; Ohmori, Hitoshi

    We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result of a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal bonded abrasive wheel, then a metal-resin bonded abrasive wheel, followed by a conductive rubber bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively, in this process. Flatness over the whole surface was improved by performing the first and second steps. After the third step, peak-to-valley (PV) and root mean square (rms) values in an area of 0.72 × 0.54 mm² on the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro roughness were efficiently reduced by performing ELID grinding using the above three kinds of abrasive wheels because of the high removal rate of ELID grinding, and residual small irregularities were reduced by short-time MRF. This process makes it possible to produce flat and smooth surfaces in several hours.

  11. Headaches associated with refractive errors: myth or reality?

    PubMed

    Gil-Gouveia, R; Martins, I P

    2002-04-01

    Headache and refractive errors are very common conditions in the general population, and those with headache often attribute their pain to a visual problem. The International Headache Society (IHS) criteria for the classification of headache include an entity of headache associated with refractive errors (HARE), but indicate that its importance is widely overestimated. The aim was to compare overall headache frequency and HARE frequency between healthy subjects with uncorrected or miscorrected refractive errors and a control group. We interviewed 105 individuals with uncorrected refractive errors and a control group of 71 subjects (with properly corrected or without refractive errors) regarding their headache history. We compared the occurrence of headache and its diagnosis in both groups and assessed its relation to their habits of visual effort and type of refractive errors. Headache frequency was similar in both subjects and controls. Headache associated with refractive errors was the only headache type significantly more common in subjects with refractive errors than in controls (6.7% versus 0%). It was associated with hyperopia and was unrelated to visual effort or to the severity of visual error. With adequate correction, 72.5% of the subjects with headache and refractive error reported improvement in their headaches, and 38% had complete remission of headache. Regardless of the type of headache present, headache frequency was significantly reduced in these subjects (t = 2.34, P = .02). Headache associated with refractive errors was rarely identified in individuals with refractive errors. In those with chronic headache, proper correction of refractive errors significantly improved headache complaints and did so primarily by decreasing the frequency of headache episodes.

  12. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors such as the second- and third-order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper, a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to the excess path length in addition to the free-space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to within millimeter-level accuracy using the proposed correction formulas.
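
    For reference, the first-order treatment that these higher-order corrections refine can be sketched as follows: the first-order group delay scales as 40.3·TEC/f², and the standard ionosphere-free combination of two pseudoranges removes exactly that term and nothing more. The TEC and range values are assumed placeholders.

```python
TEC = 50e16                        # electrons per m^2 (50 TECU, an assumed value)
f1, f2 = 1575.42e6, 1227.60e6      # GPS L1 and L2 carrier frequencies, Hz

def first_order_delay(tec, f):
    """First-order ionospheric group delay in metres."""
    return 40.3 * tec / f**2

I1, I2 = first_order_delay(TEC, f1), first_order_delay(TEC, f2)
print(f"L1 delay: {I1:.2f} m, L2 delay: {I2:.2f} m")

# Ionosphere-free combination of two pseudoranges P1, P2 (illustrative geometry):
rho_true = 22_000_000.0
P1, P2 = rho_true + I1, rho_true + I2
P_if = (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)
print(f"ionosphere-free residual: {P_if - rho_true:.2e} m")   # ~0: only the 1/f^2 term is removed
```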

  13. A new hybrid active/passive sound absorber with variable surface impedance

    NASA Astrophysics Data System (ADS)

    Betgen, Benjamin; Galland, Marie-Annick

    2011-07-01

    The context of the present paper is the wall treatment of flow ducts, notably aero-engine nacelle intakes and outlets. For this purpose, hybrid active/passive absorbers have been developed at the LMFA for about 15 years. A hybrid cell combines the passive absorbent properties of a porous layer with active control at its rear face. Active control is mainly used to increase absorption at low frequencies by cancelling the imaginary part of the surface impedance presented by the absorber. However, the optimal impedance (i.e. the one that produces the highest noise reduction) of an absorber for flow duct applications is generally complex and frequency dependent. A new hybrid absorber intended to realise any target impedance has therefore been developed. The new cell uses one microphone on each side of a resistive cloth. The normal velocity can then be deduced from a simple pressure difference, which allows an estimation of the surface impedance of the absorber. In order to obtain an error signal related to a target impedance, the target impedance has to be reproduced in the time domain. The design of a stable and causal filter is a difficult task, considering the kind of frequency response we seek. An alternative way of representing the impedance in the time domain is therefore given. The new error signal is integrated into a feedback control structure. Fast convergence and good stability are observed for a wide range of target impedances. Typical optimal impedances with a positive increasing real part and a negative decreasing imaginary part have been successfully realised. Measurements in a grazing-incidence tube show that the new complex impedance absorber clearly outperforms the former active absorber.

  14. The potential for geostationary remote sensing of NO2 to improve weather prediction

    NASA Astrophysics Data System (ADS)

    Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.

    2016-12-01

    Observations of surface winds remain sparse, making it challenging to simulate and predict the weather in circumstances of light winds that are most important for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the application of data assimilation of NO2 columns as will be observed from geostationary orbit to improve predictions and retrospective analysis of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run (NR)" regarded as the true atmosphere. Then NO2 observations are assimilated using EAKF methods into a "control run (CR)" which differs from the NR in the wind field. Wind errors are generated by (1) introducing errors in the initial conditions, (2) creating a model error by using two different formulations for the planetary boundary layer, and (3) combining both of these effects. The assimilation reduces wind errors by up to 50%, indicating the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We also examine the assimilation sensitivity to the data assimilation window length. We find that due to the temporal heterogeneity of wind errors, the success of this application favors chemical observations of high frequency, such as those from a geostationary platform. We also show the potential to improve the soil moisture field by assimilating NO2 columns.

  15. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B.

    1988-02-01

    The fidelity of RNA replication by the poliovirus-RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high, depending on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.

  16. Error Type and Lexical Frequency Effects: Error Detection in Swedish Children with Language Impairment

    ERIC Educational Resources Information Center

    Hallin, Anna Eva; Reuterskiöld, Christina

    2017-01-01

    Purpose: The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies. Method:…

  17. Fourier Transform-Plasmon Waveguide Spectroscopy: A Nondestructive Multifrequency Method for Simultaneously Determining Polymer Thickness and Apparent Index of Refraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobbitt, Jonathan M; Weibel, Stephen C; Elshobaki, Moneim

    2014-12-16

    Fourier transform (FT)-plasmon waveguide resonance (PWR) spectroscopy measures light reflectivity at a waveguide interface as the incident frequency and angle are scanned. Under conditions of total internal reflection, the reflected light intensity is attenuated when the incident frequency and angle satisfy conditions for exciting surface plasmon modes in the metal as well as guided modes within the waveguide. Expanding upon the concept of two-frequency surface plasmon resonance developed by Peterlinz and Georgiadis [Opt. Commun. 1996, 130, 260], the apparent index of refraction and the thickness of a waveguide can be measured precisely and simultaneously by FT-PWR with an average percent relative error of 0.4%. Measuring reflectivity for a range of frequencies extends the analysis to a wide variety of sample compositions and thicknesses since frequencies with the maximum attenuation can be selected to optimize the analysis. Additionally, the ability to measure reflectivity curves with both p- and s-polarized light provides anisotropic indices of refraction. FT-PWR is demonstrated using polystyrene waveguides of varying thickness, and the validity of FT-PWR measurements are verified by comparing the results to data from profilometry and atomic force microscopy (AFM).

  18. Fourier transform-plasmon waveguide spectroscopy: a nondestructive multifrequency method for simultaneously determining polymer thickness and apparent index of refraction.

    PubMed

    Bobbitt, Jonathan M; Weibel, Stephen C; Elshobaki, Moneim; Chaudhary, Sumit; Smith, Emily A

    2014-12-16

    Fourier transform (FT)-plasmon waveguide resonance (PWR) spectroscopy measures light reflectivity at a waveguide interface as the incident frequency and angle are scanned. Under conditions of total internal reflection, the reflected light intensity is attenuated when the incident frequency and angle satisfy conditions for exciting surface plasmon modes in the metal as well as guided modes within the waveguide. Expanding upon the concept of two-frequency surface plasmon resonance developed by Peterlinz and Georgiadis [Opt. Commun. 1996, 130, 260], the apparent index of refraction and the thickness of a waveguide can be measured precisely and simultaneously by FT-PWR with an average percent relative error of 0.4%. Measuring reflectivity for a range of frequencies extends the analysis to a wide variety of sample compositions and thicknesses since frequencies with the maximum attenuation can be selected to optimize the analysis. Additionally, the ability to measure reflectivity curves with both p- and s-polarized light provides anisotropic indices of refraction. FT-PWR is demonstrated using polystyrene waveguides of varying thickness, and the validity of FT-PWR measurements are verified by comparing the results to data from profilometry and atomic force microscopy (AFM).

  19. Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations

    NASA Astrophysics Data System (ADS)

    Berri, Guillermo J.; Bertossa, Germán

    2018-01-01

    A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations, and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, level of the inversion base and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008, during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of relative error in wind-direction frequency distribution and mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the upper boundary condition level, in particular for wind direction, reaching double the errors obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.

  20. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    C-band and S-band radar error statistics recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.
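
    The two-part error model described above can be sketched as a constant (or slowly varying) bias plus high-frequency noise that may be correlated from sample to sample, here generated as a first-order Gauss-Markov process; the numerical values are placeholders, not the recommended shuttle statistics.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt = 1000, 0.1                 # number of samples and sample interval (s)
sigma_B = 10.0                    # bias standard deviation (e.g. metres of range)
sigma_q = 3.0                     # high-frequency noise standard deviation
tau = 2.0                         # correlation time of the noise (s)

bias = sigma_B * rng.standard_normal()              # one draw, held constant over the pass
phi = np.exp(-dt / tau)                             # Gauss-Markov transition factor
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = phi * noise[k - 1] + sigma_q * np.sqrt(1 - phi**2) * rng.standard_normal()

range_error = bias + noise                          # total simulated measurement error
print(f"sample bias: {bias:.2f}, noise std: {noise.std():.2f}")
```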

  1. Factors associated with reporting of medication errors by Israeli nurses.

    PubMed

    Kagan, Ilya; Barnoy, Sivia

    2008-01-01

    This study investigated medication error reporting among Israeli nurses, the relationship between nurses' personal views about error reporting, and the impact of the safety culture of the ward and hospital on this reporting. Nurses (n = 201) completed a questionnaire related to different aspects of error reporting (frequency, organizational norms of dealing with errors, and personal views on reporting). The higher the error frequency, the more errors went unreported. If the ward nurse manager corrected errors on the ward, error self-reporting decreased significantly. Ward nurse managers have to provide good role models.

  2. Analysis of frequency mixing error on heterodyne interferometric ellipsometry

    NASA Astrophysics Data System (ADS)

    Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan

    2007-11-01

    A heterodyne interferometric ellipsometer, with no moving parts and a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency, common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation mainly resulting from the frequency mixing error, which is caused by the imperfection of polarizing beam splitters (PBS), the elliptical polarization, and the non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on the measurement are analyzed with the Jones matrix method; the calculation indicates that it results in an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality has no contribution to the phase difference error when it is relatively small; the elliptical polarization and the imperfection of PBS have a major effect on the error.

  3. Statistical analysis of the surface figure of the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Lightsey, Paul A.; Chaney, David; Gallagher, Benjamin B.; Brown, Bob J.; Smith, Koby; Schwenker, John

    2012-09-01

    The performance of an optical system is best characterized by either the point spread function (PSF) or the optical transfer function (OTF). However, for system budgeting purposes, it is convenient to use a single scalar metric, or a combination of a few scalar metrics, to track performance. For the James Webb Space Telescope, the Observatory-level requirements were expressed in metrics of Strehl ratio and encircled energy. These in turn were converted to the metrics of total rms WFE and rms WFE within spatial frequency domains. The 18 individual mirror segments for the primary mirror segment assemblies (PMSA), the secondary mirror (SM), tertiary mirror (TM), and Fine Steering Mirror have all been fabricated. They are polished beryllium mirrors with a protected gold reflective coating. The resulting surface figure errors of these mirrors have been analyzed statistically. The average spatial frequency distribution and the mirror-to-mirror consistency of the spatial frequency distribution are reported. The results provide insight into system budgeting processes for similar optical systems.
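
    The conversion from a measured surface map to rms error within spatial-frequency bands can be sketched with a discrete Fourier transform and Parseval's theorem; the map below is synthetic random data and the band edges are arbitrary, not the JWST budget values.

```python
import numpy as np

rng = np.random.default_rng(3)
n, pixel = 256, 1e-3                       # 256 x 256 map, 1 mm pixels
surface = rng.standard_normal((n, n))      # placeholder surface heights
surface -= surface.mean()

spec = np.fft.fftshift(np.fft.fft2(surface)) / surface.size
fx = np.fft.fftshift(np.fft.fftfreq(n, d=pixel))
FX, FY = np.meshgrid(fx, fx)
FR = np.hypot(FX, FY)                      # radial spatial frequency, cycles/m

def band_rms(f_lo, f_hi):
    """rms of the surface restricted to the annulus f_lo <= f < f_hi (Parseval)."""
    mask = (FR >= f_lo) & (FR < f_hi)
    return np.sqrt(np.sum(np.abs(spec[mask])**2))

total = np.sqrt(np.mean(surface**2))
low, mid, high = band_rms(0, 20), band_rms(20, 100), band_rms(100, np.inf)
print(f"total rms {total:.3f}; bands combine to {np.sqrt(low**2 + mid**2 + high**2):.3f}")
```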

  4. Effect of inter-tissue inductive coupling on multi-frequency imaging of intracranial hemorrhage by magnetic induction tomography

    NASA Astrophysics Data System (ADS)

    Xiao, Zhili; Tan, Chao; Dong, Feng

    2017-08-01

    Magnetic induction tomography (MIT) is a promising technique for continuous monitoring of intracranial hemorrhage due to its contactless nature, low cost and capacity to penetrate the high-resistivity skull. The inter-tissue inductive coupling increases with frequency, which may lead to errors in multi-frequency imaging at high frequency. The effect of inter-tissue inductive coupling was investigated to improve the multi-frequency imaging of hemorrhage. An analytical model of inter-tissue inductive coupling based on the equivalent circuit was established. A set of new multi-frequency decomposition equations separating the phase shift of hemorrhage from other brain tissues was derived by employing the coupling information to improve the multi-frequency imaging of intracranial hemorrhage. The decomposition error and imaging error are both decreased after considering the inter-tissue inductive coupling information. The study reveals that the introduction of inter-tissue inductive coupling can reduce the errors of multi-frequency imaging, promoting the development of intracranial hemorrhage monitoring by multi-frequency MIT.

  5. Reduction of low frequency error for SED36 and APS based HYDRA star trackers

    NASA Astrophysics Data System (ADS)

    Ouaknine, Julien; Blarre, Ludovic; Oddos-Marcel, Lionel; Montel, Johan; Julio, Jean-Marc

    2017-11-01

    In the framework of the CNES Pleiades satellite program, a reduction of the star tracker low frequency error, which is the most penalizing error for the satellite attitude control, was performed. For that purpose, the SED36 star tracker was developed, with a design based on the flight-qualified SED16/26. In this paper, the main features of the SED36 are first presented. Then, the process used to reduce the low frequency error is described, particularly the optimization of the optical distortion calibration. The result is an attitude low frequency error of 1.1" at 3 sigma along the transverse axes. The implementation of these improvements in HYDRA, the new multi-head APS star tracker developed by SODERN, is finally presented.

  6. Atmospheric correction for retrieving ground brightness temperature at commonly-used passive microwave frequencies.

    PubMed

    Han, Xiao-Jing; Duan, Si-Bo; Li, Zhao-Liang

    2017-02-20

    An analysis of the atmospheric impact on ground brightness temperature (Tg) is performed for numerous land surface types at commonly-used frequencies (i.e., 1.4 GHz, 6.93 GHz, 10.65 GHz, 18.7 GHz, 23.8 GHz, 36.5 GHz and 89.0 GHz). The results indicate that the atmosphere has a negligible impact on Tg at 1.4 GHz for land surfaces with emissivities greater than 0.7, at 6.93 GHz for land surfaces with emissivities greater than 0.8, and at 10.65 GHz for land surfaces with emissivities greater than 0.9 if a root mean square error (RMSE) less than 1 K is desired. To remove the atmospheric effect on Tg, a generalized atmospheric correction method is proposed by parameterizing the atmospheric transmittance τ and upwelling atmospheric brightness temperature Tba↑. Better accuracies with Tg RMSEs less than 1 K are achieved at 1.4 GHz, 6.93 GHz, 10.65 GHz, 18.7 GHz and 36.5 GHz, and worse accuracies with RMSEs of 1.34 K and 4.35 K are obtained at 23.8 GHz and 89.0 GHz, respectively. Additionally, a simplified atmospheric correction method is developed when lacking sufficient input data to perform the generalized atmospheric correction method, and an emissivity-based atmospheric correction method is presented when the emissivity is known. Consequently, an appropriate atmospheric correction method can be selected based on the available data, frequency and required accuracy. Furthermore, this study provides a method to estimate τ and Tba↑ of different frequencies using the atmospheric parameters (total water vapor content in observation direction Lwv, total cloud liquid water content Lclw and mean temperature of cloud Tclw), which is important for simultaneously determining the land surface parameters using multi-frequency passive microwave satellite data.
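
    For context, the sketch below inverts the standard specular-surface microwave radiative-transfer equation for Tg once the transmittance and the up- and downwelling atmospheric brightness temperatures are known; it is a generic textbook form with assumed example numbers, and the paper's parameterization of τ and Tba↑ from Lwv, Lclw and Tclw is not reproduced.

```python
def ground_brightness_temperature(tb_toa, emissivity, tau, tb_up, tb_down, t_cosmic=2.7):
    """Invert the standard specular-surface microwave radiative-transfer equation:

        TB_toa = Tb_up + tau * [ e*Tg + (1 - e) * (Tb_down + tau*T_cosmic) ]

    This generic form (all temperatures in kelvin) is not the paper's own
    parameterization of tau and the atmospheric brightness temperatures.
    """
    surface_leaving = (tb_toa - tb_up) / tau
    reflected_sky = (1.0 - emissivity) * (tb_down + tau * t_cosmic)
    return (surface_leaving - reflected_sky) / emissivity

# Assumed example values for a fairly transparent atmosphere (e.g. around 10.65 GHz)
print(ground_brightness_temperature(tb_toa=250.0, emissivity=0.95,
                                    tau=0.97, tb_up=8.0, tb_down=8.0))
```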

  7. Foam-PVDF smart skin for active control of sound

    NASA Astrophysics Data System (ADS)

    Fuller, Chris R.; Guigou, Cathy; Gentry, C. A.

    1996-05-01

    This work is concerned with the development and testing of a foam-PVDF smart skin designed for active noise control. The smart skin is designed to reduce sound by the action of the passive absorption of the foam (which is effective at higher frequencies) and the active input of an embedded PVDF element driven by an oscillating electrical input (which is effective at lower frequencies). It is primarily developed to be used in an aircraft fuselage in order to reduce interior noise associated with turbulent boundary layer excitation. The device consists of cylindrically curved sections of PVDF piezoelectric film embedded in partially reticulated polyurethane acoustic foam. The active PVDF layer was configured to behave in a linear sense as well as to couple the predominantly in-plane strain due to the piezoelectric effect and the vertical motion that is needed to accelerate fluid particles and hence radiate sound away from the foam surface. For performance testing, the foam-PVDF element was mounted near the surface of an oscillating rigid piston mounted in a baffle in an anechoic chamber. A far-field and a near-field microphone were considered as an error sensor and compared in terms of their efficiency to control the far-field sound radiation. A feedforward LMS controller was used to minimize the error sensor signal under broadband excitation (0 - 1.6 kHz). The potential of the smart foam-PVDF skin for globally reducing sound radiation is demonstrated as more than 20 dB attenuation is obtained over the studied frequency band. The device thus has the potential of simultaneously controlling low and high frequency sound in a very thin compact arrangement.
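
    The adaptive control referred to here is least-mean-squares (LMS) adaptation. The sketch below is a minimal single-channel LMS loop on synthetic signals; a practical active noise control system would use the filtered-x variant with a model of the secondary path, which is not shown, and all signal names and values are assumptions.

```python
import numpy as np

def lms(reference, desired, n_taps=32, mu=0.01):
    """Plain LMS adaptive filter: adapt weights w so that w*x tracks 'desired'.

    In an active-control setting 'desired - y' would be the signal measured at
    the error microphone; here it is simply simulated.
    """
    w = np.zeros(n_taps)
    x_buf = np.zeros(n_taps)
    errors = np.empty(len(reference))
    for n in range(len(reference)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = reference[n]
        y = w @ x_buf                # filter output (control estimate)
        e = desired[n] - y           # residual at the error sensor
        w += mu * e * x_buf          # LMS weight update
        errors[n] = e
    return w, errors

# Toy usage: broadband reference, desired is a short FIR-filtered version of it
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.5, -0.3, 0.2], mode="full")[:len(x)]
w, e = lms(x, d, n_taps=8, mu=0.02)
print(np.mean(e[-500:] ** 2))        # residual error power after convergence
```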

  8. Skilled adult readers activate the meanings of high-frequency words using phonology: Evidence from eye tracking.

    PubMed

    Jared, Debra; O'Donnell, Katrina

    2017-02-01

    We examined whether highly skilled adult readers activate the meanings of high-frequency words using phonology when reading sentences for meaning. A homophone-error paradigm was used. Sentences were written to fit 1 member of a homophone pair, and then 2 other versions were created in which the homophone was replaced by its mate or a spelling-control word. The error words were all high-frequency words, and the correct homophones were either higher-frequency words or low-frequency words-that is, the homophone errors were either the subordinate or dominant member of the pair. Participants read sentences as their eye movements were tracked. When the high-frequency homophone error words were the subordinate member of the homophone pair, participants had shorter immediate eye-fixation latencies on these words than on matched spelling-control words. In contrast, when the high-frequency homophone error words were the dominant member of the homophone pair, a difference between these words and spelling controls was delayed. These findings provide clear evidence that the meanings of high-frequency words are activated by phonological representations when skilled readers read sentences for meaning. Explanations of the differing patterns of results depending on homophone dominance are discussed.

  9. Continuous-wave ultrasound reflectometry for surface roughness imaging applications

    PubMed Central

    Kinnick, R. R.; Greenleaf, J. F.; Fatemi, M.

    2009-01-01

    Background Measurement of surface roughness irregularities that result from various sources, such as manufacturing processes, surface damage, and corrosion, is an important indicator of product quality for many nondestructive testing (NDT) industries. Many techniques exist; however, because they are qualitative, time-consuming and require direct contact, new experimental methods and efficient tools are needed for quantitative estimation of surface roughness. Objective and Method Here we present continuous-wave ultrasound reflectometry (CWUR) as a novel nondestructive modality for imaging and measuring surface roughness in a non-contact mode. In CWUR, voltage variations due to phase shifts in the reflected ultrasound waves are recorded and processed to form an image of surface roughness. Results An acrylic test block with surface irregularities ranging from 4.22 μm to 19.05 μm, as measured by a coordinate measuring machine (CMM), is scanned by an ultrasound transducer having a diameter of 45 mm, a focal distance of 70 mm, and a central frequency of 3 MHz. The CWUR technique shows very good agreement with the results obtained through the CMM, with a maximum average percent error of around 11.5%. Conclusion The images obtained here demonstrate that CWUR may be used as a powerful noncontact and quantitative tool for nondestructive inspection and imaging of surface irregularities at the micron level with an average error of less than 11.5%. PMID:18664399

  10. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase produced by the phase-shifting approach can be unwrapped by using Gray code, but both the wrapped phase error and the Gray code decoding error can result in period jump errors, which lead to gross measurement errors. Therefore, this paper presents a reliable absolute analog code retrieval approach. The combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain a low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, and a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis. The proposed approach was further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.

  11. Stress polishing of thin shells for adaptive secondary mirrors. Application to the Very Large Telescope deformable secondary

    NASA Astrophysics Data System (ADS)

    Hugot, E.; Ferrari, M.; Riccardi, A.; Xompero, M.; Lemaître, G. R.; Arsenault, R.; Hubin, N.

    2011-03-01

    Context. Adaptive secondary mirrors (ASM) are, or will be, key components on all modern telescopes, providing improved seeing conditions or diffraction-limited images thanks to the high-order atmospheric turbulence correction obtained by controlling the shape of a thin mirror. Their development is a key milestone towards future extremely large telescopes (ELT), where this technology is mandatory for successful observations. Aims: The key point of current adaptive secondary technology is the thin glass mirror that acts as a deformable membrane, often aspheric. On 6 m - 8 m class telescopes, these are typically 1 m-class shells with a 2 mm thickness. The optical quality of this shell must be sufficiently good not to degrade the correction, meaning that high spatial frequency errors must be avoided. The innovative method presented here aims at generating aspherical shapes by elastic bending to reach high optical qualities. Methods: This method, called stress polishing, allows aspherical optics of large amplitude to be generated by simple spherical polishing with a full-sized lap applied to a warped blank. The main advantage of this technique is the smooth optical quality obtained, free of the high spatial frequency ripples classically caused by subaperture tool marks. After describing the manufacturing process we developed, our analytical calculations lead to a preliminary definition of the geometry of the blank, which allows a precise bending of the substrate. Finite element analysis (FEA) is then performed to refine this geometry using an iterative method with a criterion based on the power spectral density of the displacement map of the optical surface. Results: Considering the specific case of the Very Large Telescope (VLT) deformable secondary mirror (DSM), extensive FEA was performed to optimise the geometry. The results show that the warping will not introduce surface errors larger than 0.3 nm rms at the smallest spatial scale considered on the mirror. Simulations of the flattening operation of the shell also demonstrate that the actuator system is able to correct manufacturing surface errors coming from the warping of the blank with a residual error lower than 8 nm rms.

  12. System stability and calibrations for hand-held electromagnetic frequency domain instruments

    NASA Astrophysics Data System (ADS)

    Saksa, Pauli J.; Sorsa, Joona

    2017-05-01

    There are a few multiple-frequency domain electromagnetic induction (EMI) hand-held rigid-boom systems available for shallow geophysical resistivity investigations. They basically measure the real and imaginary components of the secondary field after system calibration. One multiple-frequency system, the EMP-400 Profiler from Geophysical Survey Systems Inc., was tested for system calibrations, stability and various effects present in normal measurements, such as height variation, tilting, signal stacking and time stability. Results indicated that in test conditions, repeatable high-accuracy imaginary component values can be recorded for near-surface frequency soundings. In test conditions, the real components are also stable but vary strongly in normal surveying measurements. However, certain calibration issues related to the combination of user influence and measurement system height were recognised as an important factor in reducing data errors and in further processing such as static offset corrections.

  13. How well can we measure the vertical wind speed? Implications for fluxes of energy and mass

    Treesearch

    John Kochendorfer; Tilden P. Meyers; John Frank; William J. Massman; Mark W. Heuer

    2012-01-01

    Sonic anemometers are capable of measuring the wind speed in all three dimensions at high frequencies (10-50 Hz), and are relied upon to estimate eddy-covariance-based fluxes of mass and energy over a wide variety of surfaces and ecosystems. In this study, wind-velocity measurement errors from a three-dimensional sonic anemometer with a nonorthogonal transducer...

  14. Evaluation of Probe-Induced Flow Distortion of Campbell CSAT3 Sonic Anemometers by Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Huq, Sadiq; De Roo, Frederik; Foken, Thomas; Mauder, Matthias

    2017-10-01

    The Campbell CSAT3 sonic anemometer is one of the most popular instruments for turbulence measurements in basic micrometeorological research and ecological applications. While measurement uncertainty has been characterized by field experiments and wind-tunnel studies in the past, there are conflicting estimates, which motivated us to conduct a numerical experiment using large-eddy simulation to evaluate the probe-induced flow distortion of the CSAT3 anemometer under controlled conditions, and with exact knowledge of the undisturbed flow. As opposed to wind-tunnel studies, we imposed oscillations in both the vertical and horizontal velocity components at the distinct frequencies and amplitudes found in typical turbulence spectra in the surface layer. The resulting flow-distortion errors for the standard deviations of the vertical velocity component range from 3 to 7%, and from 1 to 3% for the horizontal velocity component, depending on the azimuth angle. The magnitude of these errors is almost independent of the frequency of wind speed fluctuations, provided the amplitude is typical for surface-layer turbulence. A comparison of the corrections for transducer shadowing proposed by both Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol 155:371-395, 2015) shows that both methods compensate for a large part of the observed error, but do not sufficiently account for the azimuth dependency. Further numerical simulations could be conducted in the future to characterize the flow distortion induced by other existing types of sonic anemometers for the purposes of optimizing their geometry.

  15. Highly compact fiber Fabry-Perot interferometer: A new instrument design

    NASA Astrophysics Data System (ADS)

    Nowakowski, B. K.; Smith, D. T.; Smith, S. T.

    2016-11-01

    This paper presents the design, construction, and characterization of a new optical-fiber-based, low-finesse Fabry-Perot interferometer with a simple cavity formed by two reflecting surfaces (the end of a cleaved optical fiber and a plane, reflecting counter-surface), for the continuous measurement of displacements of several nanometers to several tens of millimeters. No beam collimation or focusing optics are required, resulting in a displacement sensor that is extremely compact (optical fiber diameter 125 μm), is surprisingly tolerant of misalignment (more than 5°), and can be used over a very wide range of temperatures and environmental conditions, including ultra-high-vacuum. The displacement measurement is derived from interferometric phase measurements using an infrared laser source whose wavelength is modulated sinusoidally at a frequency f. The phase signal is in turn derived from changes in the amplitudes of demodulated signals, at both the modulation frequency, f, and its harmonic at 2f, coming from a photodetector that is monitoring light intensity reflected back from the cavity as the cavity length changes. Simple quadrature detection results in phase errors corresponding to displacement errors of up to 25 nm, but by using compensation algorithms discussed in this paper, these inherent non-linearities can be reduced to below 3 nm. In addition, wavelength sweep capability enables measurement of the absolute surface separation. This experimental design creates a unique set of displacement measuring capabilities not previously combined in a single interferometer.
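
    The phase recovery described here follows the usual sinusoidal phase-modulation scheme, in which the demodulated amplitudes at f and 2f scale as J1(m)·sin(φ) and J2(m)·cos(φ). A minimal sketch of that quadrature step is shown below, assuming ideal quadrature, a known modulation depth and a 1550 nm source; the compensation algorithms that reduce the residual nonlinearity from 25 nm to below 3 nm are not reproduced.

```python
import numpy as np
from scipy.special import j1, jv

def quadrature_phase(amp_f, amp_2f, mod_depth):
    """Recover the interferometric phase from the demodulated f and 2f amplitudes.

    For sinusoidal wavelength (phase) modulation of depth 'mod_depth' (radians),
    the detector components at f and 2f scale as J1(m)*sin(phi) and J2(m)*cos(phi).
    Ideal quadrature is assumed; residual nonlinearity is what the paper's
    compensation algorithms address.
    """
    s = amp_f / j1(mod_depth)          # proportional to sin(phi)
    c = amp_2f / jv(2, mod_depth)      # proportional to cos(phi)
    return np.arctan2(s, c)

def phase_to_displacement(phase, wavelength=1550e-9):
    # One full fringe (2*pi) corresponds to a lambda/2 change in cavity length
    return phase * wavelength / (4 * np.pi)

phi = quadrature_phase(amp_f=0.30, amp_2f=0.22, mod_depth=2.4)   # assumed readings
print(phase_to_displacement(phi))
```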

  16. A Flexible Arrayed Eddy Current Sensor for Inspection of Hollow Axle Inner Surfaces.

    PubMed

    Sun, Zhenguo; Cai, Dong; Zou, Cheng; Zhang, Wenzeng; Chen, Qiang

    2016-06-23

    A reliable and accurate inspection of the hollow axle inner surface is important for the safe operation of high-speed trains. In order to improve the reliability of the inspection, a flexible arrayed eddy current sensor for non-destructive testing of the hollow axle inner surface was designed, fabricated and characterized. The sensor, consisting of two excitation traces and 28 sensing traces, was developed by using the flexible printed circuit board (FPCB) technique to conform the geometric features of the inner surfaces of the hollow axles. The main innovative aspect of the sensor was the new arrangement of excitation/sensing traces to achieve a differential configuration. Finite element model was established to analyze sensor responses and to determine the optimal excitation frequency. Experimental validations were conducted on a specimen with several artificial defects. Results from experiments and simulations were consistent with each other, with the maximum relative error less than 4%. Both results proved that the sensor was capable of detecting longitudinal and transverse defects with the depth of 0.5 mm under the optimal excitation frequency of 0.9 MHz.

  17. Frequency of pediatric medication administration errors and contributing factors.

    PubMed

    Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda

    2011-01-01

    This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.

  18. Research on aspheric focusing lens processing and testing technology in the high-energy laser test system

    NASA Astrophysics Data System (ADS)

    Liu, Dan; Fu, Xiu-hua; Jia, Zong-he; Wang, Zhe; Dong, Huan

    2014-08-01

    In high-energy laser test systems, higher requirements are placed on the surface profile and finish of the optical elements. Taking a focusing aspheric Zerodur lens with a diameter of 100 mm as an example, the surface profile and surface quality of the lens were investigated using a combination of CNC and classical machining methods. Guided by profilometer and high-power microscope measurements, and supported by testing and simulation analysis, the process parameters were continually improved during manufacturing. Mid- and high-frequency errors were trimmed and reduced so that the surface form gradually converged to the required accuracy. The experimental results show that the final surface accuracy is better than 0.5 μm and the surface finish is □, which fulfils the accuracy requirements of the aspheric focusing lens in the optical system.

  19. Remote Sensing of Salinity: The Dielectric Constant of Sea Water

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Lang, R.; Utku, C.; Tarkocin, Y.

    2011-01-01

    Global monitoring of sea surface salinity from space requires an accurate model for the dielectric constant of sea water as a function of salinity and temperature to characterize the emissivity of the surface. Measurements are being made at 1.413 GHz, the center frequency of the Aquarius radiometers, using a resonant cavity and the perturbation method. The cavity is operated in a transmission mode and immersed in a liquid bath to control temperature. Multiple measurements are made at each temperature and salinity. Error budgets indicate a relative accuracy for both real and imaginary parts of the dielectric constant of about 1%.

  20. A multi-frequency inverse-phase error compensation method for projector nonlinear in 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Mao, Cuili; Lu, Rongsheng; Liu, Zhijian

    2018-07-01

    In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of the periodic phase errors is analyzed. The periodic phase errors can be adaptively compensated in the wrapped maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodic phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.
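
    A minimal simulation of the inverse-phase idea is sketched below, under the common assumption that the compensating pattern set is the same N-step sequence with its initial phase offset by π/N, so that the dominant gamma-induced periodic phase error flips sign and averages out; the gamma value, step count and phasor averaging are illustrative choices rather than the paper's exact procedure.

```python
import numpy as np

def wrapped_phase(images, shifts):
    """Least-squares N-step phase shifting: phi = atan2(-sum I*sin(d), sum I*cos(d))."""
    num = sum(img * np.sin(d) for img, d in zip(images, shifts))
    den = sum(img * np.cos(d) for img, d in zip(images, shifts))
    return np.arctan2(-num, den)

def capture(phi_true, shifts, gamma=2.2):
    """Simulate fringes seen through a projector with a nonlinear (gamma) response."""
    return [(0.5 + 0.5 * np.cos(phi_true + d)) ** gamma for d in shifts]

N = 3
phi_true = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
shifts = [2 * np.pi * k / N for k in range(N)]
inv_shifts = [d + np.pi / N for d in shifts]          # inverse-phase pattern set

phi_a = wrapped_phase(capture(phi_true, shifts), shifts)
phi_b = wrapped_phase(capture(phi_true, inv_shifts), inv_shifts)

# The dominant periodic error of the two sets has opposite sign, so averaging the
# two maps (via phasors, to sidestep wrapping issues) suppresses it.
phi_avg = np.angle(np.exp(1j * phi_a) + np.exp(1j * phi_b))

def error_std(phi_est):
    return np.std(np.angle(np.exp(1j * (phi_est - phi_true))))

print(error_std(phi_a), error_std(phi_avg))           # compensated error is much smaller
```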

  1. Evaluation of probe-induced flow distortion of Campbell CSAT3 sonic anemometers by numerical simulation

    NASA Astrophysics Data System (ADS)

    Mauder, M.; Huq, S.; De Roo, F.; Foken, T.; Manhart, M.; Schmid, H. P. E.

    2017-12-01

    The Campbell CSAT3 sonic anemometer is one of the most widely used instruments for eddy-covariance measurement. However, conflicting estimates for the probe-induced flow distortion error of this instrument have been reported recently, and those error estimates range between 3% and 14% for the measurement of vertical velocity fluctuations. This large discrepancy between the different studies can probably be attributed to the different experimental approaches applied. In order to overcome the limitations of both field intercomparison experiments and wind tunnel experiments, we propose a new approach that relies on virtual measurements in a large-eddy simulation (LES) environment. In our experimental set-up, we generate horizontal and vertical velocity fluctuations at frequencies that typically dominate the turbulence spectra of the surface layer. The probe-induced flow distortion error of a CSAT3 is then quantified by this numerical wind tunnel approach while the statistics of the prescribed inflow signal are taken as reference or etalon. The resulting relative error is found to range from 3% to 7% and from 1% to 3% for the standard deviation of the vertical and the horizontal velocity component, respectively, depending on the orientation of the CSAT3 in the flow field. We further demonstrate that these errors are independent of the frequency of fluctuations at the inflow of the simulation. The analytical corrections proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol, 155, 371-395, 2015) are compared against our simulated results, and we find that they indeed reduce the error by up to three percentage points. However, these corrections fail to reproduce the azimuth-dependence of the error that we observe. Moreover, we investigate the general Reynolds number dependence of the flow distortion error by more detailed idealized simulations.

  2. Surface Accuracy and Pointing Error Prediction of a 32 m Diameter Class Radio Astronomy Telescope

    NASA Astrophysics Data System (ADS)

    Azankpo, Severin

    2017-03-01

    The African Very-long-baseline interferometry Network (AVN) is a joint project between South Africa and eight partner African countries aimed at establishing a VLBI (Very-Long-Baseline Interferometry) capable network of radio telescopes across the African continent. An existing structure that is earmarked for this project is a 32 m diameter antenna located in Ghana that has become obsolete due to advances in telecommunication. The first phase of the conversion of this Ghana antenna into a radio astronomy telescope is to upgrade the antenna to observe at 5 GHz to 6.7 GHz, and later at 18 GHz, within the required performance tolerance. The surface and pointing accuracies for a radio telescope are much more stringent than those of a telecommunication antenna. The mechanical pointing accuracy of such telescopes is influenced by factors such as mechanical alignment, structural deformation, and servo drive train errors. The current research investigates the numerical simulation of the surface and pointing accuracies of the Ghana 32 m diameter radio astronomy telescope due to its structural deformation, mainly influenced by gravity, wind and thermal loads.

  3. Principles of operation, accuracy and precision of an Eye Surface Profiler.

    PubMed

    Iskander, D Robert; Wachel, Pawel; Simpson, Patrick N D; Consejo, Alejandra; Jesus, Danilo A

    2016-05-01

    To introduce a newly developed instrument for measuring the topography of the anterior eye, provide the principles of its operation, and assess its accuracy and precision. The Eye Surface Profiler is a new technology based on Fourier transform profilometry for measuring the anterior eye surface encompassing the corneo-scleral area. Details of the technical principles of operation are provided for the particular case of sequential double fringe projection. Technical limits of accuracy have been assessed for several key parameters such as the carrier frequency, image quantisation level, sensor size, carrier frequency inaccuracy, and level and type of noise. Further, results from both artificial test surfaces and real eyes are used to assess the precision and accuracy of the device (here benchmarked against a popular Placido disk videokeratoscope). Technically, the Eye Surface Profiler accuracy can reach levels below 1 μm for a range of considered key parameters. For the unit tested and using calibrated artificial surfaces, the accuracy of measurement (in terms of RMS error) was below 10 μm for a central measurement area of 8 mm diameter and below 40 μm for an extended measurement area of 16 mm. In some cases, the error reached levels of up to 200 μm at the very periphery of the measured surface (up to 20 mm). The SimK estimates of the test surfaces from the Eye Surface Profiler were in close agreement with those from a Placido disk videokeratoscope, with differences no greater than ±0.1 D. For real eyes, the benchmarked accuracy was within ±0.5 D for both the spherical and cylindrical SimK components. The Eye Surface Profiler can successfully measure the topography of the entire anterior eye including the cornea, limbus and sclera. It has great potential to become a clinical optometry tool that could substitute the currently used videokeratoscopes and provide high quality corneo-scleral topography.
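
    Fourier transform profilometry, as used by this instrument, reduces per image row to isolating the carrier lobe in the spectrum and taking the phase of the resulting complex signal. The sketch below is a generic single-row version on synthetic fringes; the carrier frequency, window width and synthetic phase are assumptions, and the instrument's sequential double fringe projection is not modelled.

```python
import numpy as np

def ftp_wrapped_phase(fringe_row, carrier_bin, half_width):
    """Single-row Fourier-transform profilometry sketch.

    fringe_row  : 1D intensity profile containing a spatial carrier
    carrier_bin : FFT index of the carrier peak (assumed known or estimated)
    half_width  : half-width of the band-pass window around the carrier
    """
    spec = np.fft.fft(fringe_row)
    filt = np.zeros_like(spec)
    lo, hi = carrier_bin - half_width, carrier_bin + half_width + 1
    filt[lo:hi] = spec[lo:hi]                 # keep only the +carrier lobe
    analytic = np.fft.ifft(filt)              # complex fringe signal
    return np.angle(analytic)                 # wrapped phase (carrier still included)

# Synthetic check: a carrier of 1 cycle per 16 samples plus a smooth phase bump
x = np.arange(512)
phase_obj = 2.0 * np.exp(-((x - 256) / 80.0) ** 2)
row = 1 + 0.5 * np.cos(2 * np.pi * x / 16 + phase_obj)
wrapped = ftp_wrapped_phase(row, carrier_bin=512 // 16, half_width=10)
recovered = np.unwrap(wrapped) - 2 * np.pi * x / 16   # remove the carrier ramp
print(np.max(np.abs((recovered - recovered.mean()) - (phase_obj - phase_obj.mean()))))
```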

  4. Automatic Locking of Laser Frequency to an Absorption Peak

    NASA Technical Reports Server (NTRS)

    Koch, Grady J.

    2006-01-01

    An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that constantly adjusts the frequency in an effort to drive the error to zero. When the laser frequency deviates from the midpeak value but remains within the locking range, the magnitude and sign of the error signal indicate the amount of detuning and the control circuitry adjusts the frequency by what it estimates to be the negative of this amount in an effort to bring the error to zero.
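
    A toy version of the lock sequence (coarse sweep to find the peak, then a proportional-integral loop that drives the derivative-shaped error signal to zero) is sketched below. The Lorentzian line shape, the finite-difference error signal and the gains are assumptions for illustration; real hardware derives the error signal by dither-and-demodulate and typically adds a derivative term.

```python
import numpy as np

def absorption(f, f0=0.0, width=1.0):
    """Lorentzian absorption peak (arbitrary units), assumed line shape."""
    return 1.0 / (1.0 + ((f - f0) / width) ** 2)

def error_signal(f, df=0.01):
    # Derivative of the absorption peak with respect to frequency; it crosses
    # zero exactly at the top of the peak, which defines the lock point.
    return (absorption(f + df) - absorption(f - df)) / (2 * df)

# Step 1: coarse sweep to get close to the peak automatically
sweep = np.linspace(-5.0, 5.0, 501)
start = sweep[np.argmax(absorption(sweep))]

# Step 2: PI loop keeps the error signal (the derivative) at zero
f, integ, kp, ki = start + 0.4, 0.0, 0.2, 0.05   # start slightly detuned
for _ in range(200):
    e = error_signal(f)
    integ += e
    f += kp * e + ki * integ                     # push the derivative toward zero
print(f)                                         # ~0.0, i.e. locked to the peak
```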

  5. Wideband characterization of printed circuit board materials up to 50 GHz

    NASA Astrophysics Data System (ADS)

    Rakov, Aleksei

    A traveling-wave technique developed a few years ago in the Missouri S&T EMC Laboratory has been employed until now for characterization of PCB materials over a broad frequency range up to 30 GHz. This technique includes measuring S-parameters of specially designed PCB test vehicles. Extending the frequency range of printed circuit board laminate dielectric and copper foil characterization is an important problem. In this work, a new PCB test vehicle design for operating up to 50 GHz has been proposed. As the frequency range of measurements increases, the analysis of errors and uncertainties in measuring dielectric properties becomes increasingly important. Formulas for quantification of two major groups of errors, repeatability (manufacturing variability) and reproducibility (systematic) errors, in extracting dielectric constant (DK) and dissipation factor (DF) have been derived, and computations for a number of cases are presented. Conductor (copper foil) surface roughness of PCB interconnects is an important factor which affects the accuracy of DK and DF measurements. This work describes a new algorithm for semi-automatic characterization of copper foil profiles in optical or scanning electron microscopy (SEM) pictures of signal traces. The collected statistics of numerous copper foil roughness profiles allow for introducing a new metric for roughness characterization of PCB interconnects. This is an important step towards refining the measured DK and DF parameters from roughness contributions. The collected foil profile data and its analysis allow for developing "design curves", which could be used by SI engineers and electronics developers in their designs.

  6. Dual-sensitivity profilometry with defocused projection of binary fringes.

    PubMed

    Garnica, G; Padilla, M; Servin, M

    2017-10-01

    A dual-sensitivity profilometry technique based on defocused projection of binary fringes is presented. Here, two sets of fringe patterns with a sinusoidal profile are produced by applying the same analog low-pass filter (projector defocusing) to binary fringes with high- and low-frequency spatial carriers. The high-frequency fringes have a binary square-wave profile, while the low-frequency binary fringes are produced with error-diffusion dithering. The binary nature of the fringes removes the need for calibration of the projector's nonlinear gamma. Working with the high-frequency carrier fringes, we obtain a high-quality wrapped phase. On the other hand, working with the low-frequency carrier fringes we obtain a lower-quality, nonwrapped phase map. The nonwrapped estimate is used as a stepping stone for dual-sensitivity temporal phase unwrapping, extending the applicability of the technique to discontinuous (piecewise continuous) surfaces. We propose a single defocusing level for faster high- and low-frequency fringe data acquisition. The proposed technique is validated with experimental results.
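
    Error-diffusion dithering of the low-frequency fringes can be done with the classic Floyd-Steinberg weights, as sketched below on a synthetic fringe; the fringe period and image size are assumed, and the paper's specific dithering variant may differ.

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image (values in [0, 1]) by Floyd-Steinberg error diffusion.

    After projector defocusing (analog low-pass filtering), the binary pattern
    approximates the original sinusoidal fringe.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # diffuse the quantization error to unprocessed neighbours (7, 3, 5, 1)/16
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

# Low-frequency sinusoidal fringe (assumed period of 200 pixels) to be dithered
xx = np.arange(400)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * xx / 200)
pattern = floyd_steinberg(np.tile(fringe, (64, 1)))
print(pattern.mean(), np.unique(pattern))    # binary pattern with mean close to 0.5
```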

  7. A second-order 3D electromagnetics algorithm for curved interfaces between anisotropic dielectrics on a Yee mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Carl A., E-mail: bauerca@colorado.ed; Werner, Gregory R.; Cary, John R.

    A new frequency-domain electromagnetics algorithm is developed for simulating curved interfaces between anisotropic dielectrics embedded in a Yee mesh with second-order error in resonant frequencies. The algorithm is systematically derived using the finite integration formulation of Maxwell's equations on the Yee mesh. Second-order convergence of the error in resonant frequencies is achieved by guaranteeing first-order error on dielectric boundaries and second-order error in bulk (possibly anisotropic) regions. Convergence studies, conducted for an analytically solvable problem and for a photonic crystal of ellipsoids with anisotropic dielectric constant, both show second-order convergence of frequency error; the convergence is sufficiently smooth that Richardson extrapolation yields roughly third-order convergence. The convergence of electric fields near the dielectric interface for the analytic problem is also presented.

  8. Repeat-aware modeling and correction of short read errors.

    PubMed

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
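
    The baseline k-mer counting and thresholding step that the paper builds on looks roughly like the sketch below; the value of k, the fixed cutoff and the toy reads are assumptions, and the repeat-aware statistical estimation of genomic k-mer frequencies is not shown.

```python
from collections import Counter

def kmer_spectrum(reads, k=15):
    """Count observed k-mer frequencies across a set of reads.

    Reverse complements are ignored here for brevity; low-frequency k-mers are
    candidate sequencing errors.
    """
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_suspect_kmers(counts, threshold=2):
    """Naive fixed-cutoff validation: k-mers seen fewer than 'threshold' times are
    flagged. The paper replaces this with a statistically estimated threshold that
    accounts for genomic repeats; this sketch only shows the baseline idea."""
    return {kmer for kmer, c in counts.items() if c < threshold}

reads = ["ACGTACGTACGTACGTACG", "ACGTACGTACGTACGTACG", "ACGTACGTACGTAGGTACG"]
counts = kmer_spectrum(reads, k=8)
print(len(flag_suspect_kmers(counts, threshold=2)))   # k-mers unique to the misread copy
```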

  9. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique used at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because video is used for applications such as video telephony, video conferencing, TV, DVD, internet video streaming and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are applied to video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames are processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures than the Block Matching algorithm, with 48% higher PSNR and 94% higher SSIM.
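
    PSNR, one of the two quality metrics used, is straightforward to compute between an original frame and its concealed counterpart, as in the sketch below; the frame size and the artificial error block are assumed, and SSIM (which requires a windowed structural comparison) is omitted.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a test frame."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(144, 176)).astype(np.uint8)   # QCIF-sized toy frame
concealed = frame.copy()
concealed[60:76, 80:96] = 128          # a poorly concealed 16x16 macroblock
print(round(psnr(frame, concealed), 2))
```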

  10. Optimum projection pattern generation for grey-level coded structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-04-01

    Structured light illumination (SLI) systems are well-established optical inspection techniques for noncontact 3D surface measurements. A common technique is multi-frequency sinusoidal SLI, which obtains the phase map at various fringe periods in order to estimate the absolute phase and, hence, the 3D surface information. Nevertheless, multi-frequency SLI systems employ multiple measurement planes (e.g. four phase-shifted frames) to obtain the phase at a given fringe period. It is therefore an age-old challenge to obtain the absolute surface information using fewer measurement frames. Grey-level (GL) coding techniques have been developed as an attempt to reduce the number of planes needed, because a spatio-temporal GL sequence employing p discrete grey levels and m frames has the potential to unwrap up to p^m fringes. Nevertheless, one major disadvantage of GL-based SLI techniques is that there are often errors near the border of each stripe, because an ideal stepwise intensity change cannot be measured. If the step change in intensity is a single discrete grey-level unit, this problem can usually be overcome by applying an appropriate threshold. However, severe errors occur if the intensity change at the border of the stripe exceeds several discrete grey-level units. In this work, an optimum GL-based technique is presented that generates a series of projection patterns with a minimal gradient in intensity. It is shown that when using this technique, the errors near the border of the stripes can be significantly reduced. This improvement is achieved through the choice of generated patterns, and does not involve additional hardware or special post-processing techniques. The performance of the method is validated using both simulations and experiments. The reported technique is generic, works with an arbitrary number of frames, and can employ an arbitrary number of grey levels.

  11. The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors

    NASA Technical Reports Server (NTRS)

    Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan

    1993-01-01

    Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.

  12. Calculation Of Pneumatic Attenuation In Pressure Sensors

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.

    1991-01-01

    Errors caused by attenuation of air-pressure waves in narrow tubes calculated by method based on fundamental equations of flow. Changes in ambient pressure transmitted along narrow tube to sensor. Attenuation of high-frequency components of pressure wave calculated from wave equation derived from Navier-Stokes equations of viscous flow in tube. Developed to understand and compensate for frictional attenuation in narrow tubes used to connect aircraft pressure sensors with pressure taps on affected surfaces.

  13. Single Platform Geolocation of Radio Frequency Emitters

    DTIC Science & Technology

    2015-03-26

    [Abstract not available in this record; the retrieved text contains only fragments of the report's front matter (an acronym list including SNR, SOI, STK, UCA and WGS) and of sections on coordinate systems and reference frames, visualization of the confidence surface of estimated parameters (Section 2.6), and a nonlinear optimization (NLO) estimator formulated as a minimization.]

  14. Incorporation of a Redfern Integrated Optics ORION Laser Module with an IPG Photonics Erbium Fiber Laser to Create a Frequency Conversion Photon Doppler Velocimeter for US Army Research Laboratory Measurements: Hardware, Data Analysis, and Error Quantification

    DTIC Science & Technology

    2017-04-01

    [Abstract not available in this record; the retrieved text contains only fragments describing photon Doppler velocimetry measurements of oscillating surfaces, such as a vehicle hull subjected to an under-body blast or a reactive armor tile, and a cast steel (CS) subscale tub test with hull displacement measurements at Aberdeen Proving Ground (MD).]

  15. Electromagnetic receiver with capacitive electrodes and triaxial induction coil for tunnel exploration

    NASA Astrophysics Data System (ADS)

    Kai, Chen; Sheng, Jin; Wang, Shun

    2017-09-01

    A new type of electromagnetic (EM) receiver has been developed by integrating four capacitive electrodes and a triaxial induction coil with an advanced data logger for tunnel exploration. The new EM receiver can conduct EM observations in tunnels, which is one of the principal goals of surface-tunnel-borehole EM detection for deep ore deposit mapping. The use of capacitive electrodes enables us to record the electrical field (E-field) signals from hard rock surfaces, which are high-resistance terrains. A compact triaxial induction coil integrates three independent induction coils for narrow-tunnel exploration applications. A low-time-drift-error clock source is developed for tunnel applications where GPS signals are unavailable. The three main components of our tunnel EM receiver are: (1) four capacitive electrodes for measuring the E-field signal without digging in hard rock regions; (2) a triaxial induction coil sensor for audio-frequency magnetotelluric and controlled-source audio-frequency magnetotelluric signal measurements; and (3) a data logger that allows us to record five-component MT signals with a low noise level, a low time-drift error of the clock source, and a high dynamic range. The proposed tunnel EM receiver was successfully deployed in a mine that exhibited typical noise characteristics.

  16. The microwave holography system for the Sardinia Radio Telescope

    NASA Astrophysics Data System (ADS)

    Serra, G.; Bolli, P.; Busonera, G.; Pisanu, T.; Poppi, S.; Gaudiomonte, F.; Zacchiroli, G.; Roda, J.; Morsiani, M.; López-Pérez, J. A.

    2012-09-01

    Microwave holography is a well-established technique for mapping surface errors of large reflector antennas, particularly those designed to operate at high frequencies. We present here a holography system based on the interferometric method for mapping the primary reflector surface of the Sardinia Radio Telescope (SRT). SRT is a new 64-m-diameter antenna located in Sardinia, Italy, equipped with an active surface and designed to operate up to 115 GHz. The system consists mainly of two radio frequency low-noise coherent channels, designed to receive Ku-band digital TV signals from geostationary satellites. Two commercial prime-focus low-noise block converters are installed on the radio telescope under test and on a small reference antenna, respectively. The signals are then amplified, filtered and downconverted to baseband. An innovative digital back-end based on FPGA technology has been implemented to digitize the two 5 MHz-band signals and calculate their cross-correlation in real time. This is carried out by using 16-bit resolution ADCs and an FPGA, achieving a very large amplitude dynamic range and reducing post-processing time. The final holography data analysis is performed by the CLIC data reduction software developed within the Institut de Radioastronomie Millimétrique (IRAM, Grenoble, France). The system was successfully tested during several holography measurement campaigns recently performed at the Medicina 32-m radio telescope. Two 65-by-65 maps, using an on-the-fly raster scan with on-source phase calibration, were acquired by pointing the radio telescope at 38 degrees elevation towards the EUTELSAT 7A satellite. The high SNR (greater than 60 dB) and the good phase stability yielded an accuracy on the surface error maps better than 150 μm RMS.

  17. Error analysis for relay type satellite-aided search and rescue systems

    NASA Technical Reports Server (NTRS)

    Marini, J. W.

    1977-01-01

    An analysis was made of the errors in the determination of the position of an emergency transmitter in a satellite aided search and rescue system. The satellite was assumed to be at a height of 820 km in a near circular near polar orbit. Short data spans of four minutes or less were used. The error sources considered were measurement noise, transmitter frequency drift, ionospheric effects and error in the assumed height of the transmitter. The errors were calculated for several different transmitter positions, data rates and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies. In a typical case, in which four Doppler measurements were taken over a span of two minutes, the position error was about 1.2 km.

  18. Frequency and types of the medication errors in an academic emergency department in Iran: The emergent need for clinical pharmacy services in emergency departments.

    PubMed

    Zeraatchi, Alireza; Talebian, Mohammad-Taghi; Nejati, Amir; Dashti-Khavidaki, Simin

    2013-07-01

    Emergency departments (EDs) are characterized by the simultaneous care of multiple patients with various medical conditions. Due to the large number of patients with complex diseases, the speed and complexity of medication use, and working in an under-staffed and crowded environment, medication errors are commonly made by emergency care providers. This study was designed to evaluate the incidence of medication errors among patients attending an ED in a teaching hospital in Iran. In this cross-sectional study, a total of 500 patients attending the ED were randomly assessed for the incidence and types of medication errors. Some factors related to medication errors, such as working shift, weekday and the schedule of the trainees' educational program, were also evaluated. Nearly 22% of patients experienced at least one medication error. The rates of medication errors were 0.41 errors per patient and 0.16 errors per ordered medication. The frequency of medication errors was higher in men, middle-aged patients, the first weekdays, night-time work schedules and the first semester of the educational year of new junior emergency medicine residents. More than 60% of errors were prescription errors by physicians and the remainder were transcription or administration errors by nurses. More than 35% of the prescribing errors happened during the selection of drug dose and frequency. The most common medication errors by nurses during administration were omission errors (16.2%) followed by unauthorized drug (6.4%). Most of the medication errors happened for anticoagulants and thrombolytics (41.2%), followed by antimicrobial agents (37.7%) and insulin (7.4%). In this study, at least one-fifth of the patients attending the ED experienced medication errors resulting from multiple factors. The most common prescription errors happened during ordering of drug dose and frequency. The most common administration errors included drug omission or unauthorized drug.

  19. 3D absolute shape measurement of live rabbit hearts with a superfast two-frequency phase-shifting technique

    PubMed Central

    Wang, Yajun; Laughner, Jacob I.; Efimov, Igor R.; Zhang, Song

    2013-01-01

    This paper presents a two-frequency binary phase-shifting technique to measure three-dimensional (3D) absolute shape of beating rabbit hearts. Due to the low contrast of the cardiac surface, the projector and the camera must remain focused, which poses challenges for any existing binary method where the measurement accuracy is low. To conquer this challenge, this paper proposes to utilize the optimal pulse width modulation (OPWM) technique to generate high-frequency fringe patterns, and the error-diffusion dithering technique to produce low-frequency fringe patterns. Furthermore, this paper will show that fringe patterns produced with blue light provide the best quality measurements compared to fringe patterns generated with red or green light; and the minimum data acquisition speed for high quality measurements is around 800 Hz for a rabbit heart beating at 180 beats per minute. PMID:23482151

  20. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.

  1. Frequency-independent approach to calculate physical optics radiations with the quadratic concave phase variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yu Mao, E-mail: yumaowu@fudan.edu.cn; Teng, Si Jia, E-mail: sjteng12@fudan.edu.cn

    In this work, we develop the numerical steepest descent path (NSDP) method to calculate the physical optics (PO) radiations with quadratic concave phase variations. With the surface integral equation method, the physical optics (PO) scattered fields are formulated and further reduced to surface integrals. The high frequency physical critical point contributions, including the stationary phase points, the boundary resonance points and the vertex points, are comprehensively studied via the proposed NSDP method. The key contributions of this work are twofold. One is that, together with the PO integrals taking the quadratic parabolic and hyperbolic phase terms, this work makes the NSDP theory complete for treating PO integrals with quadratic phase variations. The other is that, in order to illustrate the transition effect of the high frequency physical critical points, we consider and further extend the NSDP method to calculate PO integrals with the coalescence of the high frequency critical points. Numerical results for the highly oscillatory PO integral with the coalescence of the critical points are given to verify the efficiency of the proposed NSDP method. The NSDP method achieves a frequency-independent computational workload and error-controllable accuracy in all the numerical experiments, especially for the case of the coalescence of the high frequency critical points.

  2. Specularity of longitudinal acoustic phonons at rough surfaces

    NASA Astrophysics Data System (ADS)

    Gelda, Dhruv; Ghossoub, Marc G.; Valavala, Krishna; Ma, Jun; Rajagopal, Manjunath C.; Sinha, Sanjiv

    2018-01-01

    The specularity of phonons at crystal surfaces is of direct importance to thermal transport in nanostructures and to dissipation in nanomechanical resonators. Wave scattering theory provides a framework for estimating wavelength-dependent specularity, but experimental validation remains elusive. Widely available thermal conductivity data provides poor validation, since the involvement of an infinitude of phonon wavelengths in thermal transport presents an underconstrained test for specularity theory. Here, we report phonon specularity by measuring the lifetimes of individual coherent longitudinal acoustic phonon modes excited in ultrathin (36-205 nm) suspended silicon membranes at room temperature over the frequency range ~20-118 GHz. Phonon surface scattering dominates intrinsic Akhiezer damping at frequencies ≳60 GHz, enabling measurements of the phonon boundary scattering time over wavelengths ~72-140 nm. We obtain detailed statistics of the surface roughness at the top and bottom surfaces of the membranes using HRTEM imaging. We find that the specularity of the excited modes is in good agreement with solutions of wave scattering only when the TEM statistics are corrected for projection errors. The often-cited Ziman formula for phonon specularity also appears in good agreement with the data, contradicting previous results. This work helps to advance the fundamental understanding of phonon scattering at the surfaces of nanostructures.

  3. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  4. The effect of withdrawal of visual presentation of errors upon the frequency spectrum of tremor in a manual task

    PubMed Central

    Sutton, G. G.; Sykes, K.

    1967-01-01

    1. When a subject attempts to exert a steady pressure on a joystick he makes small unavoidable errors which, irrespective of their origin or frequency, may be called tremor. 2. Frequency analysis shows that low frequencies always contribute much more to the total error than high frequencies. If the subject is not allowed to check his performance visually, but has to rely on sensations of pressure in the finger tips, etc., the error power spectrum plotted on logarithmic co-ordinates approximates to a straight line falling at 6 dB/octave from 0·4 to 9 c/s. In other words the amplitude of the tremor component at each frequency is inversely proportional to frequency. 3. When the subject is given a visual indication of his errors on an oscilloscope the shape of the tremor spectrum alters. The most striking change is the appearance of a tremor peak at about 9 c/s, but there is also a significant increase of error in the range 1-4 c/s. The extent of these changes varies from subject to subject. 4. If the 9 c/s peak represents oscillation of a muscle length-servo it would appear that greater use is made of this servo when positional information is available from the eyes than when proprioceptive impulses from the limbs have to be relied on. PMID:6048997
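
    A quick numerical check of the stated slope-amplitude relationship (a generic sketch, not tied to the original tremor data): power falling at 6 dB/octave corresponds to power proportional to 1/f^2, i.e. amplitude inversely proportional to frequency.

        import numpy as np

        # Tremor model: amplitude ~ 1/f, hence power ~ 1/f^2 (arbitrary units).
        f = np.array([0.4, 0.8, 1.6, 3.2, 6.4])   # frequencies in c/s, doubling each step
        power = 1.0 / f**2

        # Drop in dB per octave (per doubling of frequency): ~6.02 dB everywhere.
        drop_db = 10.0 * np.log10(power[:-1] / power[1:])
        print(np.round(drop_db, 2))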

  5. Addressing the unit of analysis in medical care studies: a systematic review.

    PubMed

    Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G

    2008-06-01

    We assessed how frequently patients are incorrectly used as the unit of analysis in studies of physicians' patient care behavior published in high-impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found in 15 journals; 4 journals published the majority of the studies (71 of 114, or 62.3%). Forty were intervention studies and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.

  6. Common medial frontal mechanisms of adaptive control in humans and rodents

    PubMed Central

    Frank, Michael J.; Laubach, Mark

    2013-01-01

    In this report, we describe how common brain networks within the medial frontal cortex facilitate adaptive behavioral control in rodents and humans. We demonstrate that low frequency oscillations below 12 Hz are dramatically modulated after errors in humans over mid-frontal cortex and in rats within prelimbic and anterior cingulate regions of medial frontal cortex. These oscillations were phase-locked between medial frontal cortex and motor areas in both rats and humans. In rats, single neurons that encoded prior behavioral outcomes were phase-coherent with low-frequency field oscillations particularly after errors. Inactivating medial frontal regions in rats led to impaired behavioral adjustments after errors, eliminated the differential expression of low frequency oscillations after errors, and increased low-frequency spike-field coupling within motor cortex. Our results describe a novel mechanism for behavioral adaptation via low-frequency oscillations and elucidate how medial frontal networks synchronize brain activity to guide performance. PMID:24141310

  7. Quantitation Error in 1H MRS Caused by B1 Inhomogeneity and Chemical Shift Displacement.

    PubMed

    Watanabe, Hidehiro; Takaya, Nobuhiro

    2017-11-08

    The quantitation accuracy in proton magnetic resonance spectroscopy (1H MRS) improves at higher B0 field. However, a larger chemical shift displacement (CSD) and stronger B1 inhomogeneity exist. In this work, we evaluate the quantitation accuracy for the spectra of metabolite mixtures in phantom experiments at 4.7 T. We demonstrate a position-dependent error in quantitation and propose a correction method based on measured water signals. All experiments were conducted on a whole-body 4.7 T magnetic resonance (MR) system with a quadrature volume coil for transmission and reception. We arranged three bottles filled with metabolite solutions of N-acetyl aspartate (NAA) and creatine (Cr) in a vertical row inside a cylindrical phantom filled with water. Peak areas of three singlets of NAA and Cr were measured on 1H spectra at three volumes of interest (VOIs) inside the three bottles. We also measured a series of water spectra with a shifted carrier frequency and measured a reception sensitivity map. The ratios of NAA and Cr at 3.92 ppm to Cr at 3.01 ppm differed in peak area amongst the three VOIs, which leads to a position-dependent error. At every VOI, the slope of the relationship between peak area and carrier-frequency shift was similar to that between reception sensitivity and displacement. CSD and inhomogeneity of the reception sensitivity cause amplitude modulation along the direction of the chemical shift on the spectra, resulting in a quantitation error. This error may be more significant at higher B0 field, where CSD and B1 inhomogeneity are more severe. It may also occur in reception using a surface coil with inhomogeneous B1. Since this type of error is around a few percent, the data should be analyzed with greater attention when discussing small differences in 1H MRS studies.

  8. Cryo-optical testing of large aspheric reflectors operating in the sub mm range

    NASA Astrophysics Data System (ADS)

    Roose, S.; Houbrechts, Y.; Mazzoli, A.; Ninane, N.; Stockman, Y.; Daddato, R.; Kirschner, V.; Venacio, L.; de Chambure, D.

    2006-02-01

    The cryo-optical testing of the PLANCK primary reflector (an elliptical off-axis CFRP reflector of 1550 mm x 1890 mm) is one of the major issues in the payload development program. It is required to measure the changes of the surface figure error (SFE) with respect to the best-fit ellipsoid, between 293 K and 50 K, with a 1 μm RMS accuracy. To achieve this, infrared interferometry has been used and a dedicated thermo-mechanical set-up has been constructed. This paper summarises the test activities, the test methods and the results on the PLANCK Primary Reflector - Flight Model (PRFM) achieved in FOCAL 6.5 at Centre Spatial de Liege (CSL). Here, the wave front error (WFE) is considered; the SFE can be derived from the WFE measurement. After a brief introduction, the first part deals with the general test description. The thermo-elastic deformations are then addressed: the surface deformation in the medium frequency range (spatial wavelengths down to 60 mm) and core-cell dimpling.

  9. High-Level, First-Principles, Full-Dimensional Quantum Calculation of the Ro-vibrational Spectrum of the Simplest Criegee Intermediate (CH2OO).

    PubMed

    Li, Jun; Carter, Stuart; Bowman, Joel M; Dawes, Richard; Xie, Daiqian; Guo, Hua

    2014-07-03

    The ro-vibrational spectrum of the simplest Criegee intermediate (CH2OO) has been determined quantum mechanically based on nine-dimensional potential energy and dipole surfaces for its ground electronic state. The potential energy surface is fitted to more than 50 000 high-level ab initio points with a root-mean-square error of 25 cm⁻¹, using a recently proposed permutation invariant polynomial neural network method. The calculated rotational constants, vibrational frequencies, and spectral intensities of CH2OO are in excellent agreement with experiment. The potential energy surface provides a valuable platform for studying highly excited vibrational and unimolecular reaction dynamics of this important molecule.

  10. Frequency-selective fading statistics of shallow-water acoustic communication channel with a few multipaths

    NASA Astrophysics Data System (ADS)

    Bae, Minja; Park, Jihyun; Kim, Jongju; Xue, Dandan; Park, Kyu-Chil; Yoon, Jong Rak

    2016-07-01

    The bit error rate of an underwater acoustic communication system is related to multipath fading statistics, which determine the signal-to-noise ratio. The amplitude and delay of each path depend on sea surface roughness, propagation medium properties, and source-to-receiver range as a function of frequency. Therefore, received signals will show frequency-dependent fading. A shallow-water acoustic communication channel generally shows a few strong multipaths that interfere with each other, and the resulting interference affects the fading statistics model. In this study, frequency-selective fading statistics are modeled on the basis of the phasor representation of the complex path amplitude. The fading statistics distribution is parameterized by the frequency-dependent constructive or destructive interference of multipaths. At a 16 m depth with a muddy bottom, a wave height of 0.2 m, and source-to-receiver ranges of 100 and 400 m, fading statistics tend to show a Rayleigh distribution at a destructive interference frequency, but a Rice distribution at a constructive interference frequency. The theoretical fading statistics matched the experimental ones well.
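
    A minimal phasor-sum sketch of the fading mechanism described above, assuming a hypothetical two-path channel plus a weak diffuse component (not the authors' channel model): at a constructive-interference frequency the envelope clusters around a strong mean (Rice-like), while at a destructive frequency only the diffuse part remains (Rayleigh-like).

        import numpy as np

        rng = np.random.default_rng(0)

        def envelope_samples(freq_hz, delay_s, n=20000, diffuse_sigma=0.05):
            """Envelope of two equal-strength paths separated by delay_s,
            plus a weak complex-Gaussian diffuse component."""
            second_path = np.exp(-1j * 2 * np.pi * freq_hz * delay_s)   # phase set by the delay
            diffuse = diffuse_sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
            return np.abs(1.0 + second_path + diffuse)

        delay = 1e-3                       # 1 ms relative path delay (hypothetical)
        f_constructive = 1000.0            # 2*pi*f*delay is a multiple of 2*pi
        f_destructive = 1500.0             # 2*pi*f*delay is an odd multiple of pi

        for label, f in [("constructive", f_constructive), ("destructive", f_destructive)]:
            env = envelope_samples(f, delay)
            print(f"{label:12s}: mean = {env.mean():.3f}, std = {env.std():.3f}")
        # Large mean with small spread -> Rice-like; mean comparable to spread -> Rayleigh-like.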

  11. Analysis of Vibratory Excitation of Gear Systems as a Contributor to Aircraft Interior Noise. [helicopter cabin noise

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1979-01-01

    Application of the transfer function approach to predict the resulting interior noise contribution requires gearbox vibration sources and paths to be characterized in the frequency domain. Tooth-face deviations from perfect involute surfaces were represented in terms of Legendre polynomials which may be directly interpreted in terms of tooth-spacing errors, mean and random deviations associated with involute slope and fullness, lead mismatch and crowning, and analogous higher-order components. The contributions of these components to the spectrum of the static transmission error is discussed and illustrated using a set of measurements made on a pair of helicopter spur gears. The general methodology presented is applicable to both spur and helical gears.
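
    A small sketch of the Legendre representation mentioned above, using numpy's Legendre tools on a hypothetical one-dimensional tooth-face deviation profile; the low-order coefficients play the roles of spacing, slope and fullness deviations in this simplified picture.

        import numpy as np
        from numpy.polynomial import legendre as L

        # Hypothetical measured deviation from a perfect involute across the tooth face.
        x = np.linspace(-1.0, 1.0, 101)           # normalized coordinate across the face
        rng = np.random.default_rng(1)
        deviation = (2.0                           # constant term ~ spacing error
                     + 1.5 * x                     # linear term ~ slope error
                     - 0.8 * (1.5 * x**2 - 0.5)    # quadratic Legendre term ~ fullness/crowning
                     + 0.1 * rng.standard_normal(x.size))   # random deviation

        # Least-squares fit of the first few Legendre components.
        coeffs = L.legfit(x, deviation, deg=4)
        print("Legendre coefficients (P0..P4):", np.round(coeffs, 3))

        # Residual after removing the low-order terms ~ the random deviation component.
        residual = deviation - L.legval(x, coeffs)
        print("RMS residual:", round(float(residual.std()), 3))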

  12. Rapid estimation of frequency response functions by close-range photogrammetry

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1985-01-01

    The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3x3 frequency response matrix for a three dimensional structure.
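
    A hedged sketch of the least-squares idea for a full 3x3 frequency response matrix at a single frequency, using synthetic data rather than the paper's photogrammetric pipeline: with several independent excitations, H is estimated from input spectra X and output spectra Y as H = Y X^H (X X^H)^{-1}.

        import numpy as np

        rng = np.random.default_rng(2)

        # "True" 3x3 complex frequency response matrix at one frequency (synthetic).
        H_true = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

        # Input spectra for 8 independent excitations (columns) and noisy output spectra.
        X = rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))
        noise = 0.01 * (rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8)))
        Y = H_true @ X + noise

        # Least-squares estimate H = Y X^H (X X^H)^-1, computed via a linear solve.
        Xh = X.conj().T
        H_est = np.linalg.solve((X @ Xh).T, (Y @ Xh).T).T

        print("max |H_est - H_true| =", np.abs(H_est - H_true).max())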

  13. Digital implementation of a laser frequency stabilisation technique in the telecommunications band

    NASA Astrophysics Data System (ADS)

    Jivan, Pritesh; van Brakel, Adriaan; Manuel, Rodolfo Martínez; Grobler, Michael

    2016-02-01

    Laser frequency stabilisation in the telecommunications band was realised using the Pound-Drever-Hall (PDH) error signal. The transmission spectrum of the Fabry-Perot cavity was used as opposed to the traditionally used reflected spectrum. A comparison was done using an analogue as well as a digitally implemented system. This study forms part of an initial step towards developing a portable optical time and frequency standard. The frequency discriminator used in the experimental setup was a fibre-based Fabry-Perot etalon. The phase sensitive system made use of the optical heterodyne technique to detect changes in the phase of the system. A lock-in amplifier was used to filter and mix the input signals to generate the error signal. This error signal may then be used to generate a control signal via a PID controller. An error signal was realised at a wavelength of 1556 nm which correlates to an optical frequency of 1.926 THz. An implementation of the analogue PDH technique yielded an error signal with a bandwidth of 6.134 GHz, while a digital implementation yielded a bandwidth of 5.774 GHz.

  14. GNSS-Reflectometry aboard ISS with GEROS: Investigation of atmospheric propagation effects

    NASA Astrophysics Data System (ADS)

    Zus, F.; Heise, S.; Wickert, J.; Semmling, M.

    2015-12-01

    GEROS-ISS (GNSS rEflectometry Radio Occultation and Scatterometry) is an ESA mission aboard the International Space Station (ISS). The main mission goals are the determination of the sea surface height and surface winds. Secondary goals are the monitoring of land surface parameters and atmosphere sounding using GNSS radio occultation measurements. The international scientific study GARCA (GNSS-Reflectometry Assessment of Requirements and Consolidation of Retrieval Algorithms), funded by ESA, is part of the preparations for GEROS-ISS. Major goals of GARCA are the development of an end-to-end simulator for the GEROS-ISS measurements (GEROS-SIM) and the evaluation of the error budget of the GNSS reflectometry measurements. In this presentation we introduce some of the GARCA activities to quantify the influence of the ionized and neutral atmosphere on the altimetric measurements, which is a major error source for GEROS-ISS. First, we analyse to what extent the standard linear combination of interferometric paths at different carrier frequencies can be used to correct for ionospheric propagation effects. Second, we use the tangent-linear version of our ray-trace algorithm to propagate the uncertainty of the underlying refractivity profile into the uncertainty of the interferometric path. For comparison, the sensitivity of the interferometric path with respect to the sea surface height is computed. Although our calculations are based on a number of simplifying assumptions (the Earth is a sphere, the atmosphere is spherically layered, and the ISS and GNSS satellite orbits are circular), some general conclusions can be drawn. In essence, for elevation angles above -5° at the ISS, the higher-order ionospheric errors and the uncertainty of the interferometric path due to the uncertainty of the underlying refractivity profile are small enough to distinguish a sea surface height of ± 0.5 m.
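
    For reference, the standard first-order ionosphere-free combination of dual-frequency path measurements alluded to above can be sketched as below (generic GPS L1/L2 frequencies and hypothetical path values); the higher-order ionospheric terms that the study quantifies are not removed by this combination.

        def ionosphere_free(path_f1, path_f2, f1_hz, f2_hz):
            """First-order ionosphere-free combination of two path measurements.

            The first-order ionospheric delay scales as 1/f**2, so the combination
            (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2) cancels it; higher-order terms remain.
            """
            return (f1_hz**2 * path_f1 - f2_hz**2 * path_f2) / (f1_hz**2 - f2_hz**2)

        # GPS L1/L2 carrier frequencies and a hypothetical interferometric path.
        f1, f2 = 1575.42e6, 1227.60e6
        geometric_path = 1000.0                   # metres
        iono_l1 = 2.0                             # first-order ionospheric delay on L1, metres
        iono_l2 = iono_l1 * (f1 / f2)**2          # same electron content seen at L2

        p1 = geometric_path + iono_l1
        p2 = geometric_path + iono_l2
        print(ionosphere_free(p1, p2, f1, f2))    # ~1000.0: the first-order term is removed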

  15. Signal Analysis Algorithms for Optimized Fitting of Nonresonant Laser Induced Thermal Acoustics Damped Sinusoids

    NASA Technical Reports Server (NTRS)

    Balla, R. Jeffrey; Miller, Corey A.

    2008-01-01

    This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced by approximately 0.4% and frequency errors are reduced by a factor of approximately 20 at 303 K, to approximately 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
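
    A minimal sketch of the autoregressive idea behind the best-performing method: a single damped sinusoid satisfies x[n] = a1*x[n-1] + a2*x[n-2], so a least-squares fit of (a1, a2) followed by rooting the characteristic polynomial yields frequency and damping. The waveform parameters are synthetic and this is not the LITA processing chain itself.

        import numpy as np

        # Synthetic damped sinusoid standing in for a LITA waveform (arbitrary parameters).
        fs = 500e6                                  # sample rate, Hz
        t = np.arange(1000) / fs
        f0, tau = 20e6, 0.4e-6                      # 20 MHz frequency, 0.4 us decay time
        rng = np.random.default_rng(3)
        x = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t) + 0.01 * rng.standard_normal(t.size)

        # Least-squares AR(2) fit: x[n] ~ a1*x[n-1] + a2*x[n-2]
        A = np.column_stack([x[1:-1], x[:-2]])
        b = x[2:]
        a1, a2 = np.linalg.lstsq(A, b, rcond=None)[0]

        # Roots of z^2 - a1*z - a2 give the pole: its angle sets the frequency,
        # its radius sets the damping.
        roots = np.roots([1.0, -a1, -a2])
        pole = roots[np.argmax(roots.imag)]
        f_est = np.angle(pole) * fs / (2 * np.pi)
        tau_est = -1.0 / (np.log(np.abs(pole)) * fs)
        print(f"f_est = {f_est / 1e6:.3f} MHz, tau_est = {tau_est * 1e6:.3f} us")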

  16. Improved statistical power with a sparse shape model in detecting an aging effect in the hippocampus and amygdala

    NASA Astrophysics Data System (ADS)

    Chung, Moo K.; Kim, Seung-Goo; Schaefer, Stacey M.; van Reekum, Carien M.; Peschke-Schmitz, Lara; Sutterer, Matthew J.; Davidson, Richard J.

    2014-03-01

    The sparse regression framework has been widely used in medical image processing and analysis. However, it has been rarely used in anatomical studies. We present a sparse shape modeling framework using the Laplace-Beltrami (LB) eigenfunctions of the underlying shape and show its improvement of statistical power. Traditionally, the LB-eigenfunctions are used as a basis for intrinsically representing surface shapes as a form of Fourier descriptors. To reduce high frequency noise, only the first few terms are used in the expansion and higher frequency terms are simply thrown away. However, some lower frequency terms may not necessarily contribute significantly in reconstructing the surfaces. Motivated by this idea, we present a LB-based method to filter out only the significant eigenfunctions by imposing a sparse penalty. For dense anatomical data such as deformation fields on a surface mesh, the sparse regression behaves like a smoothing process, which will reduce the error of incorrectly detecting false negatives. Hence the statistical power improves. The sparse shape model is then applied in investigating the influence of age on amygdala and hippocampus shapes in the normal population. The advantage of the LB sparse framework is demonstrated by showing the increased statistical power.

  17. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476

  18. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e. white noise modulated by the envelope first, and then filtered).
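
    A compact sketch contrasting the two constructions discussed above, with a generic band-pass filter and envelope standing in for the ground-motion model (not the authors' spectral parameters): the uniformly modulated model filters first and then applies the envelope, whereas the filtered shot-noise-type model applies the envelope first and filters afterwards.

        import numpy as np
        from scipy.signal import butter, lfilter

        rng = np.random.default_rng(4)
        fs, dur = 100.0, 40.0                      # sample rate (Hz) and record length (s)
        t = np.arange(int(fs * dur)) / fs
        white = rng.standard_normal(t.size)

        # Smooth envelope with an effective duration of a few seconds.
        envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)

        # Generic band-pass filter standing in for the ground-motion filter.
        b, a = butter(4, [1.0, 10.0], btype="bandpass", fs=fs)

        # Model 1: uniformly modulated filtered white noise (filter, then modulate).
        modulated_filtered = envelope * lfilter(b, a, white)

        # Model 2: filtered shot-noise type (modulate, then filter).
        filtered_shot = lfilter(b, a, envelope * white)

        # Compare energy below the filter band with energy inside it; the
        # modulated-filtered model retains relatively more low-frequency energy,
        # which is the built-in low-frequency bias discussed above.
        for name, x in [("modulated-filtered", modulated_filtered),
                        ("filtered-shot", filtered_shot)]:
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
            low = spec[(freqs > 0) & (freqs < 0.5)].mean()
            mid = spec[(freqs > 2) & (freqs < 8)].mean()
            print(f"{name:18s}: low/mid spectral power ratio = {low / mid:.3e}")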

  19. Sensitivity of Global Methane Bayesian Inversion to Surface Observation Data Sets and Chemical-Transport Model Resolution

    NASA Astrophysics Data System (ADS)

    Lew, E. J.; Butenhoff, C. L.; Karmakar, S.; Rice, A. L.; Khalil, A. K.

    2017-12-01

    Methane is the second most important greenhouse gas after carbon dioxide. In efforts to control emissions, a careful examination of the methane budget and source strengths is required. To determine methane surface fluxes, Bayesian methods are often used to provide top-down constraints. Inverse modeling derives unknown fluxes using observed methane concentrations, a chemical transport model (CTM) and prior information. The Bayesian inversion reduces prior flux uncertainties by exploiting the information content in the data. While the Bayesian formalism produces internal error estimates of source fluxes, systematic or external errors that arise from user choices in the inversion scheme are often much larger. Here we examine model sensitivity and uncertainty of our inversion under different observation data sets and CTM grid resolutions. We compare posterior surface fluxes using the data product GLOBALVIEW-CH4 against the event-level molar mixing ratio data available from NOAA. GLOBALVIEW-CH4 is a collection of CH4 concentration estimates from 221 sites, collected by 12 laboratories, that have been interpolated and extracted to provide weekly records from 1984-2008. In contrast, the event-level NOAA data record methane mixing ratio field measurements from 102 sites, with sampling frequency irregularities and gaps in time. Furthermore, the sampling platform types used by the data sets may influence the posterior flux estimates, namely fixed surface, tower, ship and aircraft sites. To explore the sensitivity of the posterior surface fluxes to the observation network geometry, inversions composed of all sites, only aircraft, only ship, only tower and only fixed surface sites are performed and compared. We also investigate the sensitivity of the error reduction to the resolution of the GEOS-Chem simulation (4°×5° vs 2°×2.5°) used to calculate the response matrix. Using a higher-resolution grid decreased the model-data error at most sites, thereby increasing the information at those sites. These different inversions (event-level versus interpolated data, higher versus lower resolution) are compared using an ensemble of descriptive and comparative statistics. Analyzing the sensitivity of the inverse model leads to more accurate estimates of the methane source category uncertainty.
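
    For orientation, the core linear-Gaussian Bayesian update used in this kind of flux inversion can be sketched as follows, with toy dimensions and synthetic numbers in place of the actual GEOS-Chem response matrix and covariance choices.

        import numpy as np

        def bayesian_inversion(x_prior, B, y_obs, H, R):
            """Linear-Gaussian Bayesian update for surface fluxes.

            x_post = x_prior + K (y - H x_prior), with K = B H^T (H B H^T + R)^-1;
            A_post = (I - K H) B is the posterior covariance (smaller diagonal
            entries correspond to the error reduction at each source).
            """
            S = H @ B @ H.T + R
            K = B @ H.T @ np.linalg.inv(S)
            x_post = x_prior + K @ (y_obs - H @ x_prior)
            A_post = (np.eye(B.shape[0]) - K @ H) @ B
            return x_post, A_post

        # Toy setup: 3 source regions observed at 5 sites (all values hypothetical).
        rng = np.random.default_rng(5)
        x_true = np.array([50.0, 120.0, 80.0])           # "true" fluxes
        x_prior = np.array([60.0, 100.0, 90.0])          # prior fluxes
        B = np.diag([20.0**2, 30.0**2, 25.0**2])         # prior covariance
        H = rng.uniform(0.0, 1.0, size=(5, 3))           # stand-in response matrix
        R = np.diag(np.full(5, 2.0**2))                  # observation covariance
        y = H @ x_true + rng.normal(0.0, 2.0, size=5)

        x_post, A_post = bayesian_inversion(x_prior, B, y, H, R)
        print("posterior fluxes :", np.round(x_post, 1))
        print("prior sigmas     :", np.sqrt(np.diag(B)))
        print("posterior sigmas :", np.round(np.sqrt(np.diag(A_post)), 2))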

  20. Effects of specimen preparation on the electromagnetic property measurements of solid materials with an automatic network analyzer

    NASA Technical Reports Server (NTRS)

    Long, E. R., Jr.

    1986-01-01

    Effects of specimen preparation on measured values of an acrylic's electromagnetic properties at X-band microwave frequencies, TE sub 1,0 mode, using an automatic network analyzer have been studied. For 1 percent or less error, a gap between the specimen edge and the 0.901-in. wall of the specimen holder was the most significant parameter. The gap had to be less than 0.002 in. The thickness variation and alignment errors in the direction parallel to the 0.901-in. wall were equally the second most significant and had to be less than 1 degree. Errors in the measurement of the thickness were the third most significant. They had to be less than 3 percent. The following parameters caused errors of 1 percent or less: ratios of specimen-to-holder thicknesses of more than 15 percent, gaps between the specimen edge and the 0.401-in. wall less than 0.045 in., position errors less than 15 percent, surface roughness, thickness variation in the direction parallel to the 0.401-in. wall less than 35 percent, and specimen alignment in the direction parallel to the 0.401-in. wall less than 5 degrees.

  1. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  2. New 50-M-Class Single Dish Telescope: Large Submillimeter Telescope (LST)

    NASA Astrophysics Data System (ADS)

    Kawabe, Ryohei

    2018-01-01

    We report on the plan to construct a 50 m class millimeter (mm) and submillimeter (sub-mm) single-dish telescope, the Large Submillimeter Telescope (LST). The telescope is optimized for wide-area imaging and spectroscopic surveys in the 70 to 420 GHz main frequency range, which covers the main atmospheric windows at millimeter and submillimeter wavelengths for good observing sites such as the ALMA site in Chile. We also target observations at higher frequencies of up to 1 THz, using a high-precision surface on the inner part of the dish. Active surface control is required in order to correct gravitational and thermal deformations of the surface. The LST will open new discovery spaces such as wide-field imaging with both continuum and spectral lines, along with new developments for time-domain science. By exploiting synergy with ALMA and other telescopes, the LST can contribute to a wide range of topics in astronomy and astrophysics, e.g., astrochemistry, star formation in the Galaxy and in galaxies, and the evolution of galaxy clusters via the SZ effect. We also report recent progress on the technical study, e.g., a tentative surface error budget and the challenges of correcting for the wind-load effect.

  3. Frequency synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Huth, G. K.; Polydoros, A.; Simon, M. K.

    1981-01-01

    This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.

  4. An accurate surface topography restoration algorithm for white light interferometry

    NASA Astrophysics Data System (ADS)

    Yuan, He; Zhang, Xiangchao; Xu, Min

    2017-10-01

    As an important measuring technique, white light interferometry enables fast, non-contact measurement and is therefore widely used in ultra-precision engineering. However, the traditional algorithms for recovering surface topography have limitations. In this paper, we propose a new algorithm to address these problems. It combines the Fourier transform with an improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with the scanning position. The interference signal is first processed by Fourier transform, then the positive-frequency part is selected and moved back to the center of the amplitude-frequency curve. In order to restore the surface morphology, a polynomial fitting method is used to fit the amplitude curve after the inverse Fourier transform and obtain the corresponding topography information. The new method is then compared to the traditional algorithms. It is shown that the aforementioned drawbacks can be effectively overcome. The relative error is less than 0.8%.
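
    A simplified sketch of the processing chain described above, applied to a synthetic correlogram with generic parameters (not the authors' implementation): Fourier transform, selection of the positive-frequency lobe, shift to baseband, and a polynomial fit to the recovered envelope to locate the surface height.

        import numpy as np

        # Synthetic white-light correlogram for one pixel (all parameters hypothetical).
        z = np.linspace(-5.0, 5.0, 1024)          # scan position, micrometres
        z0, lam, sigma = 0.8, 0.6, 1.2            # surface height, centre wavelength, coherence length
        signal = np.exp(-((z - z0) / sigma) ** 2) * np.cos(4 * np.pi * (z - z0) / lam)

        # Fourier transform, keep only the positive-frequency lobe and shift it to baseband.
        spec = np.fft.fft(signal)
        freqs = np.fft.fftfreq(z.size, z[1] - z[0])
        carrier = 2.0 / lam                       # fringe frequency, cycles per micrometre
        band = (freqs > 0.5 * carrier) & (freqs < 1.5 * carrier)
        baseband = np.zeros_like(spec)
        baseband[: band.sum()] = spec[band]       # relocate the selected lobe to the origin
        envelope = 2.0 * np.abs(np.fft.ifft(baseband))   # magnitude = fringe envelope

        # Quadratic fit around the envelope maximum locates the surface height.
        i = int(np.argmax(envelope))
        idx = slice(i - 5, i + 6)
        c2, c1, c0 = np.polyfit(z[idx], envelope[idx], 2)
        z_peak = -c1 / (2.0 * c2)
        print(f"recovered height = {z_peak:.3f} um (true {z0} um)")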

  5. Effect of surface moisture on chemically bonded phosphor for thermographic phosphor thermometry

    NASA Astrophysics Data System (ADS)

    Cai, Tao; Kim, Dong; Kim, Mirae; Liu, Ying Zheng; Kim, Kyung Chun

    2016-09-01

    This study examined the effect of surface moisture on the calibration lifetime in chemically bonded phosphor paint preparation. Mg4FGeO6:Mn was used as a sensor material, which was excited by a pulsed UV LED. A high-speed camera with a frequency of 8000 Hz was used to conduct phosphor thermometry. Five samples with different degrees of surface moisture were selected during the preparation process, and each sample was calibrated 40 times at room temperature. A conventional post-processing method was used to acquire the phosphorescent lifetime for different samples with a 4  ×  4-pixel interrogation window. The measurement error and paint uniformity were also studied. The results showed that there was no obvious phosphorescence boundary between the wet parts and dry parts of phosphor paint. The lifetime increased by about 0.0345% per hour during the preparation process, showing the degree of surface moisture had almost no influence on the lifetime measurement. The lifetime changed only after annealing treatment. There was also no effect on the measurement error and uniformity. These results provide a reference for developing a real-time measurement method using thermographic phosphor thermometry. This study also provides a feasible basis for chemically bonded phosphor thermometry applications in humid and low-temperature environments.

  6. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
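
    A toy sketch of the quadrature-mixing and least-squares idea in the description above, as a single batch estimate rather than the full adaptive loop, with hypothetical signal parameters: mixing against the reference and its 90-degree shifted copy gives quadrature samples from which amplitude, frequency error and phase are estimated.

        import numpy as np

        fs = 10_000.0                                   # sample rate, Hz
        t = np.arange(2000) / fs
        rng = np.random.default_rng(6)

        # Signal of interest (amplitude, frequency and phase unknown to the estimator).
        A_true, f_true, ph_true = 1.3, 1012.0, 0.7
        x = A_true * np.cos(2 * np.pi * f_true * t + ph_true) + 0.05 * rng.standard_normal(t.size)

        # Mix with the reference and with its 90-degree shifted copy (quadrature samples).
        f_ref = 1000.0                                  # current reference-oscillator frequency
        baseband = x * np.cos(2 * np.pi * f_ref * t) - 1j * x * np.sin(2 * np.pi * f_ref * t)

        # Crude low-pass: block averaging suppresses the sum-frequency (2*f_ref) term.
        blk = 50
        m = t.size // blk * blk
        bb = baseband[:m].reshape(-1, blk).mean(axis=1)
        tb = t[:m].reshape(-1, blk).mean(axis=1)

        # Least-squares fit of the residual rotation: phase(bb) ~ 2*pi*df*t + ph0.
        phase = np.unwrap(np.angle(bb))
        slope, ph0 = np.polyfit(tb, phase, 1)
        df = slope / (2 * np.pi)
        A_est = 2.0 * np.abs(bb).mean()
        print(f"amplitude ~ {A_est:.3f}, frequency error ~ {df:.1f} Hz, phase ~ {ph0:.3f} rad")
        # In the adaptive loop, the reference frequency and phase would be adjusted
        # to drive this error (df, ph0) towards zero.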

  7. The effect of clock, media, and station location errors on Doppler measurement accuracy

    NASA Technical Reports Server (NTRS)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  8. Subglacial drainage effects on surface motion on a small surge type alpine glacier on the St. Elias range, Yukon Territory, Canada.

    NASA Astrophysics Data System (ADS)

    Rada, C.; Schoof, C.; King, M. A.; Flowers, G. E.; Haber, E.

    2017-12-01

    Subglacial drainage is known to play an important role in glacier dynamics through its influence on basal sliding. However, drainage is also one of the most poorly understood processes in glacier flow due to the difficulties of observing, identifying and modeling the physics involved. In an effort to improve understanding of subglacial processes, we have monitored a small, approximately 100 m thick surge-type alpine glacier for nine years. Over 300 boreholes were instrumented with pressure transducers over a 0.5 km² area in its upper ablation zone, in addition to a weather station and a permanent GPS array consisting of 16 dual-frequency receivers within the study area. We study the influence of the subglacial drainage system on the glacier surface velocity. However, pressure variations in the drainage system during the melt season are dominated by diurnal oscillations. Therefore, GPS solutions have to be computed at sub-diurnal time intervals in order to explore the effects of transient diurnal pressure variations. Due to the small displacements of the glacier surface over those periods (4-10 cm/day), sub-diurnal solutions are dominated by errors, making it impossible to observe the diurnal variations in glacier motion. We have found that the main source of error is GPS multipath. This error source largely cancels out when solutions are computed over 24 hour periods (or more precisely, over a sidereal day), but solution precision decreases quickly when solutions are computed over shorter periods of time. Here we present an inverse problem approach to removing GPS multipath errors on glaciers, and use the reconstructed glacier motion to explore how subglacial drainage morphology and effective pressure influence glacier dynamics at multiple time scales.

  9. A new multiple air beam approach for in-process form error optical measurement

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Li, R.

    2018-07-01

    In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial frequency form error. Optical measurement methods are non-contact and possess the high precision required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation of the workpiece surface. However, the coolant forms an opaque barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant arriving from any direction and with a large thickness, i.e. with a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed approach, a new in-process form error optical measurement system is developed. The coolant removal capability and the performance of the new multiple air beam approach are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation of up to 0.3011 µm even under a large amount of coolant, with a coolant thickness of 15 mm. This corresponds to a relative uncertainty (2σ) of up to 4.35% while the workpiece surface is deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply and air velocity, the proposed approach improves on the previous single air beam approach by factors of 3.3, 1.3 and 5.3, respectively. The results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.

  10. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    NASA Astrophysics Data System (ADS)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided and the unknown parameters in the surface equation are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the minimum zone method. It can be applied to the measured profile-error data of complex surfaces obtained by coordinate measuring machines (CMMs).

  11. The Interaction of Ambient Frequency and Feature Complexity in the Diphthong Errors of Children with Phonological Disorders.

    ERIC Educational Resources Information Center

    Stokes, Stephanie F.; Lau, Jessica Tse-Kay; Ciocca, Valter

    2002-01-01

    This study examined the interaction of ambient frequency and feature complexity in the diphthong errors produced by 13 Cantonese-speaking children with phonological disorders. Perceptual analysis of 611 diphthongs identified those most frequently and least frequently in error. Suggested treatment guidelines include consideration of three factors:…

  12. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  13. Laser frequency stabilization by combining modulation transfer and frequency modulation spectroscopy.

    PubMed

    Zi, Fei; Wu, Xuejian; Zhong, Weicheng; Parker, Richard H; Yu, Chenghui; Budker, Simon; Lu, Xuanhui; Müller, Holger

    2017-04-01

    We present a hybrid laser frequency stabilization method combining modulation transfer spectroscopy (MTS) and frequency modulation spectroscopy (FMS) for the cesium D2 transition. In a typical pump-probe setup, the error signal is a combination of the DC-coupled MTS error signal and the AC-coupled FMS error signal. This combines the long-term stability of the former with the high signal-to-noise ratio of the latter. In addition, we enhance the long-term frequency stability with laser intensity stabilization. By measuring the frequency difference between two independent hybrid spectroscopy setups, we investigate the short- and long-term stability. We find a long-term stability of 7.8 kHz, characterized by the standard deviation of the beating frequency drift over the course of 10 h, and a short-term stability of 1.9 kHz, characterized by the Allan deviation at 2 s of integration time.

  14. Apparatus and Method to Enable Precision and Fast Laser Frequency Tuning

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R. (Inventor); Numata, Kenji (Inventor); Wu, Stewart T. (Inventor); Yang, Guangning (Inventor)

    2015-01-01

    An apparatus and method is provided to enable precision and fast laser frequency tuning. For instance, a fast tunable slave laser may be dynamically offset-locked to a reference laser line using an optical phase-locked loop. The slave laser is heterodyned against a reference laser line to generate a beatnote that is subsequently frequency divided. The phase difference between the divided beatnote and a reference signal may be detected to generate an error signal proportional to the phase difference. The error signal is converted into appropriate feedback signals to phase lock the divided beatnote to the reference signal. The slave laser frequency target may be rapidly changed based on a combination of a dynamically changing frequency of the reference signal, the frequency dividing factor, and an effective polarity of the error signal. Feed-forward signals may be generated to accelerate the slave laser frequency switching through laser tuning ports.

  15. A drifting GPS buoy for retrieving effective riverbed bathymetry

    NASA Astrophysics Data System (ADS)

    Hostache, R.; Matgen, P.; Giustarini, L.; Teferle, F. N.; Tailliez, C.; Iffly, J.-F.; Corato, G.

    2015-01-01

    Spatially distributed riverbed bathymetry information is rarely available but is essential for accurate hydrodynamic modeling. This study aims at evaluating the potential of Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), for retrieving such data. Drifting buoys equipped with navigation systems such as GPS enable the quasi-continuous measurement of water surface elevation from virtually any point in the world. The present study investigates the potential of assimilating GNSS-derived water surface elevation (WSE) measurements into hydraulic models in order to retrieve effective riverbed bathymetry. First tests with a dual-frequency GPS receiver show that the root mean squared error (RMSE) of the elevation measurement equals 30 cm, provided that differential post-processing is performed. Next, synthetic observations of a drifting buoy were generated assuming a 30 cm average error in the WSE measurements. By assimilating the synthetic observations into a 1D hydrodynamic model, we show that the riverbed bathymetry can be retrieved with an accuracy of 36 cm. Moreover, the WSEs simulated by the hydrodynamic model using the retrieved bathymetry are in good agreement with the synthetic "truth", exhibiting an RMSE of 27 cm.

  16. Wavefront Sensing Analysis of Grazing Incidence Optical Systems

    NASA Technical Reports Server (NTRS)

    Rohrbach, Scott; Saha, Timo

    2012-01-01

    Wavefront sensing is a process by which optical system errors are deduced from the aberrations in the image of an ideal source. The method has been used successfully in near-normal incidence, but not for grazing incidence systems. This innovation highlights the ability to examine out-of-focus images from grazing incidence telescopes (typically operating in the x-ray wavelengths, but integrated using optical wavelengths) and determine the lower-order deformations. This is important because as a metrology tool, this method would allow the integration of high angular resolution optics without the use of normal incidence interferometry, which requires direct access to the front surface of each mirror. Measuring the surface figure of mirror segments in a highly nested x-ray telescope mirror assembly is difficult due to the tight packing of elements and blockage of all but the innermost elements to normal incidence light. While this can be done on an individual basis in a metrology mount, once the element is installed and permanently bonded into the assembly, it is impossible to verify the figure of each element and ensure that the necessary imaging quality will be maintained. By examining on-axis images of an ideal point source, one can gauge the low-order figure errors of individual elements, even when integrated into an assembly. This technique is known as wavefront sensing (WFS). By shining collimated light down the optical axis of the telescope and looking at out-of-focus images, the blur due to low-order figure errors of individual elements can be seen, and the figure error necessary to produce that blur can be calculated. The method avoids the problem of requiring normal incidence access to the surface of each mirror segment. Mirror figure errors span a wide range of spatial frequencies, from the lowest-order bending to the highest order micro-roughness. While all of these can be measured in normal incidence, only the lowest-order contributors can be determined through this WFS technique.

  17. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
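
    A compact numerical sketch of the correction idea, with hypothetical fringe periods and phases rather than the authors' exact formulation: the low-frequency absolute code, scaled by the period ratio, predicts the high-frequency code, and rounding the discrepancy to whole periods identifies and removes the period jump errors.

        import numpy as np

        P_high = 16.0             # period of the original phase-shifted fringes, pixels
        P_low = 3.0 * P_high      # added low-frequency period: odd multiple of the original

        # True absolute analog code, expressed in periods of the high-frequency fringe.
        true_code = np.array([10.0, 10.3, 11.7, 25.2])

        # Measured high-frequency absolute code with period jump errors of +1 and -1
        # periods at the last two pixels.
        high_measured = true_code + np.array([0.0, 0.0, 1.0, -1.0])

        # Low-frequency absolute code (coarser, slightly noisy, but free of jump errors),
        # expressed in periods of the low-frequency fringe.
        low_measured = true_code * (P_high / P_low) + 0.01

        # Correction: predict the high-frequency code from the low-frequency one and
        # round the discrepancy to an integer number of periods.
        predicted = low_measured * (P_low / P_high)
        jump = np.round(high_measured - predicted)
        corrected = high_measured - jump
        print("detected jumps :", jump)
        print("corrected code :", corrected)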

  18. Crosslinking EEG time-frequency decomposition and fMRI in error monitoring.

    PubMed

    Hoffmann, Sven; Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian

    2014-03-01

    Recent studies implicate a common response monitoring system that is active during both erroneous and correct responses. Converging evidence from time-frequency decompositions of the response-related ERP has revealed that evoked theta activity at fronto-central electrode positions differentiates correct from erroneous responses in simple tasks, but also in more complex tasks. However, up to now it has been unclear how different electrophysiological parameters of error processing, especially at the level of neural oscillations, are related to, or predictive of, BOLD signal changes reflecting error processing at a functional-neuroanatomical level. The present study aims to provide crosslinks between time-domain information, time-frequency information, the fMRI BOLD signal and behavioral parameters in a task examining error monitoring due to mistakes in a mental rotation task. The results show that BOLD signal changes reflecting error processing on a functional-neuroanatomical level are best predicted by evoked oscillations in the theta frequency band. Although the fMRI results in this study account for an involvement of the anterior cingulate cortex, middle frontal gyrus, and the insula in error processing, the correlation of evoked oscillations and BOLD signal was restricted to a coupling of evoked theta and anterior cingulate cortex BOLD activity. The current results indicate that although there is a distributed functional-neuroanatomical network mediating error processing, only distinct parts of this network seem to modulate electrophysiological properties of error monitoring.

  19. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
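
    A small simulation in the spirit of the study, at a much smaller scale than the GAW 19 analysis and with hypothetical sample size and minor allele frequency: under the null, simple linear regression of a skewed (gamma) trait on a rare SNV tends to give a type I error rate above the nominal level, while a normal trait does not.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        n, maf, n_rep, alpha = 1000, 0.01, 2000, 0.05    # hypothetical settings

        def type1_rate(trait_sampler):
            """Fraction of null replicates with p < alpha under simple linear regression."""
            hits = 0
            for _ in range(n_rep):
                geno = rng.binomial(2, maf, size=n)      # rare SNV genotypes, no true effect
                trait = trait_sampler(n)
                res = stats.linregress(geno, trait)
                hits += res.pvalue < alpha
            return hits / n_rep

        print("normal trait:", type1_rate(lambda m: rng.standard_normal(m)))
        print("gamma trait :", type1_rate(lambda m: rng.gamma(shape=0.5, scale=1.0, size=m)))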

  20. Wave-induced response of a floating two-dimensional body with a moonpool

    PubMed Central

    Fredriksen, Arnt G.; Kristiansen, Trygve; Faltinsen, Odd M.

    2015-01-01

    Regular wave-induced behaviour of a floating stationary two-dimensional body with a moonpool is studied. The focus is on resonant piston-mode motion in the moonpool and rigid-body motions. Dedicated two-dimensional experiments have been performed. Two numerical hybrid methods, which have previously been applied to related problems, are further developed. Both numerical methods couple potential and viscous flow. The semi-nonlinear hybrid method uses linear free-surface and body-boundary conditions. The other one uses fully nonlinear free-surface and body-boundary conditions. The harmonic polynomial cell method solves the Laplace equation in the potential flow domain, while the finite volume method solves the Navier–Stokes equations in the viscous flow domain near the body. Results from the two codes are compared with the experimental data. The nonlinear hybrid method compares well with the data, while certain discrepancies are observed for the semi-nonlinear method. In particular, the roll motion is over-predicted by the semi-nonlinear hybrid method. Error sources in the semi-nonlinear hybrid method are discussed. The moonpool strongly affects heave motions in a frequency range around the piston-mode resonance frequency of the moonpool. No resonant water motions occur in the moonpool at the piston-mode resonance frequency. Instead large moonpool motions occur at a heave natural frequency associated with small damping near the piston-mode resonance frequency. PMID:25512594

  1. cBathy: A robust algorithm for estimating nearshore bathymetry

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holman, Rob; Holland, K. Todd

    2013-01-01

    A three-part algorithm is described and tested to provide robust bathymetry maps based solely on long time series observations of surface wave motions. The first phase consists of frequency-dependent characterization of the wave field, in which dominant frequencies are estimated by Fourier transform while corresponding wave numbers are derived from spatial gradients in cross-spectral phase over analysis tiles that can be small, allowing high spatial resolution. Coherent spatial structures at each frequency are extracted by frequency-dependent empirical orthogonal function (EOF) analysis. In phase two, depths are found that best fit weighted sets of frequency-wave number pairs. These are subsequently smoothed in time in phase three using a Kalman filter that fills gaps in coverage and objectively averages new estimates of variable quality with prior estimates. Objective confidence intervals are returned. Tests at Duck, NC, using 16 surveys collected over 2 years showed a bias and root-mean-square (RMS) error of 0.19 and 0.51 m, respectively; errors were largest near the offshore limits of analysis (roughly 500 m from the camera) and near the steep shoreline, where analysis tiles mix information from waves, swash and static dry sand. Performance was excellent for small waves but degraded somewhat with increasing wave height. Sand bars and their small-scale alongshore variability were well resolved. A single ground truth survey from a dissipative, low-sloping beach (Agate Beach, OR) showed similar errors over a region that extended several kilometers from the camera and reached depths of 14 m. Vector wave number estimates can also be incorporated into data assimilation models of nearshore dynamics.
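
    A minimal sketch of the phase-two depth estimate described above, for a single tile and without the EOF analysis or Kalman smoothing: each frequency-wave number pair is related to depth through the linear dispersion relation omega^2 = g*k*tanh(k*h), and a weighted fit over the pairs yields the local depth.

        import numpy as np

        G = 9.81  # gravitational acceleration, m/s^2

        def invert_k(omega, h, k_lo=1e-6, k_hi=10.0, n_iter=60):
            """Solve omega^2 = G*k*tanh(k*h) for k by bisection (left side is monotone in k)."""
            for _ in range(n_iter):
                k_mid = 0.5 * (k_lo + k_hi)
                if G * k_mid * np.tanh(k_mid * h) < omega**2:
                    k_lo = k_mid
                else:
                    k_hi = k_mid
            return 0.5 * (k_lo + k_hi)

        def depth_misfit(h, freqs, ks, weights):
            """Weighted misfit between observed and predicted wave numbers at depth h."""
            k_pred = np.array([invert_k(2 * np.pi * f, h) for f in freqs])
            return np.sum(weights * (ks - k_pred) ** 2)

        # Synthetic frequency-wave number pairs for one analysis tile at 4 m true depth.
        h_true = 4.0
        freqs = np.array([0.08, 0.10, 0.12, 0.15])        # Hz
        ks = np.array([invert_k(2 * np.pi * f, h_true) for f in freqs])
        ks *= 1.0 + 0.02 * np.random.default_rng(8).standard_normal(ks.size)  # 2% noise
        weights = np.ones_like(ks)                        # e.g. coherence-derived weights

        # Brute-force 1-D search over candidate depths for this tile.
        depths = np.arange(0.5, 15.0, 0.01)
        misfits = [depth_misfit(h, freqs, ks, weights) for h in depths]
        h_est = depths[int(np.argmin(misfits))]
        print(f"estimated depth = {h_est:.2f} m (true {h_true} m)")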

  2. A Whole Word and Number Reading Machine Based on Two Dimensional Low Frequency Fourier Transforms

    DTIC Science & Technology

    1990-12-01

    they are energy normalized. The normalization process accounts for brightness variations and is equivalent to graphing each 2DFT onto the surface of an n...determined empirically (trial and error). Each set is energy normalized based on the number of coefficients within the set. Therefore, the actual...using the 6 font group case with the top 1000 words, where the energy has been renormalized based on the particular number of coefficients being used

  3. Rigidity controllable polishing tool based on magnetorheological effect

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Wan, Yongjian; Shi, Chunyan

    2012-10-01

    A stable and predictable material removal function (MRF) plays a crucial role in computer controlled optical surfacing (CCOS). For physical-contact polishing, the stability of the MRF depends on intimate contact between the polishing interface and the workpiece. Rigid laps maintain this contact when polishing spherical surfaces, whose curvature does not vary with position on the surface. Such rigid laps provide a smoothing effect for mid-spatial frequency errors, but cannot be used on aspherical surfaces because they would destroy the surface figure. Flexible tools such as magnetorheological fluid or air bonnets conform to the surface [1]. They lack rigidity and provide little natural smoothing effect. We present a rigidity-controllable polishing tool that uses a magnetorheological elastomer (MRE) medium [2]. It provides the ability both to conform to an aspheric surface and to maintain a natural smoothing effect. Moreover, its rigidity can be controlled by the magnetic field. This paper presents the design, analysis, and stiffness variation mechanism model of such a polishing tool [3].

  4. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large compared to non-response and sampling errors. The seasonal kill distribution of the actual hunting data was well fitted by the negative binomial distribution, and the distribution of total season hunting activity was well fitted by a semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed a tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies that are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from the frequency classes not included in the predictably biased frequency classes referred to above, using group averages to construct the curve, as suggested by Ezekiel [1950]. The technique described is highly efficient at reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill when samples are large. The graphical method is less efficient at removing response bias errors from responses on seasonal hunting activity, where an average of 60 percent of the bias was removed.

  5. Compensating for estimation smoothing in kriging

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky, Vera

    1996-01-01

    Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator, compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft² slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well, better than ordinary kriging but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. © 1996 International Association for Mathematical Geology.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esfahani, M. Nasr; Yilmaz, M.; Sonne, M. R.

    The trend towards nanomechanical resonator sensors with increasing sensitivity raises the need to address challenges encountered in the modeling of their mechanical behavior. Selecting the best approach to mechanical response modeling among the various potential computational solid mechanics methods is subject to controversy. A guideline for the selection of the appropriate approach for a specific set of geometry and mechanical properties is needed. In this study, geometrical limitations in frequency response modeling of flexural nanomechanical resonators are investigated. Deviations of the Euler and Timoshenko beam theories from numerical techniques, including finite element modeling and the Surface Cauchy-Born technique, are studied. The results provide a limit beyond which the surface energy contribution dominates the mechanical behavior. Using the Surface Cauchy-Born technique as the reference, a maximum error on the order of 50% is reported for high-aspect-ratio resonators.
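
    As a point of reference for such comparisons, the classical Euler-Bernoulli estimate of a cantilever's flexural natural frequencies is f_n = (λ_n²/2π)·√(EI/(ρAL⁴)); surface-energy and shear-deformation effects appear as deviations from this baseline. The sketch below is a generic illustration only; the silicon properties and beam dimensions are assumed, and no surface-elasticity correction is included.

```python
import numpy as np

def euler_bernoulli_cantilever_freqs(E, rho, L, w, t, n_modes=3):
    """Flexural natural frequencies (Hz) of a rectangular cantilever,
    Euler-Bernoulli theory (no shear deformation, no surface energy)."""
    lambdas = np.array([1.8751, 4.6941, 7.8548])[:n_modes]  # roots of 1 + cos(l)cosh(l) = 0
    I = w * t**3 / 12.0          # second moment of area
    A = w * t                    # cross-sectional area
    return (lambdas**2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A * L**4))

# Assumed bulk silicon properties and a high-aspect-ratio nanobeam geometry.
E, rho = 169e9, 2330.0           # Pa, kg/m^3
L, w, t = 10e-6, 200e-9, 100e-9  # m
for i, f in enumerate(euler_bernoulli_cantilever_freqs(E, rho, L, w, t), 1):
    print(f"mode {i}: {f/1e6:.2f} MHz")
```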

  7. A curved edge diffraction-utilized displacement sensor for spindle metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, ChaBum, E-mail: clee@tntech.edu; Zhao, Rui; Jeon, Seongkyul

    This paper presents a new dimensional metrological sensing principle for a curved surface based on curved edge diffraction. Spindle error measurement technology utilizes a cylindrical or spherical target artifact attached to the spindle with non-contact sensors, typically a capacitive sensor (CS) or an eddy current sensor, pointed at the artifact. However, these sensors are designed for flat surface measurement. Therefore, measuring a target with a curved surface causes error. This is due to electric fields behaving differently between a flat and curved surface than between two flat surfaces. In this study, a laser is positioned incident to the cylindrical surface of the spindle, and a photodetector collects the total field produced by the diffraction around the target surface. The proposed sensor was compared with a CS within a range of 500 μm. The discrepancy between the proposed sensor and CS was 0.017% of the full range. Its sensing performance showed a resolution of 14 nm and a drift of less than 10 nm for 7 min of operation. This sensor was also used to measure dynamic characteristics of the spindle system (natural frequency 181.8 Hz, damping ratio 0.042) and spindle runout (22.0 μm at 2000 rpm). The combined standard uncertainty was estimated as 85.9 nm under current experiment conditions. It is anticipated that this measurement technique allows for in situ health monitoring of a precision spindle system in an accurate, convenient, and low cost manner.

  8. A new high-resolution electromagnetic method for subsurface imaging

    NASA Astrophysics Data System (ADS)

    Feng, Wanjie

    For most electromagnetic (EM) geophysical systems, the contamination of primary fields on secondary fields ultimately limits the capability of controlled-source EM methods. Null coupling techniques were proposed to solve this problem, but small orientation errors in null coupling systems greatly restrict their applications. Another problem encountered by most EM systems is surface interference and geologic noise, which sometimes make a geophysical survey impossible to carry out. To solve these problems, the alternating target antenna coupling (ATAC) method was introduced, which greatly reduced the influence of the primary field and of surface interference; however, this system has limitations on the maximum transmitter moment that can be used. The differential target antenna coupling (DTAC) method was proposed to allow much larger transmitter moments while maintaining the advantages of the ATAC method. In this dissertation, the theoretical DTAC calculations were first derived mathematically using Born and Wolf's complex magnetic vector. 1D layered and 2D blocked earth models were used to demonstrate that the DTAC method has no response to 1D and 2D structures. Analytical studies of a plate model influenced by conductive and resistive backgrounds are presented to explain the physical phenomenology behind the DTAC method, namely that the magnetic fields of the subsurface targets must be frequency dependent. Then, the advantages of the DTAC method, e.g., high resolution, reduced geologic noise, and insensitivity to surface interference, were analyzed using surface and subsurface numerical examples in the EMGIMA software. Next, these theoretical advantages were verified by designing and developing a low-power (moment of 50 Am²) vertical-array DTAC system and testing it on controlled targets and scaled target coils. Finally, a high-power (moment of about 6800 Am²) vertical-array DTAC system was designed, developed and tested on controlled buried targets and surface interference to show that the DTAC system remains insensitive to surface interference even with a high-power transmitter and achieves higher resolution by using the large-moment transmitter. From the theoretical and practical analyses and tests, several characteristics of the DTAC method were found: (1) The DTAC method can null out the effect of 1D layered and 2D structures, because the magnetic fields are orientation independent, which leads to no difference among the null vector directions; this characteristic allows measurement of smaller subsurface targets. (2) The DTAC method is insensitive to orientation errors and is a robust EM null coupling method; even large orientation errors do not affect the measured target responses when a reference frequency and one or more data frequencies are used. (3) The vertical-array DTAC method is effective in reducing geologic noise and is insensitive to surface interference, e.g., fences, vehicles, power lines and buildings. (4) The DTAC method is a high-resolution EM sounding method that can distinguish the depth and orientation of subsurface targets. (5) The vertical-array DTAC method can be adapted to a variety of rapidly moving survey applications. The transmitter moment can be scaled for effective study of near-surface targets (civil engineering, water resources, and environmental restoration) as well as deep targets (mining and other natural-resource exploration).

  9. Removal of daytime thermal deformations in the GBT active surface via out-of-focus holography

    NASA Astrophysics Data System (ADS)

    Hunter, T. R.; Mello, M.; Nikolic, B.; Mason, B.; Schwab, F.; Ghigo, F.; Dicker, S.

    2009-01-01

    The 100-m diameter Green Bank Telescope (GBT) was built with an active surface of 2209 actuators in order to achieve and maintain an accurate paraboloidal shape. While much of the large-scale gravitational deformation of the surface can be described by a finite element model, a significant uncompensated gravitational deformation exists. In recent years, the elevation-dependence of this residual deformation has been successfully measured during benign nighttime conditions using the out-of-focus (OOF) holography technique (Nikolic et al. 2007, A&A 465, 685). Parametrized by a set of Zernike polynomials, the OOF model correction was implemented into the active surface and has been applied during all high-frequency observations since Fall 2006, yielding a consistent gain curve that is flat with elevation. However, large-scale thermal deformation of the surface has remained a problem for daytime high-frequency observations. OOF holography maps taken throughout a clear winter day indicate that surface deformations become significant whenever the Sun is above 10 degrees elevation, but that they change slowly while tracking a single source. In this paper, we describe a further improvement to the GBT active surface that allows an observer to measure and compensate for the thermal surface deformation using the OOF technique. In order to support high-frequency observers, "AutoOOF" is a new GBT Astrid procedure that acquires a quick set of in-focus and out-of-focus on-the-fly continuum maps on a quasar using the currently active receiver. Upon completion of the maps, the data analysis software is launched automatically, which produces and displays the surface map along with a set of Zernike coefficients. These coefficients are then sent to the active surface manager, which combines them with the existing gravitational Zernike terms and FEM in order to compute the total active surface correction. The end-to-end functionality has been tested on the sky at Q-Band and Ka-band during several mornings and afternoons. The telescope beam profiles on a bright quasar typically change from slightly asymmetric to Gaussian, the peak antenna temperature increases, and significant sidelobes (when present) are eliminated. This technique has the potential to bring the daytime GBT aperture efficiency at high frequencies closer to its nighttime level. The total time to run the procedure and apply the corrections is about 20 minutes. The time interval over which the solutions remain valid and helpful will likely vary with the weather conditions and program of observations, and can be better evaluated once a larger dataset has been acquired. We are presently researching the OOF technique using MUSTANG, the first 90 GHz instrument on the GBT. MUSTANG is a 64-pixel bolometer camera, presently operating as a shared-risk science instrument. The use of multi-pixel MUSTANG maps has the potential to significantly speed the process of measuring and correcting thermal deformations to the surface during 90 GHz observations. Of course, the efficiency of 90 GHz observations with the GBT is also limited by the small-scale surface roughness due to errors in the initial setting of the actuator zero points and the individual panel corners. We are planning to measure these errors in detail with traditional holography in the near future.

  11. Quantifying Diurnal Cloud Radiative Effects by Cloud Type in the Tropical Western Pacific

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burleyson, Casey D.; Long, Charles N.; Comstock, Jennifer M.

    2015-06-01

    Cloud radiative effects are examined using long-term datasets collected at the three Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facilities in the tropical western Pacific. We quantify the surface radiation budget, cloud populations, and cloud radiative effects by partitioning the data by cloud type, time of day, and as a function of large scale modes of variability such as El Niño Southern Oscillation (ENSO) phase and wet/dry seasons at Darwin. The novel facet of our analysis is that we break aggregate cloud radiative effects down by cloud type across the diurnal cycle. The Nauru cloud populations and subsequently the surface radiation budget are strongly impacted by ENSO variability whereas the cloud populations over Manus only shift slightly in response to changes in ENSO phase. The Darwin site exhibits large seasonal monsoon related variations. We show that while deeper convective clouds have a strong conditional influence on the radiation reaching the surface, their limited frequency reduces their aggregate radiative impact. The largest source of shortwave cloud radiative effects at all three sites comes from low clouds. We use the observations to demonstrate that potential model biases in the amplitude of the diurnal cycle and mean cloud frequency would lead to larger errors in the surface energy budget compared to biases in the timing of the diurnal cycle of cloud frequency. Our results provide solid benchmarks to evaluate model simulations of cloud radiative effects in the tropics.

  12. Asteroseismic modelling of the solar-type subgiant star β Hydri

    NASA Astrophysics Data System (ADS)

    Brandão, I. M.; Doğan, G.; Christensen-Dalsgaard, J.; Cunha, M. S.; Bedding, T. R.; Metcalfe, T. S.; Kjeldsen, H.; Bruntt, H.; Arentoft, T.

    2011-03-01

    Context. Comparing models and data of pulsating stars is a powerful way to understand the stellar structure better. Moreover, such comparisons are necessary to make improvements to the physics of the stellar models, since they do not yet perfectly represent either the interior or especially the surface layers of stars. Because β Hydri is an evolved solar-type pulsator with mixed modes in its frequency spectrum, it is very interesting for asteroseismic studies. Aims: The goal of the present work is to search for a representative model of the solar-type star β Hydri, based on up-to-date non-seismic and seismic data. Methods: We present a revised list of frequencies for 33 modes, which we produced by analysing the power spectrum of the published observations again using a new weighting scheme that minimises the daily sidelobes. We ran several grids of evolutionary models with different input parameters and different physics, using the stellar evolutionary code ASTEC. For the models that are inside the observed error box of β Hydri, we computed their frequencies with the pulsation code ADIPLS. We used two approaches to find the model that oscillates with the frequencies that are closest to the observed frequencies of β Hydri: (i) we assume that the best model is the one that reproduces the star's interior based on the radial oscillation frequencies alone, to which we have applied the correction for the near-surface effects; (ii) we assume that the best model is the one that produces the lowest value of the chi-square (χ2), i.e. that minimises the difference between the observed frequencies of all available modes and the model predictions, after all model frequencies are corrected for near-surface effects. Results: We show that after applying a correction for near-surface effects to the frequencies of the best models, we can reproduce the observed modes well, including those that have mixed mode character. The model that gives the lowest value of the χ2 is a post-main-sequence model with a mass of 1.04 M⊙ and a metallicity slightly lower than that of the Sun. Our results underscore the importance of having individual frequencies to constrain the properties of the stellar model.
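
    Approach (ii) above amounts to minimizing a χ² between observed and model frequencies after an empirical near-surface correction; a power-law correction of the form proposed by Kjeldsen and collaborators, δν = a·(ν/ν₀)^b, is commonly used for this purpose. The snippet below is a minimal sketch of that figure of merit; the frequency arrays and correction coefficients are placeholder values, not the β Hydri data.

```python
import numpy as np

def chi2_with_surface_correction(nu_obs, sigma_obs, nu_model, a, b, nu0):
    """Chi-square per mode between observed and model frequencies after applying
    an empirical power-law near-surface correction to the model frequencies:
        nu_corrected = nu_model + a * (nu_model / nu0) ** b
    """
    nu_corr = nu_model + a * (nu_model / nu0) ** b
    return np.sum(((nu_obs - nu_corr) / sigma_obs) ** 2) / len(nu_obs)

# Placeholder numbers for illustration only (microhertz).
nu_obs = np.array([850.0, 910.0, 970.0, 1030.0])
sigma_obs = np.full_like(nu_obs, 1.5)
nu_model = np.array([853.0, 913.5, 974.0, 1034.5])
print(chi2_with_surface_correction(nu_obs, sigma_obs, nu_model,
                                   a=-3.0, b=4.9, nu0=1000.0))
```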

  13. A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.

    PubMed

    Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W

    2012-09-01

    In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. ©2012 The British Psychological Society.

  14. Influence of measuring algorithm on shape accuracy in the compensating turning of high gradient thin-wall parts

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi

    2015-02-01

    In order to meet aerodynamic requirements, infrared domes or windows with conformal, thin-wall structures are the development trend for future high-speed aircraft. However, these parts usually have low stiffness, the cutting force changes along the axial position, and it is very difficult to meet the shape-accuracy requirement in a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by the fluctuation of the cutting force and the change of stiffness. In this paper, a contact measuring system with five degrees of freedom, built on an ultra-precision diamond lathe, is developed to achieve high-accuracy on-machine measurement of conformal thin-wall parts. For high-gradient surfaces, the distribution of measuring points is optimized using a data-screening method. The influence of sampling frequency on measuring errors is analyzed, the best sampling frequency is determined with a planning algorithm, the effects of environmental factors and fitting errors are kept within a low range, and the measuring accuracy of the conformal dome is greatly improved during on-machine measurement. For an MgF2 conformal dome with a high gradient, compensating turning is implemented using the designed on-machine measuring algorithm. The shape error is less than 0.8 μm PV, greatly superior to the 3 μm PV before compensating turning, which verifies the correctness of the measuring algorithm.

  15. Real-Time Point Positioning Performance Evaluation of Single-Frequency Receivers Using NASA's Global Differential GPS System

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Iijima, Byron; Meyer, Robert; Bar-Sever, Yoaz; Accad, Elie

    2004-01-01

    This paper evaluates the performance of a single-frequency receiver using the 1-Hz differential corrections provided by NASA's global differential GPS system. While the dual-frequency user can eliminate the ionosphere error by taking a linear combination of observables, the single-frequency user must remove or calibrate this error by other means. To remove the ionosphere error, we take advantage of the fact that the ionospheric group delay on the range observable and the carrier phase advance are equal in magnitude but opposite in sign. An alternative way to calibrate this error is to use a real-time database of grid points computed by JPL's RTI (Real-Time Ionosphere) software. In both cases we evaluate the positional accuracy of a kinematic carrier-phase-based point positioning method on a global extent.
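
    Because the ionospheric group delay on the pseudorange and the phase advance on the carrier are equal and opposite, averaging the two observables cancels the first-order ionospheric term (often called the GRAPHIC combination). The sketch below illustrates this cancellation with synthetic numbers; the observable values are invented and this is not the JPL processing code.

```python
def graphic_combination(pseudorange_m, carrier_phase_m):
    """Half-sum of pseudorange and carrier phase (both in meters).
    The first-order ionospheric delay enters the pseudorange as +I and the
    carrier phase as -I, so it cancels; a carrier-phase ambiguity term remains."""
    return 0.5 * (pseudorange_m + carrier_phase_m)

# Synthetic single-frequency observables (meters): geometric range 22,000,120 m,
# ionospheric delay I = 6.0 m, carrier ambiguity term N = -14.0 m, noise ignored.
rho, iono, amb = 22_000_120.0, 6.0, -14.0
pseudorange = rho + iono
carrier = rho - iono + amb
print(graphic_combination(pseudorange, carrier))   # rho + amb/2, no ionosphere term
```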

  16. Resolution-enhancement and sampling error correction based on molecular absorption line in frequency scanning interferometry

    NASA Astrophysics Data System (ADS)

    Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating

    2018-06-01

    The non-uniform interval resampling method has been widely used in frequency modulated continuous wave (FMCW) laser ranging. In large-bandwidth, long-distance measurements, the range peak is deteriorated by fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using an adjacent-point replacement and spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute distance precision of better than 45 μm over an 8 m range.
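
    The non-uniform resampling at the core of this approach interpolates the measurement-channel interferogram onto an evenly spaced optical-frequency grid derived from an auxiliary reference. The following sketch is a generic illustration with synthetic data, not the authors' processing chain; the sweep nonlinearity and signal model are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic FMCW sweep: the optical frequency axis is a nonlinear function of time.
n = 4096
t = np.linspace(0.0, 1.0, n)
nu = 1.0e12 * (t + 0.05 * np.sin(2 * np.pi * t))      # swept optical frequency (Hz), assumed nonlinearity
tau = 40e-12                                          # target delay (s); beat phase = 2*pi*nu*tau
beat = np.cos(2 * np.pi * nu * tau)                   # measurement-channel interferogram

# Resample the beat signal onto a uniform optical-frequency grid.
nu_uniform = np.linspace(nu[0], nu[-1], n)
beat_resampled = CubicSpline(nu, beat)(nu_uniform)

# After resampling, an FFT over optical frequency gives a sharp peak at the target delay.
spectrum = np.abs(np.fft.rfft(beat_resampled * np.hanning(n)))
delays = np.fft.rfftfreq(n, d=nu_uniform[1] - nu_uniform[0])
print(f"recovered delay: {delays[spectrum.argmax()]*1e12:.1f} ps (true {tau*1e12:.1f} ps)")
```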

  17. Application of Modified Particle Swarm Optimization Method for Parameter Extraction of 2-D TEC Mapping

    NASA Astrophysics Data System (ADS)

    Toker, C.; Gokdag, Y. E.; Arikan, F.; Arikan, O.

    2012-04-01

    The ionosphere is a very important part of space weather. Modeling and monitoring of ionospheric variability is a major part of satellite communication, navigation and positioning systems. Total Electron Content (TEC), which is defined as the line integral of the electron density along a ray path, is one of the parameters used to investigate ionospheric variability. Dual-frequency GPS receivers, with their worldwide availability and efficiency in TEC estimation, have become a major data source for global and regional TEC modeling. When Global Ionospheric Maps (GIM) of International GPS Service (IGS) centers (http://iono.jpl.nasa.gov/gim.html) are investigated, it can be observed that the regional ionosphere along the midlatitude regions can be modeled as a constant, linear or quadratic surface. Globally, especially around the magnetic equator, the TEC surfaces resemble twisted and dispersed single-centered or double-centered Gaussian functions. Particle Swarm Optimization (PSO) has proven itself a fast-converging and effective optimization tool in various diverse fields. Yet, in order to apply this optimization technique to TEC modeling, the method has to be modified for higher efficiency and accuracy in the extraction of geophysical parameters such as the model parameters of TEC surfaces. In this study, a modified PSO (mPSO) method is applied to regional and global synthetic TEC surfaces. The synthetic surfaces, which represent the trend and small-scale variability of various ionospheric states, are necessary to compare the performance of mPSO in terms of number of iterations, accuracy of parameter estimation and overall surface reconstruction. The Cramer-Rao bounds for each surface type and model are also investigated, and the performance of mPSO is tested with respect to these bounds. For global models, the sample points used in the optimization are obtained using the IGS receiver network. For regional TEC models, regional networks such as the Turkish National Permanent GPS Network (TNPGN-Active) receiver sites are used. The regional TEC models are grouped into constant (one parameter), linear (two parameters), and quadratic (six parameters) surfaces, which are functions of latitude and longitude. Global models require seven parameters for a single-centered Gaussian and 13 parameters for a double-centered Gaussian function. The error criterion is the normalized percentage error for both the surface and the parameters. It is observed that mPSO is very successful in parameter extraction of various regional and global models. The normalized reconstruction error varies from 10⁻⁴ for constant surfaces to 10⁻³ for quadratic surfaces in regional models sampled with regional networks. Even for the case of a severe geomagnetic storm that affects measurements globally, with the IGS network, the reconstruction error is on the order of 10⁻¹, even though individual parameters have higher normalized errors. The modified PSO technique has proven to be a useful tool for parameter extraction of more complicated TEC models. This study is supported by TUBITAK EEEAG under Grant No: 109E055.
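
    As an illustration of the parameter-extraction idea (not the authors' mPSO, whose modifications are not reproduced here), the sketch below uses a bare-bones particle swarm to fit the six coefficients of a quadratic surface TEC(φ, λ) = a₀ + a₁φ + a₂λ + a₃φ² + a₄φλ + a₅λ² to noisy samples; the swarm settings and synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations of a quadratic TEC surface over a regional network
# (coordinates are centered to keep the fit well conditioned).
true_coeffs = np.array([20.0, 0.5, -0.3, 0.02, -0.01, 0.015])
dlat = rng.uniform(-7.5, 7.5, 200)          # latitude offset from region center (deg)
dlon = rng.uniform(-10.0, 10.0, 200)        # longitude offset from region center (deg)
A = np.column_stack([np.ones_like(dlat), dlat, dlon, dlat**2, dlat*dlon, dlon**2])
tec_obs = A @ true_coeffs + rng.normal(0, 0.5, dlat.size)   # TECU, with noise

def cost(coeffs):
    """Root-mean-square misfit between the modeled and observed TEC."""
    return np.sqrt(np.mean((A @ coeffs - tec_obs) ** 2))

def pso(cost, dim, n_particles=40, iters=300, lo=-50.0, hi=50.0,
        w=0.72, c1=1.5, c2=1.5):
    """Bare-bones global-best particle swarm optimizer."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

coeffs, rms = pso(cost, dim=6)
print("recovered coefficients:", np.round(coeffs, 3))
print("RMS misfit (TECU):", round(rms, 3))
```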

  18. Surface characterization protocol for precision aspheric optics

    NASA Astrophysics Data System (ADS)

    Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra

    2017-10-01

    In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes for precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to deliver aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their Form, Figure and Finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors generated during aspheric surface fabrication. This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles. The effort is to identify the sources of error and to optimize the metrology process. The sources of error during profilometry may include profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of the aspheric profile, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor-Hobson make) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps in establishing an optimal aspheric surface production methodology.
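
    Profilometry of an aspheric surface is typically compared against the nominal even-asphere sag, z(r) = r²/(R(1+√(1−(1+k)r²/R²))) + Σ A₂ᵢ r²ⁱ. The short sketch below evaluates that design equation; the radius, conic constant and aspheric coefficient are placeholder values, not those of any surface discussed above.

```python
import numpy as np

def asphere_sag(r, R, k, coeffs=()):
    """Sag of an even asphere: conic base plus even polynomial terms.
    R: vertex radius of curvature, k: conic constant, coeffs: (A4, A6, ...)."""
    r = np.asarray(r, dtype=float)
    z = r**2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + k) * r**2 / R**2)))
    for i, a in enumerate(coeffs, start=2):        # A4*r^4, A6*r^6, ...
        z = z + a * r**(2 * i)
    return z

# Placeholder design: R = 200 mm, k = -1 (parabola) plus a small 4th-order term.
r = np.linspace(0, 25, 6)                          # radial coordinate, mm
print(np.round(asphere_sag(r, R=200.0, k=-1.0, coeffs=(1e-8,)), 5))
```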

  19. Modeling the directivity of parametric loudspeaker

    NASA Astrophysics Data System (ADS)

    Shi, Chuang; Gan, Woon-Seng

    2012-09-01

    The emerging applications of the parametric loudspeaker, such as 3D audio, demand accurate directivity control at the audible frequency (i.e., the difference frequency). Although delay-and-sum beamforming has proven adequate for adjusting the steering angles of the parametric loudspeaker, accurate prediction of the mainlobe and sidelobes remains a challenging problem. This is mainly because of the approximations used to derive the directivity of the difference frequency from the directivity of the primary frequencies, and the mismatches between the theoretical and measured directivities caused by system errors incurred at different stages of the implementation. In this paper, we propose a directivity model of the parametric loudspeaker. The model consists of two tuning vectors corresponding to the spacing error and the weight error for the primary frequency. It adopts a modified form of the product directivity principle for the difference frequency to further improve the modeling accuracy.
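
    The product directivity principle referred to above approximates the difference-frequency directivity as the product of the two primary-frequency directivities, each of which can be taken as the delay-and-sum array factor of the ultrasonic array. The sketch below is a generic illustration of that baseline principle (array geometry, frequencies and steering angle are assumed); the modified form proposed in the paper is not reproduced.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def array_factor(theta, freq, n_elem, pitch, steer_deg=0.0):
    """Magnitude of the delay-and-sum array factor of a uniform linear array."""
    k = 2 * np.pi * freq / C
    n = np.arange(n_elem)
    phase = k * pitch * n[:, None] * (np.sin(np.radians(theta))[None, :]
                                      - np.sin(np.radians(steer_deg)))
    return np.abs(np.sum(np.exp(1j * phase), axis=0)) / n_elem

theta = np.linspace(-60, 60, 241)
f1, f2 = 40e3, 41e3                    # assumed primary ultrasonic frequencies
D1 = array_factor(theta, f1, n_elem=8, pitch=0.005, steer_deg=10.0)
D2 = array_factor(theta, f2, n_elem=8, pitch=0.005, steer_deg=10.0)
D_diff = D1 * D2                       # product directivity at the 1 kHz difference frequency
print(f"predicted mainlobe direction: {theta[D_diff.argmax()]:.1f} degrees")
```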

  20. Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2001-01-01

    A set of benchmark test articles was developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface-mounted patch actuators and a composite box beam with surface-mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical-to-mechanical effectiveness of the actuators, producing anti-resonance errors.

  1. Probing the Spatio-Temporal Characteristics of Temporal Aliasing Errors and their Impact on Satellite Gravity Retrievals

    NASA Astrophysics Data System (ADS)

    Wiese, D. N.; McCullough, C. M.

    2017-12-01

    Studies have shown that both single pair low-low satellite-to-satellite tracking (LL-SST) and dual-pair LL-SST hypothetical future satellite gravimetry missions utilizing improved onboard measurement systems relative to the Gravity Recovery and Climate Experiment (GRACE) will be limited by temporal aliasing errors; that is, the error introduced through deficiencies in models of high frequency mass variations required for the data processing. Here, we probe the spatio-temporal characteristics of temporal aliasing errors to understand their impact on satellite gravity retrievals using high fidelity numerical simulations. We find that while aliasing errors are dominant at long wavelengths and multi-day timescales, improving knowledge of high frequency mass variations at these resolutions translates into only modest improvements (i.e. spatial resolution/accuracy) in the ability to measure temporal gravity variations at monthly timescales. This result highlights the reliance on accurate models of high frequency mass variations for gravity processing, and the difficult nature of reducing temporal aliasing errors and their impact on satellite gravity retrievals.

  2. Data processing and error analysis for the CE-1 Lunar microwave radiometer

    NASA Astrophysics Data System (ADS)

    Feng, Jian-Qing; Su, Yan; Liu, Jian-Jun; Zou, Yong-Liao; Li, Chun-Lai

    2013-03-01

    The microwave radiometer (MRM) onboard the Chang'E-1 (CE-1) lunar orbiter is a four-frequency microwave radiometer, mainly used to obtain the brightness temperature (TB) of the lunar surface, from which the thickness, temperature, dielectric constant and other related properties of the lunar regolith can be derived. The working mode of the CE-1 MRM, the ground calibration (including the official calibration coefficients), and the acquisition and processing of the raw data are introduced. Our data analysis shows that TB increases with increasing frequency, decreases towards the lunar poles, and is significantly affected by solar illumination. Our analysis also reveals that the main uncertainty in TB comes from the ground calibration.
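
    Ground calibration of a total-power radiometer channel is commonly a two-point procedure: hot and cold reference targets of known brightness temperature fix the gain and offset that map raw counts to TB. The sketch below illustrates that generic scheme only; the counts and reference temperatures are invented and are not the CE-1 MRM official coefficients.

```python
import numpy as np

def two_point_calibration(counts_cold, counts_hot, tb_cold, tb_hot):
    """Return (gain, offset) such that TB = gain * counts + offset."""
    gain = (tb_hot - tb_cold) / (counts_hot - counts_cold)
    offset = tb_cold - gain * counts_cold
    return gain, offset

# Invented calibration-point data for illustration.
gain, offset = two_point_calibration(counts_cold=10500, counts_hot=24300,
                                     tb_cold=95.0, tb_hot=320.0)
scene_counts = np.array([15200, 18800, 21000])
print(np.round(gain * scene_counts + offset, 1))   # scene brightness temperatures (K)
```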

  3. Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.

    PubMed

    Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue

    2015-08-20

    Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.

  4. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.

    2009-01-01

    This paper establishes a relationship between the polishing process parameters and the generation of mid-spatial-frequency error. Considerations in the design of the polishing lap and the optimization of the process parameters (speeds, stroke, etc.) to keep the residual mid-spatial-frequency error to a minimum are also presented.

  5. Distance measurements in Au nanoparticles functionalized with nitroxide radicals and Gd(3+)-DTPA chelate complexes.

    PubMed

    Yulikov, Maxim; Lueders, Petra; Warsi, Muhammad Farooq; Chechik, Victor; Jeschke, Gunnar

    2012-08-14

    Nanosized gold particles were functionalised with two types of paramagnetic surface tags, one having a nitroxide radical and the other one carrying a DTPA complex loaded with Gd(3+). Selective measurements of nitroxide-nitroxide, Gd(3+)-nitroxide and Gd(3+)-Gd(3+) distances were performed on this system and information on the distance distribution in the three types of spin pairs was obtained. A numerical analysis of the dipolar frequency distributions is presented for Gd(3+) centres with moderate magnitudes of zero-field splitting, in the range of detection frequencies and resonance fields where the high-field approximation is only roughly valid. The dipolar frequency analysis confirms the applicability of DEER for distance measurements in such complexes and gives an estimate for the magnitudes of possible systematic errors due to the non-ideality of the measurement of the dipole-dipole interaction.

  6. Fast coupled-cluster singles and doubles for extended systems: Application to the anharmonic vibrational frequencies of polyethylene in the Γ approximation

    NASA Astrophysics Data System (ADS)

    Keçeli, Murat; Hirata, So

    2010-09-01

    The mod-n scheme is introduced to the coupled-cluster singles and doubles (CCSD) and third-order Møller-Plesset perturbation (MP3) methods for extended systems of one-dimensional periodicity. By uniformly downsampling the wave vectors in Brillouin-zone integrations, this scheme accelerates these accurate but expensive correlation-energy calculations by two to three orders of magnitude while incurring negligible errors in the total and relative energies. To maintain this accuracy, the number of nearest-neighbor unit cells included in the lattice sums must also be reduced by the same downsampling rate (n). The mod-n CCSD and MP3 methods are applied to the potential-energy surface of polyethylene in anharmonic frequency calculations of its infrared- and Raman-active vibrations. The calculated frequencies are found to be within 46 cm⁻¹ (CCSD) and 78 cm⁻¹ (MP3) of the observed values.

  7. The Impact of Atmospheric Modeling Errors on GRACE Estimates of Mass Loss in Greenland and Antarctica

    NASA Astrophysics Data System (ADS)

    Hardy, Ryan A.; Nerem, R. Steven; Wiese, David N.

    2017-12-01

    Systematic errors in Gravity Recovery and Climate Experiment (GRACE) monthly mass estimates over the Greenland and Antarctic ice sheets can originate from low-frequency biases in the European Centre for Medium-Range Weather Forecasts (ECMWF) Operational Analysis model, the atmospheric component of the Atmosphere and Ocean De-aliasing Level-1B (AOD1B) product used to forward model atmospheric and ocean gravity signals in GRACE processing. These biases are revealed in differences in surface pressure between the ECMWF Operational Analysis model, state-of-the-art reanalyses, and in situ surface pressure measurements. While some of these errors are attributable to well-understood discrete model changes and have published corrections, we examine errors these corrections do not address. We compare multiple models and in situ data in Antarctica and Greenland to determine which models have the most skill relative to monthly averages of the dealiasing model. We also evaluate linear combinations of these models and synthetic pressure fields generated from direct interpolation of pressure observations. These models consistently reveal drifts in the dealiasing model that cause the acceleration of Antarctica's mass loss between April 2002 and August 2016 to be underestimated by approximately 4 Gt yr⁻². We find similar results after attempting to solve the inverse problem, recovering pressure biases directly from the GRACE Jet Propulsion Laboratory RL05.1M mascon solutions. Over Greenland, we find a 2 Gt yr⁻¹ bias in the mass trend. While our analysis focuses on errors in Release 05 of AOD1B, we also evaluate the new AOD1B RL06 product. We find that this new product mitigates some of the aforementioned biases.

  8. Reproducibility of 3D kinematics and surface electromyography measurements of mastication.

    PubMed

    Remijn, Lianne; Groen, Brenda E; Speyer, Renée; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G

    2016-03-01

    The aim of this study was to determine the measurement reproducibility of a procedure for evaluating the mastication process and to estimate the smallest detectable differences of 3D kinematic and surface electromyography (sEMG) variables. Kinematics of mandible movements and sEMG activity of the masticatory muscles were obtained over two sessions with four conditions: two food textures (biscuit and bread) of two sizes (small and large). Twelve healthy adults (mean age 29.1 years) completed the study. The second to the fifth chewing cycle of 5 bites were used for the analyses. The reproducibility per outcome variable was calculated with an intraclass correlation coefficient (ICC), and a Bland-Altman analysis was applied to determine the standard error of measurement, the relative error of measurement, and the smallest detectable differences of all variables. ICCs ranged from 0.71 to 0.98 for all outcome variables. The outcome variables consisted of four bite variables and fourteen chewing cycle variables. The relative standard error of measurement of the bite variables was up to 17.3% for 'time-to-swallow', 'time-to-transport' and 'number of chewing cycles', but ranged from 31.5% to 57.0% for 'change of chewing side'. The relative standard error of measurement ranged from 4.1% to 24.7% for chewing cycle variables and was smaller for kinematic variables than for sEMG variables. In general, measurements obtained with 3D kinematics and sEMG are reproducible for assessing the mastication process. The duration of the chewing cycle and the frequency of chewing were the most reproducible measurements. Change of chewing side could not be reproduced. The published measurement errors and smallest detectable differences will aid the interpretation of the results of future clinical studies using the same study variables. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors occur in the fringe order, leading to significant errors in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods, and a strategy to detect and correct the wrong fringe orders is described. Compared with existing methods, we do not need to estimate a threshold associated with absolute phase values to determine fringe order errors, which makes the method more reliable and avoids the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
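
    In two-frequency temporal phase unwrapping, the fringe order of the high-frequency wrapped phase is obtained by scaling the already unambiguous low-frequency phase and rounding; phase noise beyond a bound flips the rounding to a wrong integer, which is the failure mode the paper addresses. The sketch below shows the baseline fringe-order computation only; the frequency ratio and phase values are assumptions, and the proposed correction constraints are not reproduced.

```python
import numpy as np

def unwrap_two_frequency(phi_high_wrapped, phi_low, ratio):
    """Absolute phase from two-frequency phase unwrapping.

    phi_high_wrapped : wrapped phase of the high-frequency fringes, in (-pi, pi]
    phi_low          : unambiguous phase of the low-frequency fringes
    ratio            : spatial-frequency ratio f_high / f_low
    """
    fringe_order = np.round((ratio * phi_low - phi_high_wrapped) / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * fringe_order

# Assumed example: frequency ratio 16, true absolute high-frequency phase 55.0 rad.
true_phase = 55.0
phi_low = true_phase / 16 + 0.01               # low-frequency phase with a little noise
phi_high = np.angle(np.exp(1j * true_phase))   # wrapped to (-pi, pi]
print(unwrap_two_frequency(phi_high, phi_low, ratio=16))   # ~55.0
```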

  10. Text familiarity, word frequency, and sentential constraints in error detection.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Schauss, Frances

    2009-12-01

    The present study examines whether the frequency of an error-bearing word and its predictability, arising from sentential constraints and text familiarity, either independently or jointly, would impair error detection by making proofreading driven by top-down processes. Prior to a proofreading task, participants were asked to read, copy, memorize, or paraphrase sentences, half of which contained errors. These tasks represented a continuum of progressively more demanding and time-consuming activities, which were thought to lead to comparable increases in text familiarity and thus predictability. Proofreading times were unaffected by whether the sentences had been encountered earlier. Proofreading was slower and less accurate for high-frequency words and for highly constrained sentences. Prior memorization produced divergent effects on accuracy depending on sentential constraints. The latter finding suggested that a substantial level of predictability, such as that produced by memorizing highly constrained sentences, can increase the probability of overlooking errors.

  11. Research of misalignment between dithered ring laser gyro angle rate input axis and dither axis

    NASA Astrophysics Data System (ADS)

    Li, Geng; Wu, Wenqi; FAN, Zhenfang; LU, Guangfeng; Hu, Shaomin; Luo, Hui; Long, Xingwu

    2014-12-01

    The strap-down inertial navigation system (SINS), especially one composed of dithered ring laser gyroscopes (DRLGs), provides high reliability and performance for moving vehicles. However, the mechanical dither used to eliminate the lock-in effect introduces vibration disturbances into the INS and leads to dither coupling problems in the inertial measurement unit (IMU) gyroscope triad, which limits further applications. Among the DRLG errors between the true and measured rotation rates, the one most frequently considered is the input-axis misalignment between the input reference axis, which is perpendicular to the mounting surface, and the gyro angular rate input axis. The misalignment angle between the DRLG dither axis and the gyro angular rate input axis, however, is often ignored by researchers; it is amplified by the dither coupling problem and can have negative effects, especially in high-accuracy SINS. To study the problem more clearly, the concept of misalignment between the DRLG dither axis and the gyro angular rate input axis is investigated. Considering that the misalignment error is of the order of 10⁻³ rad or smaller, the best way to measure it is with the DRLG itself, using an angle exciter as an auxiliary. In this paper, the concept of dither axis misalignment is first explained explicitly. On this basis, the frequency of the angle exciter is introduced as a reference parameter: when the DRLG is mounted on the angle exciter at a certain angle, the projections of the angle exciter rotation rate and the mechanical oscillation rate on the gyro input axis are both sensed by the DRLG. If the dither axis is misaligned with the gyro input axis, four major frequencies are detected: the angle exciter frequency, the dither mechanical frequency, and the sum and difference of these two frequencies. The amplitude spectrum of the DRLG output signal is then obtained using a LabVIEW program. If only the angle exciter and dither mechanical frequencies appear, the misalignment may be too small to be detected; otherwise, the amplitudes at the sum and difference frequencies indicate the misalignment angle between the gyro angular rate input axis and the dither axis. Finally, related parameters such as the frequency and amplitude of the angle exciter and the sample rate are calculated and the results are analyzed. Simulation and experimental results prove the effectiveness of the proposed method.
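
    The detection principle is that a misalignment between the dither axis and the input axis mixes the exciter rotation and the dither oscillation, so intermodulation lines appear at the sum and difference of the two frequencies in the gyro output spectrum. The Python sketch below stands in for the LabVIEW program; all frequencies, amplitudes and the mixing coefficient are invented. It simulates such a signal and reads off the amplitudes at the four frequencies of interest.

```python
import numpy as np

fs, T = 2000.0, 20.0                         # sample rate (Hz) and record length (s), assumed
t = np.arange(0, T, 1 / fs)
f_exciter, f_dither = 5.0, 400.0             # exciter and dither frequencies (Hz), assumed
eps = 0.002                                  # mixing coefficient standing in for the misalignment

signal = (1.0 * np.sin(2 * np.pi * f_exciter * t)
          + 0.5 * np.sin(2 * np.pi * f_dither * t)
          + eps * np.sin(2 * np.pi * (f_dither + f_exciter) * t)
          + eps * np.sin(2 * np.pi * (f_dither - f_exciter) * t))

spec = np.abs(np.fft.rfft(signal)) * 2 / len(t)     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for f in (f_exciter, f_dither, f_dither - f_exciter, f_dither + f_exciter):
    amp = spec[np.argmin(np.abs(freqs - f))]
    print(f"{f:7.1f} Hz : amplitude {amp:.4f}")
```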

  12. Dynamic Calibration and Verification Device of Measurement System for Dynamic Characteristic Coefficients of Sliding Bearing

    PubMed Central

    Chen, Runlin; Wei, Yangyang; Shi, Zhaoyang; Yuan, Xiaoyang

    2016-01-01

    The identification accuracy of dynamic characteristic coefficients is difficult to guarantee because of the errors of the measurement system itself. A novel dynamic calibration method for the measurement system of dynamic characteristic coefficients is proposed in this paper to eliminate these errors. Unlike calibration based on a suspended mass, this calibration method uses a verification device that is a spring-mass system, which can simulate the dynamic characteristics of a sliding bearing. The verification device is built, and the calibration experiment is implemented over a wide frequency range, with the bearing stiffness simulated by disc springs. The experimental results show that the amplitude errors of this measurement system are small in the frequency range of 10 Hz–100 Hz, while the phase errors increase with frequency. A simulated experiment of dynamic characteristic coefficient identification in the frequency range of 10 Hz–30 Hz preliminarily verifies that the calibration data can support dynamic characteristics tests of sliding bearings in this range. Bearing experiments over greater frequency ranges require higher manufacturing and installation precision of the calibration device, and the calibration experiment procedures should be further improved. PMID:27483283

  13. Numerical simulation and analysis for low-frequency rock physics measurements

    NASA Astrophysics Data System (ADS)

    Dong, Chunhui; Tang, Genyang; Wang, Shangxu; He, Yanxiao

    2017-10-01

    In recent years, several experimental methods have been introduced to measure the elastic parameters of rocks in the relatively low-frequency range, such as differential acoustic resonance spectroscopy (DARS) and stress-strain measurement. It is necessary to verify the validity and feasibility of the applied measurement method and to quantify the sources and levels of measurement error. Relying solely on the laboratory measurements, however, we cannot evaluate the complete wavefield variation in the apparatus. Numerical simulations of elastic wave propagation, on the other hand, are used to model the wavefield distribution and physical processes in the measurement systems, and to verify the measurement theory and analyze the measurement results. In this paper we provide a numerical simulation method to investigate the acoustic waveform response of the DARS system and the quasi-static responses of the stress-strain system, both of which use axisymmetric apparatus. We applied this method to parameterize the properties of the rock samples, the sample locations and the sensor (hydrophone and strain gauges) locations and simulate the measurement results, i.e. resonance frequencies and axial and radial strains on the sample surface, from the modeled wavefield following the physical experiments. Rock physical parameters were estimated by inversion or direct processing of these data, and showed a perfect match with the true values, thus verifying the validity of the experimental measurements. Error analysis was also conducted for the DARS system with 18 numerical samples, and the sources and levels of error are discussed. In particular, we propose an inversion method for estimating both density and compressibility of these samples. The modeled results also showed fairly good agreement with the real experiment results, justifying the effectiveness and feasibility of our modeling method.

  14. Numerical simulation and experimental verification of extended source interferometer

    NASA Astrophysics Data System (ADS)

    Hou, Yinlong; Li, Lin; Wang, Shanshan; Wang, Xiao; Zang, Haijun; Zhu, Qiudong

    2013-12-01

    The extended source interferometer, compared with the classical point source interferometer, can suppress coherent noise from the environment and the system, decrease dust scattering effects, and reduce the high-frequency error of the reference surface. Numerical simulation and experimental verification of an extended source interferometer are discussed in this paper. To provide guidance for the experiment, the extended source interferometer is modeled in the optical design software Zemax. Matlab code is written to adjust the field parameters of the optical system automatically and to collect a series of interferometric data conveniently; Dynamic Data Exchange (DDE) is used to connect Zemax and Matlab. The visibility of the interference fringes is then calculated by summing the collected interferometric data. Alongside the simulation, an experimental platform for the extended source interferometer was established, consisting of an extended source, an interference cavity and an image collection system. The decrease of the high-frequency error of the reference surface and of the coherent noise of the environment is verified. The relation between the spatial coherence and the size, shape and intensity distribution of the extended source is also verified through analysis of the fringe visibility. The simulation results are in line with those from the real extended source interferometer, showing that the model simulates the actual optical interference of the extended source interferometer quite well. Therefore, the simulation platform can be used to guide interferometer experiments based on various extended sources.
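
    Fringe visibility, the quantity used above to probe spatial coherence, is V = (I_max − I_min)/(I_max + I_min) over a fringe pattern. The snippet below is a minimal illustration on a synthetic fringe profile; the modulation level and fringe frequency are arbitrary.

```python
import numpy as np

def fringe_visibility(intensity):
    """Michelson visibility V = (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = np.max(intensity), np.min(intensity)
    return (i_max - i_min) / (i_max + i_min)

# Synthetic fringe profile with 60% modulation.
x = np.linspace(0, 1, 1000)
intensity = 1.0 + 0.6 * np.cos(2 * np.pi * 20 * x)
print(f"visibility: {fringe_visibility(intensity):.2f}")   # ~0.60
```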

  15. An experimental system for the study of active vibration control - Development and modeling

    NASA Astrophysics Data System (ADS)

    Batta, George R.; Chen, Anning

    A modular rotational vibration system designed to facilitate the study of active control of vibrating systems is discussed. The model error associated with four common types of identification problems has been studied. The general multiplicative uncertainty shape for a vibration system is small at low frequencies and large at high frequencies. The frequency-domain error function has sharp peaks near the frequency of each mode. The inability to identify a high-frequency mode causes an increase in uncertainty at all frequencies. Missing a low-frequency mode causes the uncertainties to be much larger at all frequencies than missing a high-frequency mode. Hysteresis causes a small increase of uncertainty at low frequencies, but its overall effect is relatively small.

  16. Use of IRI to Model the Effect of Ionosphere Emission on Earth Remote Sensing at L-Band

    NASA Technical Reports Server (NTRS)

    Abraham, Saji; LeVine, David M.

    2004-01-01

    Microwave remote sensing in the window at 1.413 GHz (L-band) set aside for passive use only is important for monitoring sea surface salinity and soil moisture. These parameters are important for understanding ocean dynamics and energy exchange between the surface and atmosphere, and both NASA and ESA plan to launch satellite sensors to monitor these parameters at L-band (Aquarius, Hydros and SMOS). The ionosphere is an important source of error for passive remote sensing at this frequency. In addition to Faraday rotation, emission from the ionosphere is also a potential source of error at L-band. As an aid for correcting for emission, a regression model is presented that relates ionosphere emission to the integrated electron density (TEC). The goal is to use TEC from sources such as TOPEX, JASON or GPS to obtain estimates of emission over the oceans where the electron density profiles needed to compute emission are not available. In addition, data will also be presented to evaluate the use of the IRI for computing emission over the ocean.

  17. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
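    The global analysis relies on Sobol' indices estimated by Monte Carlo. The sketch below shows a generic Saltelli-type estimator of first-order indices for any scalar model; the sample size and the additive test function are illustrative stand-ins for the authors' analytical periodic-error model.

```python
import numpy as np

def first_order_sobol(model, n_params, n_samples=2**14, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices.

    model: callable mapping an (n, n_params) array of inputs in [0, 1)
    to an (n,) array of outputs, e.g. a periodic-error amplitude as a
    function of normalized misalignment parameters (a stand-in here).
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    fA, fB = model(A), model(B)
    total_var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(n_params)
    for i in range(n_params):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                 # resample only parameter i
        S[i] = np.mean(fB * (model(AB_i) - fA)) / total_var
    return S

# sanity check on an additive test function: expected indices 16/21, 4/21, 1/21
linear = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2]
print(np.round(first_order_sobol(linear, 3), 3))
```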

  18. Panel positioning error and support mechanism for a 30-m THz radio telescope

    NASA Astrophysics Data System (ADS)

    Yang, De-Hua; Okoh, Daniel; Zhou, Guo-Hua; Li, Ai-Hua; Li, Guo-Ping; Cheng, Jing-Quan

    2011-06-01

    A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active-surface panel positioning errors on optical performance in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors, such as piston, tip, tilt, radial, azimuthal and twist displacements, were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrated the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. The panel error sensitivity analysis revealed that the piston error and tilt/tip errors are dominant while the other rigid errors are much less important. Furthermore, as indicated by the results, we conceived of an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction for all the six rigid panel positioning errors in an economically feasible way.
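    Ruze's surface-error relation, which the study uses to connect panel errors to optical performance, can be exercised with a minimal Monte Carlo sketch; the piston-only error model, panel count, and error levels below are assumptions (the published analysis also covers tip, tilt, radial, azimuthal, and twist displacements).

```python
import numpy as np

WAVELENGTH = 200e-6                          # 200 micron operating wavelength

def strehl_ruze(surface_rms):
    """Ruze-type estimate: Strehl ratio ~ exp(-(4*pi*eps/lambda)^2) for an
    RMS surface (not wavefront) error eps on a reflector."""
    return np.exp(-(4.0 * np.pi * surface_rms / WAVELENGTH) ** 2)

def mean_strehl_piston_only(sigma_piston, n_panels=600, n_trials=2000, seed=0):
    """Monte Carlo over random panel piston errors only; panel count and the
    Gaussian error model are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    pistons = rng.normal(0.0, sigma_piston, size=(n_trials, n_panels))
    surface_rms = np.sqrt(np.mean(pistons ** 2, axis=1))
    return strehl_ruze(surface_rms).mean()

for sigma in (2e-6, 5e-6, 10e-6):            # piston standard deviation [m]
    print(f"sigma = {sigma * 1e6:4.1f} um  ->  mean Strehl = "
          f"{mean_strehl_piston_only(sigma):.3f}")
```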

  19. The many places of frequency: evidence for a novel locus of the lexical frequency effect in word production.

    PubMed

    Knobel, Mark; Finkbeiner, Matthew; Caramazza, Alfonso

    2008-03-01

    The effect of lexical frequency on language-processing tasks is exceptionally reliable. For example, pictures with higher frequency names are named faster and more accurately than those with lower frequency names. Experiments with normal participants and patients strongly suggest that this production effect arises at the level of lexical access. Further work has suggested that within lexical access this effect arises at the level of lexical representations. Here we present patient E.C. who shows an effect of lexical frequency on his nonword error rate. The best explanation of his performance is that there is an additional locus of frequency at the interface of lexical and segmental representational levels. We confirm this hypothesis by showing that only computational models with frequency at this new locus can produce a similar error pattern to that of patient E.C. Finally, in an analysis of a large group of Italian patients, we show that there exist patients who replicate E.C.'s pattern of results and others who show the complementary pattern of frequency effects on semantic error rates. Our results combined with previous findings suggest that frequency plays a role throughout the process of lexical access.

  20. Source process and tectonic implication of the January 20, 2007 Odaesan earthquake, South Korea

    NASA Astrophysics Data System (ADS)

    Abdel-Fattah, Ali K.; Kim, K. Y.; Fnais, M. S.; Al-Amri, A. M.

    2014-04-01

    The source process of the 20 January 2007 Mw 4.5 Odaesan earthquake in South Korea is investigated in the low- and high-frequency bands, using velocity and acceleration waveform data recorded by the Korea Meteorological Administration Seismographic Network at distances less than 70 km from the epicenter. Synthetic Green's functions are adopted for the low-frequency band of 0.1-0.3 Hz, using the wave-number integration technique and a one-dimensional velocity model beneath the epicentral area. An iterative grid search across the strike, dip, rake, and focal depth of the rupture nucleation parameters was performed to find the best-fit double-couple mechanism. To resolve the nodal-plane ambiguity, the spatiotemporal slip distribution on the fault surface was recovered using a non-negative least-squares algorithm for each set of grid-searched parameters. A focal depth of 10 km was determined through the grid search over depths in the range of 6-14 km. The best-fit double-couple mechanism obtained from the finite-source model indicates vertical strike-slip faulting. The NW faulting plane gives a comparatively smaller root-mean-square (RMS) error than its auxiliary plane. The low-frequency slip pattern indicates a simple source process, with the event effectively acting as a point source. Three empirical Green's functions are adopted to investigate the source process in the high-frequency band. A set of slip models was recovered on both nodal planes of the focal mechanism with rupture velocities in the range of 2.0-4.0 km/s. Although there is a small difference between the RMS errors produced by the two orthogonal nodal planes, the SW-dipping plane gives a smaller RMS error than its auxiliary plane. The slip distribution in the high-frequency analysis is characterized by an oblique pattern recovered around the hypocenter, indicating a complex rupture scenario for such a moderate-sized earthquake, similar to those reported for large earthquakes.

  1. Reverberant acoustic energy in auditoria that comprise systems of coupled rooms

    NASA Astrophysics Data System (ADS)

    Summers, Jason Erik

    A frequency-dependent model for levels and decay rates of reverberant energy in systems of coupled rooms is developed and compared with measurements conducted in a 1:10 scale model and in Bass Hall, Fort Worth, TX. Schroeder frequencies of subrooms, fSch, characteristic size of coupling apertures, a, relative to wavelength lambda, and characteristic size of room surfaces, l, relative to lambda define the frequency regions. At high frequencies [HF (f >> fSch, a >> lambda, l >> lambda)], this work improves upon prior statistical-acoustics (SA) coupled-ODE models by incorporating geometrical-acoustics (GA) corrections for the model of decay within subrooms and the model of energy transfer between subrooms. Previous researchers developed prediction algorithms based on computational GA. Comparisons of predictions derived from beam-axis tracing with scale-model measurements indicate that systematic errors for coupled rooms result from earlier tail-correction procedures that assume constant quadratic growth of reflection density. A new algorithm is developed that uses ray tracing rather than tail correction in the late part and is shown to correct this error. At midfrequencies [MF (f >> fSch, a ~ lambda)], HF models are modified to account for wave effects at coupling apertures by including analytically or heuristically derived power transmission coefficients tau. This work improves upon prior SA models of this type by developing more accurate estimates of random-incidence tau. While the accuracy of the MF models is difficult to verify, scale-model measurements evidence the expected behavior. The Biot-Tolstoy-Medwin-Svensson (BTMS) time-domain edge-diffraction model is newly adapted to study transmission through apertures. Multiple-order BTMS scattering is theoretically and experimentally shown to be inaccurate due to the neglect of slope diffraction. At low frequencies (f ~ fSch), scale-model measurements have been qualitatively explained by application of previously developed perturbation models. Measurements newly confirm that coupling strength between three-dimensional rooms is related to unperturbed pressure distribution on the coupling surface. In Bass Hall, measurements are conducted to determine the acoustical effects of the coupled stage house on stage and in the audience area. The high-frequency predictions of statistical- and geometrical-acoustics models agree well with measured results. Predictions of the transmission coefficients of the coupling apertures agree, at least qualitatively, with the observed behavior.

  2. Estimates of Flow Duration, Mean Flow, and Peak-Discharge Frequency Values for Kansas Stream Locations

    USGS Publications Warehouse

    Perry, Charles A.; Wolock, David M.; Artman, Joshua C.

    2004-01-01

    Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
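    A compact sketch of the log-space least-squares step described above, using synthetic gage data in place of the Kansas dataset, only two of the four significant basin characteristics, and no Tobit censoring analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_gages = 149                                 # number of gaging stations in the study
area = 10 ** rng.uniform(np.log10(2.06), np.log10(12004.0), n_gages)   # mi^2
precip = rng.uniform(16.0, 40.0, n_gages)     # mean annual precipitation [in]
# hypothetical "true" relation used only to generate illustrative data
log_q = -1.0 + 0.95 * np.log10(area) + 1.6 * np.log10(precip) \
        + rng.normal(0.0, 0.15, n_gages)

# ordinary least squares in log space: log10(Q) = b0 + b1*log10(A) + b2*log10(P)
X = np.column_stack([np.ones(n_gages), np.log10(area), np.log10(precip)])
coef, residual_ss, *_ = np.linalg.lstsq(X, log_q, rcond=None)
std_error = np.sqrt(residual_ss[0] / (n_gages - X.shape[1]))   # log10 units
print("coefficients:", np.round(coef, 3))
print("model standard error:", round(float(std_error), 3), "log units")
```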

  3. Performance of cellular frequency-hopped spread-spectrum radio networks

    NASA Astrophysics Data System (ADS)

    Gluck, Jeffrey W.; Geraniotis, Evaggelos

    1989-10-01

    Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.

  4. Airborne gravity measurement over sea-ice: The western Weddell Sea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brozena, J.; Peters, M.; LaBrecque, J.

    1990-10-01

    An airborne gravity study of the western Weddell Sea, east of the Antarctic Peninsula, has shown that floating pack-ice provides a useful radar altimetric reference surface for altitude and vertical acceleration corrections to airborne gravimetry. Airborne gravimetry provides an important alternative to satellite altimetry for the sea-ice covered regions of the world, since satellite altimeters are not designed or intended to provide accurate geoidal heights in areas where significant sea-ice is present within the radar footprint. Errors in radar-corrected airborne gravimetry are primarily sensitive to the variations in the second derivative of the sea-ice reference surface in the frequency pass-band of interest. With the exception of imbedded icebergs, the second derivative of the pack-ice surface closely approximates that of the mean sea-level surface at wavelengths > 10-20 km. With the airborne method, the percentage of ice coverage, the mixture of first and multi-year ice, and the existence of leads and pressure ridges prove to be unimportant in determining gravity anomalies at scales of geophysical and geodetic interest, provided that the ice is floating and not grounded. In the Weddell study, an analysis of 85 crosstrack miss-ties distributed over 25 data tracks yields an rms error of 2.2 mGals. Significant structural anomalies, including the continental shelf and offsets and lineations interpreted as fracture zones recording the early spreading directions within the Weddell Sea, are observed in the gravity map.
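    The quoted 2.2 mGal figure is an RMS of crossover miss-ties; a small sketch of that statistic, with synthetic values and the usual equal-and-independent-track-error assumption, is given below.

```python
import numpy as np

def crossover_stats(values_track_a, values_track_b):
    """RMS of gravity miss-ties at track crossovers, plus the implied
    single-track error if both tracks have equal, independent errors."""
    d = np.asarray(values_track_a) - np.asarray(values_track_b)
    rms = np.sqrt(np.mean(d ** 2))
    return rms, rms / np.sqrt(2.0)

# illustrative: 85 synthetic crossover pairs with ~2.2 mGal miss-tie scatter
rng = np.random.default_rng(3)
truth = rng.normal(0.0, 30.0, 85)                       # anomaly at each crossover, mGal
a = truth + rng.normal(0.0, 2.2 / np.sqrt(2.0), 85)     # track-A measurement error
b = truth + rng.normal(0.0, 2.2 / np.sqrt(2.0), 85)     # track-B measurement error
rms, single = crossover_stats(a, b)
print(f"miss-tie RMS: {rms:.2f} mGal, implied single-track error: {single:.2f} mGal")
```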

  5. The study of optimization on process parameters of high-accuracy computerized numerical control polishing

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Ren; Huang, Shih-Pu; Tsai, Tsung-Yueh; Lin, Yi-Jyun; Yu, Zong-Ru; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Young, Hong-Tsu

    2017-09-01

    Spherical lenses introduce spherical aberration and reduce optical performance. Consequently, practical optical systems must combine several spherical lenses for aberration correction, which increases the system volume. In modern optical systems, aspherical lenses have been widely used because they achieve high optical performance with fewer optical components. However, aspherical surfaces cannot be fabricated by the traditional full-aperture polishing process because of their varying curvature. Sub-aperture computer numerical control (CNC) polishing has therefore been adopted for aspherical surface fabrication in recent years. However, the CNC polishing process normally introduces mid-spatial frequency (MSF) error, and the resulting MSF surface texture degrades the performance of high-precision optical systems, especially for short-wavelength applications. Based on a bonnet-polishing CNC machine, this study focuses on the relationship between MSF surface texture and CNC polishing parameters, which include feed rate, head speed, track spacing, and path direction. Power spectral density (PSD) analysis is used to judge the MSF level caused by these polishing parameters. The test results show that controlling the removal depth of a single polishing path through the feed rate, and avoiding same-direction polishing paths when a higher total removal depth is required, can efficiently reduce the MSF error. To verify the polishing parameters, a correction polishing process was divided into several polishing runs with different path directions. Compared with a single polishing run, the multi-direction path polishing plan produced better surface quality on the optics.
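    Since the study judges MSF content through PSD analysis, a minimal one-dimensional PSD sketch is shown below; the windowing, normalization, and the synthetic profile (a low-order figure term plus a 1 mm-period ripple) are illustrative assumptions rather than the authors' metrology chain.

```python
import numpy as np

def surface_psd(profile, dx):
    """Window-compensated periodogram of a 1-D surface height profile.

    profile: heights [m] sampled every dx [m]; absolute PSD calibration is
    kept simple here since the goal is only to locate MSF peaks.
    """
    n = profile.size
    window = np.hanning(n)
    h = (profile - profile.mean()) * window
    spectrum = np.fft.rfft(h)
    psd = (np.abs(spectrum) ** 2) * dx / np.sum(window ** 2)
    freqs = np.fft.rfftfreq(n, d=dx)          # spatial frequency [cycles/m]
    return freqs, psd

# synthetic profile: low-order figure term plus a 1 mm-period MSF ripple
dx = 1e-5                                     # 10 um sampling over a 50 mm trace
x = np.arange(0.0, 0.05, dx)
profile = 50e-9 * np.sin(2 * np.pi * x / 0.05) + 5e-9 * np.sin(2 * np.pi * x / 1e-3)
f, psd = surface_psd(profile, dx)
msf = f > 500.0                               # look above 500 cycles/m (periods < 2 mm)
print("strongest MSF component near", f[msf][np.argmax(psd[msf])], "cycles/m")
```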

  6. Issues with data and analyses: Errors, underlying themes, and potential solutions

    PubMed Central

    Allison, David B.

    2018-01-01

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079

  7. Radiometric correction of scatterometric wind measurements

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Use of a spaceborne scatterometer to determine the ocean-surface wind vector requires accurate measurement of radar backscatter from ocean. Such measurements are hindered by the effect of attenuation in the precipitating regions over sea. The attenuation can be estimated reasonably well with the knowledge of brightness temperatures observed by a microwave radiometer. The NASA SeaWinds scatterometer is to be flown on the Japanese ADEOS2. The AMSR multi-frequency radiometer on ADEOS2 will be used to correct errors due to attenuation in the SeaWinds scatterometer measurements. Here we investigate the errors in the attenuation corrections. Errors would be quite small if the radiometer and scatterometer footprints were identical and filled with uniform rain. However, the footprints are not identical, and because of their size one cannot expect uniform rain across each cell. Simulations were performed with the SeaWinds scatterometer (13.4 GHz) and AMSR (18.7 GHz) footprints with gradients of attenuation. The study shows that the resulting wind speed errors after correction (using the radiometer) are small for most cases. However, variations in the degree of overlap between the radiometer and scatterometer footprints affect the accuracy of the wind speed measurements.

  8. Diagnosis of extratropical variability in seasonal integrations of the ECMWF model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferranti, L.; Molteni, F.; Brankovic, C.

    1994-06-01

    Properties of the general circulation simulated by the ECMWF model are discussed using a set of seasonal integrations at T63 resolution. For each season, over the period of 5 years, 1986-1990, three integrations initiated on consecutive days were run with prescribed observed sea surface temperature (SST). This paper presents a series of diagnostics of extratropical variability in the model, with particular emphasis on the northern winter. Time-filtered maps of variability indicate that in this season there is insufficient storm track activity penetrating into the Eurasian continent. Related to this, the maximum of lower-frequency variability for northern spring is more realistic. Blocking is defined objectively in terms of the geostrophic wind at 500 mb. Consistent with the low-frequency transience, in the Euro-Atlantic sector the position of maximum blocking in the model is displaced eastward. The composite structure of blocks over the Pacific is realistic, though their frequency is severely underestimated at all times of year. Shortcomings in the simulated wintertime general circulation were also revealed by studying the projection of 5-day mean fields onto empirical orthogonal functions (EOFs) of the observed flow. The largest differences were apparent for statistics of EOFs of the zonal mean flow. Analysis of weather regime activity, defined from the EOFs, suggested that regimes with positive PNA index were overpopulated, while the negative PNA regimes were underpopulated. A further comparison between observed and modeled low-frequency variance revealed that underestimation of low-frequency variability occurs along the same axes that explain most of the spatial structure of the error in the mean field, suggesting a common dynamical origin for these two aspects of the systematic error. 17 refs., 17 figs., 4 tabs.

  9. Wave-induced response of a floating two-dimensional body with a moonpool.

    PubMed

    Fredriksen, Arnt G; Kristiansen, Trygve; Faltinsen, Odd M

    2015-01-28

    Regular wave-induced behaviour of a floating stationary two-dimensional body with a moonpool is studied. The focus is on resonant piston-mode motion in the moonpool and rigid-body motions. Dedicated two-dimensional experiments have been performed. Two numerical hybrid methods, which have previously been applied to related problems, are further developed. Both numerical methods couple potential and viscous flow. The semi-nonlinear hybrid method uses linear free-surface and body-boundary conditions. The other one uses fully nonlinear free-surface and body-boundary conditions. The harmonic polynomial cell method solves the Laplace equation in the potential flow domain, while the finite volume method solves the Navier-Stokes equations in the viscous flow domain near the body. Results from the two codes are compared with the experimental data. The nonlinear hybrid method compares well with the data, while certain discrepancies are observed for the semi-nonlinear method. In particular, the roll motion is over-predicted by the semi-nonlinear hybrid method. Error sources in the semi-nonlinear hybrid method are discussed. The moonpool strongly affects heave motions in a frequency range around the piston-mode resonance frequency of the moonpool. No resonant water motions occur in the moonpool at the piston-mode resonance frequency. Instead large moonpool motions occur at a heave natural frequency associated with small damping near the piston-mode resonance frequency. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  10. Routine cognitive errors: a trait-like predictor of individual differences in anxiety and distress.

    PubMed

    Fetterman, Adam K; Robinson, Michael D

    2011-02-01

    Five studies (N=361) sought to model a class of errors--namely, those in routine tasks--that several literatures have suggested may predispose individuals to higher levels of emotional distress. Individual differences in error frequency were assessed in choice reaction-time tasks of a routine cognitive type. In Study 1, it was found that tendencies toward error in such tasks exhibit trait-like stability over time. In Study 3, it was found that tendencies toward error exhibit trait-like consistency across different tasks. Higher error frequency, in turn, predicted higher levels of negative affect, general distress symptoms, displayed levels of negative emotion during an interview, and momentary experiences of negative emotion in daily life (Studies 2-5). In all cases, such predictive relations remained significant with individual differences in neuroticism controlled. The results thus converge on the idea that error frequency in simple cognitive tasks is a significant and consequential predictor of emotional distress in everyday life. The results are novel, but discussed within the context of the wider literatures that informed them. © 2010 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business

  11. Refractive errors in children and adolescents in Bucaramanga (Colombia).

    PubMed

    Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly

    2017-01-01

    The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  12. High-frequency signal and noise estimates of CSR GRACE RL04

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.

    2012-12-01

    A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins looked at here are near 1.0. Comparisons with the GLDAS hydrological model and high frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, due to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.

  13. Demonstration of the frequency offset errors introduced by an incorrect setting of the Zeeman/magnetic field adjustment on the cesium beam frequency standard

    NASA Technical Reports Server (NTRS)

    Kaufmann, D. C.

    1976-01-01

    The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in ten to the tenth. The effects of maladjustment are demonstrated and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.

  14. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752

  15. Effects of diffraction by ionospheric electron density irregularities on the range error in GNSS dual-frequency positioning and phase decorrelation

    NASA Astrophysics Data System (ADS)

    Gherm, Vadim E.; Zernov, Nikolay N.; Strangeways, Hal J.

    2011-06-01

    It can be important to determine the correlation of different frequency signals in L band that have followed transionospheric paths. In the future, both GPS and the new Galileo satellite system will broadcast three frequencies, enabling more advanced three-frequency correction schemes, so that knowledge of the correlations of different frequency pairs under scintillation conditions is desirable. Even at present, it would be helpful to know how dual-frequency Global Navigation Satellite Systems positioning can be affected by lack of correlation between the L1 and L2 signals. To treat this problem of signal correlation for the case of strong scintillation, a previously constructed simulator program, based on the hybrid method, has been further modified to simulate the fields for both frequencies on the ground, taking account of their cross correlation. Then, the errors in the two-frequency range finding method caused by scintillation have been estimated for particular ionospheric conditions and for a realistic fully three-dimensional model of the ionospheric turbulence. The results, presented for five different frequency pairs (L1/L2, L1/L3, L1/L5, L2/L3, and L2/L5), show the dependence of diffraction errors on the scintillation index S4; the errors diverge further from a linear relationship as scintillation effects strengthen and may reach ten centimeters or more. The correlation of the phases at spaced frequencies has also been studied; it was found that the correlation coefficients for different pairs of frequencies depend on the phase-retrieval procedure and decrease slowly as both the variance of the electron density fluctuations and the cycle slips increase.
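    For context, the sketch below shows the classical first-order dual-frequency combination that such positioning relies on; it removes the refractive (1/f^2) ionospheric delay exactly but not the diffraction errors studied in the paper. The frequencies are the GPS L1/L2/L5 values and the TEC value is illustrative.

```python
# GPS carrier frequencies [Hz]; the paper's L3 pairing is not reproduced here
F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6

def ionosphere_free_range(p1, p2, f1=F1, f2=F2):
    """Classical dual-frequency combination: removes the first-order (1/f^2)
    refractive ionospheric delay, but not diffraction effects."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# toy example: a first-order ionospheric delay added to a common true range
true_range = 20_200e3                      # [m]
stec = 50e16                               # slant TEC [el/m^2], illustrative value
delay = lambda f: 40.3 * stec / f**2       # first-order group delay [m]
p1, p2 = true_range + delay(F1), true_range + delay(F2)
residual = ionosphere_free_range(p1, p2) - true_range
print(f"raw L1 delay: {delay(F1):.2f} m, residual after combination: {residual:.2e} m")
```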

  16. Research on automatic Hartmann test of membrane mirror

    NASA Astrophysics Data System (ADS)

    Zhong, Xing; Jin, Guang; Liu, Chunyu; Zhang, Peng

    2010-10-01

    The electrostatic membrane mirror is ultra-lightweight and can reach large diameters more easily than traditional optical elements, so it is a promising direction for future large mirrors. To study the control method of the statically stretched membrane mirror, its surface shape must be measured. However, the mirror's shape is constantly changed by the variable voltages on the electrodes, and the optical quality of the membrane material used in our experiment is poor, so it is difficult to test the mirror with an interferometer and null-compensator method. To solve this problem, an automatic optical test procedure for the membrane mirror is designed based on the Hartmann screen method. The optical path includes a point light source, a CCD camera, a splitter, and a diffuse transmittance screen. The spot positions on the diffuse transmittance screen are recorded by the CCD camera connected to a computer, and image segmentation and centroid calculation are processed automatically. The lens distortion of the CCD camera is measured, and correction coefficients are applied to eliminate the spot-position errors caused by the distortion. To process the sparsely sampled Hartmann test results, Zernike polynomial fitting is applied to smooth the wavefront, so the low-frequency error of the membrane mirror can then be measured. Errors affecting the test accuracy are also analyzed. The method proposed in this paper provides a reference for surface shape measurement in membrane mirror research.
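    A minimal sketch of the automatic spot segmentation and centroid step, assuming a simple thresholding rule and synthetic Gaussian spots; it uses scipy.ndimage and omits the lens-distortion correction and Zernike fitting stages.

```python
import numpy as np
from scipy import ndimage

def spot_centroids(image, threshold=None):
    """Threshold-based segmentation of Hartmann spots and intensity-weighted
    centroids (row, col); a real pipeline would also apply the measured
    lens-distortion correction coefficients."""
    if threshold is None:
        threshold = image.mean() + 3.0 * image.std()
    labels, n_spots = ndimage.label(image > threshold)
    return np.array(ndimage.center_of_mass(image, labels, np.arange(1, n_spots + 1)))

# synthetic frame with two Gaussian spots at (row, col) = (50, 60) and (130, 140)
yy, xx = np.mgrid[0:200, 0:200]
frame = (np.exp(-((xx - 60) ** 2 + (yy - 50) ** 2) / 20.0)
         + np.exp(-((xx - 140) ** 2 + (yy - 130) ** 2) / 20.0))
print(np.round(spot_centroids(frame), 2))
```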

  17. LOOP- SIMULATION OF THE AUTOMATIC FREQUENCY CONTROL SUBSYSTEM OF A DIFFERENTIAL MINIMUM SHIFT KEYING RECEIVER

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1994-01-01

    The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.

  18. Circular Probable Error for Circular and Noncircular Gaussian Impacts

    DTIC Science & Technology

    2012-09-01

    The available record preserves only fragments of MATLAB code and comments from the report: for each trial the hit frequency is estimated as the fraction of 1M simulated impact points falling inside the CEP circle (imp(:,1).^2 + imp(:,2).^2 <= CEP^2), the hit frequencies are averaged over 100 repetitions, and the resulting probability-of-hit estimates are plotted (commented as "error exponent versus Ph estimate").

  19. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor measurement residuals, so some independent checks using imaging sensors are essential and derived science instrument attitude measurements can prove quite valuable in assessing the attitude accuracy.

  20. Airplane wing vibrations due to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Pastel, R. L.; Caruthers, J. E.; Frost, W.

    1981-01-01

    The magnitude of error introduced due to wing vibration when measuring atmospheric turbulence with a wind probe mounted at the wing tip was studied. It was also determined whether accelerometers mounted on the wing tip are needed to correct this error. A spectrum analysis approach is used to determine the error. Estimates of the B-57 wing characteristics are used to simulate the airplane wing, and von Karman's cross spectrum function is used to simulate atmospheric turbulence. It was found that wing vibration introduces large errors in the measured turbulence spectra in the frequency range close to the natural frequencies of the wing.

  1. Design and analysis of control system for VCSEL of atomic interference magnetometer

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-nan; Sun, Xiao-jie; Kou, Jun; Yang, Feng; Li, Jie; Ren, Zhang; Wei, Zong-kang

    2016-11-01

    Magnetic field detection is an important means of exploring the deep-space environment. Benefiting from a simple structure and low power consumption, the atomic interference magnetometer has become one of the most promising detector payloads. A vertical-cavity surface-emitting laser (VCSEL) is usually used as the light source in an atomic interference magnetometer, and its frequency stability directly affects the stability and sensitivity of the magnetometer. In this paper, a closed-loop control strategy for the VCSEL was designed and analyzed, the controller parameters were selected, and the feedback error algorithm was optimized. Experiments performed on a hardware-in-the-loop simulation platform show that the designed closed-loop control system is reasonable and can effectively improve the laser frequency stability during actual operation of the magnetometer.

  2. Personal protective equipment for the Ebola virus disease: A comparison of 2 training programs.

    PubMed

    Casalino, Enrique; Astocondor, Eugenio; Sanchez, Juan Carlos; Díaz-Santana, David Enrique; Del Aguila, Carlos; Carrillo, Juan Pablo

    2015-12-01

    Personal protective equipment (PPE) for preventing Ebola virus disease (EVD) includes basic PPE (B-PPE) and enhanced PPE (E-PPE). Our aim was to compare conventional training programs (CTPs) and reinforced training programs (RTPs) on the use of B-PPE and E-PPE. Four groups were created, designated CTP-B, CTP-E, RTP-B, and RTP-E. All groups received the same theoretical training, followed by 3 practical training sessions. A total of 120 students were included (30 per group). In all 4 groups, the frequency and number of total errors and critical errors decreased significantly over the course of the training sessions (P < .01). The RTP was associated with a greater reduction in the number of total errors and critical errors (P < .0001). During the third training session, we noted an error frequency of 7%-43%, a critical error frequency of 3%-40%, 0.3-1.5 total errors, and 0.1-0.8 critical errors per student. The B-PPE groups had the fewest errors and critical errors (P < .0001). Our results indicate that both training methods improved the student's proficiency, that B-PPE appears to be easier to use than E-PPE, that the RTP achieved better proficiency for both PPE types, and that a number of students are still potentially at risk for EVD contamination despite the improvements observed during the training. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  3. Stochastic Surface Mesh Reconstruction

    NASA Astrophysics Data System (ADS)

    Ozendi, M.; Akca, D.; Topan, H.

    2018-05-01

    A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model, which is capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the smallest errors are used in the surface triangulation. The remaining ones are automatically discarded.
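    A short sketch of the point-wise quality measure and filtering idea, under the assumption that the measure is the largest semi-axis of each point's error ellipsoid and that a fixed fraction of the best points is kept; the covariances here are synthetic.

```python
import numpy as np

def semi_diagonal_lengths(covariances, k=1.0):
    """Largest semi-axis (at k-sigma) of each 3x3 error ellipsoid, used as a
    per-point quality measure; the thresholding rule below is an assumption."""
    eigvals = np.linalg.eigvalsh(covariances)      # ascending, batched
    return k * np.sqrt(eigvals[:, -1])

def filter_points(points, covariances, keep_fraction=0.8):
    """Keep only the points whose error ellipsoids are smallest."""
    q = semi_diagonal_lengths(covariances)
    return points[q <= np.quantile(q, keep_fraction)]

# toy data: 1000 points with random anisotropic covariance matrices
rng = np.random.default_rng(7)
pts = rng.random((1000, 3))
A = rng.normal(scale=1e-3, size=(1000, 3, 3))
covs = A @ np.transpose(A, (0, 2, 1))              # symmetric positive semi-definite
print("kept points:", filter_points(pts, covs).shape[0])
```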

  4. Error Analysis of Magnetohydrodynamic Angular Rate Sensor Combing with Coriolis Effect at Low Frequency.

    PubMed

    Ji, Yue; Xu, Mengjie; Li, Xingfei; Wu, Tengfei; Tuo, Weixiao; Wu, Jun; Dong, Jiuzhi

    2018-06-13

    The magnetohydrodynamic (MHD) angular rate sensor (ARS), which offers a low noise level over an ultra-wide bandwidth, has been developed for lasing and imaging applications, especially line-of-sight (LOS) systems. A modified MHD ARS combined with the Coriolis effect was studied in this paper to expand the sensor’s bandwidth at low frequency (<1 Hz), which is essential for precision LOS pointing and wide-bandwidth LOS jitter suppression. The model and the simulation method were constructed, and a comprehensive solving method based on the magnetic and electric interaction methods was proposed. The numerical results on the Coriolis effect and the frequency response of the modified MHD ARS are detailed. In addition, since the experimental results of the designed sensor were consistent with the simulation results, an error analysis of the model errors is discussed. Our study provides an error analysis method for an MHD ARS combined with the Coriolis effect and offers a framework for future studies to minimize the error.

  5. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702

  6. A digital frequency stabilization system of external cavity diode laser based on LabVIEW FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Zhuohuan; Hu, Zhaohui; Qi, Lu; Wang, Tao

    2015-10-01

    Frequency stabilization of external cavity diode lasers plays an important role in physics research, and many laser frequency locking solutions have been proposed. Traditionally, the locking process was accomplished by an analog system, which has a fast feedback control response. However, analog systems are susceptible to environmental effects. To improve the automation level and reliability of the frequency stabilization system, we take a grating-feedback external cavity diode laser as the laser source and set up a digital frequency stabilization system based on National Instruments' FPGA (NI FPGA). The system consists of a saturated-absorption frequency-stabilization beam path, a differential photoelectric detector, an NI FPGA board, and a host computer. Functions such as piezoelectric transducer (PZT) sweeping, atomic saturated-absorption signal acquisition, signal peak identification, error signal generation, and laser PZT voltage feedback control are all performed by the LabVIEW FPGA program. Compared with the analog system, this system, built from logic gate circuits, performs stably and reliably, and the user interface programmed in LabVIEW is friendly. Moreover, owing to its reconfigurability, the LabVIEW program is easy to port to other NI FPGA boards. Most importantly, the system periodically checks the error signal; once an abnormal error signal is detected, the FPGA restarts the frequency stabilization process without manual intervention. By monitoring the fluctuation of the error signal of the atomic saturated-absorption spectral line in the locked state, we infer that the laser frequency stability can reach 1 MHz.

  7. Robustification and Optimization in Repetitive Control For Minimum Phase and Non-Minimum Phase Systems

    NASA Astrophysics Data System (ADS)

    Prasitmeeboon, Pitcha

    Repetitive control (RC) is a control method that specifically aims to converge to zero tracking error in control systems that execute a periodic command or have periodic disturbances of known period. It uses the error of one period back to adjust the command in the present period. In theory, RC can completely eliminate periodic disturbance effects. RC has applications in many fields such as high-precision manufacturing in robotics, computer disk drives, and active vibration isolation in spacecraft. The first topic treated in this dissertation develops several simple RC design methods that are somewhat analogous to PID controller design in classical control. From the early days of digital control, emulation methods were developed based on a Forward Rule, a Backward Rule, Tustin's Formula, a modification using prewarping, and a pole-zero mapping method. These allowed one to convert a candidate controller design to discrete time in a simple way. We investigate to what extent they can be used to simplify RC design. A particular design is developed from modification of the pole-zero mapping rules, which is simple and sheds light on the robustness of repetitive control designs. RC convergence requires less than 90 degree model phase error at all frequencies up to Nyquist. A zero-phase cutoff filter is normally used to robustify to high frequency model error when this limit is exceeded. The result is stabilization at the expense of failure to cancel errors above the cutoff. The second topic investigates a series of methods to use data to make real time updates of the frequency response model, allowing one to increase or eliminate the frequency cutoff. These include the use of a moving window employing a recursive discrete Fourier transform (DFT), and use of a real time projection algorithm from adaptive control for each frequency. The results can be used directly to make repetitive control corrections that cancel each error frequency, or they can be used to update a repetitive control FIR compensator. The aim is to reduce the final error level by using real time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of Iterative Learning Control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the pole of the repetitive controller at +1, creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. The Min-Max cost function over the learning rate is presented. The Min-Max can be reformulated as a Quadratically Constrained Linear Programming problem. This approach is shown to be an RC design approach that addresses the main challenge of non-minimum phase systems to have a reasonable learning rate at DC. Although it was illustrated that using the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have error at high frequencies. The fourth topic addresses how one can merge a quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. The topic also considers limiting the Min-Max optimization to a chosen frequency interval and applying an FIR zero-phase low-pass filter to cut off the learning at frequencies above that interval.
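    A minimal period-by-period sketch of the basic RC idea stated at the start of the abstract (use the error of one period back to adjust the command in the present period), with an assumed first-order lag plant, learning gain, and one-sample shift; it is not the dissertation's pole-zero-mapping or Min-Max design.

```python
import numpy as np

def one_period(u, a=0.3):
    """Toy plant: first-order lag run over one period from rest."""
    y = np.zeros_like(u)
    for k in range(1, u.size):
        y[k] = (1.0 - a) * y[k - 1] + a * u[k - 1]
    return y

def repetitive_control(period=200, n_periods=60, gain=0.5):
    """The command stored for each time step is corrected with the error
    observed one period earlier, shifted by one sample to account for the
    plant's one-step delay."""
    t = np.arange(period)
    r = np.sin(2.0 * np.pi * t / period)          # periodic reference command
    u = np.zeros(period)
    for _ in range(n_periods):
        e = r - one_period(u)
        u = u + gain * np.roll(e, -1)             # apply error from one period back
    return np.max(np.abs(r - one_period(u)))

print("peak tracking error after learning:", repetitive_control())
```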

  8. Computer Generated Hologram System for Wavefront Measurement System Calibration

    NASA Technical Reports Server (NTRS)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  9. Characterization of Errors Inherent in System EMP Vulnerability Assessment Programs,

    DTIC Science & Technology

    1980-10-01

    Patriot system. * B-1 aircraft. * E-3A airborne warning and control system aircraft. * PRC-77 radio. * Lance missile system. * Safeguard ABM system...carefully or the offset will create large frequency-domain error. Frequency-tying, too, can improve f-domain data. Of the various recording systems studied

  10. Closing the Seasonal Ocean Surface Temperature Balance in the Eastern Tropical Oceans from Remote Sensing and Model Reanalyses

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Clayson, C. A.

    2012-01-01

    Residual forcing necessary to close the mixed layer temperature balance (MLTB) on seasonal time scales is largest in regions of strongest surface heat flux forcing. Identifying the dominant source of error - surface heat flux error, mixed layer depth estimation, or ocean dynamical forcing - remains a challenge in the eastern tropical oceans, where ocean processes are very active. Improved sub-surface observations are necessary to better constrain errors. 1. Mixed layer depth evolution is critical to the seasonal evolution of mixed layer temperatures. It determines the inertia of the mixed layer and scales the sensitivity of the MLTB to errors in surface heat flux and ocean dynamical forcing; this role affects the timing of errors in SST prediction. 2. Errors in the MLTB are larger than the historical 10 W m-2 target accuracy. In some regions, larger errors can be tolerated if the goal is only to resolve the seasonal SST cycle.

  11. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices.

    PubMed

    Marathe, A R; Taylor, D M

    2015-08-01

    Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  12. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2015-08-01

    Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  13. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error from the reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways to minimize the impact of error on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
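
    The surface-fitting and extrapolation step can be sketched as an ordinary least-squares fit of a low-order polynomial z = f(x, y) to the insole point cloud, evaluated beyond the measured region. The quadratic basis and the synthetic point cloud below are assumptions for illustration; the paper does not specify the polynomial order.

      import numpy as np

      def quad_basis(x, y):
          # design matrix for z = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
          return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

      # synthetic insole-like point cloud (in practice: CMM touch-probe data)
      rng = np.random.default_rng(0)
      x = rng.uniform(-50, 50, 500)        # mm
      y = rng.uniform(-100, 100, 500)      # mm
      z = 0.002 * x**2 + 0.0005 * y**2 + 0.05 * x + rng.normal(0, 0.05, x.size)

      coeffs, *_ = np.linalg.lstsq(quad_basis(x, y), z, rcond=None)

      # extrapolate the fitted surface slightly beyond the measured region,
      # where it would be intersected with the outer-surface mesh
      x_ext = np.full(50, 60.0)            # 10 mm outside the data in x
      y_ext = np.linspace(-100, 100, 50)
      z_ext = quad_basis(x_ext, y_ext) @ coeffs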

  14. How allele frequency and study design affect association test statistics with misrepresentation errors.

    PubMed

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central chi-squared distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct p-values.
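
    Assuming the scaling factor and non-centrality parameter have already been obtained from the authors' formulae, the p-value correction can be sketched as follows (the numerical values below are placeholders).

      from scipy.stats import chi2, ncx2

      def corrected_p_value(stat, df, c, lam):
          """p-value when the statistic follows c * noncentral_chi2(df, lam) instead of chi2(df)."""
          return ncx2.sf(stat / c, df, lam)

      stat, df = 6.5, 2        # observed association statistic and its degrees of freedom
      c, lam = 1.15, 0.8       # placeholder scaling factor and non-centrality from the error model
      p_nominal = chi2.sf(stat, df)
      p_corrected = corrected_p_value(stat, df, c, lam)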

  15. Mapping global surface water inundation dynamics using synergistic information from SMAP, AMSR2 and Landsat

    NASA Astrophysics Data System (ADS)

    Du, J.; Kimball, J. S.; Galantowicz, J. F.; Kim, S.; Chan, S.; Reichle, R. H.; Jones, L. A.; Watts, J. D.

    2017-12-01

    A method to monitor global land surface water (fw) inundation dynamics was developed by exploiting the enhanced fw sensitivity of L-band (1.4 GHz) passive microwave observations from the Soil Moisture Active Passive (SMAP) mission. The L-band fw (fwLBand) retrievals were derived using SMAP H-polarization brightness temperature (Tb) observations and predefined L-band reference microwave emissivities for water and land endmembers. Potential soil moisture and vegetation contributions to the microwave signal were represented from overlapping higher frequency Tb observations from AMSR2. The resulting fwLBand global record has high temporal sampling (1-3 days) and 36-km spatial resolution. The fwLBand annual averages corresponded favourably (R=0.84, p<0.001) with a 250-m resolution static global water map (MOD44W) aggregated at the same spatial scale, while capturing significant inundation variations worldwide. The monthly fwLBand averages also showed seasonal inundation changes consistent with river discharge records within six major US river basins. An uncertainty analysis indicated generally reliable fwLBand performance for major land cover areas and under low to moderate vegetation cover, but with lower accuracy for detecting water bodies covered by dense vegetation. Finer resolution (30-m) fwLBand results were obtained for three sub-regions in North America using an empirical downscaling approach and ancillary global Water Occurrence Dataset (WOD) derived from the historical Landsat record. The resulting 30-m fwLBand retrievals showed favourable classification accuracy for water (commission error 31.84%; omission error 28.08%) and land (commission error 0.82%; omission error 0.99%) and seasonal wet and dry periods when compared to independent water maps derived from Landsat-8 imagery. The new fwLBand algorithms and continuing SMAP and AMSR2 operations provide for near real-time, multi-scale monitoring of global surface water inundation dynamics, potentially benefiting hydrological monitoring, flood assessments, and global climate and carbon modeling.
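
    The endmember-based fw retrieval can be sketched as a linear mixing inversion of the observed emissivity between land and water endmembers. The endmember values and the crude Tb-to-emissivity conversion below are illustrative assumptions, not the actual SMAP calibration.

      import numpy as np

      def retrieve_fw(tb_h, t_eff, e_land=0.90, e_water=0.45):
          """Fractional open water from H-pol brightness temperature via linear mixing of endmembers."""
          e_obs = tb_h / t_eff                    # crude emissivity estimate, Tb = e * T_eff
          fw = (e_land - e_obs) / (e_land - e_water)
          return np.clip(fw, 0.0, 1.0)            # keep the fraction physical

      # illustrative 36-km grid cells: brightness temperature (K) and effective temperature (K)
      tb_h = np.array([265.0, 240.0, 210.0])
      t_eff = np.array([295.0, 293.0, 290.0])
      print(retrieve_fw(tb_h, t_eff))             # higher water fraction for colder Tb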

  16. Multi-layer Retrievals of Greenhouse Gases from a Combined Use of GOSAT TANSO-FTS SWIR and TIR

    NASA Astrophysics Data System (ADS)

    Kikuchi, N.; Kuze, A.; Kataoka, F.; Shiomi, K.; Hashimoto, M.; Suto, H.; Knuteson, R. O.; Iraci, L. T.; Yates, E. L.; Gore, W.; Tanaka, T.; Yokota, T.

    2016-12-01

    The TANSO-FTS sensor onboard GOSAT has three frequency bands in the shortwave infrared (SWIR) and a fourth band in the thermal infrared (TIR). Observations of high-resolution spectra of reflected sunlight in the SWIR are extensively utilized to retrieve column-averaged concentrations of the major greenhouse gases such as carbon dioxide (XCO2) and methane (XCH4). Although global XCO2 and XCH4 distributions retrieved from SWIR data can reduce the uncertainty in the current knowledge about sources and sinks of these gases, information on the vertical profiles would be more useful to constrain the surface flux and also to identify local emission sources. Based on the degrees of freedom for signal, Kulawik et al. (2016, IWGGMS-12 presentation) show that 2-layer information on the concentration of CO2 can be extracted from TANSO-FTS SWIR measurements, and the retrieval error is predicted to be about 5 ppm in the lower troposphere. In this study, we present multi-layer retrievals of CO2 and CH4 from a combined use of measurements from TANSO-FTS SWIR and TIR. We selected GOSAT observations at Railroad Valley Playa in Nevada, USA, which is a vicarious calibration site for TANSO-FTS, as we have various ancillary data there, including atmospheric temperature and humidity taken by a radiosonde, surface temperature, and surface emissivity measured with a ground-based FTS. All of these data are especially useful for retrievals using TIR spectra. Currently, we use the 700-800 cm-1 and 1200-1300 cm-1 TIR windows for CO2 and CH4 retrievals, respectively, in addition to the SWIR bands. We found that by adding the TIR windows, 3-layer information can be extracted, and the predicted retrieval error in the CO2 concentration was reduced by about 1 ppm in the lower troposphere. We expect that the retrieval error could be further reduced by optimizing the TIR windows and by reducing systematic forward model errors.

  17. Performance enhancement of wireless mobile adhoc networks through improved error correction and ICI cancellation

    NASA Astrophysics Data System (ADS)

    Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar

    2012-12-01

    A mobile adhoc network (MANET) refers to an arrangement of wireless mobile nodes that dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the Physical Layer due to its inherent property of high data rate transmission, which corresponds to its high spectral efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offsets and symbol timing). Most present-day techniques employing OFDM for data transmission support mobility as one of their primary features. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades the signal quality through crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article which nullifies the effect of channel frequency offsets on the received OFDM symbols. The second problem addressed in this article is the noise induced by different sources into the received symbol, which increases its bit error rate and makes it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, a maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log likelihood ratios, improving the previous estimate of each decoded bit in each iteration.
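
    The ICI mechanism can be illustrated with a short numpy simulation: a single active subcarrier is transmitted, a normalized carrier frequency offset is applied in the time domain, and the receiver FFT shows energy leaking into neighbouring subcarriers. The subcarrier count and offset value are arbitrary illustrative choices.

      import numpy as np

      N = 64                                 # subcarriers per OFDM symbol
      eps = 0.15                             # frequency offset, normalized to subcarrier spacing
      X = np.zeros(N, dtype=complex)
      X[10] = 1.0                            # only subcarrier 10 is active

      x = np.fft.ifft(X) * N                 # time-domain OFDM symbol
      n = np.arange(N)
      x_off = x * np.exp(2j * np.pi * eps * n / N)   # Doppler-like carrier frequency offset

      Y = np.fft.fft(x_off) / N              # receiver FFT without offset correction
      ici_power = np.abs(Y) ** 2
      ici_power[10] = 0.0                    # remove the desired subcarrier
      print("desired-carrier loss:", 1 - np.abs(Y[10]) ** 2)
      print("total ICI power leaked:", ici_power.sum())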

  18. Deployable reflector antenna performance optimization using automated surface correction and array-feed compensation

    NASA Technical Reports Server (NTRS)

    Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.

    1992-01-01

    Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.

  19. Coupling of conservative and dissipative forces in frequency-modulation atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Sader, John E.; Jarvis, Suzanne P.

    2006-11-01

    Frequency modulation atomic force microscopy (FM-AFM) utilizes the principle of self-excitation to ensure the cantilever probe vibrates at its resonant frequency, regardless of the tip-sample interaction. Practically, this is achieved by fixing the phase difference between tip deflection and driving force at precisely 90° . This, in turn, decouples the frequency shift and excitation amplitude signals, enabling quantitative interpretation in terms of conservative and dissipative tip-sample interaction forces. In this article, we theoretically investigate the effect of phase detuning in the self-excitation mechanism on the coupling between conservative and dissipative forces in FM-AFM. We find that this coupling depends only on the relative difference in the drive and resonant frequencies far from the surface, and is thus very weakly dependent on the actual phase error particularly for high quality factors. This establishes that FM-AFM is highly robust with respect to phase detuning, and enables quantitative interpretation of the measured frequency shift and excitation amplitude, even while operating away from the resonant frequency with the use of appropriate replacements in the existing formalism. We also examine the calibration of phase shifts in FM-AFM measurements and demonstrate that the commonly used approach of minimizing the excitation amplitude can lead to significant phase detuning, particularly in liquid environments.

  20. Frequency stabilization for multilocation optical FDM networks

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Kavehrad, Mohsen

    1993-04-01

    In a multi-location optical FDM network, the frequency of each user's transmitter can be offset-locked, through a Fabry-Perot, to an absolute frequency standard which is distributed to the users. To lock the local Fabry-Perot to the frequency standard, the standard has to be frequency-dithered by a sinusoidal signal and the sinusoidal reference has to be transmitted to the user location since the lock-in amplifier in the stabilization system requires the reference for synchronous detection. We proposed two solutions to avoid transmitting the reference. One uses an extraction circuit to obtain the sinusoidal signal from the incoming signal. A nonlinear circuit following the photodiode produces a strong second-order harmonic of the sinusoidal signal and a phase-locked loop is locked to it. The sinusoidal reference is obtained by a divide-by-2 circuit. The phase ambiguity (0° or 180°) is resolved by using a selection circuit and an initial scan. The other method uses a pseudo-random sequence instead of a sinusoidal signal to dither the frequency standard and a surface-acoustic-wave (SAW) matched-filter instead of a lock-in amplifier to obtain the frequency error. The matched-filter serves as a correlator and does not require the dither reference.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reister, D.B.; Unseren, M.A.; Baker, J.E.

    We discuss a series of surface following experiments using a range finder mounted on the end of an arm that is mounted on a vehicle. The goal is to keep the range finder at a fixed distance from an unknown surface and to keep the orientation of the range finder perpendicular to the surface. During the experiments, the vehicle moves along a predefined trajectory while planning software determines the position and orientation of the arm. To keep the range finder perpendicular to the surface, the planning software calculates the surface normal for the unknown surface. We assume that the unknown surface is a cylinder (the surface depends on x and y but does not depend on z). To calculate the surface normal, the planning software must calculate the locations (x,y) of points on the surface in world coordinates. The calculation requires data on the position and orientation of the vehicle, the position and orientation of the arm, and the distance from the range finder to the surface. We discuss four series of experiments. During the first series of experiments, the calculated surface normal values had large high frequency random variations. A filter was used to produce an average value for the surface normal and we limited the rate of change in the yaw angle target for the arm. We performed the experiment for a variety of concave and convex surfaces. While the experiments were qualitative successes, the measured distance to the surface was significantly different than the target. The distance errors were systematic, low frequency, and had magnitudes up to 25 mm. During the second series of experiments, we reduced the variations in the calculated surface normal values. While reviewing the data collected while following the surface of a barrel, we found that the radius of the calculated surface was significantly different than the measured radius of the barrel.
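
    A minimal planar sketch of the normal-calculation step described above: the range measurement is transformed into world coordinates using the vehicle and arm poses, and the surface normal is estimated from a line fitted to the most recent surface points. All poses, offsets and ranges below are illustrative placeholders.

      import numpy as np

      def rot(a):
          return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

      def surface_point_world(vehicle_xy, vehicle_yaw, sensor_offset, sensor_yaw, range_m):
          """World (x, y) of the point hit by the range finder (planar kinematics)."""
          yaw = vehicle_yaw + sensor_yaw
          sensor_xy = vehicle_xy + rot(vehicle_yaw) @ sensor_offset
          return sensor_xy + range_m * np.array([np.cos(yaw), np.sin(yaw)])

      def normal_from_points(points):
          """Unit normal of a line fitted (total least squares) to recent surface points."""
          P = np.asarray(points) - np.mean(points, axis=0)
          _, _, vt = np.linalg.svd(P)
          tangent = vt[0]                              # direction of largest spread
          return np.array([-tangent[1], tangent[0]])   # rotate tangent by 90 degrees

      # illustrative use: a few consecutive measurements along a gently sloping surface
      pts = [surface_point_world(np.array([0.1 * i, 0.0]), 0.0,
                                 np.array([0.5, 0.2]), np.deg2rad(90), 1.0 + 0.01 * i)
             for i in range(5)]
      print(normal_from_points(pts))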

  2. Analyzing Effect of System Inertia on Grid Frequency Forecasting Using Two Stage Neuro-Fuzzy System

    NASA Astrophysics Data System (ADS)

    Chourey, Divyansh R.; Gupta, Himanshu; Kumar, Amit; Kumar, Jitesh; Kumar, Anand; Mishra, Anup

    2018-04-01

    Frequency forecasting is an important aspect of power system operation. The system frequency varies with the load-generation imbalance. Frequency variation depends upon various parameters including system inertia. System inertia determines the rate of fall of frequency after a disturbance in the grid. However, the inertia of the system is not considered while forecasting the frequency of a power system during planning and operation. This leads to significant errors in forecasting. In this paper, the effect of inertia on frequency forecasting is analysed for a particular grid system. A parameter equivalent to system inertia is introduced and used to forecast the frequency of a typical power grid at any instant of time. The system gives appreciable results with reduced error.

  3. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
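
    A sketch of the identification step, using the statsmodels ARIMA implementation as a stand-in for the original maximum likelihood code; the synthetic residual series and the (2, 0, 1) model order are illustrative assumptions.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      # stand-in for the high-frequency MLS range residual (detrended range error, metres)
      rng = np.random.default_rng(1)
      e = rng.normal(0, 0.5, 2000)
      residual = np.empty_like(e)
      residual[0] = e[0]
      for t in range(1, e.size):
          residual[t] = 0.7 * residual[t - 1] + e[t] + 0.3 * e[t - 1]   # known ARMA(1,1) for testing

      # maximum likelihood fit of an ARMA model (ARIMA with d = 0)
      model = ARIMA(residual, order=(2, 0, 1)).fit()
      print(model.summary())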

  4. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    NASA Astrophysics Data System (ADS)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can result in errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus, the main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  5. Design and tolerance analysis of a transmission sphere by interferometer model

    NASA Astrophysics Data System (ADS)

    Peng, Wei-Jei; Ho, Cheng-Fong; Lin, Wen-Lung; Yu, Zong-Ru; Huang, Chien-Yao; Hsu, Wei-Yao

    2015-09-01

    The design of a 6-in, f/2.2 transmission sphere for Fizeau interferometry is presented in this paper. To predict the actual performance during the design phase, we build an interferometer model combined with tolerance analysis in Zemax. Evaluating focus imaging alone is not sufficient for a double-pass optical system. Thus, we study an interferometer model that includes the system error and the wavefronts reflected from the reference surface and the tested surface. Firstly, we generate a deformation map of the tested surface. Using multiple configurations in Zemax, we obtain the test wavefront and the reference wavefront reflected from the tested surface and the reference surface of the transmission sphere, respectively. According to the theory of interferometry, we subtract the two wavefronts to acquire the phase of the tested surface. Zernike polynomials are applied to transfer the map from phase to sag and to remove piston, tilt and power. The restored map is the same as the original map because no system error exists. Secondly, perturbed tolerances including lens fabrication and assembly are considered. A system error occurs because the test and reference beams are no longer perfectly common-path. The restored map is inaccurate when the system error is added. Although the system error can be subtracted by calibration, it should still be controlled within a small range to avoid calibration error. Generally, the reference wavefront error, including the system error and the irregularity of the reference surface of a 6-in transmission sphere, is measured to within peak-to-valley (PV) 0.1 λ (λ=0.6328 um), which is not easy to achieve. Consequently, it is necessary to predict the value of the system error before manufacture. Finally, a prototype is developed and tested against a reference surface with PV 0.1 λ irregularity.
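
    The piston, tilt and power removal mentioned above reduces to a least-squares fit of the corresponding Zernike terms over the unit pupil, sketched below on a synthetic sag map; the term set and map are assumptions for illustration, not taken from the paper.

      import numpy as np

      def remove_piston_tilt_power(sag, x, y):
          """Subtract the best-fit piston, x/y tilt and power (defocus) terms from a sag map."""
          r2 = x**2 + y**2
          mask = r2 <= 1.0                                   # unit pupil
          A = np.column_stack([np.ones(mask.sum()),          # piston
                               x[mask], y[mask],             # tilts
                               2.0 * r2[mask] - 1.0])        # power (Zernike defocus)
          coeffs, *_ = np.linalg.lstsq(A, sag[mask], rcond=None)
          residual = np.full_like(sag, np.nan)
          residual[mask] = sag[mask] - A @ coeffs
          return residual, coeffs

      # synthetic sag map with tilt, power and a small irregularity
      Y, X = np.mgrid[-1:1:256j, -1:1:256j]
      sag = 0.05 * X + 0.02 * (2 * (X**2 + Y**2) - 1) + 0.002 * np.sin(6 * np.pi * X)
      res, c = remove_piston_tilt_power(sag, X, Y)
      print("removed coefficients:", c)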

  6. Nonlinear dynamic model of a gear-rotor-bearing system considering the flash temperature

    NASA Astrophysics Data System (ADS)

    Gou, Xiangfeng; Zhu, Lingyun; Qi, Changjun

    2017-12-01

    The instantaneous flash temperature is an important factor for gears in service. To investigate the effect of the flash temperature of a tooth surface on the dynamics of the spur gear system, a modified nonlinear dynamic model of a gear-rotor-bearing system is established. Factors such as the contact temperature of the tooth surface, time-varying stiffness, tooth surface friction, backlash, the comprehensive transmission error and so on are considered. The flash temperature of a tooth surface of the pinion and gear is formulated according to Blok's flash temperature theory. The mathematical expression of the contact temperature of the tooth surface varying with time is derived, and the tooth profile deformation caused by the change of the flash temperature of the tooth surface is calculated. The expression of the mesh stiffness varying with the flash temperature of the tooth surface is derived based on Hertz contact theory. A temperature stiffness is proposed and added to the nonlinear dynamic model of the system. The influence of load on the flash temperature of the tooth surface is analyzed in the parameter plane. The variation of the flash temperature of the tooth surface is studied. The numerical results indicate that the calculation method for the flash temperature of the gear tooth surface is effective and that it can reflect the rules for the change of gear meshing temperature and the sliding of the gear tooth surface. The effects of frequency, backlash, bearing clearance, comprehensive transmission error and time-varying stiffness on the nonlinear dynamics of the system are analyzed according to bifurcation diagrams, Top Lyapunov Exponent (TLE) spectra, phase portraits and Poincaré maps. Some nonlinear phenomena such as periodic bifurcation, grazing bifurcation, quasi-periodic bifurcation, chaos and its routes to chaos are investigated, and the critical parameters are identified. The results provide an understanding of the system and serve as a useful reference in designing such systems.

  7. Exploring the initial steps of the testing process: frequency and nature of pre-preanalytic errors.

    PubMed

    Carraro, Paolo; Zago, Tatiana; Plebani, Mario

    2012-03-01

    Few data are available on the nature of errors in the so-called pre-preanalytic phase, the initial steps of the testing process. We therefore sought to evaluate pre-preanalytic errors using a study design that enabled us to observe the initial procedures performed in the ward, from the physician's test request to the delivery of specimens in the clinical laboratory. After a 1-week direct observational phase designed to identify the operating procedures followed in 3 clinical wards, we recorded all nonconformities and errors occurring over a 6-month period. Overall, the study considered 8547 test requests, for which 15 917 blood sample tubes were collected and 52 982 tests undertaken. No significant differences in error rates were found between the observational phase and the overall study period, but underfilling of coagulation tubes was found to occur more frequently in the direct observational phase (P = 0.043). In the overall study period, the frequency of errors was found to be particularly high regarding order transmission [29 916 parts per million (ppm)] and hemolysed samples (2537 ppm). The frequency of patient misidentification was 352 ppm, and the most frequent nonconformities were test requests recorded in the diary without the patient's name and failure to check the patient's identity at the time of blood draw. The data collected in our study confirm the relative frequency of pre-preanalytic errors and underline the need to consensually prepare and adopt effective standard operating procedures in the initial steps of laboratory testing and to monitor compliance with these procedures over time.

  8. Frequency spectrum analyzer with phase-lock

    DOEpatents

    Boland, Thomas J.

    1984-01-01

    A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal is comprised of a voltage controlled oscillator (VCO) which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator which transfers the input signal to a frequency domain, while providing an accurate absolute amplitude measurement of each frequency component of the input signal.

  9. Time synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.; Huth, G. K.

    1981-01-01

    In a frequency-hopped (FH) multiple-frequency-shift-keyed (MFSK) communication system, it is the frequency hopping, rather than the data sequence as in a conventional (non-frequency-hopped) system, that provides the frequency transitions needed for time synchronization estimation. Making use of this observation, this paper presents a fine synchronization (i.e., time errors of less than a hop duration) technique for estimation of FH timing. The performance degradation due to imperfect FH time synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of hops used in the FH timing estimate.

  10. Capillary bridge stability and dynamics: Active electrostatic stress control and acoustic radiation pressure

    NASA Astrophysics Data System (ADS)

    Wei, Wei

    2005-11-01

    In low gravity, the stability of liquid bridges and other systems having free surfaces is affected by the ambient vibration of the spacecraft. Such vibrations are expected to excite capillary modes. The lowest unstable mode of cylindrical liquid bridges, the (2,0) mode, is particularly sensitive to the vibration when the ratio of the bridge length to the diameter approaches pi. In this work, a Plateau tank has been used to simulate the weightless condition. An optical system has been used to detect the (2,0) mode oscillation amplitude and generate an error signal which is determined by the oscillation amplitude. This error signal is used by the feedback system to produce proper voltages on the electrodes which are concentric with the electrically conducting, grounded bridge. A mode-coupled electrostatic stress is thus generated on the surface of the bridge. The feedback system is designed such that the modal force applied by the Maxwell stress can be proportional to the modal amplitude or modal velocity, which is the derivative of the modal amplitude. Experiments done in the Plateau tank demonstrate that the damping of the capillary oscillation can be enhanced by using the electrostatic stress in proportion to the modal velocity. On the other hand, using the electrostatic stress in proportion to the modal amplitude can raise the natural frequency of the bridge oscillation. If a spacecraft vibration frequency is close to a capillary mode frequency, the amplitude gain can be used to shift the mode frequency away from that of the spacecraft and simultaneously add some artificial damping to further reduce the effect of g-jitter. It is found that the decay of a bridge (2,0) mode oscillation is well modeled by a Duffing equation with a small cubic soft-spring term. The nonlinearity of the bridge (3,0) mode is also studied. The experiments reveal the hysteresis of (3,0) mode bridge oscillations, and this behavior is a property of the soft nonlinearity of the bridge. Relevant to acoustical bridge stabilization, the theoretical radiation force on a compressible cylinder in an acoustic standing wave is also investigated.

  11. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the GRACE baseline accuracy predicted earlier from design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from the sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors; thus, their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. Correlations between range frequency noise and range-rate residuals are also seen.

  12. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    The linear time-invariant assumption for the determination of acoustic source characteristics, namely the source strength and the source impedance in the frequency domain, has been proved reasonable in the design of an exhaust system. Different methods have been proposed for their identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator of the accuracy of the results. It was found that, for a certain load length, the load resistance peaks at frequencies where the load length equals odd multiples of one-quarter wavelength, resulting in the maximum error for source impedance identification. Therefore, the load impedance in the frequency ranges around these odd quarter-wavelength frequencies should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
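
    At each frequency, the multi-load identification itself reduces to an overdetermined linear system in the source pressure p_s and source impedance Z_s, since each load i satisfies p_i (Z_s + Z_Li) = p_s Z_Li. The sketch below solves this system by least squares; the load impedances and measured pressures are placeholder complex numbers, not data from the paper.

      import numpy as np

      def identify_source(p_meas, z_loads):
          """Least-squares estimate of source impedance Z_s and source pressure p_s at one frequency.

          Model: p_i = p_s * Z_Li / (Z_s + Z_Li)  ->  p_i * Z_s - Z_Li * p_s = -p_i * Z_Li
          """
          p = np.asarray(p_meas, dtype=complex)
          zl = np.asarray(z_loads, dtype=complex)
          A = np.column_stack([p, -zl])            # unknowns: [Z_s, p_s]
          b = -p * zl
          (z_s, p_s), *_ = np.linalg.lstsq(A, b, rcond=None)
          return z_s, p_s

      # placeholder data for four loads at a single frequency (normalized units)
      z_loads = [1.0 + 0.2j, 2.0 - 0.5j, 0.5 + 1.0j, 3.0 + 0.1j]
      z_s_true, p_s_true = 0.8 - 0.3j, 2.0 + 0.0j
      p_meas = [p_s_true * z / (z_s_true + z) for z in z_loads]
      print(identify_source(p_meas, z_loads))      # recovers (z_s_true, p_s_true)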

  13. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime of 57% over a 12-month period. Lost interface and interface malfunction were reported for centralized and decentralized ADSs, with an average downtime response of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reduce the frequency and length of down-time in order to minimize medication errors during such downtime.

  14. Derivation of atmospheric extinction profiles and wind speed over the ocean from a satellite-borne lidar.

    PubMed

    Weinman, J A

    1988-10-01

    A simulated analysis is presented that shows that returns from a single-frequency space-borne lidar can be combined with data from conventional visible satellite imagery to yield profiles of aerosol extinction coefficients and the wind speed at the ocean surface. The optical thickness of the aerosols in the atmosphere can be derived from visible imagery. That measurement of the total optical thickness can constrain the solution to the lidar equation to yield a robust estimate of the extinction profile. The specular reflection of the lidar beam from the ocean can be used to determine the wind speed at the sea surface once the transmission of the atmosphere is known. The impact on the retrieved aerosol profiles and surface wind speed produced by errors in the input parameters and noise in the lidar measurements is also considered.

  15. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

    A method is presented in satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is carried out using the linearized Lagrangian perturbation theory. Secular and second order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.

  16. An assessment of envelope-based demodulation in case of proximity of carrier and modulation frequencies

    NASA Astrophysics Data System (ADS)

    Shahriar, Md Rifat; Borghesani, Pietro; Randall, R. B.; Tan, Andy C. C.

    2017-11-01

    Demodulation is a necessary step in the field of diagnostics to reveal faults whose signatures appear as an amplitude and/or frequency modulation. The Hilbert transform has conventionally been used for the calculation of the analytic signal required in the demodulation process. However, the carrier and modulation frequencies must meet the conditions set by the Bedrosian identity for the Hilbert transform to be applicable for demodulation. This condition, basically requiring the carrier frequency to be sufficiently higher than the frequency of the modulation harmonics, is usually satisfied in many traditional diagnostic applications (e.g. vibration analysis of gear and bearing faults) due to the order-of-magnitude ratio between the carrier and modulation frequency. However, the diversification of diagnostic approaches and applications shows cases (e.g. electrical signature analysis-based diagnostics) where the carrier frequency is in close proximity to the modulation frequency, thus challenging the applicability of the Bedrosian theorem. This work presents an analytic study to quantify the error introduced by the Hilbert transform-based demodulation when the Bedrosian identity is not satisfied and proposes a mitigation strategy to combat the error. An experimental study is also carried out to verify the analytical results. The outcome of the error analysis sets a confidence limit on the estimated modulation (both shape and magnitude) obtained through Hilbert transform-based demodulation when the Bedrosian theorem is violated. However, the proposed mitigation strategy is found effective in combating the demodulation error arising in this scenario, thus extending the applicability of Hilbert transform-based demodulation.
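
    The demodulation step discussed above can be illustrated in a few lines of Python: the analytic signal is obtained with scipy.signal.hilbert and its magnitude gives the amplitude envelope. The carrier and modulation frequencies are arbitrary; moving them close together (violating the Bedrosian condition) visibly degrades the recovered envelope.

      import numpy as np
      from scipy.signal import hilbert

      fs = 10_000.0
      t = np.arange(0, 1.0, 1 / fs)
      f_mod = 30.0                                 # modulation frequency (e.g. a fault signature)
      envelope_true = 1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t)

      for f_carrier in (1000.0, 45.0):             # well-separated vs. close to f_mod
          x = envelope_true * np.cos(2 * np.pi * f_carrier * t)
          envelope_est = np.abs(hilbert(x))        # magnitude of the analytic signal
          err = np.sqrt(np.mean((envelope_est - envelope_true) ** 2))
          print(f"carrier {f_carrier:6.0f} Hz -> envelope RMS error {err:.3f}")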

  17. The Efffect of Image Apodization on Global Mode Parameters and Rotational Inversions

    NASA Astrophysics Data System (ADS)

    Larson, Tim; Schou, Jesper

    2016-10-01

    It has long been known that certain systematic errors in the global mode analysis of data from both MDI and HMI depend on how the input images were apodized. Recently it has come to light, while investigating a six-month period in f-mode frequencies, that mode coverage is highest when B0 is maximal. Recalling that the leakage matrix is calculated in the approximation that B0=0, it comes as a surprise that more modes are fitted when the leakage matrix is most incorrect. It is now believed that the six-month oscillation has primarily to do with what portion of the solar surface is visible. Other systematic errors that depend on the part of the disk used include high-latitude anomalies in the rotation rate and a prominent feature in the normalized residuals of odd a-coefficients. Although the most likely cause of all these errors is errors in the leakage matrix, extensive recalculation of the leaks has not made any difference. Thus we conjecture that another effect may be at play, such as errors in the noise model or one that has to do with the alignment of the apodization with the spherical harmonics. In this poster we explore how differently shaped apodizations affect the results of inversions for internal rotation, for both maximal and minimal absolute values of B0.

  18. Partial compensation interferometry measurement system for parameter errors of conicoid surface

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Li, Tengfei; Hu, Yao; Wang, Shaopu; Ning, Yan; Chen, Zhuo

    2018-06-01

    Surface parameters, such as vertex radius of curvature and conic constant, are used to describe the shape of an aspheric surface. Surface parameter errors (SPEs) are deviations affecting the optical characteristics of an aspheric surface. Precise measurement of SPEs is critical in the evaluation of optical surfaces. In this paper, a partial compensation interferometry measurement system for SPE of a conicoid surface is proposed based on the theory of slope asphericity and the best compensation distance. The system is developed to measure the SPE-caused best compensation distance change and SPE-caused surface shape change and then calculate the SPEs with the iteration algorithm for accuracy improvement. Experimental results indicate that the average relative measurement accuracy of the proposed system could be better than 0.02% for the vertex radius of curvature error and 2% for the conic constant error.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaut, Arkadiusz

    We present the results of the estimation of parameters with LISA for nearly monochromatic gravitational waves in the low and high frequency regimes for the time-delay interferometry response. Angular resolution of the detector and the estimation errors of the signal's parameters in the high frequency regimes are calculated as functions of the position in the sky and as functions of the frequency. For the long-wavelength domain we give compact formulas for the estimation errors valid on a wide range of the parameter space.

  20. Short-term prediction of rain attenuation level and volatility in Earth-to-Satellite links at EHF band

    NASA Astrophysics Data System (ADS)

    de Montera, L.; Mallet, C.; Barthès, L.; Golé, P.

    2008-08-01

    This paper shows how nonlinear models originally developed in the finance field can be used to predict rain attenuation level and volatility in Earth-to-Satellite links operating in the Extremely High Frequency band (EHF, 20-50 GHz). A common approach to this problem is to consider that the prediction error corresponds only to scintillations, whose variance is assumed to be constant. Nevertheless, this assumption does not seem to be realistic because of the heteroscedasticity of the error time series: the variance of the prediction error is found to be time-varying and has to be modeled. Since rain attenuation time series behave similarly to certain stocks or foreign exchange rates, a switching ARIMA/GARCH model was implemented. The originality of this model is that not only the attenuation level but also the conditional distribution of the error are predicted. This allows an accurate upper bound on the future attenuation to be estimated in real time, which minimizes the cost of Fade Mitigation Techniques (FMT) and therefore enables the communication system to reach a high percentage of availability. The performance of the switching ARIMA/GARCH model was estimated using a measurement database of the Olympus satellite 20/30 GHz beacons, and this model is shown to significantly outperform other existing models. The model also includes frequency scaling from the downlink frequency to the uplink frequency. The attenuation effects (gases, clouds and rain) are first separated with a neural network and then scaled using specific scaling factors. As to the resulting uplink prediction error, the error contribution of the frequency scaling step is shown to be larger than that of the downlink prediction, indicating that further study should focus on improving the accuracy of the scaling factor.
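
    The volatility part of the switching ARIMA/GARCH model can be sketched with a plain GARCH(1,1) one-step-ahead variance forecast, written out explicitly below; the coefficients and the synthetic residual series are illustrative only and do not reproduce the paper's fitted model.

      import numpy as np

      def garch11_forecast(eps, omega, alpha, beta):
          """One-step-ahead conditional variances for residuals eps under GARCH(1,1)."""
          sigma2 = np.empty(eps.size + 1)
          sigma2[0] = np.var(eps)                      # initialize at the unconditional variance
          for t in range(eps.size):
              sigma2[t + 1] = omega + alpha * eps[t] ** 2 + beta * sigma2[t]
          return sigma2                                 # sigma2[t+1] is the forecast made at time t

      # eps: residuals of the attenuation-level prediction (dB), here synthetic
      rng = np.random.default_rng(2)
      eps = rng.normal(0, 0.2, 500) * (1 + (np.arange(500) > 250))   # variance grows halfway through
      sig2 = garch11_forecast(eps, omega=0.001, alpha=0.1, beta=0.85)

      # an upper bound on the next attenuation increment, e.g. a 3-sigma margin
      print("forecast volatility (dB):", np.sqrt(sig2[-1]), "3-sigma margin:", 3 * np.sqrt(sig2[-1]))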

  1. Distributed control of large space antennas

    NASA Technical Reports Server (NTRS)

    Cameron, J. M.; Hamidi, M.; Lin, Y. H.; Wang, S. J.

    1983-01-01

    A systematic way to choose control design parameters and to evaluate performance for large space antennas is presented. The structural dynamics and control properties for a Hoop and Column Antenna and a Wrap-Rib Antenna are characterized. Some results on the effects of model parameter uncertainties on the stability, surface accuracy, and pointing errors are presented. Critical dynamics and control problems for these antenna configurations are identified and potential solutions are discussed. It was concluded that structural uncertainties and model error can cause serious performance deterioration and can even destabilize the controllers. For the hoop and column antenna, the large hoop, the long mast, and the lack of stiffness between the two substructures result in low structural frequencies. Performance can be improved if this design can be strengthened. The two-site control system is more robust than either single-site control system for the hoop and column antenna.

  2. Gaussian pre-filtering for uncertainty minimization in digital image correlation using numerically-designed speckle patterns

    NASA Astrophysics Data System (ADS)

    Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo

    2015-03-01

    This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
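
    The pre-processing step studied in the paper is a Gaussian blur applied to the images before correlation; a minimal sketch with scipy is shown below. The synthetic designer-style pattern and the filter standard deviation are placeholders, to be chosen from an uncertainty study of the kind described above.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def prefilter(image, sigma):
          """Gaussian low-pass pre-filter applied to an image before DIC correlation."""
          return gaussian_filter(image.astype(float), sigma)

      # synthetic designer-style speckle pattern: regular dots with well-defined size and spacing
      Y, X = np.mgrid[0:512, 0:512]
      pattern = ((np.sin(2 * np.pi * X / 16) > 0.6) & (np.sin(2 * np.pi * Y / 16) > 0.6)).astype(float)
      noisy = pattern + 0.05 * np.random.default_rng(3).normal(size=pattern.shape)

      blurred = prefilter(noisy, sigma=1.5)   # both reference and deformed images would be filtered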

  3. Quantifying Uncertainty in Instantaneous Orbital Data Products of TRMM over Indian Subcontinent

    NASA Astrophysics Data System (ADS)

    Jayaluxmi, I.; Nagesh, D.

    2013-12-01

    In the last 20 years, microwave radiometers have taken satellite images of earth's weather, proving to be a valuable tool for quantitative estimation of precipitation from space. However, along with the widespread acceptance of microwave based precipitation products, it has also been recognized that they contain large uncertainties. While most uncertainty evaluation studies focus on the accuracy of rainfall accumulated over time (e.g., season/year), evaluation of instantaneous rainfall intensities from satellite orbital data products is relatively rare. These instantaneous products are known to potentially cause large uncertainties during real time flood forecasting studies at the watershed scale, especially over land regions, where the highly varying land surface emissivity offers a myriad of complications hindering accurate rainfall estimation. The error components of orbital data products also tend to interact nonlinearly with hydrologic modeling uncertainty. Keeping these in mind, the present study fosters the development of uncertainty analysis using instantaneous satellite orbital data products (version 7 of 1B11, 2A25, 2A23) derived from the passive and active sensors onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, namely the TRMM microwave imager (TMI) and Precipitation Radar (PR). The study utilizes 11 years of orbital data from 2002 to 2012 over the Indian subcontinent and examines the influence of various error sources on the convective and stratiform precipitation types. Analysis conducted over the land regions of India investigates three sources of uncertainty in detail: 1) errors due to improper delineation of the rainfall signature within the microwave footprint (rain/no rain classification), 2) uncertainty in the transfer function linking rainfall with the TMI low frequency channels, and 3) sampling errors owing to the narrow swath and infrequent visits of the TRMM sensors. Case study results obtained during the Indian summer monsoon months of June-September are presented using contingency table statistics, performance diagrams, scatter plots and probability density functions. Our study demonstrates that the theory of copulas can be efficiently used to represent the highly nonlinear dependency structure of rainfall with respect to the TMI low frequency channels of 19, 21 and 37 GHz. This questions the exclusive usage of the high-frequency 85 GHz channel in TMI overland rainfall retrieval algorithms. Further, a statistical bootstrap analysis revealed PR relative sampling errors below 30% (for 2-degree grids) over India, with magnitudes that depended on the rainfall type (biased towards stratiform) and on the sampling technique employed. These findings clearly document that proper characterization of the error structure of TMI and PR has wide implications for decision making prior to incorporating the resulting orbital products into basin scale hydrologic modeling.
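
    The bootstrap estimate of sampling error mentioned above can be sketched as follows: the overpass rain-rate samples within a grid cell are resampled with replacement, and the spread of the resampled means relative to the full-sample mean gives the relative sampling error. The rain-rate values and replicate count are illustrative.

      import numpy as np

      def relative_sampling_error(rain_samples, n_boot=1000, seed=0):
          """Bootstrap relative sampling error (%) of the grid-cell mean rain rate."""
          rng = np.random.default_rng(seed)
          samples = np.asarray(rain_samples, dtype=float)
          boot_means = np.array([rng.choice(samples, samples.size, replace=True).mean()
                                 for _ in range(n_boot)])
          return 100.0 * boot_means.std() / samples.mean()

      # illustrative overpass rain rates (mm/h) for one 2-degree grid cell in one season
      rain = np.array([0.0, 0.0, 1.2, 0.0, 4.5, 0.3, 0.0, 7.8, 0.0, 2.1, 0.0, 0.6])
      print(f"relative sampling error: {relative_sampling_error(rain):.1f}%")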

  4. 42 CFR 431.992 - Corrective action plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...

  5. 42 CFR 431.992 - Corrective action plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...

  6. Next Generation of Magneto-Dielectric Antennas and Optimum Flux Channels

    NASA Astrophysics Data System (ADS)

    Yousefi, Tara

    There is an ever-growing need for broadband conformal antennas, not only to reduce the number of antennas needed to cover a broad range of frequencies (VHF-UHF) but also to reduce the visual and RF signatures associated with communication systems. In many applications antennas need to be very close to, or embedded inside, low-impedance media. However, for conventional metal and dielectric antennas to operate efficiently in such environments, either a very narrow bandwidth must be tolerated, or enough loss added to expand the bandwidth, or they must be placed one quarter of a wavelength above the conducting surface. The latter is not always possible since in the HF through low UHF bands, critical to military and security functions, this quarter-wavelength requirement would result in impractically large antennas. Despite an error based on a false assumption in the 1950s, which had severely underestimated the efficiency of magneto-dielectric antennas, recently demonstrated magnetic antennas have been shown to exhibit extraordinary efficiency in conformal applications. Whereas conventional metal-and-dielectric antennas carrying radiating electric currents suffer a significant disadvantage when placed conformal to the conducting surface of a platform, because they induce opposing image currents in the surface, magnetic antennas carrying magnetic radiating currents have no such limitation: their magnetic currents produce co-linear image currents in electrically conducting surfaces. However, the permeable antennas built to date have not yet attained the wide bandwidth expected, because the magnetic flux channels carrying the wave have not been designed to guide the wave near the speed of light at all frequencies. Instead, they tend to lose the wave by a leaky fast-wave mechanism at low frequencies or to over-bind a slow wave at high frequencies. In this dissertation, we study magnetic antennas in detail and present the design approach and apparatus required to implement a flux channel carrying the magnetic current wave near the speed of light over a very broad frequency range, which also makes the design of a frequency independent (spiral) antenna possible. We show how to construct extremely thin conformal antennas, frequency-independent permeable antennas, and even micron-sized antennas that can be embedded inside the brain without damaging the tissue.

  7. Fixing Stellarator Magnetic Surfaces

    NASA Astrophysics Data System (ADS)

    Hanson, James D.

    1999-11-01

    Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero-order field is assumed to have good magnetic surfaces and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine the minimal corrections necessary to eliminate the islands.
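
    A hedged sketch of the SVD step described above, under the assumption that island drives respond linearly to the correction knobs; the matrix sizes and values are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical linearized problem: A[i, j] is the island drive produced at
# rational surface i by unit current on correction knob j, and b[i] is the
# island drive produced by the field error. We want the smallest correction
# currents x such that A @ x + b = 0 at every targeted surface.
rng = np.random.default_rng(0)
n_surfaces, n_knobs = 3, 8          # illustrative sizes
A = rng.normal(size=(n_surfaces, n_knobs))
b = rng.normal(size=n_surfaces)

# Minimum-norm solution via the SVD-based pseudoinverse (rcond truncates
# nearly singular directions, i.e. knob combinations with little effect).
x = -np.linalg.pinv(A, rcond=1e-6) @ b

print("residual island drive:", A @ x + b)   # ~0 at the targeted surfaces
print("correction currents:  ", x)
```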

  8. Combining nutation and surface gravity observations to estimate the Earth's core and inner core resonant frequencies

    NASA Astrophysics Data System (ADS)

    Ziegler, Yann; Lambert, Sébastien; Rosat, Séverine; Nurul Huda, Ibnu; Bizouard, Christian

    2017-04-01

    Nutation time series derived from very long baseline interferometry (VLBI) and time-varying surface gravity data recorded by superconducting gravimeters (SG) have long been used separately to assess the Earth's interior via the estimation of the free core and inner core resonance effects on nutation or tidal gravity. The results obtained from these two techniques have recently been shown to be consistent, making it relevant to combine VLBI and SG observables and estimate the Earth's interior parameters in a single inversion. We present here intermediate results of the ongoing project of combining nutation and surface gravity time series to improve estimates of the Earth's core and inner core resonant frequencies. We use VLBI nutation time series spanning 1984-2016 derived by the International VLBI Service for geodesy and astrometry (IVS) as the result of a combination of inputs from various IVS analysis centers, and surface gravity data from about 15 SG stations. We address here the resonance model used to describe the Earth's interior response to tidal excitation; the data preparation, consisting of error recalibration and amplitude fitting for the nutation data, and of processing the SG time-varying gravity to remove gaps, spikes, steps and other disturbances, followed by tidal analysis with the ETERNA 3.4 software package; the preliminary estimates of the resonant periods; and the correlations between the parameters.
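
    As a rough illustration of estimating a resonant frequency from tidal admittances, the sketch below fits a generic single-pole resonance with scipy; the model form and all numbers are placeholders, not the actual core-resonance model or data used in the project:

```python
import numpy as np
from scipy.optimize import curve_fit

def resonance(f, a0, a1, f_res):
    """Generic single-resonance response: a baseline plus a pole at f_res."""
    return a0 + a1 / (f_res - f)

# Synthetic admittance data near an assumed resonance at 1.005 cycles/day
# (purely illustrative numbers).
rng = np.random.default_rng(2)
f = np.linspace(0.96, 1.00, 20)                 # excitation frequencies (cpd)
obs = resonance(f, 1.16, 2e-4, 1.005) + rng.normal(scale=1e-4, size=f.size)

popt, pcov = curve_fit(resonance, f, obs, p0=[1.0, 1e-4, 1.01])
f_res_hat, f_res_err = popt[2], np.sqrt(pcov[2, 2])
print(f"estimated resonant frequency: {f_res_hat:.4f} +/- {f_res_err:.4f} cpd")
```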

  9. Error correction and statistical analyses for intra-host comparisons of feline immunodeficiency virus diversity from high-throughput sequencing data.

    PubMed

    Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary

    2015-06-30

    Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or by the infection status by tissue interaction. Altogether, our results demonstrate a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase in G to A substitutions, but found no evidence for this host defense strategy. Our error correction approach for minor allele frequencies (more sensitive and computationally efficient than other algorithms) and our statistical treatment of variation (ANOVA) were critical for effective use of high-throughput sequencing data in understanding viral diversity. We found that co-infection with PLV shifts FIV diversity from bone marrow to lymph node and spleen.
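
    As a hedged illustration of the mixed-model ANOVA step (not the authors' exact code or data), one could fit a linear mixed model to error-corrected minor allele frequencies with statsmodels; the column names and the synthetic data below are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame of error-corrected minor allele frequencies (maf):
# one row per genome site per tissue per animal; column names are assumptions.
rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({
    "maf": rng.beta(1, 60, size=n),
    "status": rng.choice(["single", "dual"], size=n),       # FIV only vs FIV+PLV
    "tissue": rng.choice(["bone_marrow", "lymph_node", "spleen"], size=n),
    "animal": rng.choice([f"cat{i}" for i in range(6)], size=n),
})

# Fixed effects for infection status, tissue and their interaction; a random
# intercept per animal absorbs animal-to-animal variation.
model = smf.mixedlm("maf ~ status * tissue", df, groups=df["animal"])
result = model.fit()
print(result.summary())
```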

  10. Estimating surface soil moisture from SMAP observations using a Neural Network technique.

    PubMed

    Kolassa, J; Reichle, R H; Liu, Q; Alemohammad, S H; Gentine, P; Aida, K; Asanuma, J; Bircher, S; Caldwell, T; Colliander, A; Cosh, M; Collins, C Holifield; Jackson, T J; Martínez-Fernández, J; McNairn, H; Pacheco, A; Thibeault, M; Walker, J P

    2018-01-01

    A Neural Network (NN) algorithm was developed to estimate global surface soil moisture for April 2015 to March 2017 with a 2-3 day repeat frequency using passive microwave observations from the Soil Moisture Active Passive (SMAP) satellite, surface soil temperatures from the NASA Goddard Earth Observing System Model version 5 (GEOS-5) land modeling system, and Moderate Resolution Imaging Spectroradiometer-based vegetation water content. The NN was trained on GEOS-5 soil moisture target data, making the NN estimates consistent with the GEOS-5 climatology, such that they may ultimately be assimilated into this model without further bias correction. Evaluated against in situ soil moisture measurements, the average unbiased root mean square error (ubRMSE), correlation and anomaly correlation of the NN retrievals were 0.037 m(3) m(-3), 0.70 and 0.66, respectively, against SMAP core validation site measurements and 0.026 m(3) m(-3), 0.58 and 0.48, respectively, against International Soil Moisture Network (ISMN) measurements. At the core validation sites, the NN retrievals have a significantly higher skill than the GEOS-5 model estimates and a slightly lower correlation skill than the SMAP Level-2 Passive (L2P) product. The feasibility of the NN method was reflected by a lower ubRMSE compared to the L2P retrievals as well as a higher skill when ancillary parameters in physically-based retrievals were uncertain. Against ISMN measurements, the skill of the two retrieval products was more comparable. A triple collocation analysis against Advanced Microwave Scanning Radiometer 2 (AMSR2) and Advanced Scatterometer (ASCAT) soil moisture retrievals showed that the NN and L2P retrieval errors have a similar spatial distribution, but the NN retrieval errors are generally lower in densely vegetated regions and transition zones.
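
    For reference, the classical covariance-based triple collocation estimate of random-error standard deviations can be sketched as follows; this is the textbook formulation under the usual independence assumptions, not necessarily the exact variant used in the study, and the data are synthetic:

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Classical covariance-based triple collocation.

    x, y, z : 1-D arrays of collocated soil-moisture estimates from three
              independent systems (e.g. NN retrieval, L2P retrieval, model).
    Returns the estimated random-error standard deviation of each system,
    assuming independent errors and a common (linearly related) truth.
    """
    c = np.cov(np.vstack([x, y, z]))
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return np.sqrt(np.clip([var_x, var_y, var_z], 0.0, None))

# Illustrative use with synthetic data sharing a common truth.
rng = np.random.default_rng(4)
truth = 0.25 + 0.08 * rng.standard_normal(1000)
x = truth + 0.02 * rng.standard_normal(1000)
y = truth + 0.03 * rng.standard_normal(1000)
z = truth + 0.04 * rng.standard_normal(1000)
print(triple_collocation_errors(x, y, z))   # ~[0.02, 0.03, 0.04]
```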

  11. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    NASA Astrophysics Data System (ADS)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from a best-fit sphere, are widely used in optical systems. Compared with spherical surfaces, large aspheric surfaces offer many advantages, such as improved image quality, aberration correction, an expanded field of view, an increased effective distance, and more compact, lightweight optical systems. With the rapid development of space optics in particular, space sensors require higher resolution and larger viewing angles, and aspheric surfaces are becoming essential components of such optical systems. After coarse grinding, the surface profile error of an aspheric surface is on the order of tens of microns [1]. To achieve the final surface accuracy requirement, the aspheric surface must be corrected quickly, and high-precision testing is the basis for rapid convergence of the surface error. Many methods exist for aspheric surface testing [2], including geometric ray testing, Hartmann testing, the Ronchi test, the knife-edge method, direct profilometry and interferometry, but each has its disadvantages [6]. In recent years, measurement of aspheric surfaces has become one of the key factors limiting progress in aspheric surface fabrication. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it suffers from large detection errors and low repeatability when measuring coarse-ground aspheric surfaces, which seriously degrades the convergence efficiency of aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration and removal method based on real-time monitoring of the calibration mirror position, probe correction, and a measurement-mode selection method for planning the distribution of measurement points. Verified on real engineering examples, the method improves the nominal measurement accuracy of the industrial-grade coordinate system from a PV value of 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and confirms the correctness of the method. The paper also investigates the error detection and operation control method, the error calibration of the CMM, and the random error calibration of the CMM.

  12. Assessment of surface turbulent fluxes using geostationary satellite surface skin temperatures and a mixed layer planetary boundary layer scheme

    NASA Technical Reports Server (NTRS)

    Diak, George R.; Stewart, Tod R.

    1989-01-01

    A method is presented for evaluating the fluxes of sensible and latent heating at the land surface, using satellite-measured surface temperature changes in a composite surface layer-mixed layer representation of the planetary boundary layer. The basic prognostic model is tested by comparison with synoptic station information at sites where the surface evaporation climatology is well known. The remote sensing version of the model, using satellite-measured surface temperature changes, is then used to quantify the sharp spatial gradient in surface heating/evaporation across the central United States. An error analysis indicates that perhaps five levels of evaporation are recognizable by these methods and that the chief cause of error is the interaction of errors in the measurement of surface temperature change with errors in the assignment of surface roughness character. Finally, two new potential methods for remote sensing of the land-surface energy balance are suggested which will rely on spaceborne instrumentation planned for the 1990s.

  13. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, it is difficult to process multiple frequency hopping signals both effectively and simultaneously. A method for sorting hybrid FH signals and blindly estimating their parameters is proposed. The method makes use of spectral transformation, spectral entropy calculation and the basic theory of PRI transformation to realize the sorting and parameter estimation of the components in the hybrid frequency hopping signal. The simulation results show that this method can correctly sort the frequency hopping component signals; the estimation error of the frequency hopping period is about 5% and the estimation error of the hop frequency is less than 1% when the SNR is 10 dB. However, the performance of this method deteriorates seriously at low SNR.
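
    A minimal sketch of one ingredient named above, short-time spectral entropy for flagging hop dwells in a single-channel record; the hop set, noise level and threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.signal import stft

# Short-time spectral entropy drops sharply when a narrowband (hopping) tone
# dominates a time slice and stays higher for noise-like slices.
fs = 100_000.0
t = np.arange(0, 0.1, 1 / fs)
hop_freqs = [12_000, 31_000, 22_000, 40_000]            # hypothetical hop set (Hz)
dwell = t.size // len(hop_freqs)
signal = np.concatenate([np.sin(2 * np.pi * f0 * t[:dwell]) for f0 in hop_freqs])
signal += 0.5 * np.random.default_rng(5).standard_normal(signal.size)

f, tt, Z = stft(signal, fs=fs, nperseg=256)
P = np.abs(Z) ** 2
P /= P.sum(axis=0, keepdims=True)                        # per-slice spectral pmf
entropy = -(P * np.log(P + 1e-12)).sum(axis=0)

# Low-entropy slices flag a hopping tone; the peak frequency of each flagged
# slice gives a raw estimate of the corresponding hop frequency.
active = entropy < entropy.mean()
print("estimated hop frequencies (Hz):",
      np.unique(f[P[:, active].argmax(axis=0)]))
```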

  14. When ottoman is easier than chair: an inverse frequency effect in jargon aphasia.

    PubMed

    Marshall, J; Pring, T; Chiat, S; Robson, J

    2001-02-01

    This paper presents evidence of an inverse frequency effect in jargon aphasia. The subject (JP) showed a pre-disposition for low frequency word production on a range of tasks, including picture naming, sentence completion and naming in categories. Her real word errors were also striking, in that these tended to be lower in frequency than the target. Reading data suggested that the inverse frequency effect was present only when production was semantically mediated. It was therefore hypothesised that the effect was at least partly due to the semantic characteristics of low frequency items. Some support for this was obtained from a comprehension task showing that JP's understanding of low frequency terms, which she often produced as errors, was superior to her understanding of high frequency terms. Possible explanations for these findings are considered.

  15. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance is calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly processed and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with their impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation of ranging accuracy. According to the system design index, element tolerances and an error correcting method for the system are proposed, a ranging system is built and a ranging experiment is performed. Experimental results show that with the proposed tolerances, the system can satisfy the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
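
    To illustrate the kind of Jones-matrix analysis mentioned above, the sketch below propagates linearly polarized light through a quarter-wave plate with a small azimuth error and reports the extra relative phase it introduces; the sign conventions and numbers are illustrative assumptions, not the paper's model:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def waveplate(theta, delta):
    """Jones matrix of a retarder with retardance delta and fast axis at theta."""
    return rot(-theta) @ np.diag([1.0, np.exp(1j * delta)]) @ rot(theta)

# Horizontal linear light through a quarter-wave plate nominally at 45 deg;
# an azimuth error distorts the output state, and the induced extra phase
# between the analyzer axes maps into a ranging error in such a scheme.
E_in = np.array([1.0, 0.0])
ideal = waveplate(np.deg2rad(45.0), np.pi / 2) @ E_in
ideal_dphi = np.angle(ideal[1]) - np.angle(ideal[0])

for err_deg in (0.1, 0.5, 1.0):                      # illustrative azimuth errors
    out = waveplate(np.deg2rad(45.0 + err_deg), np.pi / 2) @ E_in
    dphi = (np.angle(out[1]) - np.angle(out[0])) - ideal_dphi
    print(f"azimuth error {err_deg:3.1f} deg -> extra phase {np.degrees(dphi):+.2f} deg")
```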

  16. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  17. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…

  18. Effect of temporal sampling and timing for soil moisture measurements at field scale

    NASA Astrophysics Data System (ADS)

    Snapir, B.; Hobbs, S.

    2012-04-01

    Estimating soil moisture at field scale is valuable for various applications such as irrigation scheduling in cultivated watersheds, flood and drought prediction, waterborne disease spread assessment, or even determination of mobility with lightweight vehicles. Synthetic aperture radar on satellites in low Earth orbit can provide fine resolution images with a repeat time of a few days. For an Earth observing satellite, the choice of the orbit is driven in particular by the frequency of measurements required to meet a certain accuracy in retrieving the parameters of interest. For a given target, having only one image every week may not capture the full dynamic range of soil moisture, which can change significantly within a day when rainfall occurs. Hence this study focuses on the effect of temporal sampling and timing of measurements in terms of error on the retrieved signal. All the analyses are based on in situ measurements of soil moisture (acquired every 30 min) from the OzNet Hydrological Monitoring Network in Australia for different fields over several years. The first study concerns sampling frequency. Measurements at different frequencies were simulated by sub-sampling the original data. Linear interpolation was used to estimate the missing intermediate values, and this time series was then compared to the original. The difference between the two signals is computed for different levels of sub-sampling. Results show that the error increases linearly with the sampling interval when the interval is less than 1 day. For intervals longer than a day, a sinusoidal component appears on top of the linear growth due to the diurnal variation of surface soil moisture. Thus, for example, the error with measurements every 4.5 days can be slightly less than the error with measurements every 2 days. Next, for a given sampling interval, this study evaluated the effect of the time of day at which measurements are made. When measurements are very frequent the time of acquisition does not matter, but when few measurements are available (sampling interval > 1 day), the time of acquisition can be important. It is shown that with daily measurements the error can double depending on the time of acquisition. This result is very sensitive to the phase of the sinusoidal variation of soil moisture. For example, in autumn for a given field with soil moisture ranging from 7.08% to 11.44% (mean and standard deviation being respectively 8.68% and 0.74%), daily measurements at 2 pm lead to a mean error of 0.47% v/v, while daily measurements at 9 am/pm produce a mean error of 0.24% v/v. The minimum of the sinusoid occurs every afternoon around 2 pm; after interpolation, measurements acquired at this time underestimate soil moisture, whereas measurements around 9 am/pm correspond to nodes of the sinusoid and hence represent the average soil moisture. These results concerning the frequency and timing of measurements can potentially drive the schedule of satellite image acquisitions over some fields.
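
    A minimal sketch of the sub-sampling and linear-interpolation error computation described above, run on a synthetic 30-minute soil-moisture series rather than the OzNet data:

```python
import numpy as np

def subsampling_error(series, dt_minutes, interval_hours):
    """RMS error from keeping one sample every `interval_hours` and linearly
    interpolating back to the original time grid."""
    t = np.arange(series.size) * dt_minutes / 60.0          # hours
    keep = np.arange(0, series.size, int(interval_hours * 60 / dt_minutes))
    reconstructed = np.interp(t, t[keep], series[keep])
    return np.sqrt(np.mean((reconstructed - series) ** 2))

# Synthetic soil-moisture series: slow dry-down, diurnal cycle, rain jumps.
rng = np.random.default_rng(6)
t_h = np.arange(0, 60 * 24, 0.5)                            # 60 days, 30-min steps
sm = 20 - 0.05 * t_h / 24 + 0.4 * np.sin(2 * np.pi * t_h / 24)
sm += np.cumsum(rng.random(t_h.size) < 0.002) * 0.8         # occasional rain events

for interval in (6, 24, 48, 108):                           # hours
    print(f"sampling every {interval/24:4.1f} d -> "
          f"RMS error {subsampling_error(sm, 30, interval):.3f} %vol")
```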

  19. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty has two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient needed for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment shows how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. In this experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is a significant sensitivity to water surface, slope, and width errors due to the sensitivity of bathymetry and roughness to measurement errors. Increasing the water-surface error above 10 cm leads to a corresponding sharp increase of errors in bathymetry and roughness. Increasing the slope error above 1.5 cm/km leads to significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above two experiments are based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
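
    The study's algorithm is a Bayesian MCMC inversion; as a much simpler hedged illustration of how height, slope and width errors propagate into discharge, the sketch below runs a Monte Carlo through a Manning-type relation with invented channel parameters and error levels (not Sacramento River values):

```python
import numpy as np

# Manning-type discharge for a wide rectangular channel:
#   Q = (1/n) * W * d**(5/3) * sqrt(S), with flow depth d = h - z0.
rng = np.random.default_rng(7)
n_mc = 100_000

n_man, z0 = 0.03, 0.0                 # roughness and bed elevation (assumed known here)
h, S, W = 4.0, 1e-4, 150.0            # true height (m), slope (-), width (m)
sig_h, sig_S, sig_W = 0.10, 1.5e-5, 0.10 * W     # ~10 cm, ~1.5 cm/km, ~10% of width

def manning_q(h, S, W):
    d = np.maximum(h - z0, 0.01)
    return (1.0 / n_man) * W * d ** (5.0 / 3.0) * np.sqrt(np.maximum(S, 1e-8))

q_true = manning_q(h, S, W)
q_mc = manning_q(h + sig_h * rng.standard_normal(n_mc),
                 S + sig_S * rng.standard_normal(n_mc),
                 W + sig_W * rng.standard_normal(n_mc))
print(f"relative discharge error (1-sigma): {100 * q_mc.std() / q_true:.1f}%")
```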

  20. Grazing Incidence Wavefront Sensing and Verification of X-Ray Optics Performance

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Rohrbach, Scott; Zhang, William W.

    2011-01-01

    Evaluation of interferometrically measured mirror metrology data and characterization of a telescope wavefront can be powerful tools in understanding the image characteristics of an x-ray optical system. In the development of the soft x-ray telescope for the International X-Ray Observatory (IXO), we have developed new approaches to support the telescope development process. Interferometric measurement of the optical components over all relevant spatial frequencies can be used to evaluate and predict the performance of an x-ray telescope. Typically, the mirrors are measured using a mount that minimizes mount- and gravity-induced errors. In the assembly and mounting process the shape of the mirror segments can change dramatically. We have developed wavefront sensing techniques suitable for x-ray optical components to aid us in the characterization and evaluation of these changes. Hartmann sensing of a telescope and its components is a simple method that can be used to evaluate low-order mirror surface errors and alignment errors. Phase retrieval techniques can also be used to assess and estimate the low-order axial errors of the primary and secondary mirror segments. In this paper we describe the mathematical foundation of our Hartmann and phase retrieval sensing techniques. We show how these techniques can be used in the evaluation and performance prediction process of x-ray telescopes.
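
    A hedged sketch of the basic Hartmann reduction idea (spot displacements give local wavefront slopes, then a low-order least-squares fit recovers alignment/figure terms); the geometry, aberration set and numbers are illustrative, not the flight-hardware procedure:

```python
import numpy as np

L = 1.0                                       # aperture-to-detector distance (m)
x, y = np.meshgrid(np.linspace(-0.5, 0.5, 7), np.linspace(-0.5, 0.5, 7))
x, y = x.ravel(), y.ravel()                   # Hartmann hole positions (normalized)

tip, tilt, defocus = 2e-6, -1e-6, 5e-6        # "true" coefficients for this demo
slope_x = tip + 2 * defocus * x               # gradient of W = tip*x + tilt*y + defocus*(x^2 + y^2)
slope_y = tilt + 2 * defocus * y
dx, dy = slope_x * L, slope_y * L             # simulated spot displacements

# Least-squares fit: stack x- and y-slope equations for the three unknowns.
A = np.block([[np.ones_like(x)[:, None], np.zeros_like(x)[:, None], 2 * x[:, None]],
              [np.zeros_like(y)[:, None], np.ones_like(y)[:, None], 2 * y[:, None]]])
b = np.concatenate([dx, dy]) / L              # measured displacements back to slopes
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered tip, tilt, defocus:", coeffs)   # ~[2e-6, -1e-6, 5e-6]
```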

  1. Performance evaluation of coherent free space optical communications with a double-stage fast-steering-mirror adaptive optics system depending on the Greenwood frequency.

    PubMed

    Liu, Wei; Yao, Kainan; Huang, Danian; Lin, Xudong; Wang, Liang; Lv, Yaowen

    2016-06-13

    The Greenwood frequency (GF) is influential in performance improvement for the coherent free space optical communications (CFSOC) system with a closed-loop adaptive optics (AO) unit. We analyze the impact of tilt and high-order aberrations on the mixing efficiency (ME) and bit-error-rate (BER) under different GF. A relation between the root-mean-square (RMS) value of the ME, the RMS of the tilt aberrations, and the GF is derived to estimate the volatility of the ME. Furthermore, a numerical simulation is applied to verify the theoretical analysis, and an experimental correction system is designed with a double-stage fast-steering mirror and a 97-element continuous-surface deformable mirror. The conclusions of this paper provide a reference for designing the AO system for the CFSOC system.

  2. Research on the Wire Network Signal Prediction Based on the Improved NNARX Model

    NASA Astrophysics Data System (ADS)

    Zhang, Zipeng; Fan, Tao; Wang, Shuqing

    It is difficult to accurately obtain the grid signal of a power system's high-voltage transmission lines during monitoring and repair. To solve this problem, the signal measured at a remote substation or in the laboratory is used to make a multipoint prediction that supplies the needed data; however, the power grid frequency signal obtained in this way is delayed. To address this, an improved NNARX network that can predict the frequency signal from multi-point data collected by a remote substation PMU is described in this paper. Because the error surface of the NNARX network is complicated, the Levenberg-Marquardt (L-M) algorithm is used to train the network. The simulation results show that the NNARX network has good prediction performance, providing accurate real-time data for field testing and maintenance.

  3. Differential Deposition for Surface Figure Corrections in Grazing Incidence X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Ramsey, Brian D.; Kilaru, Kiranmayee; Atkins, Carolyn; Gubarev, Mikhail V.; Broadway, David M.

    2015-01-01

    Differential deposition corrects the low- and mid-spatial-frequency deviations in the axial figure of Wolter-type grazing incidence X-ray optics. Figure deviations are one of the major contributors to the achievable angular resolution, so minimizing figure errors can significantly improve the imaging quality of X-ray optics. Material of varying thickness is selectively deposited, using DC magnetron sputtering, along the length of the optic to minimize figure deviations. Custom vacuum chambers have been built that can accommodate full-shell and segmented X-ray optics. Metrology data from preliminary corrections on a single meridian of a full-shell X-ray optic show an improvement of mid-spatial frequencies from 6.7 to 1.8 arcsec HPD. Efforts are in progress to correct full-shell and segmented optics and to verify the angular-resolution improvement with X-ray testing.

  4. Navigator alignment using radar scan

    DOEpatents

    Doerry, Armin W.; Marquette, Brandeis

    2016-04-05

    The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated therewith at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or the minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.

  5. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    PubMed

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.

  6. Influence of modulation frequency in rubidium cell frequency standards

    NASA Technical Reports Server (NTRS)

    Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.

    1983-01-01

    The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.

  7. Quantifying the Contributions of Environmental Parameters to Ceres Surface Net Radiation Error in China

    NASA Astrophysics Data System (ADS)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites from July 2007 to December 2007 in China. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, due to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. The total precipitable water has a weak influence on the Rn error even though the algorithm is highly sensitive to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.

  8. Strategy of restraining ripple error on surface for optical fabrication.

    PubMed

    Wang, Tan; Cheng, Haobo; Feng, Yunpeng; Tam, Honyuen

    2014-09-10

    The influence of the ripple error on imaging quality is effectively reduced by restraining the ripple height. A method based on the process parameters and the surface error distribution is designed in this paper to suppress the ripple height. The generating mechanism of the ripple error is analyzed using polishing theory with a uniform removal character. The relation between the processing parameters (removal functions, pitch of path, and dwell time) and the ripple error is discussed through simulations. With these, the strategy for diminishing the error is presented. A final process is designed and demonstrated on K9 work-pieces using the optimizing strategy with magnetorheological jet polishing. The form error on the surface is decreased from 0.216λ PV (λ=632.8 nm) and 0.039λ RMS to 0.03λ PV and 0.004λ RMS. The ripple error is restrained well at the same time, with the ripple height less than 6 nm on the final surface. Results indicate that these strategies are suitable for high-precision optical manufacturing.

  9. Control of deviations and prediction of surface roughness from micro machining of THz waveguides using acoustic emission signals

    NASA Astrophysics Data System (ADS)

    Griffin, James M.; Diaz, Fernanda; Geerling, Edgar; Clasing, Matias; Ponce, Vicente; Taylor, Chris; Turner, Sam; Michael, Ernest A.; Patricio Mena, F.; Bronfman, Leonardo

    2017-02-01

    By using acoustic emission (AE) it is possible to control deviations and surface quality during micro milling operations. Micro milling is used to manufacture a submillimetre waveguide, where micro machining is employed to achieve the required superior finish and geometrical tolerances. Submillimetre waveguide technology is used in deep space signal retrieval, where the highest detection efficiencies are needed; every possible signal loss in the receiver therefore has to be avoided and stringent tolerances achieved. With a sub-standard surface finish, in which the residual roughness becomes comparable with the electromagnetic skin depth, the signals travelling along the waveguides dissipate faster than with perfect surfaces, and the higher the radio frequency the more critical this becomes. The method of time-frequency analysis (STFT) is used to transform raw AE into more meaningful salient signal features (SF). This information was then correlated against the measured geometrical deviations and the onset of catastrophic tool wear. Such deviations can be offset using different AE signals (different deviations from subsequent tests) and fed back for a final spring cut, ensuring the geometrical accuracies are met. Geometrical differences can impact the required transfer of AE signals (changes in cut-off frequencies and diminished SNR at the interface), and therefore errors have to be minimised to within 1 μm. Rules based on both Classification and Regression Trees (CART) and Neural Networks (NN) were used to implement a simulation showing how such a control regime could act as a real-time controller, whether through corrective measures (via spring cuts) over several initial machining passes or through a micron cut introducing a level-plane measure that allows setup corrections (similar to a spirit level).

  10. Finding the position of tumor inhomogeneities in a gel-like model of a human breast using 3-D pulsed digital holography.

    PubMed

    Hernández-Montes, Maria del Socorro; Pérez-López, Carlos; Santoyo, Fernando Mendoza

    2007-01-01

    3-D pulsed digital holography is a noninvasive optical method used to measure the depth position of breast tumor tissue immersed in a semisolid gel model. A master gel without inhomogeneities is set to resonate at an 810 Hz frequency; then, an identically prepared gel with an inhomogeneity is interrogated with the same resonant frequency in the original setup. Comparatively, and using only an out-of-plane sensitive setup, gel surface displacement can be measured, evidencing an internal inhomogeneity. However, the depth position cannot be measured accurately, since the out-of-plane component contains contributions from in-plane surface displacements. With the information gathered, three sensitivity vectors can be obtained to separate the contributions from the x, y, and z vibration displacement components, individual displacement maps for the three orthogonal axes can be built, and the inhomogeneity's depth position can be accurately measured. Then, the displacement normal to the gel surface is used to find the depth profile and its cross section. Results from the optical data obtained are compared and correlated to the inhomogeneity's physically measured position. The depth position is found with an error smaller than 1%. The inhomogeneity and its position within the gel can be accurately found, making the method a promising noninvasive alternative to study mammary tumors.

  11. Study of the adsorption of sodium dodecyl sulfate (SDS) at the air/water interface: targeting the sulfate headgroup using vibrational sum frequency spectroscopy.

    PubMed

    Johnson, C Magnus; Tyrode, Eric

    2005-07-07

    The surface-sensitive technique vibrational sum frequency spectroscopy (VSFS) has been used to study the adsorption behaviour of SDS at the liquid/vapour interface of aqueous solutions, specifically targeting the sulfate headgroup stretches. In the spectral region extending from 980 to 1850 cm(-1), only the vibrations due to the SO(3) group were detectable. The fitted amplitudes for the symmetric SO(3) stretch observed at 1070 cm(-1) for the polarization combinations ssp and ppp were seen to follow the adsorption isotherm calculated from surface tension measurements. The orientation of the sulfate headgroup in the concentration range spanning from 1.0 mM to above the critical micellar concentration (c.m.c.) was observed to remain constant within experimental error, with the pseudo-C(3) axis close to the surface normal. Furthermore, the effect of increasing amounts of sodium chloride at SDS concentrations above the c.m.c. was also studied, showing an increase of approximately 12% in the fitted amplitude for the symmetric SO(3) stretch when increasing the ionic strength from 0 to 300 mM NaCl. Interestingly, the orientation of the SDS headgroup was also observed to remain constant within this concentration range and identical to the case without NaCl.

  12. Performance of Physical Examination Skills in Medical Students during Diagnostic Medicine Course in a University Hospital of Northwest China

    PubMed Central

    Li, Yan; Li, Na; Han, Qunying; He, Shuixiang; Bae, Ricard S.; Liu, Zhengwen; Lv, Yi; Shi, Bingyin

    2014-01-01

    This study was conducted to evaluate the performance of physical examination (PE) skills during our diagnostic medicine course and analyze the characteristics of the data collected to provide information for practical guidance to improve the quality of teaching. Seventy-two fourth-year medical students were enrolled in the study. All received an assessment of PE skills after receiving a 17-week formal training course and systematic teaching. Their performance was evaluated and recorded in detail using a checklist, which included 5 aspects of PE skills: examination techniques, communication and care skills, content items, appropriateness of examination sequence, and time taken. Error frequency and type were designated as the assessment parameters in the survey. The results showed that the distribution and the percentage in examination errors between male and female students and among the different body parts examined were significantly different (p<0.001). The average error frequency per student in females (0.875) was lower than in males (1.375) although the difference was not statistically significant (p = 0.167). The average error frequency per student in cardiac (1.267) and pulmonary (1.389) examinations was higher than in abdominal (0.867) and head, neck and nervous system examinations (0.917). Female students had a lower average error frequency than males in cardiac examinations (p = 0.041). Additionally, error in examination techniques was the highest type of error among the 5 aspects of PE skills irrespective of participant gender and assessment content (p<0.001). These data suggest that PE skills in cardiac and pulmonary examinations and examination techniques may be included in the main focus of improving the teaching of diagnostics in these medical students. PMID:25329685

  13. Performance of physical examination skills in medical students during diagnostic medicine course in a University Hospital of Northwest China.

    PubMed

    Li, Yan; Li, Na; Han, Qunying; He, Shuixiang; Bae, Ricard S; Liu, Zhengwen; Lv, Yi; Shi, Bingyin

    2014-01-01

    This study was conducted to evaluate the performance of physical examination (PE) skills during our diagnostic medicine course and analyze the characteristics of the data collected to provide information for practical guidance to improve the quality of teaching. Seventy-two fourth-year medical students were enrolled in the study. All received an assessment of PE skills after receiving a 17-week formal training course and systematic teaching. Their performance was evaluated and recorded in detail using a checklist, which included 5 aspects of PE skills: examination techniques, communication and care skills, content items, appropriateness of examination sequence, and time taken. Error frequency and type were designated as the assessment parameters in the survey. The results showed that the distribution and the percentage in examination errors between male and female students and among the different body parts examined were significantly different (p<0.001). The average error frequency per student in females (0.875) was lower than in males (1.375) although the difference was not statistically significant (p = 0.167). The average error frequency per student in cardiac (1.267) and pulmonary (1.389) examinations was higher than in abdominal (0.867) and head, neck and nervous system examinations (0.917). Female students had a lower average error frequency than males in cardiac examinations (p = 0.041). Additionally, error in examination techniques was the highest type of error among the 5 aspects of PE skills irrespective of participant gender and assessment content (p<0.001). These data suggest that PE skills in cardiac and pulmonary examinations and examination techniques may be included in the main focus of improving the teaching of diagnostics in these medical students.

  14. Optimization of removal function in computer controlled optical surfacing

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Guo, Peiji; Ren, Jianfeng

    2010-10-01

    The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimization method, time-sharing synthesis of the removal function, is proposed to solve the problems of the removal function deviating far from a Gaussian shape and of the slow convergence of the removal-function error encountered in the planet-motion or translation-rotation modes. A detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each center in turn to complete a cycle. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in the six cycles to the total duration of the six cycles, and it depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under two different modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and the distribution of the time-sharing synthesis removal functions are obtained. The evaluation function used in the optimization is determined by an approaching factor, defined as the ratio of the material removal within the area covered by half of the polishing tool, measured from the polishing center, to the total material removal within the full polishing tool coverage area. After optimization, it is found that the removal function obtained by time-sharing synthesis is closer to the ideal Gaussian-type removal function than those obtained by the traditional methods. The time-sharing synthesis method provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce the intermediate- and high-frequency error.
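
    A minimal sketch of the time-sharing synthesis idea: the overall removal function is the total material removed at several offset tool positions divided by the total dwell time. The Gaussian-like unit removal function, the six centre offsets and the dwell times below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)                       # 1-D cut through the tool (mm)

def unit_removal(x, centre, width=3.0):
    """Removal rate (depth per unit time) of one tool position."""
    return np.exp(-((x - centre) / width) ** 2)

centres = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]          # six time-shared positions (mm)
dwell = np.array([1.0, 1.2, 1.5, 1.5, 1.2, 1.0])     # dwell time per position (s)

total_removal = sum(t * unit_removal(x, c) for c, t in zip(centres, dwell))
overall_tif = total_removal / dwell.sum()            # synthesized removal function

# The synthesized profile is broader and closer to a single smooth Gaussian
# than any individual offset function, which is the goal of the synthesis.
print("peak position:", x[np.argmax(overall_tif)], "peak rate:", overall_tif.max())
```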

  15. On-machine precision preparation and dressing of ball-headed diamond wheel for the grinding of fused silica

    NASA Astrophysics Data System (ADS)

    Chen, Mingjun; Li, Ziang; Yu, Bo; Peng, Hui; Fang, Zhen

    2013-09-01

    In the grinding of high quality fused silica parts with complex surfaces or structures using a small-diameter ball-headed metal-bonded diamond wheel, the existing dressing methods are not suitable for dressing the ball-headed diamond wheel precisely, because they are either on-line, in-process dressing methods, which may cause collision problems, or methods that do not consider the effects of tool setting error and electrode wear. An on-machine precision preparation and dressing method based on electrical discharge machining is proposed for the ball-headed diamond wheel. Using this method, a small-diameter cylindrical diamond wheel is machined into a hemispherical-headed form. The obtained ball-headed diamond wheel is dressed after several grinding passes to recover the geometrical accuracy and sharpness lost to wheel wear. A tool setting method based on a high-precision optical system is presented to reduce the wheel center setting error and dimension error. The effect of electrode tool wear is investigated by electrical dressing experiments, and an electrode tool wear compensation model is established based on the experimental results, which show that the wear ratio coefficient K' tends to become constant with increasing electrode feed length, with a mean value of 0.156. Grinding experiments on fused silica are carried out on a test bench to evaluate the performance of the preparation and dressing method. The experimental results show that the surface roughness of the finished workpiece is 0.03 μm. The effect of the grinding parameters and dressing frequency on the surface roughness is investigated based on the roughness measurements. This research provides an on-machine preparation and dressing method for ball-headed metal-bonded diamond wheels used in the grinding of fused silica, addressing both the tool setting problem and the effect of electrode tool wear.
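
    As a hedged sketch of a constant-ratio wear compensation (the paper's exact definition of K' and its compensation model may differ), one might command extra electrode feed so that the net engagement matches the target:

```python
# Assumption for this sketch only: K' is the ratio of electrode length worn
# away to electrode feed length, so only a fraction (1 - K') of the commanded
# feed actually engages the wheel; the compensated feed restores the intended
# engagement.
K_PRIME = 0.156          # mean wear ratio coefficient reported above

def compensated_feed(target_engagement_um: float, k: float = K_PRIME) -> float:
    """Electrode feed (um) needed so the net engagement equals the target."""
    return target_engagement_um / (1.0 - k)

for target in (5.0, 10.0, 20.0):   # illustrative dressing depths (um)
    print(f"target {target:5.1f} um -> command {compensated_feed(target):6.2f} um feed")
```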

  16. A Prototype Physical Database for Passive Microwave Retrievals of Precipitation over the US Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Ringerud, S.; Kummerow, C. D.; Peters-Lidard, C. D.

    2015-01-01

    An accurate understanding of the instantaneous, dynamic land surface emissivity is necessary for a physically based, multi-channel passive microwave precipitation retrieval scheme over land. In an effort to assess the feasibility of the physical approach for land surfaces, a semi-empirical emissivity model is applied for calculation of the surface component in a test area of the US Southern Great Plains. A physical emissivity model, using land surface model data as input, is used to calculate emissivity at the 10GHz frequency, combining contributions from the underlying soil and vegetation layers, including the dielectric and roughness effects of each medium. An empirical technique is then applied, based upon a robust set of observed channel covariances, extending the emissivity calculations to all channels. For calculation of the hydrometeor contribution, reflectivity profiles from the Tropical Rainfall Measurement Mission Precipitation Radar (TRMM PR) are utilized along with coincident brightness temperatures (Tbs) from the TRMM Microwave Imager (TMI), and cloud-resolving model profiles. Ice profiles are modified to be consistent with the higher frequency microwave Tbs. Resulting modeled top of the atmosphere Tbs show correlations to observations of 0.9, biases of 1K or less, root-mean-square errors on the order of 5K, and improved agreement over the use of climatological emissivity values. The synthesis of these models and data sets leads to the creation of a simple prototype Tb database that includes both dynamic surface and atmospheric information physically consistent with the land surface model, emissivity model, and atmospheric information.

  17. The mean sea surface height and geoid along the Geosat subtrack from Bermuda to Cape Cod

    NASA Astrophysics Data System (ADS)

    Kelly, Kathryn A.; Joyce, Terrence M.; Schubert, David M.; Caruso, Michael J.

    1991-07-01

    Measurements of near-surface velocity and concurrent sea level along an ascending Geosat subtrack were used to estimate the mean sea surface height and the Earth's gravitational geoid. Velocity measurements were made on three traverses of a Geosat subtrack within 10 days, using an acoustic Doppler current profiler (ADCP). A small bias in the ADCP velocity was removed by considering a mass balance for two pairs of triangles for which expendable bathythermograph measurements were also made. Because of the large curvature of the Gulf Stream, the gradient wind balance was used to estimate the cross-track component of geostrophic velocity from the ADCP vectors; this component was then integrated to obtain the sea surface height profile. The mean sea surface height was estimated as the difference between the instantaneous sea surface height from ADCP and the Geosat residual sea level, with mesoscale errors reduced by low-pass filtering. The error estimates were divided into a bias, tilt, and mesoscale residual; the bias was ignored because profiles were only determined within a constant of integration. The calculated mean sea surface height estimate agreed with an independent estimate of the mean sea surface height from Geosat, obtained by modeling the Gulf Stream as a Gaussian jet, within the expected errors in the estimates: the tilt error was 0.10 m, and the mesoscale error was 0.044 m. To minimize mesoscale errors in the estimate, the along-track geoid estimate was computed as the difference between the mean sea level from the Geosat Exact Repeat Mission and an estimate of the mean sea surface height, rather than as the difference between instantaneous profiles of sea level and sea surface height. In the critical region near the Gulf Stream the estimated error reduction using this method was about 0.07 m. Differences between the geoid estimate and a gravimetric geoid were not within the expected errors: the mesoscale difference was 0.24 m rms.

  18. Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.

    2017-08-01

    The single-event upset (SEU) vulnerability of common first- and second-order all-digital phase-locked loops (ADPLLs) is investigated through field-programmable gate array (FPGA) based fault injection experiments. SEUs in the highest-order pole of the loop filter and in fraction-based phase detectors (PDs) may result in the worst-case error response, i.e., limit cycle errors, often requiring a system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs result only in temporary frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.

  19. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of the tool influence function (TIF) under certain polishing conditions is proposed, based on the smoothing spectral function. The basic mathematical model for this method is established theoretically. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under certain polishing conditions. A comparative analysis with the previous method shows that the optimized method is simpler in form and obtains results of the same accuracy with less computing time.
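
    A hedged sketch of a smoothing-spectral-function style calculation, assuming the metric is the ratio of the surface PSD after a polishing run to the PSD before, evaluated per spatial-frequency band; this assumed definition and the synthetic profiles are illustrations only, not the paper's formulation or data:

```python
import numpy as np
from scipy.signal import welch

# SSF(f) = PSD_after(f) / PSD_before(f): values near 0 mean strong correction
# at that spatial frequency, values near 1 mean little smoothing.
rng = np.random.default_rng(8)
dx = 0.1                                          # sample spacing (mm)
n = 4096
freq_axis = np.fft.rfftfreq(n, d=dx)

# Synthetic "before" surface: 1/f-like figure error with random phases.
spectrum = (1.0 / np.maximum(freq_axis, freq_axis[1])) * np.exp(
    1j * 2 * np.pi * rng.random(freq_axis.size))
before = np.fft.irfft(spectrum, n=n)
# Synthetic "after" surface: low frequencies strongly corrected, high ones not.
attenuation = 0.1 + 0.9 / (1.0 + np.exp(-(freq_axis - 1.0) / 0.2))
after = np.fft.irfft(spectrum * attenuation, n=n)

f_b, psd_before = welch(before, fs=1 / dx, nperseg=1024)
f_a, psd_after = welch(after, fs=1 / dx, nperseg=1024)
ssf = psd_after / psd_before

for f_lo, f_hi in [(0.0, 0.5), (0.5, 2.0), (2.0, 5.0)]:    # cycles/mm bands
    band = (f_b > f_lo) & (f_b <= f_hi)
    print(f"{f_lo:.1f}-{f_hi:.1f} cyc/mm: mean SSF = {ssf[band].mean():.2f}")
```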

  20. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    NASA Astrophysics Data System (ADS)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.00038 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows many new applications beyond those possible with previous ultrasonic pulsed phase-locked loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
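
    As a hedged illustration of how swept-frequency phase maps to absolute thickness in a pulse-echo geometry (not the instrument's actual processing), the slope of the unwrapped phase versus frequency gives the round-trip delay; the velocity, frequency range and thickness below are illustrative:

```python
import numpy as np

# Round trip is 2*d, so the phase accumulated at frequency f is
# phi = 2*pi*f*(2*d/v), i.e. d = phi*v/(4*pi*f); sweeping f and unwrapping
# the phase resolves the integer-cycle ambiguity.
v = 5640.0                                  # approx. longitudinal velocity in borosilicate glass (m/s)
f = np.linspace(4.9e6, 5.1e6, 201)          # swept interrogation frequencies (Hz)
d_true = 10.02e-3                           # "true" thickness (m)

phi = np.unwrap(np.angle(np.exp(1j * 2 * np.pi * f * 2 * d_true / v)))
slope = np.polyfit(f, phi, 1)[0]            # d(phi)/d(f) = 4*pi*d/v
d_est = slope * v / (4 * np.pi)
print(f"estimated thickness: {d_est * 1e3:.4f} mm (true {d_true * 1e3:.4f} mm)")
```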

  1. Assessment of Oco-2 Target Mode Vulnerability Against Horizontal Variability of Surface Reflectivity Neglected in the Operational Forward Model

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Qu, Z.

    2014-12-01

    The main goal of NASA's OCO-2 mission is to perform XCO2 column measurements from space with an unprecedented (~1 ppm) precision and accuracy that will enable modelers to globally map CO2 sources and sinks. To achieve this goal, the mission is critically dependent on XCO2 product validation, which in turn is highly dependent on successful use of OCO-2's "target mode" data acquisition. In target mode, OCO-2 rotates in such a way that, as long as it is above the horizon, it looks at a Total Carbon Column Observing Network (TCCON) station equipped with a powerful Fourier transform spectrometer. TCCON stations measure, among other things, XCO2 by looking straight at the Sun. This translates into a far simpler forward model for TCCON than for OCO-2. In the ideal world, OCO-2's spectroscopic signals result from the cumulative gaseous absorption for one direct transmission of sunlight to the ground (as for TCCON), followed by one diffuse reflection and one direct transmission to the instrument, at a variety of viewing angles in target mode. In the real world, all manner of multiple surface reflections and/or scatterings contribute to the signal. In the idealized world of the OCO-2 operational forward model (used in nadir, glint and target modes), the horizontal variability of the scattering atmosphere and reflecting surface is ignored, leading to the adoption of a 1D vector radiative transfer (vRT) model. This is the source of forward model error that we are investigating, with a focus on target mode. In principle, atmospheric variability in the horizontal plane, largely due to clouds, can be avoided by careful screening. Also, it is straightforward to account for angular variability of the surface reflection model in the 1D vRT framework. But it is not clear how unavoidable horizontal variations of the surface reflectivity affect the OCO-2 signal, even if the reflection were isotropic (Lambertian). To characterize this OCO-2 "adjacency" effect, we use a simple surface variability model with a single spatial frequency in each direction and a single albedo contrast at a time, for realistic aerosol and gaseous profiles. This specific 3D RT error is compared with other documented forward model errors and translated into XCO2 error in ppm, for programmatic consideration and eventual mitigation.

  2. Active control of sound transmission through partitions composed of discretely controlled modules

    NASA Astrophysics Data System (ADS)

    Leishman, Timothy W.

    This thesis provides a detailed theoretical and experimental investigation of active segmented partitions (ASPs) for the control of sound transmission. ASPs are physically segmented arrays of interconnected acoustically and structurally small modules that are discretely controlled using electronic controllers. Theoretical analyses of the thesis first address physical principles fundamental to ASP modeling and experimental measurement techniques. Next, they explore specific module configurations, primarily using equivalent circuits. Measured normal-incidence transmission losses and related properties of experimental ASPs are determined using plane wave tubes and the two-microphone transfer function technique. A scanning laser vibrometer is also used to evaluate distributed transmitting surface vibrations. ASPs have the inherent potential to provide excellent active sound transmission control (ASTC) through lightweight structures, using very practical control strategies. The thesis analyzes several unique ASP configurations and evaluates their abilities to produce high transmission losses via global minimization of normal transmitting surface vibrations. A novel dual diaphragm configuration is shown to employ this strategy particularly well. It uses an important combination of acoustical actuation and mechano-acoustical segmentation to produce exceptionally high transmission loss (e.g., 50 to 80 dB) over a broad frequency range, including lower audible frequencies. Such performance is shown to be comparable to that produced by much more massive partitions composed of thick layers of steel or concrete and sand. The configuration uses only simple localized error sensors and actuators, permitting effective use of independent single-channel controllers in a decentralized format. This work counteracts the commonly accepted notion that active vibration control of partitions is an ineffective means of controlling sound transmission. With appropriate construction, actuation, and error sensing, ASPs can achieve high sound transmission loss through efficient global control of transmitting surface vibrations. This approach is applicable to a wide variety of source and receiving spaces, and to both near fields and far fields.

  3. Experimental measurement and theoretical modeling of microwave scattering and the structure of the sea surface influencing radar observations from space

    NASA Technical Reports Server (NTRS)

    Arnold, David; Kong, J. A.

    1992-01-01

    The electromagnetic (EM) bias ε is an error present in radar altimetry of the ocean due to the nonuniform reflection from wave troughs and crests. The EM bias is defined as the difference between the mean reflecting surface and the mean sea surface. A knowledge of the EM bias is necessary to permit error reduction in mean sea level measurements by satellite radar altimeters. Direct measurements of the EM bias were made from a Shell Offshore oil production platform in the Gulf of Mexico for a six month period during 1989 and 1990. Measurements of the EM bias were made at 5 and 14 GHz. During the EM bias experiments by Melville et al., a wire wave gauge was used to obtain the modulation of the high frequency waves by the low frequency waves. It became apparent that the EM bias was primarily caused by the modulation of the short waves. This was reported by Arnold et al. The EM bias is explained using physical optics scattering and an empirical model for the short wave modulation. Measurements of the short wave modulation using a wire wave gauge demonstrated a linear dependence of the normalized bias on the short wave modulation strength, M. The theory accurately predicts this dependence by the relation ε = -αMH_1/3, where H_1/3 is the significant wave height. The wind speed dependence of the normalized bias is explained by the dependence of the short wave modulation strength on the wind speed. While other effects such as long wave tilt and curvature will have an effect on the bias, the primary cause of the bias is shown to be due to the short wave modulation. This report will present a theory using physical optics scattering and an empirical model of the short wave modulation to estimate the EM bias. The estimated EM bias will be compared to measurements at C and Ku bands.
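    A minimal numerical illustration of the quoted relation ε = -αMH_1/3 follows; the coefficient α, the modulation strength, and the wave height are placeholder values, not measurements from the experiment.

    ```python
    def em_bias_m(alpha, modulation_strength, h_one_third_m):
        """EM bias from the empirical relation eps = -alpha * M * H_1/3.

        Dividing by H_1/3 gives the normalized bias, -alpha*M, which is linear
        in the short-wave modulation strength M, as reported. The inputs used
        below are purely illustrative values.
        """
        return -alpha * modulation_strength * h_one_third_m

    eps = em_bias_m(alpha=0.02, modulation_strength=1.2, h_one_third_m=2.5)
    print(f"EM bias: {eps*100:.1f} cm, normalized bias: {eps/2.5:.3f}")
    ```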

  4. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
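    The sketch below illustrates, under simplifying assumptions, the two ingredients named in the abstract: estimating the channel gain by removing a known pilot in the frequency domain and applying per-subcarrier MMSE weights. It is a generic single-user OFDM-style example, not the paper's orthogonal MC DS-CDMA derivation.

    ```python
    import numpy as np

    def mmse_fde_weights(channel_gains, noise_variance, signal_power=1.0):
        """Per-subcarrier MMSE frequency-domain equalization weights.

        w_k = conj(H_k) / (|H_k|^2 + sigma^2/Es); this is the standard
        single-user MMSE-FDE weight, used here as a stand-in for the paper's
        multi-user derivation.
        """
        H = np.asarray(channel_gains, dtype=complex)
        return np.conj(H) / (np.abs(H)**2 + noise_variance / signal_power)

    # Toy frequency-selective channel and pilot-based channel estimate
    rng = np.random.default_rng(0)
    H_true = (rng.normal(size=8) + 1j*rng.normal(size=8)) / np.sqrt(2)
    pilot = np.ones(8, dtype=complex)      # known pilot, unit modulus
    received = H_true * pilot              # noiseless for brevity
    H_est = received / pilot               # "removing the pilot modulation"
    w = mmse_fde_weights(H_est, noise_variance=0.1)
    print(np.round(w * H_true, 3))         # close to 1 on strong subcarriers
    ```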

  5. Effects of tRNA modification on translational accuracy depend on intrinsic codon-anticodon strength.

    PubMed

    Manickam, Nandini; Joshi, Kartikeya; Bhatt, Monika J; Farabaugh, Philip J

    2016-02-29

    Cellular health and growth require protein synthesis to be both efficient, to ensure sufficient production, and accurate, to avoid producing defective or unstable proteins. The background misreading error frequency of individual tRNAs is as low as 2 × 10^-6 per codon but is codon-specific, with some error frequencies above 10^-3 per codon. Here we test the effect on error frequency of blocking post-transcriptional modifications of the anticodon loops of four tRNAs in Escherichia coli. We find two types of responses to removing modification. Blocking modification of tRNA(UUC)(Glu) and tRNA(QUC)(Asp) increases errors, suggesting that the modifications act at least in part to maintain accuracy. Blocking even identical modifications of tRNA(UUU)(Lys) and tRNA(QUA)(Tyr) has the opposite effect of decreasing errors. One explanation could be that the modifications play opposite roles in modulating misreading by the two classes of tRNAs. Given available evidence that modifications help preorder the anticodon to allow it to recognize the codons, however, the simpler explanation is that unmodified 'weak' tRNAs decode too inefficiently to compete against cognate tRNAs that normally decode target codons, which would reduce the frequency of misreading. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Monitoring of deep brain temperature in infants using multi-frequency microwave radiometry and thermal modelling.

    PubMed

    Han, J W; Van Leeuwen, G M; Mizushina, S; Van de Kamer, J B; Maruyama, K; Sugiura, T; Azzopardi, D V; Edwards, A D

    2001-07-01

    In this study we present a design for a multi-frequency microwave radiometer aimed at prolonged monitoring of deep brain temperature in newborn infants and suitable for use during hypothermic neural rescue therapy. We identify appropriate hardware to measure brightness temperature and evaluate the accuracy of the measurements. We describe a method to estimate the tissue temperature distribution from measured brightness temperatures which uses the results of numerical simulations of the tissue temperature as well as the propagation of the microwaves in a realistic detailed three-dimensional infant head model. The temperature retrieval method is then used to evaluate how the statistical fluctuations in the measured brightness temperatures limit the confidence interval for the estimated temperature: for an 18 degrees C temperature differential between cooled surface and deep brain we found a standard error in the estimated central brain temperature of 0.75 degrees C. Evaluation of the systematic errors arising from inaccuracies in model parameters showed that realistic deviations in tissue parameters have little impact compared to uncertainty in the thickness of the bolus between the receiving antenna and the infant's head or in the skull thickness. This highlights the need to pay particular attention to these latter parameters in future practical implementation of the technique.

  7. Research on the magnetorheological finishing (MRF) technology with dual polishing heads

    NASA Astrophysics Data System (ADS)

    Huang, Wen; Zhang, Yunfei; He, Jianguo; Zheng, Yongcheng; Luo, Qing; Hou, Jing; Yuan, Zhigang

    2014-08-01

    Magnetorheological finishing (MRF) is a key polishing technique capable of rapidly converging to the required surface figure. To overcome the limitations of conventional single-polishing-head MRF, a dual-polishing-head MRF technology was studied and an 8-axis dual-polishing-head MRF machine was developed. The machine is able to manufacture large-aperture optics with high figure accuracy. The large polishing head is suited to polishing large-aperture optics, controlling long-spatial-wavelength structures, and correcting low- to mid-frequency errors with high removal rates, while the small polishing head has advantages in manufacturing small-aperture optics, controlling short-spatial-wavelength structures, correcting mid- to high-frequency errors, and removing material at the nanoscale. The material removal characteristics and figure correction ability of both the large and the small polishing head were studied, and a stable, valid removal function and an ultra-precision flat sample were obtained with each. After a single polishing iteration using the small polishing head, the figure error over the central 45 mm of a 50 mm diameter plano optic was improved from 0.21λ to 0.08λ PV (RMS 0.053λ to 0.015λ). After three polishing iterations using the large polishing head, the figure error over 410 mm × 410 mm of a 430 mm × 430 mm plano optic was improved from 0.40λ to 0.10λ PV (RMS 0.068λ to 0.013λ). These results show that the dual-polishing-head MRF machine has not only good material removal stability but also excellent figure correction capability.
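    The PV and RMS figure-error metrics quoted above can be computed from a residual surface map as in the sketch below; the sample map is synthetic and only illustrates the metric definitions, not the reported results.

    ```python
    import numpy as np

    def figure_error_metrics(surface_map_waves):
        """Peak-to-valley (PV) and root-mean-square (RMS) figure error of a
        residual surface map expressed in waves; NaNs mark points outside the
        clear aperture."""
        z = np.asarray(surface_map_waves, dtype=float)
        valid = z[~np.isnan(z)]
        pv = valid.max() - valid.min()
        rms = np.sqrt(np.mean((valid - valid.mean())**2))
        return pv, rms

    # Illustrative residual map: low-order error plus small mid-frequency ripple
    y, x = np.mgrid[-1:1:200j, -1:1:200j]
    z = 0.05*(x**2 + y**2) + 0.005*np.sin(40*x)
    z[x**2 + y**2 > 1] = np.nan            # keep a circular clear aperture
    print("PV = %.3f waves, RMS = %.4f waves" % figure_error_metrics(z))
    ```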

  8. Nonlinear analysis and dynamic compensation of stylus scanning measurement with wide range

    NASA Astrophysics Data System (ADS)

    Hui, Heiyang; Liu, Xiaojun; Lu, Wenlong

    2011-12-01

    Surface topography is an important geometrical feature of a workpiece that influences its quality and functions such as friction, wear, lubrication and sealing. Precise measurement of surface topography is fundamental for characterizing and assuring product quality. The stylus scanning technique is a widely used method for surface topography measurement and is regarded as the international standard method for 2-D surface characterization. Surface topography, including the primary profile, waviness and roughness, can usually be measured precisely and efficiently by this method. However, when a curved surface is measured by stylus scanning, a nonlinear error is unavoidable: the horizontal position of the actual measured point differs from the commanded sampling point, and the transformation from the vertical displacement of the stylus tip to the angular displacement of the stylus arm is nonlinear; this error increases with the measuring range. In this paper, a wide-range stylus scanning measurement system based on the cylindrical grating interference principle is constructed, the origins of the nonlinear error are analyzed, an error model is established and a solution to decrease the nonlinear error is proposed, through which the error in the collected data is dynamically compensated.
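    A minimal sketch of the arc-motion nonlinearity and a first-order compensation is given below, assuming a simple pivoted-arm geometry (tip height z = L sin θ, horizontal shift L(1 - cos θ)); the geometry and numbers are assumptions for illustration, not the error model developed in the paper.

    ```python
    import numpy as np

    def compensate_stylus_arc(x_sample, z_measured, arm_length):
        """First-order compensation of the arc-motion nonlinearity of a pivoted
        stylus arm (assumed geometry: tip height z = L*sin(theta)).

        The tip also shifts horizontally by L*(1 - cos(theta)), so each height
        actually belongs to a slightly different abscissa than the commanded
        sampling point. Returns corrected (x, z) pairs.
        """
        theta = np.arcsin(np.asarray(z_measured) / arm_length)
        x_true = np.asarray(x_sample) - arm_length * (1.0 - np.cos(theta))
        return x_true, z_measured

    x = np.linspace(0.0, 10.0, 11)          # mm, commanded sampling points
    z = 0.5 * np.sin(x / 3.0)               # mm, a gently curved surface
    xc, zc = compensate_stylus_arc(x, z, arm_length=60.0)
    print(np.round(x - xc, 5))              # horizontal error grows with |z|
    ```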

  9. Dynamically tuned vibratory micromechanical gyroscope accelerometer

    NASA Astrophysics Data System (ADS)

    Lee, Byeungleul; Oh, Yong-Soo; Park, Kyu-Yeon; Ha, Byeoungju; Ko, Younil; Kim, Jeong-gon; Kang, Seokjin; Choi, Sangon; Song, Ci M.

    1997-11-01

    A comb-driven vibratory micro-gyroscope, which utilizes dynamically tunable resonant modes for higher rate sensitivity without acceleration error, has been developed and analyzed. Surface micromachining technology is used to fabricate the gyroscope, which has a vibrating part of 400 × 600 micrometers, with a 6-mask process; the polysilicon structural layer is deposited by LPCVD at 625 degrees C. The gyroscope and the interface electronics are housed in a hermetically sealed vacuum package to obtain a low vibrational damping condition. The gyroscope is designed with a folded-beam structure to be driven parallel to the substrate by electrostatic forces and to respond to Coriolis forces in the vertical direction. In this scheme, the resonant frequency of the driving mode is located below that of the sensing mode, so the sensing mode can be adjusted through the negative-stiffness effect by applying an inter-plate voltage, tuning the vibration modes for higher rate sensitivity. Unfortunately, such a micromechanical vibratory gyroscope is also sensitive to vertical acceleration, especially when the stiffness of the vibrating structure is low in order to detect a very small Coriolis force. In this study, we distinguished the rate output from the acceleration error with a phase-sensitive synchronous demodulator and devised a feedback loop that maintains the resonant frequency of the vertical sensing mode by varying the inter-plate tuning voltage according to the acceleration output. Therefore, this gyroscope has a high rate sensitivity without an acceleration error, and it can also be used as a resonant accelerometer. The gyroscope was tested on a rotational rate table with the resonant frequencies separated by 50 Hz using the dynamic tuning feedback loop. A self-sustained oscillating loop applies a dc 2 V + ac 30 mVpk driving voltage to the drive electrodes. The gyroscope achieves 0.1 deg/s resolution, 50 Hz bandwidth, and 1.3 mV/(deg/s) sensitivity.

  10. Spiral-bevel geometry and gear train precision

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Coy, J. J.

    1983-01-01

    A new approach to the determination of surface principal curvatures and directions is proposed. Direct relationships between the principal curvatures and directions of the tool surface and those of the generated gear surface are obtained, so the principal curvatures and directions of the gear-tooth surface can be determined without using the complicated equations of these surfaces. A general theory of the gear-train kinematical errors induced by manufacturing and assembly errors is discussed. Two methods for determining the train kinematical errors are worked out: (1) with the aid of a computer, and (2) with an approximate method. Results from noise and vibration measurements conducted on a helicopter transmission are used to illustrate the principles contained in the theory of kinematic errors.

  11. Dual-polarization characteristics of the radar ocean return in the presence of rain

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Kumagai, H.; Kozu, T.

    1992-01-01

    Experimental data are presented on the polarimetric and dual-wavelength characteristics of the ocean surface in the presence of rain. To explain a portion of the variability observed in scatter plots under rain conditions, a storm model is used that incorporates measured drop size distributions. The fairly large variability indicates that effects of drop size distribution and the presence of partially melted particles can introduce a significant error in the estimate of attenuation. This effect is especially significant in the case of a 10-GHz radar under high rain rates. A surface reference method at this frequency will tend to overestimate the rain attenuation unless melting layer attenuation is properly taken into account. Observations of the cross-polarization return in stratiform rain over an ocean surface show three distinct components. Two of these correspond to aspherical, nonaligned particles in the melting layer seen in the direct and mirror-image returns. The remaining part depends both on the off-nadir depolarization by the surface and on the rain medium. A possible mechanism for this latter effect is the bistatic scattering from the rain to the surface.

  12. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surface-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
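    For orientation, the sketch below evaluates a two-dimensional CV error surface over the two cost/regularization parameters by brute-force grid search with scikit-learn's SVC; it is only a coarse stand-in for CV-SES, which computes the surface exactly from bi-parameter solution paths.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Imbalanced toy problem: class 1 is rare and misclassifying it is costly.
    X, y = make_classification(n_samples=300, weights=[0.9, 0.1], random_state=0)

    # Grid-based stand-in for the 2-D CV error surface over the two
    # regularization parameters, here encoded as C and a class-1 cost multiplier.
    C_grid = np.logspace(-2, 2, 9)
    cost_grid = np.logspace(0, 2, 9)
    cv_error = np.empty((len(C_grid), len(cost_grid)))
    for i, C in enumerate(C_grid):
        for j, c1 in enumerate(cost_grid):
            clf = SVC(kernel="rbf", C=C, class_weight={0: 1.0, 1: c1})
            cv_error[i, j] = 1.0 - cross_val_score(clf, X, y, cv=5).mean()

    i, j = np.unravel_index(np.argmin(cv_error), cv_error.shape)
    print(f"min CV error {cv_error[i, j]:.3f} at C={C_grid[i]:.3g}, cost={cost_grid[j]:.3g}")
    ```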

  13. Optical voltage reference

    DOEpatents

    Rankin, Richard; Kotter, Dale

    1994-01-01

    An optical voltage reference for providing an alternative to a battery source. The optical reference apparatus provides a temperature stable, high precision, isolated voltage reference through the use of optical isolation techniques to eliminate current and impedance coupling errors. Pulse rate frequency modulation is employed to eliminate errors in the optical transmission link while phase-lock feedback is employed to stabilize the frequency to voltage transfer function.

  14. Study of Frequency of Errors and Areas of Weaknesses in Business Communications Classes at Kapiolani Community College.

    ERIC Educational Resources Information Center

    Uehara, Soichi

    This study was made to determine the most prevalent errors, areas of weakness, and their frequency in the writing of letters so that a course in business communications classes at Kapiolani Community College (Hawaii) could be prepared that would help students learn to write effectively. The 55 participating students were divided into two groups…

  15. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio have caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce large uncertainty on the present state and future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux errors when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane flux estimates obtained for 2005 gives a good order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At continental scale, transport and modelling errors have proportionally larger impacts, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are under-estimated in current inversions, suggesting that transport and modelling errors should be represented more properly in future inversions.
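    The variational framework referred to above minimizes a Bayesian cost function of the generic form shown below; the sketch uses toy dimensions and random matrices and is not the PYVAR system, but it makes explicit where transport and modelling errors enter (the observation-error covariance R).

    ```python
    import numpy as np

    def variational_cost(x, x_prior, B, y_obs, H, R):
        """Generic variational cost J(x) = (x-xb)^T B^-1 (x-xb) + (Hx-y)^T R^-1 (Hx-y).

        x: flux vector, x_prior: prior fluxes, B: prior-error covariance,
        y_obs: (pseudo-)observations, H: linearized transport operator,
        R: observation-error covariance, which should also represent transport
        and modelling errors (the point made in the abstract).
        """
        dx = x - x_prior
        dy = H @ x - y_obs
        return dx @ np.linalg.solve(B, dx) + dy @ np.linalg.solve(R, dy)

    # Tiny illustration: 3 flux regions, 5 observations, analytic minimum.
    rng = np.random.default_rng(1)
    H = rng.normal(size=(5, 3)); x_true = np.array([10.0, 20.0, 5.0])
    y = H @ x_true + rng.normal(scale=0.5, size=5)
    xb = np.array([8.0, 25.0, 4.0]); B = np.eye(3) * 4.0; R = np.eye(5) * 0.25
    # The quadratic cost has the closed-form minimizer below (Gauss-Newton step).
    K = np.linalg.solve(B, np.eye(3)) + H.T @ np.linalg.solve(R, H)
    x_hat = np.linalg.solve(K, np.linalg.solve(B, xb) + H.T @ np.linalg.solve(R, y))
    print(x_hat, variational_cost(x_hat, xb, B, y, H, R))
    ```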

  16. Theoretical insight of adsorption thermodynamics of multifunctional molecules on metal surfaces

    NASA Astrophysics Data System (ADS)

    Loffreda, David

    2006-05-01

    Adsorption thermodynamics based on density functional theory (DFT) calculations are presented for the interaction of several multifunctional molecules with Pt and Au(1 1 0)-(1 × 2) surfaces. The Gibbs free adsorption energy explicitly depends on the adsorption internal energy, which is derived from the DFT adsorption energy, and on the vibrational entropy change during the chemisorption process. Zero-point energy (ZPE) corrections have been systematically applied to the adsorption energy. Moreover, the vibrational entropy change has been computed on the basis of DFT harmonic frequencies (gas and adsorbed phases, clean surfaces), extended to all the adsorbate vibrations and the metallic surface phonons. The phase diagrams plotted in realistic conditions of temperature (from 100 to 400 K) and pressure (0.15 atm) show that the ZPE-corrected adsorption energy is the main contribution. When strong chemisorption is considered on the Pt surface, the multifunctional molecules are adsorbed on the surface over the considered temperature range. In contrast, for weak chemisorption on the Au surface, the thermodynamic results should be interpreted cautiously. The systematic errors of the model (choice of the functional, configurational entropy and vibrational entropy) make the prediction of the adsorption-desorption phase boundaries difficult.
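    A small sketch of the harmonic ingredients mentioned above (zero-point energy and vibrational entropy from a set of harmonic frequencies) is given below; the frequency list and temperature are illustrative values, not DFT results from the study.

    ```python
    import numpy as np

    H_PLANCK = 6.62607015e-34   # J s
    C_LIGHT  = 2.99792458e10    # cm/s, so wavenumbers in cm^-1 can be used directly
    K_BOLTZ  = 1.380649e-23     # J/K

    def zpe_and_vib_entropy(wavenumbers_cm1, T):
        """Harmonic zero-point energy (J) and vibrational entropy (J/K) per
        adsorbate from a list of real harmonic frequencies in cm^-1."""
        nu = np.asarray(wavenumbers_cm1, dtype=float)
        e = H_PLANCK * C_LIGHT * nu          # mode energies h*c*nu_tilde
        zpe = 0.5 * np.sum(e)
        x = e / (K_BOLTZ * T)
        s_vib = K_BOLTZ * np.sum(x / np.expm1(x) - np.log1p(-np.exp(-x)))
        return zpe, s_vib

    # Illustrative (not computed) frequencies for an adsorbed diatomic: one
    # internal stretch plus five frustrated translations/rotations.
    freqs = [2200.0, 400.0, 380.0, 300.0, 60.0, 55.0]
    zpe, s = zpe_and_vib_entropy(freqs, T=300.0)
    print(f"ZPE = {zpe*6.242e18:.3f} eV, S_vib = {s*6.242e18*1000:.3f} meV/K")
    ```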

  17. Comparing different models of the development of verb inflection in early child Spanish.

    PubMed

    Aguado-Orea, Javier; Pine, Julian M

    2015-01-01

    How children acquire knowledge of verb inflection is a long-standing question in language acquisition research. In the present study, we test the predictions of some current constructivist and generativist accounts of the development of verb inflection by focusing on data from two Spanish-speaking children between the ages of 2;0 and 2;6. The constructivist claim that children's early knowledge of verb inflection is only partially productive is tested by comparing the average number of different inflections per verb in matched samples of child and adult speech. The generativist claim that children's early use of verb inflection is essentially error-free is tested by investigating the rate at which the children made subject-verb agreement errors in different parts of the present tense paradigm. Our results show: 1) that, although even adults' use of verb inflection in Spanish tends to look somewhat lexically restricted, both children's use of verb inflection was significantly less flexible than that of their caregivers, and 2) that, although the rate at which the two children produced subject-verb agreement errors in their speech was very low, this overall error rate hid a consistent pattern of error in which error rates were substantially higher in low frequency than in high frequency contexts, and substantially higher for low frequency than for high frequency verbs. These results undermine the claim that children's use of verb inflection is fully productive from the earliest observable stages, and are consistent with the constructivist claim that knowledge of verb inflection develops only gradually.

  18. An auxiliary frequency tracking system for general purpose lock-in amplifiers

    NASA Astrophysics Data System (ADS)

    Xie, Kai; Chen, Liuhao; Huang, Anfeng; Zhao, Kai; Zhang, Hanlu

    2018-04-01

    Lock-in amplifiers (LIAs) are designed to measure weak signals submerged by noise. This is achieved with a signal modulator to avoid low-frequency noise and a narrow-band filter to suppress out-of-band noise. In asynchronous measurement, even a slight frequency deviation between the modulator and the reference may lead to measurement error because the filter’s passband is not flat. Because many commercial LIAs are unable to track frequency deviations, in this paper we propose an auxiliary frequency tracking system. We analyze the measurement error caused by the frequency deviation and propose both a tracking method and an auto-tracking system. This approach requires only three basic parameters, which can be obtained from any general purpose LIA via its communications interface, to calculate the frequency deviation from the phase difference. The proposed auxiliary tracking system is designed as a peripheral connected to the LIA’s serial port, removing the need for an additional power supply. The test results verified the effectiveness of the proposed system; the modified commercial LIA (model SR-850) was able to track the frequency deviation and continuous drift. For step frequency deviations, a steady tracking error of less than 0.001% was achieved within three adjustments, and the worst tracking accuracy was still better than 0.1% for a continuous frequency drift. The tracking system can be used to expand the application scope of commercial LIAs, especially for remote measurements in which the modulation clock and the local reference are separated.
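    The core relation behind such tracking, estimating the frequency deviation from the drift of the measured phase over a known interval, can be sketched as below; this generic form is an assumption for illustration and is not the three-parameter method of the paper.

    ```python
    def frequency_deviation(phase_deg_t1, phase_deg_t2, dt_s):
        """Estimate the deviation between signal and reference frequency from the
        drift of the lock-in phase reading over a known interval:
            delta_f = (phi2 - phi1) / (360 deg * dt).
        """
        dphi = (phase_deg_t2 - phase_deg_t1 + 180.0) % 360.0 - 180.0  # wrap to +/-180
        return dphi / (360.0 * dt_s)

    # A 0.9 deg phase advance per second corresponds to a 2.5 mHz deviation.
    print(f"{frequency_deviation(10.0, 10.9, 1.0)*1e3:.2f} mHz")
    ```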

  19. Low speed phaselock speed control system. [for brushless dc motor

    NASA Technical Reports Server (NTRS)

    Fulcher, R. W.; Sudey, J. (Inventor)

    1975-01-01

    A motor speed control system for an electronically commutated brushless dc motor is provided which includes a phaselock loop with bidirectional torque control for locking the frequency output of a high density encoder, responsive to actual speed conditions, to a reference frequency signal, corresponding to the desired speed. The system includes a phase comparator, which produces an output in accordance with the difference in phase between the reference and encoder frequency signals, and an integrator-digital-to-analog converter unit, which converts the comparator output into an analog error signal voltage. Compensation circuitry, including a biasing means, is provided to convert the analog error signal voltage to a bidirectional error signal voltage which is utilized by an absolute value amplifier, rotational decoder, power amplifier-commutators, and an arrangement of commutation circuitry.

  20. Linear quadratic stochastic control of atomic hydrogen masers.

    PubMed

    Koppang, P; Leland, R

    1999-01-01

    Data are given showing the results of using the linear quadratic Gaussian (LQG) technique to steer remote hydrogen masers to Coordinated Universal Time (UTC) as given by the United States Naval Observatory (USNO) via two-way satellite time transfer and the Global Positioning System (GPS). Data also are shown from the results of steering a hydrogen maser to the real-time USNO mean. A general overview of the theory behind the LQG technique also is given. The LQG control is a technique that uses Kalman filtering to estimate time and frequency errors used as input into a control calculation. A discrete frequency steer is calculated by minimizing a quadratic cost function that is dependent on both the time and frequency errors and the control effort. Different penalties, chosen by the designer, are assessed by the controller as the time and frequency errors and control effort vary from zero. With this feature, controllers can be designed to force the time and frequency differences between two standards to zero, either more or less aggressively depending on the application.
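    A heavily simplified sketch of the LQG idea, a Kalman filter estimating the time and frequency errors and a frequency steer obtained by minimizing a quadratic cost, is given below; the clock model, noise levels, and penalty weights are illustrative assumptions, not the USNO configuration.

    ```python
    import numpy as np

    tau = 3600.0                              # s, control/measurement interval (assumed)
    A = np.array([[1.0, tau], [0.0, 1.0]])    # time/frequency error dynamics
    B = np.array([[tau], [1.0]])              # a frequency steer acts on both states
    Hm = np.array([[1.0, 0.0]])               # only the time difference is measured

    Qn = np.diag([1e-18, 1e-24])   # process noise (illustrative clock noise)
    Rn = np.array([[1e-18]])       # measurement noise (time-transfer noise)
    Qc = np.diag([1.0, 1e6])       # quadratic penalties on time and frequency error
    Rc = np.array([[1e9]])         # penalty on control effort

    # Discrete-time LQR gain by iterating the Riccati recursion to convergence.
    P = Qc.copy()
    for _ in range(500):
        P = Qc + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
    K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)

    rng = np.random.default_rng(2)
    x = np.array([50e-9, 2e-13])              # true initial time (s) and frequency error
    xh = np.zeros(2); Pk = np.eye(2) * 1e-12  # Kalman state estimate and covariance
    for k in range(48):
        u = float(-K @ xh)                    # quadratic-cost frequency steer
        x = A @ x + B[:, 0] * u + rng.multivariate_normal([0, 0], Qn)
        z = x[0] + rng.normal(scale=np.sqrt(Rn[0, 0]))
        # Kalman predict/update on the steered clock
        xh = A @ xh + B[:, 0] * u
        Pk = A @ Pk @ A.T + Qn
        S = Hm @ Pk @ Hm.T + Rn
        Kk = Pk @ Hm.T @ np.linalg.inv(S)
        xh = xh + Kk[:, 0] * (z - xh[0])
        Pk = (np.eye(2) - Kk @ Hm) @ Pk
    print(f"final time error ~ {x[0]*1e9:.2f} ns")
    ```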

  1. The Frequency Spectral Properties of Electrode-Skin Contact Impedance on Human Head and Its Frequency-Dependent Effects on Frequency-Difference EIT in Stroke Detection from 10Hz to 1MHz.

    PubMed

    Yang, Lin; Dai, Meng; Xu, Canhua; Zhang, Ge; Li, Weichen; Fu, Feng; Shi, Xuetao; Dong, Xiuzhen

    2017-01-01

    Frequency-difference electrical impedance tomography (fdEIT) reconstructs frequency-dependent changes of a complex impedance distribution. It has a potential application in acute stroke detection because there are significant differences in impedance spectra between stroke lesions and normal brain tissues. However, fdEIT suffers from the influences of electrode-skin contact impedance since contact impedance varies greatly with frequency. When using fdEIT to detect stroke, it is critical to know the degree of measurement errors or image artifacts caused by contact impedance. To our knowledge, no study has systematically investigated the frequency spectral properties of electrode-skin contact impedance on human head and its frequency-dependent effects on fdEIT used in stroke detection within a wide frequency band (10 Hz-1 MHz). In this study, we first measured and analyzed the frequency spectral properties of electrode-skin contact impedance on 47 human subjects' heads within 10 Hz-1 MHz. Then, we quantified the frequency-dependent effects of contact impedance on fdEIT in stroke detection in terms of the current distribution beneath the electrodes and the contact impedance imbalance between two measuring electrodes. The results showed that the contact impedance at high frequencies (>100 kHz) significantly changed the current distribution beneath the electrode, leading to nonnegligible errors in boundary voltages and artifacts in reconstructed images. The contact impedance imbalance at low frequencies (<1 kHz) also caused significant measurement errors. We conclude that the contact impedance has critical frequency-dependent influences on fdEIT and further studies on reducing such influences are necessary to improve the application of fdEIT in stroke detection.

  2. The Frequency Spectral Properties of Electrode-Skin Contact Impedance on Human Head and Its Frequency-Dependent Effects on Frequency-Difference EIT in Stroke Detection from 10Hz to 1MHz

    PubMed Central

    Zhang, Ge; Li, Weichen; Fu, Feng; Shi, Xuetao; Dong, Xiuzhen

    2017-01-01

    Frequency-difference electrical impedance tomography (fdEIT) reconstructs frequency-dependent changes of a complex impedance distribution. It has a potential application in acute stroke detection because there are significant differences in impedance spectra between stroke lesions and normal brain tissues. However, fdEIT suffers from the influences of electrode-skin contact impedance since contact impedance varies greatly with frequency. When using fdEIT to detect stroke, it is critical to know the degree of measurement errors or image artifacts caused by contact impedance. To our knowledge, no study has systematically investigated the frequency spectral properties of electrode-skin contact impedance on human head and its frequency-dependent effects on fdEIT used in stroke detection within a wide frequency band (10 Hz-1 MHz). In this study, we first measured and analyzed the frequency spectral properties of electrode-skin contact impedance on 47 human subjects’ heads within 10 Hz-1 MHz. Then, we quantified the frequency-dependent effects of contact impedance on fdEIT in stroke detection in terms of the current distribution beneath the electrodes and the contact impedance imbalance between two measuring electrodes. The results showed that the contact impedance at high frequencies (>100 kHz) significantly changed the current distribution beneath the electrode, leading to nonnegligible errors in boundary voltages and artifacts in reconstructed images. The contact impedance imbalance at low frequencies (<1 kHz) also caused significant measurement errors. We conclude that the contact impedance has critical frequency-dependent influences on fdEIT and further studies on reducing such influences are necessary to improve the application of fdEIT in stroke detection. PMID:28107524

  3. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
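    A rough sketch of coherence-gated combination of two respiration-modulated features is shown below using SciPy's spectral estimators; the synthetic signals, sampling rate, and coherence threshold are assumptions for illustration, not the paper's quadratic time-frequency method.

    ```python
    import numpy as np
    from scipy.signal import coherence, welch

    fs = 4.0                                  # Hz, PPG-feature sampling rate (assumed)
    t = np.arange(0, 240, 1/fs)
    f_resp = 0.25                             # Hz, "true" respiratory rate
    rng = np.random.default_rng(3)
    # Two respiration-driven PPG features (e.g. pulse amplitude and width series),
    # both containing the respiratory oscillation plus independent noise.
    feat1 = np.sin(2*np.pi*f_resp*t) + 0.5*rng.normal(size=t.size)
    feat2 = 0.8*np.sin(2*np.pi*f_resp*t + 0.3) + 0.5*rng.normal(size=t.size)

    f, Cxy = coherence(feat1, feat2, fs=fs, nperseg=256)
    _, P1 = welch(feat1, fs=fs, nperseg=256)
    _, P2 = welch(feat2, fs=fs, nperseg=256)

    # Only trust bands where the two features are (nearly) linearly coupled.
    mask = Cxy > 0.6
    combined = (P1 + P2) * mask               # crude combination of the two spectra
    f_est = f[np.argmax(combined)]
    print(f"estimated respiratory rate: {f_est*1000:.1f} mHz ({f_est*60:.1f} breaths/min)")
    ```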

  4. Low-cost FM oscillator for capacitance type of blade tip clearance measurement system

    NASA Technical Reports Server (NTRS)

    Barranger, John P.

    1987-01-01

    The frequency-modulated (FM) oscillator described is part of a blade tip clearance measurement system that meets the needs of a wide class of fans, compressors, and turbines. As a result of advancements in the technology of ultra-high-frequency operational amplifiers, the FM oscillator requires only a single low-cost integrated circuit. Its carrier frequency is 42.8 MHz when it is used with an integrated probe and connecting cable assembly consisting of a 0.81 cm diameter engine-mounted capacitance probe and a 61 cm long hermetically sealed coaxial cable. A complete circuit analysis is given, including amplifier negative resistance characteristics. An error analysis of environmentally induced effects is also derived, and an error-correcting technique is proposed. The oscillator can be calibrated in the static mode and has a negative peak frequency deviation of 400 kHz for a rotor blade thickness of 1.2 mm. High-temperature performance tests of the probe and 13 cm of the adjacent cable show good accuracy up to 600 C, the maximum permissible seal temperature. The major source of error is the residual FM oscillator noise, which produces a clearance error of + or - 10 microns at a clearance of 0.5 mm. The oscillator electronics accommodates the high rotor speeds associated with small engines, the signals from which may have frequency components as high as 1 MHz.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matanovic, Ivana; Atanassov, Plamen; Kiefer, Boris

    The structural equilibrium parameters, the adsorption energies, and the vibrational frequencies of the nitrogen molecule and the hydrogen atom adsorbed on the (111) surface of rhodium have been investigated using different generalized-gradient approximation (GGA), nonlocal correlation, meta-GGA, and hybrid functionals, namely, Perdew, Burke, and Ernzerhof (PBE), Revised-RPBE, vdW-DF, the Tao, Perdew, Staroverov, and Scuseria functional (TPSS), and the Heyd, Scuseria, and Ernzerhof (HSE06) functional in the plane wave formalism. Among the five tested functionals, the nonlocal vdW-DF and meta-GGA TPSS functionals are most successful in describing the energetics of dinitrogen physisorption on the Rh(111) surface, while the PBE functional provides the correct chemisorption energy for the hydrogen atom. It was also found that the TPSS functional produces the best vibrational spectra of the nitrogen molecule and the hydrogen atom on rhodium within the harmonic formalism, with errors of -2.62% and -1.1% for the N-N and Rh-H stretching frequencies. Thus, the TPSS functional was proposed as the method of choice for obtaining vibrational spectra of low-weight adsorbates on metallic surfaces within the harmonic approximation. At the anharmonic level, by decoupling the Rh-H and N-N stretching modes from the bulk phonons and by solving the one- and two-dimensional Schrödinger equations associated with the Rh-H, Rh-N, and N-N potential energies, we calculated the anharmonic corrections for the N-N and Rh-H stretching modes as -31 cm-1 and -77 cm-1 at the PBE level. Anharmonic vibrational frequencies calculated with the hybrid HSE06 functional are in best agreement with available experiments.

  6. CAUSES: On the Role of Surface Energy Budget Errors to the Warm Surface Air Temperature Error Over the Central United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, H. -Y.; Klein, S. A.; Xie, S.

    Many weather forecasting and climate models simulate a warm surface air temperature (T2m) bias over mid-latitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multi-model intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to T2m bias using a short-term hindcast approach with observations mainly from the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site during the period of April to August 2011. The present study examines the contribution of surface energy budget errors to the bias. All participating models simulate higher net shortwave and longwave radiative fluxes at the surface but there is no consistency on signs of biases in latent and sensible heat fluxes over the Central U.S. and ARM SGP. Nevertheless, biases in net shortwave and downward longwave fluxes, as well as surface evaporative fraction (EF) are the main contributors to T2m bias. Radiation biases are largely affected by cloud simulations, while EF is affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation is derived to further quantify the magnitudes of radiation and EF contributions to T2m bias. Our analysis suggests that radiation errors are always an important source of T2m error for long-term climate runs with EF errors either of equal or lesser importance. However, for the short-term hindcasts, EF errors are more important provided a model has a substantial EF bias.

  7. The detection error of thermal test low-frequency cable based on M sequence correlation algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin

    2018-04-01

    The low accuracy and low efficiency of off-line detection of faults in thermal-test low-frequency cables can be addressed with a cable fault detection system in which an FPGA generates an M-sequence code (a linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware of the on-line monitoring setup are discussed in this paper. Test data show that the detection error increases with the fault location along the thermal-test low-frequency cable.
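    A minimal sketch of the M-sequence correlation idea behind SSTDR is given below: generate a maximal-length sequence with an LFSR and locate the reflection by cross-correlation. The LFSR taps, chip rate, and propagation velocity are assumed values for illustration, not the parameters of the system described in the record.

    ```python
    import numpy as np

    def m_sequence(taps=(7, 6), length=127):
        """Maximal-length sequence from a Fibonacci LFSR.
        taps are the feedback stages of a 7-bit LFSR (x^7 + x^6 + 1, a known
        primitive polynomial); output is +/-1 chips."""
        state = [1] * 7
        seq = []
        for _ in range(length):
            seq.append(state[-1])
            fb = state[taps[0] - 1] ^ state[taps[1] - 1]
            state = [fb] + state[:-1]
        return 2 * np.array(seq) - 1

    v_prop = 0.7 * 3e8          # m/s, assumed propagation velocity in the cable
    chip_rate = 50e6            # chips/s, assumed

    pn = m_sequence()
    fault_delay_chips = 23      # simulated round-trip delay of the reflection
    echo = 0.4 * np.roll(pn, fault_delay_chips) \
         + 0.05 * np.random.default_rng(4).normal(size=pn.size)

    # Circular cross-correlation peaks at the round-trip delay of the reflection.
    corr = np.array([np.dot(echo, np.roll(pn, k)) for k in range(pn.size)])
    k_hat = int(np.argmax(corr))
    distance = 0.5 * k_hat / chip_rate * v_prop
    print(f"estimated delay {k_hat} chips -> fault at ~{distance:.1f} m")
    ```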

  8. Pediatric Anesthesiology Fellows' Perception of Quality of Attending Supervision and Medical Errors.

    PubMed

    Benzon, Hubert A; Hajduk, John; De Oliveira, Gildasio; Suresh, Santhanam; Nizamuddin, Sarah L; McCarthy, Robert; Jagannathan, Narasimhan

    2018-02-01

    Appropriate supervision has been shown to reduce medical errors in anesthesiology residents and other trainees across various specialties. Nonetheless, supervision of pediatric anesthesiology fellows has yet to be evaluated. The main objective of this survey investigation was to evaluate supervision of pediatric anesthesiology fellows in the United States. We hypothesized that there was an indirect association between perceived quality of faculty supervision of pediatric anesthesiology fellow trainees and the frequency of medical errors reported. A survey of pediatric fellows from 53 pediatric anesthesiology fellowship programs in the United States was performed. The primary outcome was the frequency of self-reported errors by fellows, and the primary independent variable was supervision scores. Questions also assessed barriers to effective faculty supervision. One hundred seventy-six pediatric anesthesiology fellows were invited to participate, and 104 (59%) responded to the survey. Nine of 103 (9%, 95% confidence interval [CI], 4%-16%) respondents reported performing procedures, on >1 occasion, for which they were not properly trained. Thirteen of 101 (13%, 95% CI, 7%-21%) reported making >1 mistake with negative consequences to patients, and 23 of 104 (22%, 95% CI, 15%-31%) reported >1 medication error in the last year. There were no differences in median (interquartile range) supervision scores between fellows who reported >1 medication error compared to those reporting ≤1 errors (3.4 [3.0-3.7] vs 3.4 [3.1-3.7]; median difference, 0; 99% CI, -0.3 to 0.3; P = .96). Similarly, there were no differences in scores for those who reported >1 mistake with negative patient consequences, 3.3 (3.0-3.7), compared with those who did not report mistakes with negative patient consequences (3.4 [3.3-3.7]; median difference, 0.1; 99% CI, -0.2 to 0.6; P = .35). We detected a high rate of self-reported medication errors in pediatric anesthesiology fellows in the United States. Interestingly, fellows' perception of quality of faculty supervision was not associated with the frequency of reported errors. The current results with a narrow CI suggest the need to evaluate other potential factors that can be associated with the high frequency of reported errors by pediatric fellows (eg, fatigue, burnout). The identification of factors that lead to medical errors by pediatric anesthesiology fellows should be a main research priority to improve both trainee education and best practices of pediatric anesthesia.

  9. The association between frequency of self-reported medical errors and anesthesia trainee supervision: a survey of United States anesthesiology residents-in-training.

    PubMed

    De Oliveira, Gildasio S; Rahmani, Rod; Fitzgerald, Paul C; Chang, Ray; McCarthy, Robert J

    2013-04-01

    Poor supervision of physician trainees can be detrimental not only to resident education but also to patient care and safety. Inadequate supervision has been associated with more frequent deaths of patients under the care of junior residents. We hypothesized that residents reporting more medical errors would also report lower quality of supervision scores than the ones with lower reported medical errors. The primary objective of this study was to evaluate the association between the frequency of medical errors reported by residents and their perceived quality of faculty supervision. A cross-sectional nationwide survey was sent to 1000 residents randomly selected from anesthesiology training departments across the United States. Residents from 122 residency programs were invited to participate, the median (interquartile range) per institution was 7 (4-11). Participants were asked to complete a survey assessing demography, perceived quality of faculty supervision, and perceived causes of inadequate perceived supervision. Responses to the statements "I perform procedures for which I am not properly trained," "I make mistakes that have negative consequences for the patient," and "I have made a medication error (drug or incorrect dose) in the last year" were used to assess error rates. Average supervision scores were determined using the De Oliveira Filho et al. scale and compared among the frequency of self-reported error categories using the Kruskal-Wallis test. Six hundred four residents responded to the survey (60.4%). Forty-five (7.5%) of the respondents reported performing procedures for which they were not properly trained, 24 (4%) reported having made mistakes with negative consequences to patients, and 16 (3%) reported medication errors in the last year having occurred multiple times or often. Supervision scores were inversely correlated with the frequency of reported errors for all 3 questions evaluating errors. At a cutoff value of 3, supervision scores demonstrated an overall accuracy (area under the curve) (99% confidence interval) of 0.81 (0.73-0.86), 0.89 (0.77-0.95), and 0.93 (0.77-0.98) for predicting a response of multiple times or often to the question of performing procedures for which they were not properly trained, reported mistakes with negative consequences to patients, and reported medication errors in the last year, respectively. Anesthesiology trainees who reported a greater incidence of medical errors with negative consequences to patients and drug errors also reported lower scores for supervision by faculty. Our findings suggest that further studies of the association between supervision and patient safety are warranted. (Anesth Analg 2013;116:892-7).

  10. A Learner Corpus-Based Study on Verb Errors of Turkish EFL Learners

    ERIC Educational Resources Information Center

    Can, Cem

    2017-01-01

    As learner corpora have presently become readily accessible, it is practicable to examine interlanguage errors and carry out error analysis (EA) on learner-generated texts. The data available in a learner corpus enable researchers to investigate authentic learner errors and their respective frequencies in terms of types and tokens as well as…

  11. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  12. Quasi-static shape adjustment of a 15 meter diameter space antenna

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Herstrom, Catherine L.; Edighoffer, Harold H.

    1987-01-01

    A 15 meter diameter Hoop-Column antenna has been analyzed and tested to study shape adjustment of the reflector surface. The Hoop-Column antenna concept employs pretensioned cables and mesh to produce a paraboloidal reflector surface. Fabrication errors and thermal distortions may significantly reduce surface accuracy and consequently degrade electromagnetic performance. Thus, the ability to adjust the surface shape is desirable. The shape adjustment algorithm consisted of finite element and least squares error analyses to minimize the surface distortions. Experimental results verified the analysis. Application of the procedure resulted in a reduction of surface error by 38 percent. Quasi-static shape adjustment has the potential for on-orbit compensation for a variety of surface shape distortions.
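    The least-squares step of such a shape adjustment can be sketched as below: given an influence-coefficient matrix relating actuator (cable) adjustments to surface motion, choose the adjustments that minimize the residual RMS error. The matrix and distortion here are synthetic placeholders, not the antenna's finite element model.

    ```python
    import numpy as np

    # Least-squares adjustment: choose adjustments u that minimize ||d0 + S u||^2,
    # where d0 is the measured surface distortion at the target points and S is an
    # influence-coefficient matrix (random here; in practice it would come from a
    # finite element model of the reflector).
    rng = np.random.default_rng(5)
    n_pts, n_actuators = 200, 24
    S = rng.normal(scale=0.1, size=(n_pts, n_actuators))      # mm surface per unit adjustment
    # Synthetic distortion built to be largely correctable, plus a small residual part.
    d0 = S @ rng.normal(size=n_actuators) + rng.normal(scale=0.05, size=n_pts)

    u, *_ = np.linalg.lstsq(S, -d0, rcond=None)               # best-fit adjustments
    residual = d0 + S @ u
    rms_before = np.sqrt(np.mean(d0**2))
    rms_after = np.sqrt(np.mean(residual**2))
    print(f"surface RMS error: {rms_before:.3f} -> {rms_after:.3f} mm "
          f"({100*(1 - rms_after/rms_before):.0f}% reduction)")
    ```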

  13. OMEGA SYSTEM SYNCHRONIZATION.

    DTIC Science & Technology

TIME SIGNALS, SYNCHRONIZATION (ELECTRONICS), NETWORKS, FREQUENCY, STANDARDS, RADIO SIGNALS, ERRORS, VERY LOW FREQUENCY, PROPAGATION, ACCURACY, ATOMIC CLOCKS, CESIUM, RADIO STATIONS, NAVAL SHORE FACILITIES

  14. Optical voltage reference

    DOEpatents

    Rankin, R.; Kotter, D.

    1994-04-26

    An optical voltage reference for providing an alternative to a battery source is described. The optical reference apparatus provides a temperature stable, high precision, isolated voltage reference through the use of optical isolation techniques to eliminate current and impedance coupling errors. Pulse rate frequency modulation is employed to eliminate errors in the optical transmission link while phase-lock feedback is employed to stabilize the frequency to voltage transfer function. 2 figures.

  15. The Accuracy of Two-Way Satellite Time Transfer Calibrations

    DTIC Science & Technology

    2005-01-01

Results from successive calibrations of Two-Way Satellite Time and Frequency Transfer (TWSTFT) operational equipment at ... USNO and five remote stations using portable TWSTFT equipment are analyzed for internal and external errors, finding an average random error of ±0.35 ... most accurate means of operational long-distance time transfer are Two-Way Satellite Time and Frequency Transfer (TWSTFT) and carrier-phase GPS

  16. An observational study of drug administration errors in a Malaysian hospital (study of drug administration errors).

    PubMed

    Chua, S S; Tea, M H; Rahman, M H A

    2009-04-01

    Drug administration errors were the second most frequent type of medication error, after prescribing errors; however, the latter were often intercepted, so administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observations of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that risk management protocols can be developed and implemented.

  17. Peripheral dysgraphia characterized by the co-occurrence of case substitutions in uppercase and letter substitutions in lowercase writing.

    PubMed

    Di Pietro, M; Schnider, A; Ptak, R

    2011-10-01

    Patients with peripheral dysgraphia due to impairment at the allographic level produce writing errors that affect the letter-form and are characterized by case confusions or the failure to write in a specific case or style (e.g., cursive). We studied the writing errors of a patient with pure peripheral dysgraphia who had entirely intact oral spelling, but produced many well-formed letter errors in written spelling. The comparison of uppercase print and lowercase cursive spelling revealed an uncommon pattern: while most uppercase errors were case substitutions (e.g., A - a), almost all lowercase errors were letter substitutions (e.g., n - r). Analyses of the relationship between target letters and substitution errors showed that errors were neither influenced by consonant-vowel status nor by letter frequency, though word length affected error frequency in lowercase writing. Moreover, while graphomotor similarity did not predict either the occurrence of uppercase or lowercase errors, visuospatial similarity was a significant predictor of lowercase errors. These results suggest that lowercase representations of cursive letter-forms are based on a description of entire letters (visuospatial features) and are not - as previously found for uppercase letters - specified in terms of strokes (graphomotor features). Copyright © 2010 Elsevier Srl. All rights reserved.

  18. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
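    A small sketch of fitting a single-dispersion Cole-Cole model to data sampled on linear and logarithmic frequency grids is given below; the model omits the static-conductivity term, and the "true" parameters and noise level are illustrative assumptions rather than measured tissue data.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def cole_cole(f, eps_inf, d_eps, tau, alpha):
        """Single-dispersion Cole-Cole model (static conductivity omitted for brevity)."""
        w = 2 * np.pi * f
        return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))

    def fit(freqs, data):
        def resid(p):
            model = cole_cole(freqs, *p)
            return np.concatenate([(model - data).real, (model - data).imag])
        p0 = [5.0, 40.0, 1e-11, 0.1]
        sol = least_squares(resid, p0, bounds=([1, 1, 1e-13, 0], [100, 1e4, 1e-8, 1]),
                            x_scale=[1, 10, 1e-12, 0.1])
        return sol.x

    # "True" tissue-like parameters (illustrative, not from any measurement)
    p_true = (4.0, 50.0, 8e-12, 0.1)
    f_lin = np.linspace(0.5e9, 20e9, 101)                        # linear frequency scale
    f_log = np.logspace(np.log10(0.5e9), np.log10(20e9), 101)    # logarithmic scale
    rng = np.random.default_rng(6)
    for name, f in [("linear", f_lin), ("log", f_log)]:
        data = cole_cole(f, *p_true) * (1 + 0.01 * rng.normal(size=f.size))
        p = fit(f, data)
        print(f"{name:6s}: eps_inf={p[0]:.2f}, d_eps={p[1]:.1f}, tau={p[2]:.2e}, alpha={p[3]:.3f}")
    ```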

  19. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  20. Improving Papanicolaou test quality and reducing medical errors by using Toyota production system methods.

    PubMed

    Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J

    2006-01-01

    The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.

  1. A bundle with a preformatted medical order sheet and an introductory course to reduce prescription errors in neonates.

    PubMed

    Palmero, David; Di Paolo, Ermindo R; Beauport, Lydie; Pannatier, André; Tolsa, Jean-François

    2016-01-01

    The objective of this study was to assess whether the introduction of a new preformatted medical order sheet coupled with an introductory course affected prescription quality and the frequency of errors during the prescription stage in a neonatal intensive care unit (NICU). A two-phase observational study consisting of two consecutive 4-month phases, pre-intervention (phase 0) and post-intervention (phase I), was conducted in an 11-bed NICU in a Swiss university hospital. The interventions consisted of the introduction of a new preformatted medical order sheet with explicit information supplied, coupled with a staff introductory course on appropriate prescription and medication errors. The main outcomes measured were formal aspects of prescription and the frequency and nature of prescription errors. Eighty-three and 81 patients were included in phase 0 and phase I, respectively. A total of 505 handwritten prescriptions in phase 0 and 525 in phase I were analysed. The rate of prescription errors decreased significantly from 28.9% in phase 0 to 13.5% in phase I (p < 0.05). Compared with phase 0, dose errors, name confusion and errors in frequency and rate of drug administration decreased in phase I, from 5.4 to 2.7% (p < 0.05), 5.9 to 0.2% (p < 0.05), 3.6 to 0.2% (p < 0.05), and 4.7 to 2.1% (p < 0.05), respectively. The rate of incomplete and ambiguous prescriptions decreased from 44.2 to 25.7% and 8.5 to 3.2% (p < 0.05), respectively. Inexpensive and simple interventions can improve the intelligibility of prescriptions and reduce medication errors. Medication errors are frequent in NICUs, and prescription is one of the most critical steps. CPOE systems reduce prescription errors, but they are not available everywhere. A preformatted medical order sheet coupled with an introductory course decreases medication errors in a NICU and is an inexpensive, readily implemented alternative to CPOE.

  2. Models and methods to characterize site amplification from a pair of records

    USGS Publications Warehouse

    Safak, E.

    1997-01-01

    The paper presents a tutorial review of the models and methods that are used to characterize site amplification from the pairs of rock- and soil-site records, and introduces some new techniques with better theoretical foundations. The models and methods discussed include spectral and cross-spectral ratios, spectral ratios for downhole records, response spectral ratios, constant amplification factors, parametric models, physical models, and time-varying filters. An extensive analytical and numerical error analysis of spectral and cross-spectral ratios shows that probabilistically cross-spectral ratios give more reliable estimates of site amplification. Spectral ratios should not be used to determine site amplification from downhole-surface recording pairs because of the feedback in the downhole sensor. Response spectral ratios are appropriate for low frequencies, but overestimate the amplification at high frequencies. The best method to be used depends on how much precision is required in the estimates.
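
    As a rough illustration of the difference between the two estimators reviewed here, the sketch below (an illustrative toy, not the paper's implementation; the site response and noise level are invented) compares an ordinary spectral ratio with an H1-type cross-spectral ratio for a synthetic rock/soil record pair in which the soil record is contaminated by additive noise.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(1)
    fs, n = 100.0, 60_000                      # 100 Hz sampling, 10-minute records
    rock = rng.standard_normal(n)              # "input" rock-site motion (white noise for simplicity)

    # Hypothetical site response: a resonant peak near 2 Hz.
    b, a = signal.iirpeak(w0=2.0, Q=5.0, fs=fs)
    soil = signal.lfilter(b, a, rock) + 0.3 * rng.standard_normal(n)   # soil record plus local noise

    f, Prr = signal.welch(rock, fs=fs, nperseg=4096)
    _, Pss = signal.welch(soil, fs=fs, nperseg=4096)
    _, Prs = signal.csd(rock, soil, fs=fs, nperseg=4096)

    spectral_ratio = np.sqrt(Pss / Prr)        # amplitude ratio from the two auto-spectra
    cross_ratio = np.abs(Prs) / Prr            # H1-type estimate, less biased by soil-site noise

    for f_check in (2.0, 10.0):
        i = np.argmin(np.abs(f - f_check))
        _, h = signal.freqz(b, a, worN=[f_check], fs=fs)
        print(f"{f_check:4.1f} Hz: true |H| = {np.abs(h[0]):.2f}, "
              f"spectral ratio = {spectral_ratio[i]:.2f}, cross-spectral ratio = {cross_ratio[i]:.2f}")
    ```

    Because the added noise is uncorrelated with the rock record, it inflates the auto-spectral ratio where the true amplification is small but largely cancels in the cross-spectral estimate.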

  3. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    NASA Astrophysics Data System (ADS)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

    Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc. are a common problem in environmental models including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem Thompson et al. (2006) introduced frequency dependent nudging where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging which makes the method more robust for non-linear biological models. Then we assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
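
    The idea of nudging only selected frequency bands can be caricatured offline with a toy time series. The sketch below is not the online filter of Thompson et al. (2006); it simply builds a correction from the mean and annual-cycle components of a synthetic model-observation misfit and leaves higher-frequency model variability untouched.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    days = np.arange(0, 10 * 365)                        # 10 years of daily values
    annual_phase = 2 * np.pi * days / 365.0

    truth = 10 + 3 * np.sin(annual_phase) + 0.8 * np.sin(2 * np.pi * days / 30)   # mean, annual, monthly
    model = truth - 2.0 - 1.0 * np.sin(annual_phase) + 0.5 * rng.standard_normal(days.size)  # biased model

    # Misfit between (pseudo-)observations and the model run.
    misfit = truth - model

    # Keep only the mean and a narrow band around the annual frequency (1/365 cycles per day).
    F = np.fft.rfft(misfit)
    freqs = np.fft.rfftfreq(days.size, d=1.0)
    keep = (freqs < 1.0 / (2 * 365)) | (np.abs(freqs - 1.0 / 365) < 1.0 / (5 * 365))
    correction = np.fft.irfft(np.where(keep, F, 0.0), n=days.size)

    # "Nudged" model: bias in the retained bands is corrected, higher-frequency variability is untouched.
    nudged = model + correction
    for name, series in [("raw model", model), ("band-nudged", nudged)]:
        print(f"{name:12s}: RMS error vs truth = {np.sqrt(np.mean((series - truth) ** 2)):.2f}")
    ```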

  4. Development of a 3-D Pen Input Device

    DTIC Science & Technology

    2008-09-01

    ...navigation frame of a unistroke, which can be written on any surface or in the air while correcting integration errors from the measurements of the IMU (Inertial Measurement Unit) of the...

  5. Effects of weather on the retrieval of sea ice concentration and ice type from passive microwave data

    NASA Technical Reports Server (NTRS)

    Maslanik, J. A.

    1992-01-01

    Effects of wind, water vapor, and cloud liquid water on ice concentration and ice type calculated from passive microwave data are assessed through radiative transfer calculations and observations. These weather effects can cause overestimates in ice concentration and more substantial underestimates in multi-year ice percentage by decreasing polarization and by decreasing the gradient between frequencies. The effect of surface temperature and air temperature on the magnitudes of weather-related errors is small for ice concentration and substantial for multiyear ice percentage. The existing weather filter in the NASA Team Algorithm addresses only weather effects over open ocean; the additional use of local open-ocean tie points and an alternative weather correction for the marginal ice zone can further reduce errors due to weather. Ice concentrations calculated using 37 versus 18 GHz data show little difference in total ice covered area, but greater differences in intermediate concentration classes. Given the magnitude of weather-related errors in ice classification from passive microwave data, corrections for weather effects may be necessary to detect small trends in ice covered area and ice type for climate studies.

  6. An automatic tooth preparation technique: A preliminary study

    NASA Astrophysics Data System (ADS)

    Yuan, Fusong; Wang, Yong; Zhang, Yaopeng; Sun, Yuchun; Wang, Dangxiao; Lyu, Peijun

    2016-04-01

    The aim of this study is to validate the feasibility and accuracy of a new automatic tooth preparation technique in dental healthcare. An automatic tooth preparation robotic device with three-dimensional motion planning software was developed, which controlled an ultra-short pulse laser (USPL) beam (wavelength 1,064 nm, pulse width 15 ps, output power 30 W, and repeat frequency rate 100 kHz) to complete the tooth preparation process. A total of 15 freshly extracted human intact first molars were collected and fixed into a phantom head, and the target preparation shapes of these molars were designed using customised computer-aided design (CAD) software. The accuracy of tooth preparation was evaluated using the Geomagic Studio and Imageware software, and the preparing time of each tooth was recorded. Compared with the target preparation shape, the average shape error of the 15 prepared molars was 0.05-0.17 mm, the preparation depth error of the occlusal surface was approximately 0.097 mm, and the error of the convergence angle was approximately 1.0°. The average preparation time was 17 minutes. These results validated the accuracy and feasibility of the automatic tooth preparation technique.

  7. An automatic tooth preparation technique: A preliminary study.

    PubMed

    Yuan, Fusong; Wang, Yong; Zhang, Yaopeng; Sun, Yuchun; Wang, Dangxiao; Lyu, Peijun

    2016-04-29

    The aim of this study is to validate the feasibility and accuracy of a new automatic tooth preparation technique in dental healthcare. An automatic tooth preparation robotic device with three-dimensional motion planning software was developed, which controlled an ultra-short pulse laser (USPL) beam (wavelength 1,064 nm, pulse width 15 ps, output power 30 W, and repeat frequency rate 100 kHz) to complete the tooth preparation process. A total of 15 freshly extracted human intact first molars were collected and fixed into a phantom head, and the target preparation shapes of these molars were designed using customised computer-aided design (CAD) software. The accuracy of tooth preparation was evaluated using the Geomagic Studio and Imageware software, and the preparing time of each tooth was recorded. Compared with the target preparation shape, the average shape error of the 15 prepared molars was 0.05-0.17 mm, the preparation depth error of the occlusal surface was approximately 0.097 mm, and the error of the convergence angle was approximately 1.0°. The average preparation time was 17 minutes. These results validated the accuracy and feasibility of the automatic tooth preparation technique.

  8. An integral formulation for wave propagation on weakly non-uniform potential flows

    NASA Astrophysics Data System (ADS)

    Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel

    2016-12-01

    An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.

  9. High-frequency fluctuations in Denmark Strait transport

    NASA Astrophysics Data System (ADS)

    Haine, T. W. N.

    2010-07-01

    Denmark Strait ocean current transport exhibits quasi-regular fluctuations immediately south of the sill with periods of 2-4 days. The transport variability is similar to the mean transport itself. Using a circulation model we explore prospects to monitor the fluctuations. The model has realistic transport and shows water leaving Denmark Strait in equivalent-barotropic cyclones that are nearly geostrophic and correlate with sea-surface height (SSH). Existing satellite altimeter observations of SSH have adequate space/time sampling to reconstruct the transport fluctuations using a regression developed from the model results, but measurement error overwhelms the signal. From the model results, the pending Surface Water and Ocean Topography (SWOT) wide-swath altimeter appears accurate enough, and with good-enough coverage, to allow the transport fluctuations to be reconstructed. Bottom pressure recorders at the exit of the Denmark Strait can also reproduce the transport variability.

  10. Measurements of the Casimir-Lifshitz force in fluids: The effect of electrostatic forces and Debye screening

    NASA Astrophysics Data System (ADS)

    Munday, J. N.; Capasso, Federico; Parsegian, V. Adrian; Bezrukov, Sergey M.

    2008-09-01

    We present detailed measurements of the Casimir-Lifshitz force between two gold surfaces (a sphere and a plate) immersed in ethanol and study the effect of residual electrostatic forces, which are dominated by static fields within the apparatus and can be reduced with proper shielding. Electrostatic forces are further reduced by Debye screening through the addition of salt ions to the liquid. Additionally, the salt leads to a reduction of the Casimir-Lifshitz force by screening the zero-frequency contribution to the force; however, the effect is small between gold surfaces at the measured separations and within experimental error. An improved calibration procedure is described and compared with previous methods. Finally, the experimental results are compared with Lifshitz’s theory and found to be consistent for the materials used in the experiment.

  11. Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging

    NASA Astrophysics Data System (ADS)

    Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ

    2015-01-01

    Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction-dependent gains. The antenna power pattern varies as a function of both time and frequency. The broadband, time-varying nature of the antenna power pattern, when not corrected, leads to gross errors in full-Stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution when the time and frequency dependence of the antenna power pattern is not accounted for. Simulations were conducted with the wideband full-Stokes power pattern of the Very Large Array (VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will also be significant in wideband full-polarisation mosaic imaging, and algorithms to correct them will be crucial for many upcoming large-area surveys (e.g., VLASS).

  12. An Ensemble Method for Spelling Correction in Consumer Health Questions

    PubMed Central

    Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina

    2015-01-01

    Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29 achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant in spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
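
    A minimal sketch of the edit-distance-plus-frequency component of such an ensemble is shown below (not the authors' system; the tiny frequency table is invented, and a real corrector would draw counts from a large corpus and add the contextual features described above).

    ```python
    from collections import Counter
    import string

    # Hypothetical unigram counts; a real system would use a large health-domain corpus.
    WORD_FREQ = Counter({"blood": 900, "pressure": 850, "diabetes": 700,
                         "medicine": 650, "headache": 400, "symptom": 300})

    def edits1(word):
        """All strings one edit (delete, transpose, replace, insert) away from word."""
        letters = string.ascii_lowercase
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [L + R[1:] for L, R in splits if R]
        transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
        replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
        inserts = [L + c + R for L, R in splits for c in letters]
        return set(deletes + transposes + replaces + inserts)

    def correct(word):
        """Most frequent in-vocabulary candidate within one edit, else the word itself."""
        if word in WORD_FREQ:
            return word
        candidates = edits1(word) & WORD_FREQ.keys()
        return max(candidates, key=WORD_FREQ.get) if candidates else word

    print(correct("presure"))    # -> pressure
    print(correct("diabetis"))   # -> diabetes
    ```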

  13. Assimilation of Satellite Sea Surface Salinity Fields: Validating Ocean Analyses and Identifying Errors in Surface Buoyancy Fluxes

    NASA Astrophysics Data System (ADS)

    Mehra, A.; Nadiga, S.; Bayler, E. J.; Behringer, D.

    2014-12-01

    Recently available satellite sea-surface salinity (SSS) fields provide an important new global data stream for assimilation into ocean forecast systems. In this study, we present results from assimilating satellite SSS fields from NASA's Aquarius mission into the National Oceanic and Atmospheric Administration's (NOAA) operational Modular Ocean Model version 4 (MOM4), the oceanic component of NOAA's operational seasonal-interannual Climate Forecast System (CFS). Experiments on the sensitivity of the ocean's overall state to different relaxation time periods were run to evaluate the importance of assimilating high-frequency (daily to mesoscale) and low-frequency (seasonal) SSS variability. Aquarius SSS data (Aquarius Data Processing System (ADPS) version 3.0), mapped daily fields at 1-degree spatial resolution, were used. Four model simulations were started from the same initial ocean condition and forced with NOAA's daily Climate Forecast System Reanalysis (CFSR) fluxes, using a relaxation technique to assimilate daily satellite sea surface temperature (SST) fields and selected SSS fields, where, except as noted, a 30-day relaxation period is used. The simulations are: (1) WOAMC, the reference case and similar to the operational setup, assimilating monthly climatological SSS from the 2009 NOAA World Ocean Atlas; (2) AQ_D, assimilating daily Aquarius SSS; (3) AQ_M, assimilating monthly Aquarius SSS; and (4) AQ_D10, assimilating daily Aquarius SSS, but using a 10-day relaxation period. The analysis focuses on the tropical Pacific Ocean, where the salinity dynamics are intense and dominated by El Niño interannual variability in the cold tongue region and by high-frequency precipitation events in the western Pacific warm pool region. To assess the robustness of results and conclusions, we also examine the results for the tropical Atlantic and Indian Oceans. Preliminary validation studies are conducted using observations, such as satellite sea-surface height (SSH) fields and in situ Argo buoy vertical profiles of temperature and salinity, to demonstrate that SSS data assimilation improves ocean state representation of the following variables: ocean heat content (0-300m), dynamic height (0-1000m), mixed-layer depth, sea surface height, and surface buoyancy fluxes.

  14. Seismic loading due to mining: Wave amplification and vibration of structures

    NASA Astrophysics Data System (ADS)

    Lokmane, N.; Semblat, J.-F.; Bonnet, G.; Driad, L.; Duval, A.-M.

    2003-04-01

    Vibrations induced by ground motion, whatever their source, can in some cases damage surface structures. The scientific studies allowing the analysis of this phenomenon are numerous and well established; however, they generally concern dynamic motion from real earthquakes. The goal of this work is to analyse the impact of mining-induced shaking on structures located at the surface. Methods for assessing the consequences of strong earthquakes are well established, whereas the methodology for estimating the consequences of moderate but frequent dynamic loadings is not well defined. Mining operations such as those of the "Houillères de Bassin du Centre et du Midi" (HBCM) generate vibrations that are regularly felt at the surface. Coal extraction produces shaking similar to that caused by earthquakes (same wave types and propagation laws) but of rather low magnitude; on the other hand, its recurrent character makes the vibrations more harmful. A three-dimensional model of a typical structure at the site was built. The first results show that the fundamental frequencies of this structure are compatible with the amplification measurements carried out on site. The motion amplification in the surface soil layers is then analyzed. The modeling is performed for the surface soil layers of Gardanne (Provence), where microtremor measurements were made. The analysis of the H/V spectral ratio (horizontal over vertical component) makes it possible to characterize the fundamental frequencies of the surface soil layers. This experiment also allows the local variation of the amplification induced by the topmost soil layers to be characterized. The numerical method considered to model seismic wave propagation and amplification at the site is the Boundary Element Method (BEM). The main advantage of the BEM is that it avoids the artificial mesh truncation required by the Finite Element Method in the case of an infinite medium; for dynamic problems, such truncations lead to spurious wave reflections and hence to numerical error in the solution. The experimental and numerical (BEM) results on surface motion amplification are then compared in terms of both amplitude and frequency range.
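
    The H/V spectral ratio mentioned above is straightforward to compute from a three-component microtremor record. The sketch below is illustrative only; it uses synthetic noise in place of real recordings and omits the windowing and smoothing choices that matter in practice.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(3)
    fs, n = 100.0, 120_000                     # 100 Hz, 20-minute synthetic record

    # Synthetic ambient noise; the horizontal components are coloured by a resonance near 2 Hz
    # to mimic a soil-layer fundamental frequency (purely illustrative).
    b, a = signal.iirpeak(w0=2.0, Q=8.0, fs=fs)
    north = signal.lfilter(b, a, rng.standard_normal(n)) + 0.2 * rng.standard_normal(n)
    east = signal.lfilter(b, a, rng.standard_normal(n)) + 0.2 * rng.standard_normal(n)
    vert = rng.standard_normal(n)

    f, Pnn = signal.welch(north, fs=fs, nperseg=8192)
    _, Pee = signal.welch(east, fs=fs, nperseg=8192)
    _, Pzz = signal.welch(vert, fs=fs, nperseg=8192)

    hv = np.sqrt(0.5 * (Pnn + Pee) / Pzz)      # quadratic-mean horizontal spectrum over vertical

    band = (f > 0.5) & (f < 10.0)
    f0 = f[band][np.argmax(hv[band])]
    print(f"estimated fundamental frequency ~ {f0:.2f} Hz")
    ```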

  15. Removing function model and experiments on ultrasonic polishing molding die

    NASA Astrophysics Data System (ADS)

    Huang, Qitai; Ni, Ying; Yu, Jingchi

    2010-10-01

    Low-temperature glass molding technology will be the main method for volume production of high-precision small- and medium-diameter optical cells in the future. Because the accuracy of the molding die affects the precision of the cells, developing high-precision molding dies is one of the most important parts of low-temperature glass molding technology. The molding die is manufactured from a hard and brittle metal alloy. Owing to the high vibration frequency and concentrated energy distribution of ultrasonic vibration, abrasive particles impact the hard alloy surface at very high speed and remove material from the workpiece; ultrasonic machining therefore allows controllable polishing of the hard alloy molding die and reduces its roughness and surface error. Unlike other ultrasonic fabrication methods, non-contact ultrasonic polishing is applied to the molding die here, meaning that the tool does not touch the workpiece during polishing. Driven by the ultrasonic vibration in the liquid medium, the abrasive particles oscillate around their equilibrium positions at high speed and frequency and impact the workpiece surface; their energy comes from the ultrasonic vibration rather than from a direct hammer blow of the tool. A disc-shaped vibrator undergoing simple harmonic vibration on an infinite plane surface is therefore taken as a model of the ultrasonic polishing condition. The sound field distribution on the plane surface is analyzed and calculated according to Huygens' principle, and the tool removal function is deduced from this distribution. A single-point ultrasonic polishing experiment is then carried out to verify the validity of the theory.

  16. Reason and Condition for Mode Kissing in MASW Method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Xia, Jianghai; Pan, Yudi; Xu, Yixian

    2016-05-01

    Identifying the correct modes of surface waves and picking accurate phase velocities are critical for obtaining an accurate S-wave velocity in the MASW method. In most cases, inversion is easily conducted by picking the dispersion curves corresponding to different surface-wave modes individually. For some models, however, neighboring surface-wave modes nearly meet (kiss) at certain frequencies. Around these frequencies they have very close roots, and the energy peak shifts from one mode to another. At current dispersion-image resolution, it is difficult to distinguish different modes when mode kissing occurs, which is common in near-surface earth models. This causes mode misidentification and, as a result, leads to overestimation of S-wave velocity and errors in depth. We define two mode types based on the characteristics of the vertical eigendisplacements calculated by the generalized reflection and transmission coefficient method. A Rayleigh-wave mode changes its type near the kissing points (osculation points); that is, one Rayleigh-wave mode can contain different mode types. This mode-type conversion causes the mode-kissing phenomenon in dispersion images. Numerical tests indicate that the mode-kissing phenomenon is model dependent and that strong S-wave velocity contrasts increase the possibility of mode kissing. Real-world data show that mode misidentification caused by mode kissing results in an overestimated S-wave velocity of the bedrock, a reminder to pay attention to this phenomenon even when some subsurface information is known.

  17. How important is mode-coupling in global surface wave tomography?

    NASA Astrophysics Data System (ADS)

    Mikesell, Dylan; Nolet, Guust; Voronin, Sergey; Ritsema, Jeroen; Van Heijst, Hendrik-Jan

    2016-04-01

    To investigate the influence of mode coupling for fundamental mode Rayleigh waves with periods between 64 and 174s, we analysed 3,505,902 phase measurements obtained along minor arc trajectories as well as 2,163,474 phases along major arcs. This is a selection of five frequency bands from the data set of Van Heijst and Woodhouse, extended with more recent earthquakes, that served to define upper mantle S velocity in model S40RTS. Since accurate estimation of the misfits (as represented by χ2) is essential, we used the method of Voronin et al. (GJI 199:276, 2014) to obtain objective estimates of the standard errors in this data set. We adapted Voronin's method slightly to avoid that systematic errors along clusters of raypaths can be accommodated by source corrections. This was done by simultaneously analysing multiple clusters of raypaths originating from the same group of earthquakes but traveling in different directions. For the minor arc data, phase errors at the one sigma level range from 0.26 rad at a period of 174s to 0.89 rad at 64s. For the major arcs, these errors are roughly twice as high (0.40 and 2.09 rad, respectively). In the subsequent inversion we removed any outliers that could not be fitted at the 3 sigma level in an almost undamped inversion. Using these error estimates and the theory of finite-frequency tomography to include the effects of scattering, we solved for models with χ2 = N (the number of data) both including and excluding the effect of mode coupling between Love and Rayleigh waves. We shall present some dramatic differences between the two models, notably near ocean-continent boundaries (e.g. California) where mode conversions are likely to be largest. But a sharpening of other features, such as cratons and high-velocity blobs in the oceanic domain, is also observed when mode coupling is taken into account. An investigation of the influence of coupling on azimuthal anisotropy is still under way at the time of writing of this abstract, but the results of this will be included in the presentation.

  18. High frequency observations of Iapetus on the Green Bank Telescope aided by improvements in understanding the telescope response to wind

    NASA Astrophysics Data System (ADS)

    Ries, Paul A.

    2012-05-01

    The Green Bank Telescope is a 100 m, fully steerable, single-dish radio telescope located in Green Bank, West Virginia, capable of making observations from meter wavelengths to 3 mm. However, observations at wavelengths shorter than 2 cm pose significant observational challenges due to pointing and surface errors. The first part of this thesis details efforts to combat wind-induced pointing errors, which reduce by half the amount of time available for high-frequency work on the telescope. The primary tool used for understanding these errors was an optical quadrant detector that monitored the motion of the telescope's feed arm. In this work, a calibration was developed that tied quadrant detector readings directly to telescope pointing error. These readings can be used for single-beam observations in order to determine if the telescope was blown off-source at some point due to wind. For observations with the 3 mm MUSTANG bolometer array, the majority (> ⅔) of wind-induced pointing errors can be removed during data reduction. Iapetus is a moon known for its stark albedo dichotomy, with the leading hemisphere only a tenth as bright as the trailing. In order to investigate this dichotomy, Iapetus was observed repeatedly with the GBT at wavelengths between 3 and 11 mm, with the original intention being to use the data to determine a thermal light-curve. Instead, the data showed a striking wavelength-dependent deviation from a black-body curve, with an emissivity as low as 0.3 at 9 mm. Numerous techniques were used to demonstrate that this low emissivity is a physical phenomenon rather than an observational one, including some using the quadrant detector to make sure the low emissivities are not due to being blown off source. This emissivity is among the lowest ever detected in the solar system, but it can be reproduced using physically realistic ice models that are also used to model microwave emission from snowpacks and glaciers on Earth. These models indicate that the trailing hemisphere contains a scattering layer of depth 100 cm and grain size of 1-2 mm. The leading hemisphere is shown to exhibit a thermal depth effect.

  19. Setup errors and effectiveness of Optical Laser 3D Surface imaging system (Sentinel) in postoperative radiotherapy of breast cancer.

    PubMed

    Wei, Xiaobo; Liu, Mengjiao; Ding, Yun; Li, Qilin; Cheng, Changhai; Zong, Xian; Yin, Wenming; Chen, Jie; Gu, Wendong

    2018-05-08

    Breast-conserving surgery (BCS) plus postoperative radiotherapy has become the standard treatment for early-stage breast cancer. The aim of this study was to compare the setup accuracy of optical surface imaging with the Sentinel system against the cone-beam computerized tomography (CBCT) imaging currently used in our clinic for patients who received BCS. Two optical surface scans were acquired, before and immediately after couch movement correction. The correlation between the setup errors determined by the initial optical surface scan and by CBCT was analyzed. The deviation of the second optical surface scan from the reference planning CT was taken as an estimate of the residual error of the new patient setup correction method. The consequences in terms of the planning target volume (PTV) margins necessary for treatment sessions without setup correction were also assessed. We analyzed 145 scans in 27 patients treated for early-stage breast cancer. The setup errors of skin-marker-based patient alignment determined by optical surface scan and by CBCT were correlated, and the residual setup errors determined by the optical surface scan after couch movement correction were reduced. Optical surface imaging provides a convenient method for improving the setup accuracy for breast cancer patients without unnecessary imaging dose.

  20. Quantitative comparison of tympanic membrane displacements using two optical methods to recover the optical phase

    NASA Astrophysics Data System (ADS)

    Santiago-Lona, Cynthia V.; Hernández-Montes, María del Socorro; Mendoza-Santoyo, Fernando; Esquivel-Tejeda, Jesús

    2018-02-01

    The study and quantification of the tympanic membrane (TM) displacements add important information to advance the knowledge about the hearing process. A comparative statistical analysis between two commonly used demodulation methods employed to recover the optical phase in digital holographic interferometry, namely the fast Fourier transform and phase-shifting interferometry, is presented as applied to study thin tissues such as the TM. The resulting experimental TM surface displacement data are used to contrast both methods through the analysis of variance and F tests. Data are gathered when the TMs are excited with continuous sound stimuli at levels 86, 89 and 93 dB SPL for the frequencies of 800, 1300 and 2500 Hz under the same experimental conditions. The statistical analysis shows repeatability in z-direction displacements with a standard deviation of 0.086, 0.098 and 0.080 μm using the Fourier method, and 0.080, 0.104 and 0.055 μm with the phase-shifting method at a 95% confidence level for all frequencies. The precision and accuracy are evaluated by means of the coefficient of variation; the results with the Fourier method are 0.06143, 0.06125, 0.06154 and 0.06154, 0.06118, 0.06111 with phase-shifting. The relative error between both methods is 7.143, 6.250 and 30.769%. On comparing the measured displacements, the results indicate that there is no statistically significant difference between both methods for frequencies at 800 and 1300 Hz; however, errors and other statistics increase at 2500 Hz.
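
    The two demodulation routes compared in this study can be sketched in one dimension. The fragment below is a schematic toy rather than the authors' holographic processing chain (the fringe parameters are invented): it recovers a wrapped phase from a single carrier-fringe record with the Fourier-transform method and from four phase-shifted frames with the standard four-step formula.

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 2048)
    phi = 6.0 * np.exp(-((x - 0.5) / 0.15) ** 2)        # "displacement" phase to be recovered
    f0 = 80.0                                           # carrier frequency (cycles across the window)

    # Fourier-transform (Takeda-style) method on a single carrier-fringe record.
    fringe = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * x + phi)
    F = np.fft.fft(fringe)
    freqs = np.fft.fftfreq(x.size, d=x[1] - x[0])
    side = np.where((freqs > f0 / 2) & (freqs < 3 * f0 / 2), F, 0.0)   # keep the +f0 sideband only
    analytic = np.fft.ifft(side)
    phi_fft = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))        # remove carrier -> wrapped phase

    # Four-step phase shifting: frames with 0, pi/2, pi and 3*pi/2 phase shifts.
    frames = [1.0 + 0.8 * np.cos(phi + k * np.pi / 2) for k in range(4)]
    phi_ps = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])  # wrapped phase

    # Compare both wrapped estimates with the wrapped true phase (errors taken modulo 2*pi).
    wrapped_true = np.angle(np.exp(1j * phi))
    for name, est in [("FFT method", phi_fft), ("phase shifting", phi_ps)]:
        err = np.angle(np.exp(1j * (est - wrapped_true)))
        print(f"{name:14s}: max |phase error| = {np.max(np.abs(err)):.3f} rad")
    ```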

  1. Mitigating leakage errors due to cavity modes in a superconducting quantum computer

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.

    2018-07-01

    A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.

  2. Novel parametric reduced order model for aeroengine blade dynamics

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Allegri, Giuliano; Scarpa, Fabrizio; Rajasekaran, Ramesh; Patsias, Sophoclis

    2015-10-01

    The work introduces a novel reduced order model (ROM) technique to describe the dynamic behavior of turbofan aeroengine blades. We introduce an equivalent 3D frame model to describe the coupled flexural/torsional mode shapes, with their relevant natural frequencies and associated modal masses. The frame configurations are identified through a structural identification approach based on a simulated annealing algorithm with stochastic tunneling. The cost functions are constituted by linear combinations of relative errors associated to the resonance frequencies, the individual modal assurance criteria (MAC), and on either overall static or modal masses. When static masses are considered the optimized 3D frame can represent the blade dynamic behavior with an 8% error on the MAC, a 1% error on the associated modal frequencies and a 1% error on the overall static mass. When using modal masses in the cost function the performance of the ROM is similar, but the overall error increases to 7%. The approach proposed in this paper is considerably more accurate than state-of-the-art blade ROMs based on traditional Timoshenko beams, and provides excellent accuracy at reduced computational time when compared against high fidelity FE models. A sensitivity analysis shows that the proposed model can adequately predict the global trends of the variations of the natural frequencies when lumped masses are used for mistuning analysis. The proposed ROM also follows extremely closely the sensitivity of the high fidelity finite element models when the material parameters are used in the sensitivity.
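
    The modal assurance criterion (MAC) used in the cost function is a standard correlation measure between mode shapes; a generic sketch (not tied to the paper's frame model, real-valued mode shapes assumed) is given below.

    ```python
    import numpy as np

    def mac(phi_1, phi_2):
        """Modal assurance criterion between two real mode-shape vectors (1 = identical shape)."""
        return np.abs(phi_1 @ phi_2) ** 2 / ((phi_1 @ phi_1) * (phi_2 @ phi_2))

    # Two hypothetical mode shapes sampled at the same degrees of freedom.
    x = np.linspace(0.0, 1.0, 50)
    reference_mode = np.sin(np.pi * x)                              # e.g. high-fidelity FE shape
    rom_mode = np.sin(np.pi * x) + 0.05 * np.sin(3 * np.pi * x)     # ROM approximation of it

    print(f"MAC = {mac(reference_mode, rom_mode):.4f}")             # close to 1 for well-correlated shapes
    ```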

  3. Accounting for hardware imperfections in EIT image reconstruction algorithms.

    PubMed

    Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.

  4. Broadband CARS spectral phase retrieval using a time-domain Kramers–Kronig transform

    PubMed Central

    Liu, Yuexin; Lee, Young Jong; Cicerone, Marcus T.

    2014-01-01

    We describe a closed-form approach for performing a Kramers–Kronig (KK) transform that can be used to rapidly and reliably retrieve the phase, and thus the resonant imaginary component, from a broadband coherent anti-Stokes Raman scattering (CARS) spectrum with a nonflat background. In this approach we transform the frequency-domain data to the time domain, perform an operation that ensures a causality criterion is met, then transform back to the frequency domain. The fact that this method handles causality in the time domain allows us to conveniently account for spectrally varying nonresonant background from CARS as a response function with a finite rise time. A phase error accompanies KK transform of data with finite frequency range. In examples shown here, that phase error leads to small (<1%) errors in the retrieved resonant spectra. PMID:19412273
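
    The core of the retrieval can be sketched in a few lines. The fragment below is a schematic illustration under a minimum-phase assumption with a flat nonresonant background; it is not the authors' full algorithm, which additionally models a spectrally varying nonresonant response, and the sign convention depends on how the resonance and Fourier transform are defined. The Hilbert transform used here is computed internally by an FFT, a zeroing of negative frequencies and an inverse FFT, i.e. the time-domain causality route described in the abstract.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Synthetic CARS-like intensity: |chi_nr + chi_r|^2 with a single vibrational resonance.
    nu = np.linspace(500.0, 3500.0, 4096)                 # wavenumber axis (1/cm), illustrative
    chi_nr = 1.0                                          # flat nonresonant background (assumption)
    chi_r = 0.5 / (1000.0 - nu - 1j * 10.0)               # Lorentzian resonance at 1000 1/cm
    intensity = np.abs(chi_nr + chi_r) ** 2

    # Minimum-phase (KK) retrieval: phase from the Hilbert transform of the log amplitude.
    log_amp = 0.5 * np.log(intensity)
    phase = np.imag(hilbert(log_amp))                     # sign depends on the chosen conventions

    # Resonant (imaginary) part reconstructed from the measured amplitude and retrieved phase.
    im_retrieved = np.sqrt(intensity) * np.sin(phase)
    im_true = np.imag(chi_nr + chi_r)

    i = np.argmin(np.abs(nu - 1000.0))
    print(f"Im(chi) at the resonance: retrieved {im_retrieved[i]:.3f}, true {im_true[i]:.3f}")
    ```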

  5. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    The multibeam bathymetric system (MBS) has been widely applied in marine surveying to provide high-resolution seabed topography. However, several factors degrade the precision of the bathymetry, including the sound velocity, the vessel attitude, and the misalignment angle of the transducer. Although these factors are corrected strictly during bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken it using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. The method involves four steps: separation of the low-frequency and high-frequency parts of the bathymetric data, reconstruction of the trend of the actual seabed topography, merging of the actual trend with the extracted microtopography, and accuracy evaluation. Experimental results show that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
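
    A much simplified, one-dimensional illustration of the separation-and-merge idea is sketched below; it is not the paper's algorithm, and the profile, the residual-error signal and the reference trend are all synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(4)
    x = np.linspace(0.0, 5000.0, 2001)                    # along-track distance (m)

    trend = -1500.0 - 0.05 * x                            # true large-scale seabed trend
    micro = 2.0 * np.sin(2 * np.pi * x / 40.0)            # true microtopography
    residual_err = 8.0 * np.sin(2 * np.pi * x / 1500.0)   # slowly varying combined residual error
    measured = trend + micro + residual_err + 0.3 * rng.standard_normal(x.size)

    # Step 1: split the measured profile into low- and high-frequency parts.
    low = gaussian_filter1d(measured, sigma=60)           # trend plus most of the residual error
    high = measured - low                                 # microtopography plus noise

    # Step 2: reconstruct the large-scale trend (a straight-line fit stands in for the
    # paper's trend-reconstruction step).
    trend_est = np.polyval(np.polyfit(x, measured, deg=1), x)

    # Step 3: merge the reconstructed trend with the extracted microtopography.
    corrected = trend_est + high

    # Step 4: accuracy evaluation against the known true depth profile.
    for name, prof in [("measured", measured), ("corrected", corrected)]:
        rms = np.sqrt(np.mean((prof - (trend + micro)) ** 2))
        print(f"{name:9s}: RMS depth error = {rms:.2f} m")
    ```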

  6. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

    ERIC Educational Resources Information Center

    Monagle, E. Brette

    The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…

  7. Validating and calibrating the Nintendo Wii balance board to derive reliable center of pressure measures.

    PubMed

    Leach, Julia M; Mancini, Martina; Peterka, Robert J; Hayes, Tamara L; Horak, Fay B

    2014-09-29

    The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the "gold standard" laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBB's, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2-6 mm (before calibration) to 0.5-2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from -10.5% (before calibration) to -0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision is acceptable.
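
    The linear calibration and error metric described above are simple to reproduce in outline. The sketch below is generic (synthetic CoP traces stand in for the WBB and force-plate signals; it is not the authors' code): it fits a single-axis linear mapping from WBB to force-plate CoP and reports the RMSE before and after correction. In practice the procedure would be repeated per axis and per board.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 30.0, 3000)                      # 30 s of CoP data at 100 Hz

    # "Reference" force-plate CoP (mm) and a WBB signal with gain and offset errors plus noise.
    afp_cop = 10.0 * np.sin(2 * np.pi * 0.4 * t)          # simulated sway, below 1 Hz
    wbb_cop = 1.15 * afp_cop + 3.0 + 1.0 * rng.standard_normal(t.size)

    def rmse(a, b):
        return np.sqrt(np.mean((a - b) ** 2))

    # Linear calibration: least-squares fit of the force-plate CoP as a function of the WBB CoP.
    gain, offset = np.polyfit(wbb_cop, afp_cop, deg=1)
    wbb_calibrated = gain * wbb_cop + offset

    print(f"RMSE before calibration: {rmse(wbb_cop, afp_cop):.2f} mm")
    print(f"RMSE after calibration:  {rmse(wbb_calibrated, afp_cop):.2f} mm")
    ```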

  8. Validating and Calibrating the Nintendo Wii Balance Board to Derive Reliable Center of Pressure Measures

    PubMed Central

    Leach, Julia M.; Mancini, Martina; Peterka, Robert J.; Hayes, Tamara L.; Horak, Fay B.

    2014-01-01

    The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the “gold standard” laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBB's, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2–6 mm (before calibration) to 0.5–2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from −10.5% (before calibration) to −0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision is acceptable. PMID:25268919

  9. Assessment of the global monthly mean surface insolation estimated from satellite measurements using global energy balance archive data

    NASA Technical Reports Server (NTRS)

    Li, Zhanqing; Whitlock, Charles H.; Charlock, Thomas P.

    1995-01-01

    Global sets of surface radiation budget (SRB) data have been obtained from satellite programs. These satellite-based estimates need validation with ground-truth observations. This study validates the estimates of monthly mean surface insolation contained in two satellite-based SRB datasets with the surface measurements made at worldwide radiation stations from the Global Energy Balance Archive (GEBA). One dataset was developed from the Earth Radiation Budget Experiment (ERBE) using the algorithm of Li et al. (ERBE/SRB), and the other from the International Satellite Cloud Climatology Project (ISCCP) using the algorithm of Pinker and Laszlo and that of Staylor (GEWEX/SRB). Since the ERBE/SRB data contain the surface net solar radiation only, the values of surface insolation were derived by making use of the surface albedo data contained in the GEWEX/SRB product. The resulting surface insolation has a bias error near zero and a root-mean-square error (RMSE) between 8 and 28 W/sq m. The RMSE is mainly associated with poor representation of surface observations within a grid cell. When the number of surface observations is sufficient, the random error is estimated to be about 5 W/sq m with present satellite-based estimates. In addition to demonstrating the strength of the retrieval method, the small random error demonstrates how well the ERBE derives the monthly mean fluxes at the top of the atmosphere (TOA). A larger scatter is found for the comparison of transmissivity than for that of insolation. Month-to-month comparison of insolation reveals a weak seasonal trend in bias error with an amplitude of about 3 W/sq m. As for the insolation data from the GEWEX/SRB, larger bias errors of 5-10 W/sq m are evident, with stronger seasonal trends and almost identical RMSEs.

  10. Flux Sampling Errors for Aircraft and Towers

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry

    1998-01-01

    Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.

  11. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
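
    The segmented-regression model underlying such an interrupted time-series analysis can be written down compactly. The sketch below is a generic illustration with simulated monthly error rates whose parameters loosely echo the figures reported above; it is not the study's data or code, and it ignores the autocorrelation and count-data structure a full analysis would address.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    months = np.arange(58)                      # 30 pre-intervention + 28 post-intervention months
    post = (months >= 30).astype(float)         # intervention indicator
    t_after = np.where(post == 1, months - 30, 0.0)

    # Simulated prevented-error rate per 1000 doses: stable baseline, then a level drop and a
    # downward post-intervention slope (values loosely echo those reported in the abstract).
    rate = 16.7 - 5.0 * post - 0.3 * t_after + rng.normal(0.0, 1.0, months.size)

    # Design matrix: intercept, baseline trend, level change at the intervention, slope change after it.
    X = np.column_stack([np.ones_like(months, dtype=float), months, post, t_after])
    beta, *_ = np.linalg.lstsq(X, rate, rcond=None)

    for name, b in zip(["intercept", "baseline slope", "level change", "slope change"], beta):
        print(f"{name:15s}: {b:+.2f}")
    ```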

  12. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.

  13. Image Registration: A Necessary Evil

    NASA Technical Reports Server (NTRS)

    Bell, James; McLachlan, Blair; Hermstad, Dexter; Trosin, Jeff; George, Michael W. (Technical Monitor)

    1995-01-01

    Registration of test and reference images is a key component of nearly all PSP data reduction techniques. This is done to ensure that a test image pixel viewing a particular point on the model is ratioed by the reference image pixel which views the same point. Typically registration is needed to account for model motion due to differing airloads when the wind-off and wind-on images are taken. Registration is also necessary when two cameras are used for simultaneous acquisition of data from a dual-frequency paint. This presentation will discuss the advantages and disadvantages of several different image registration techniques. In order to do so, it is necessary to propose both an accuracy requirement for image registration and a means for measuring the accuracy of a particular technique. High contrast regions in the unregistered images are most sensitive to registration errors, and it is proposed that these regions be used to establish the error limits for registration. Once this is done, the actual registration error can be determined by locating corresponding points on the test and reference images, and determining how well a particular registration technique matches them. An example of this procedure is shown for three transforms used to register images of a semispan model. Thirty control points were located on the model. A subset of the points were used to determine the coefficients of each registration transform, and the error with which each transform aligned the remaining points was determined. The results indicate the general superiority of a third-order polynomial over other candidate transforms, as well as showing how registration accuracy varies with number of control points. Finally, it is proposed that image registration may eventually be done away with completely. As more accurate image resection techniques and more detailed model surface grids become available, it will be possible to map raw image data onto the model surface accurately. Intensity ratio data can then be obtained by a "model surface ratio," rather than an image ratio. The problems and advantages of this technique will be discussed.
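
    A minimal sketch of the control-point procedure described above: a third-order polynomial transform is fitted to a subset of corresponding points and its registration error is evaluated on the remaining points. The point coordinates, noise level and split are hypothetical and do not reproduce the presentation's data.

      import numpy as np

      def poly_terms(x, y, order=3):
          """All monomials x^i * y^j with i + j <= order, as columns."""
          cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
          return np.column_stack(cols)

      def fit_poly_transform(src, dst, order=3):
          """Least-squares fit of a 2-D polynomial warp src -> dst (N x 2 arrays)."""
          A = poly_terms(src[:, 0], src[:, 1], order)
          coeff_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
          coeff_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
          return coeff_x, coeff_y

      def apply_poly_transform(src, coeff_x, coeff_y, order=3):
          A = poly_terms(src[:, 0], src[:, 1], order)
          return np.column_stack([A @ coeff_x, A @ coeff_y])

      # Hypothetical control points: 30 marker locations on the model
      rng = np.random.default_rng(1)
      ref_pts = rng.uniform(0, 512, size=(30, 2))                     # reference (wind-off) image
      test_pts = ref_pts * 1.01 + 2.0 + rng.normal(0, 0.2, (30, 2))   # shifted/stretched + noise

      fit_idx, check_idx = np.arange(0, 20), np.arange(20, 30)        # fit on 20, check on 10
      cx, cy = fit_poly_transform(test_pts[fit_idx], ref_pts[fit_idx])
      mapped = apply_poly_transform(test_pts[check_idx], cx, cy)
      resid = np.linalg.norm(mapped - ref_pts[check_idx], axis=1)
      print("mean / max registration error on held-out points [px]:",
            resid.mean().round(3), resid.max().round(3))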

  14. 20 Meter Solar Sail Analysis and Correlation

    NASA Technical Reports Server (NTRS)

    Taleghani, B.; Lively, P.; Banik, J.; Murphy, D.; Trautt, T.

    2005-01-01

    This presentation discusses studies conducted to determine the element type and size that best represent a 20-meter solar sail under ground-test load conditions, the test/analysis correlation performed with a static shape optimization method for the Q4 sail, and the system dynamics. TRIA3 elements represent wrinkle patterns better than QUAD3 elements. Baseline ten-inch elements are small enough to accurately represent the sail shape, and the baseline TRIA3 mesh requires a reasonable computation time of 8 min 21 sec. In the test/analysis correlation using the static shape optimization method for the Q4 sail, ten parameters were chosen and varied during optimization, and 300 sail models were created with random parameters. A response surface for each target was created based on the varied parameters, and the parameters were then optimized on these response surfaces. Deflection shape comparisons at 0 and 22.5 degrees yielded errors of 4.3% and 2.1%, respectively. For the system dynamics study, testing was done on the booms without the sails attached. The nominal boom properties produced a good correlation to the test data, with frequencies within 10%. Boom-dominated analysis frequencies and modes compared well with the test results.

  15. Autonomous selection of PDE inpainting techniques vs. exemplar inpainting techniques for void fill of high resolution digital surface models

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick

    2007-04-01

    High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite TM Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
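
    The exact selection criterion used by the Harris toolkit is not spelled out in the abstract; the sketch below illustrates one plausible proxy, assuming that the spatial-frequency content near the void boundary can be summarized by the spread of local elevation gradients in a ring around the void. The function name, threshold and ring width are hypothetical.

      import numpy as np
      from scipy import ndimage

      def choose_inpainting_method(dsm, void_mask, ring_width=5, hf_threshold=0.5):
          """
          Decide between a PDE-based and an exemplar-based fill for one void.
          dsm       : 2-D elevation array (NaNs allowed inside the void)
          void_mask : boolean array, True inside the void
          Returns "exemplar" if the void's neighbourhood is high-frequency, else "pde".
          """
          # Ring of valid pixels around the void, skipping the 1-pixel boundary layer
          # whose finite-difference gradients would be contaminated by the void itself.
          near = ndimage.binary_dilation(void_mask, iterations=1)
          far = ndimage.binary_dilation(void_mask, iterations=ring_width + 1)
          ring = far & ~near

          # Local elevation gradients as a proxy for spatial-frequency content
          gy, gx = np.gradient(np.nan_to_num(dsm, nan=0.0))
          hf_score = np.std(np.hypot(gx, gy)[ring])
          return "exemplar" if hf_score > hf_threshold else "pde"

      # Hypothetical use: smooth terrain with a square void
      dsm = np.fromfunction(lambda r, c: 0.05 * r + 0.02 * c, (200, 200))
      void = np.zeros(dsm.shape, dtype=bool)
      void[80:120, 90:130] = True
      dsm[void] = np.nan
      print(choose_inpainting_method(dsm, void))   # expected: "pde" for smooth terrain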

  16. Generation of a crowned pinion tooth surface by a surface of revolution

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Zhang, J.; Handschuh, R. F.

    1988-01-01

    A method of generating crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc sec for the numerical examples). Tooth contact analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and determine the bearing contact.

  17. Global shear speed structure of the upper mantle and transition zone

    NASA Astrophysics Data System (ADS)

    Schaeffer, A. J.; Lebedev, S.

    2013-07-01

    The rapid expansion of broad-band seismic networks over the last decade has paved the way for a new generation of global tomographic models. Significantly improved resolution of global upper-mantle and crustal structure can now be achieved, provided that structural information is extracted effectively from both surface and body waves and that the effects of errors in the data are controlled and minimized. Here, we present a new global, vertically polarized shear speed model that yields considerable improvements in resolution, compared to previous ones, for a variety of features in the upper mantle and crust. The model, SL2013sv, is constrained by an unprecedentedly large set of waveform fits (˜3/4 of a million broad-band seismograms), computed in seismogram-dependent frequency bands, up to a maximum period range of 11-450 s. Automated multimode inversion of surface and S-wave forms was used to extract a set of linear equations with uncorrelated uncertainties from each seismogram. The equations described perturbations in elastic structure within approximate sensitivity volumes between sources and receivers. Going beyond ray theory, we calculated the phase of every mode at every frequency and its derivative with respect to S- and P-velocity perturbations by integration over a sensitivity area in a 3-D reference model; the (normally small) perturbations of the 3-D model required to fit the waveforms were then linearized using these accurate derivatives. The equations yielded by the waveform inversion of all the seismograms were simultaneously inverted for a 3-D model of shear and compressional speeds and azimuthal anisotropy within the crust and upper mantle. Elaborate outlier analysis was used to control the propagation of errors in the data (source parameters, timing at the stations, etc.). The selection of only the most mutually consistent equations exploited the data redundancy provided by our data set and strongly reduced the effect of the errors, increasing the resolution of the imaging. Our new shear speed model is parametrized on a triangular grid with a ˜280 km spacing. In well-sampled continental domains, lateral resolution approaches or exceeds that of regional-scale studies. The close match of known surface expressions of deep structure with the distribution of anomalies in the model provides a useful benchmark. In oceanic regions, spreading ridges are very well resolved, with narrow anomalies in the shallow mantle closely confined near the ridge axis, and those deeper, down to 100-120 km, showing variability in their width and location with respect to the ridge. Major subduction zones worldwide are well captured, extending from shallow depths down to the transition zone. The large size of our waveform fit data set also provides a strong statistical foundation to re-examine the validity field of the JWKB approximation and surface wave ray theory. Our analysis shows that the approximations are likely to be valid within certain time-frequency portions of most seismograms with high signal-to-noise ratios, and these portions can be identified using a set of consistent criteria that we apply in the course of waveform fitting.

  18. Surface errors without semantic impairment in acquired dyslexia: a voxel-based lesion–symptom mapping study

    PubMed Central

    Pillay, Sara B.; Humphries, Colin J.; Gross, William L.; Graves, William W.; Book, Diane S.

    2016-01-01

    Patients with surface dyslexia have disproportionate difficulty pronouncing irregularly spelled words (e.g. pint), suggesting impaired use of lexical-semantic information to mediate phonological retrieval. Patients with this deficit also make characteristic ‘regularization’ errors, in which an irregularly spelled word is mispronounced by incorrect application of regular spelling-sound correspondences (e.g. reading plaid as ‘played’), indicating over-reliance on sublexical grapheme–phoneme correspondences. We examined the neuroanatomical correlates of this specific error type in 45 patients with left hemisphere chronic stroke. Voxel-based lesion–symptom mapping showed a strong positive relationship between the rate of regularization errors and damage to the posterior half of the left middle temporal gyrus. Semantic deficits on tests of single-word comprehension were generally mild, and these deficits were not correlated with the rate of regularization errors. Furthermore, the deep occipital-temporal white matter locus associated with these mild semantic deficits was distinct from the lesion site associated with regularization errors. Thus, in contrast to patients with surface dyslexia and semantic impairment from anterior temporal lobe degeneration, surface errors in our patients were not related to a semantic deficit. We propose that these patients have an inability to link intact semantic representations with phonological representations. The data provide novel evidence for a post-semantic mechanism mediating the production of surface errors, and suggest that the posterior middle temporal gyrus may compute an intermediate representation linking semantics with phonology. PMID:26966139

  19. Automated detection of heuristics and biases among pathologists in a computer-based system.

    PubMed

    Crowley, Rebecca S; Legowski, Elizabeth; Medvedeva, Olga; Reitmeyer, Kayse; Tseytlin, Eugene; Castine, Melissa; Jukic, Drazen; Mello-Thoms, Claudia

    2013-08-01

    The purpose of this study is threefold: (1) to develop an automated, computer-based method to detect heuristics and biases as pathologists examine virtual slide cases, (2) to measure the frequency and distribution of heuristics and errors across three levels of training, and (3) to examine relationships of heuristics to biases, and biases to diagnostic errors. The authors conducted the study using a computer-based system to view and diagnose virtual slide cases. The software recorded participant responses throughout the diagnostic process, and automatically classified participant actions based on definitions of eight common heuristics and/or biases. The authors measured frequency of heuristic use and bias across three levels of training. Biases studied were detected at varying frequencies, with availability and search satisficing observed most frequently. There were few significant differences by level of training. For representativeness and anchoring, the heuristic was used appropriately as often or more often than it was used in biased judgment. Approximately half of the diagnostic errors were associated with one or more biases. We conclude that heuristic use and biases were observed among physicians at all levels of training using the virtual slide system, although their frequencies varied. The system can be employed to detect heuristic use and to test methods for decreasing diagnostic errors resulting from cognitive biases.

  20. Tracking Architecture Based on Dual-Filter with State Feedback and Its Application in Ultra-Tight GPS/INS Integration

    PubMed Central

    Zhang, Xi; Miao, Lingjuan; Shao, Haijun

    2016-01-01

    If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper. PMID:27144570

  1. Tracking Architecture Based on Dual-Filter with State Feedback and Its Application in Ultra-Tight GPS/INS Integration.

    PubMed

    Zhang, Xi; Miao, Lingjuan; Shao, Haijun

    2016-05-02

    If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper.
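
    The dual-filter architecture itself is not reproduced here; as background, the sketch below shows why Kalman-filter-based tracking yields low-variance phase and frequency estimates, using a simple two-state (phase, Doppler) filter driven by noisy phase-discriminator measurements. All noise levels and the true Doppler value are hypothetical.

      import numpy as np

      dt = 1e-3                              # integration interval [s]
      F = np.array([[1.0, 2*np.pi*dt],       # phase advances by 2*pi*f*dt
                    [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])             # only the phase error is observed
      Q = np.diag([1e-6, 1e-2])              # process noise (phase, frequency)
      R = np.array([[0.05]])                 # discriminator noise variance [rad^2]

      x = np.array([0.0, 0.0])               # initial phase [rad], Doppler [Hz]
      P = np.diag([1.0, 100.0])

      rng = np.random.default_rng(2)
      true_f = 50.0                          # true Doppler shift [Hz] (hypothetical)
      true_phase = 0.0
      for k in range(2000):
          true_phase += 2*np.pi*true_f*dt
          z = true_phase + rng.normal(0, np.sqrt(R[0, 0]))   # noisy phase measurement

          # Predict
          x = F @ x
          P = F @ P @ F.T + Q
          # Update
          y = z - H @ x
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + (K @ y).ravel()
          P = (np.eye(2) - K @ H) @ P

      print(f"estimated Doppler: {x[1]:.2f} Hz (true {true_f} Hz)")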

  2. Investigating Surface Bias Errors in the Weather Research and Forecasting (WRF) Model using a Geographic Information System (GIS)

    DTIC Science & Technology

    2015-02-01

    ARL-TR-7212, February 2015, US Army Research Laboratory: "Investigating Surface Bias Errors in the Weather Research and Forecasting (WRF) Model using a Geographic Information System (GIS)," by Jeffrey A Smith, Theresa A Foley, John W Raby, and Brian Reen. [Only fragments of the report documentation page were captured for this record; no abstract text is available.]

  3. On the effect of surface emissivity on temperature retrievals. [for meteorology

    NASA Technical Reports Server (NTRS)

    Kornfield, J.; Susskind, J.

    1977-01-01

    The paper is concerned with errors in temperature retrieval caused by incorrectly assuming that surface emissivity is equal to unity. An error equation that applies to present-day atmospheric temperature sounders is derived, and the bias errors resulting from various emissivity discrepancies are calculated. A model of downward flux is presented and used to determine the effective downward flux. In the 3.7-micron region of the spectrum, emissivities of 0.6 to 0.9 have been observed over land. At a surface temperature of 290 K, if the true emissivity is 0.6 and unit emissivity is assumed, the error would be approximately 11 C. In the 11-micron region, the maximum deviation of the surface emissivity from unity was 0.05.

  4. A finite element method based microwave heat transfer modeling of frozen multi-component foods

    NASA Astrophysics Data System (ADS)

    Pitchai, Krishnamoorthy

    Microwave heating is fast and convenient, but is highly non-uniform. Non-uniform heating in microwave cooking affects not only food quality but also food safety. Most food industries develop microwavable food products based on "cook-and-look" approach. This approach is time-consuming, labor intensive and expensive and may not result in optimal food product design that assures food safety and quality. Design of microwavable food can be realized through a simulation model which describes the physical mechanisms of microwave heating in mathematical expressions. The objective of this study was to develop a microwave heat transfer model to predict spatial and temporal profiles of various heterogeneous foods such as multi-component meal (chicken nuggets and mashed potato), multi-component and multi-layered meal (lasagna), and multi-layered food with active packages (pizza) during microwave heating. A microwave heat transfer model was developed by solving electromagnetic and heat transfer equations using finite element method in commercially available COMSOL Multiphysics v4.4 software. The microwave heat transfer model included detailed geometry of the cavity, phase change, and rotation of the food on the turntable. The predicted spatial surface temperature patterns and temporal profiles were validated against the experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The predicted spatial surface temperature profile of different multi-component foods was in good agreement with the corresponding experimental profiles in terms of hot and cold spot patterns. The root mean square error values of temporal profiles ranged from 5.8 °C to 26.2 °C in chicken nuggets as compared 4.3 °C to 4.7 °C in mashed potatoes. In frozen lasagna, root mean square error values at six locations ranged from 6.6 °C to 20.0 °C for 6 min of heating. A microwave heat transfer model was developed to include susceptor assisted microwave heating of a frozen pizza. The root mean square error values of transient temperature profiles of five locations ranged from 5.0 °C to 12.6 °C. A methodology was developed to incorporate electromagnetic frequency spectrum in the coupled electromagnetic and heat transfer model. Implementing the electromagnetic frequency spectrum in the simulation improved the accuracy of temperature field pattern and transient temperature profile as compared to mono-chromatic frequency of 2.45 GHz. The bulk moisture diffusion coefficient of cooked pasta was calculated as a function of temperature at a constant water activity using desorption isotherms.

  5. Performance analysis of a coherent free space optical communication system based on experiment.

    PubMed

    Cao, Jingtai; Zhao, Xiaohui; Liu, Wei; Gu, Haijun

    2017-06-26

    Based on our previous study and a purpose-built experimental AO system with a 97-element continuous-surface deformable mirror, we analyze the performance of a coherent free space optical communication (FSOC) system in terms of mixing efficiency (ME), bit error rate (BER) and outage probability under different Greenwood frequencies and atmospheric coherence lengths. The results show that the influence of the atmospheric temporal characteristics on the performance is slightly stronger than that of the spatial characteristics when the receiving aperture and the number of sub-apertures are given. This analysis provides a reference for the design of coherent FSOC systems.

  6. Rain rate range profiling from a spaceborne radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.

    1980-01-01

    At certain frequencies and incidence angles the relative invariance of the surface scattering properties over land can be used to estimate the total attenuation and the integrated rain from a spaceborne attenuating-wavelength radar. The technique is generalized so that rain rate profiles along the radar beam can be estimated, i.e., the rain rate is determined at each range bin. This is done by modifying the standard algorithm for an attenuating-wavelength radar to include in it the measurement of the total attenuation. Simple error analyses of the estimates show that this type of profiling is possible if the total attenuation can be measured with a modest degree of accuracy.

  7. Independent oscillatory patterns determine performance fluctuations in children with attention deficit/hyperactivity disorder.

    PubMed

    Yordanova, Juliana; Albrecht, Björn; Uebel, Henrik; Kirov, Roumen; Banaschewski, Tobias; Rothenberger, Aribert; Kolev, Vasil

    2011-06-01

    The maintenance of stable goal-directed behaviour is a hallmark of conscious executive control in humans. Notably, both correct and error human actions may have a subconscious activation-based determination. One possible source of subconscious interference may be the default mode network that, in contrast to the attentional network, manifests intrinsic oscillations at very low (<0.1 Hz) frequencies. In the present study, we analyse the time dynamics of performance accuracy to search for multisecond periodic fluctuations of error occurrence. Attentional lapses in attention deficit/hyperactivity disorder are proposed to originate from interferences from intrinsically oscillating networks. Identifying periodic error fluctuations with a frequency < 0.1 Hz in patients with attention deficit/hyperactivity disorder would provide behavioural evidence for such interferences. Performance was monitored during a visual flanker task in 92 children (7- to 16-year-olds): 47 with attention deficit/hyperactivity disorder, combined type, and 45 healthy controls. Using an original approach, the time distribution of error occurrence was analysed in the frequency and time-frequency domains in order to detect rhythmic periodicity. Major results demonstrate that in both patients and controls, error behaviour was characterized by multisecond rhythmic fluctuations with a period of ∼12 s, appearing with a delay after the transition to the task. Only in attention deficit/hyperactivity disorder was there an additional 'pathological' oscillation of error generation, which determined periodic drops of performance accuracy every 20-30 s. Thus, in patients, periodic error fluctuations were modulated by two independent oscillatory patterns. The findings demonstrate that: (i) attentive behaviour of children is determined by multisecond regularities; and (ii) a unique additional periodicity guides performance fluctuations in patients. These observations may re-conceptualize the understanding of attentive behaviour beyond the executive top-down control and may reveal new origins of psychopathological behaviours in attention deficit/hyperactivity disorder.
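
    As a simplified illustration of searching for multisecond periodicity in error occurrence (not the authors' exact time-frequency method), the sketch below builds a binary per-trial error series with a hypothetical ~12 s rhythm and inspects its periodogram below 0.2 Hz. The trial spacing, error probabilities and rhythm period are assumed values.

      import numpy as np

      rng = np.random.default_rng(3)
      trial_interval = 1.5                     # seconds between flanker trials (hypothetical)
      n_trials = 800
      t = np.arange(n_trials) * trial_interval

      period = 12.0                            # multisecond error rhythm [s]
      p_error = 0.10 + 0.08 * np.sin(2 * np.pi * t / period)   # modulated error probability
      errors = (rng.random(n_trials) < p_error).astype(float)

      # Periodogram of the demeaned error series
      x = errors - errors.mean()
      spec = np.abs(np.fft.rfft(x))**2
      freqs = np.fft.rfftfreq(n_trials, d=trial_interval)

      band = (freqs > 0.02) & (freqs < 0.2)    # look only at slow (<0.2 Hz) fluctuations
      peak_f = freqs[band][np.argmax(spec[band])]
      print(f"dominant error periodicity ~ {1/peak_f:.1f} s")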

  8. Digital hum filtering

    USGS Publications Warehouse

    Knapp, R.W.; Anderson, N.L.

    1994-01-01

    Data may be overprinted by a steady-state cyclical noise (hum). Steady-state indicates that the noise is invariant with time; its attributes, frequency, amplitude, and phase, do not change with time. Hum recorded on seismic data usually is powerline noise and associated higher harmonics; leakage from full-waveform rectified cathodic protection devices that contain the odd higher harmonics of powerline frequencies; or vibrational noise from mechanical devices. The fundamental frequency of powerline hum may be removed during data acquisition with the use of notch filters. Unfortunately, notch filters do not discriminate signal and noise, attenuating both. They also distort adjacent frequencies by phase shifting. Finally, they attenuate only the fundamental mode of the powerline noise; higher harmonics and frequencies other than that of powerlines are not removed. Digital notch filters, applied during processing, have many of the same problems as analog filters applied in the field. The method described here removes hum of a particular frequency. Hum attributes are measured by discrete Fourier analysis, and the hum is canceled from the data by subtraction. Errors are slight and the result of the presence of (random) noise in the window or asynchrony of the hum and data sampling. Error is minimized by increasing window size or by resampling to a finer interval. Errors affect the degree of hum attenuation, not the signal. The residual is steady-state hum of the same frequency. © 1994.
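
    A minimal sketch of the subtraction approach described above, assuming the hum frequency is known: the hum's sine and cosine components are estimated over the window by least-squares projection and then subtracted. The trace, sampling interval and hum parameters are hypothetical, and the window-size and resampling refinements discussed in the abstract are omitted.

      import numpy as np

      def remove_hum(trace, dt, hum_freq):
          """Estimate amplitude/phase of a steady-state hum at hum_freq by projecting
          the trace onto sin/cos at that frequency, then subtract the hum."""
          n = trace.size
          t = np.arange(n) * dt
          c = np.cos(2 * np.pi * hum_freq * t)
          s = np.sin(2 * np.pi * hum_freq * t)
          a = 2.0 * np.dot(trace, c) / n   # cosine component of the hum
          b = 2.0 * np.dot(trace, s) / n   # sine component of the hum
          return trace - (a * c + b * s)

      # Hypothetical seismic trace: signal + 60 Hz powerline hum + random noise
      rng = np.random.default_rng(4)
      dt = 0.002                                    # 2 ms sampling
      t = np.arange(0, 2.0, dt)                     # 2 s window (integer number of hum cycles)
      signal = np.exp(-((t - 0.8) / 0.05) ** 2)     # a simple wavelet-like event
      trace = signal + 0.5 * np.sin(2 * np.pi * 60.0 * t + 0.7) + 0.05 * rng.normal(size=t.size)

      clean = remove_hum(trace, dt, 60.0)
      print("residual noise power before/after hum removal:",
            round(np.var(trace - signal), 4), round(np.var(clean - signal), 4))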

  9. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that Root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating rms error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  10. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    PubMed Central

    Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.

    2014-01-01

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict both registration error as well as assess which inputs have the largest effect on registration accuracy. PMID:24506630
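
    A minimal Monte Carlo sketch of how fiducial noise propagates to target registration error under landmark-only rigid registration (the combined landmark-plus-surface refinement evaluated in the study is not reproduced here). Landmark positions, the target location and the noise level are hypothetical.

      import numpy as np

      def rigid_register(src, dst):
          """Least-squares rigid (rotation + translation) fit src -> dst (Kabsch method)."""
          sc, dc = src.mean(axis=0), dst.mean(axis=0)
          H = (src - sc).T @ (dst - dc)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          t = dc - R @ sc
          return R, t

      # Hypothetical Monte Carlo: fiducial localization noise -> target registration error
      rng = np.random.default_rng(5)
      landmarks = rng.uniform(-40, 40, size=(6, 3))   # 6 anatomic landmarks [mm]
      target = np.array([10.0, -5.0, 20.0])           # ablation target [mm]

      fiducial_sigma = 3.0                            # landmark localization noise [mm]
      tre = []
      for _ in range(2000):
          noisy = landmarks + rng.normal(0, fiducial_sigma, landmarks.shape)
          R, t = rigid_register(noisy, landmarks)     # register noisy points back to truth
          tre.append(np.linalg.norm(R @ target + t - target))
      print(f"mean target registration error: {np.mean(tre):.2f} mm "
            f"(fiducial noise {fiducial_sigma} mm)")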

  11. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict both registration error as well as assess which inputs have the largest effect on registration accuracy.

  12. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  13. Stitching-error reduction in gratings by shot-shifted electron-beam lithography

    NASA Technical Reports Server (NTRS)

    Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.

    2001-01-01

    Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.

  14. Optical linear algebra processors: noise and error-source modeling.

    PubMed

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  15. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  16. Active stabilization of a rapidly chirped laser by an optoelectronic digital servo-loop control.

    PubMed

    Gorju, G; Jucha, A; Jain, A; Crozatier, V; Lorgeré, I; Le Gouët, J-L; Bretenaker, F; Colice, M

    2007-03-01

    We propose and demonstrate a novel active stabilization scheme for wide and fast frequency chirps. The system measures the laser instantaneous frequency deviation from a perfectly linear chirp, thanks to a digital phase detection process, and provides an error signal that is used to servo-loop control the chirped laser. This way, the frequency errors affecting a laser scan over 10 GHz on the millisecond timescale are drastically reduced below 100 kHz. This active optoelectronic digital servo-loop control opens new and interesting perspectives in fields where rapidly chirped lasers are crucial.

  17. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, which is based on the dwell-time method with a constant normal spacing for flexible polishing, introduces normal contour errors when fine-polishing complex surfaces such as aspheres. These normal contour errors change the ribbon's shape and degrade the consistency of the MRF removal characteristics. A novel method is proposed to measure the normal contour errors along the machining track while a complex surface is being polished, based on continuously scanning the normal spacing between the workpiece and the laser range finder. Because the normal contour errors are measured dynamically, the workpiece clamping precision, the multi-axis NC machining program, and the dynamic performance of the MRF machine can all be verified and checked as part of the MRF process. An on-machine unit for measuring the normal contour errors of complex surfaces was designed. Using the measurement unit's results as feedback to adjust the feed-forward control and multi-axis machining parameters, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical fused-silica workpiece by MRF was set up to validate the method. The results show that the normal contour error was controlled to less than 10 µm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014 and is being used in national large-scale optical engineering programs for processing ultra-precision optical parts.

  18. Reconstructing spatial-temporal continuous MODIS land surface temperature using the DINEOF method

    NASA Astrophysics Data System (ADS)

    Zhou, Wang; Peng, Bin; Shi, Jiancheng

    2017-10-01

    Land surface temperature (LST) is one of the key states of the Earth surface system. Remote sensing has the capability to obtain high-frequency LST observations with global coverage. However, mainly due to cloud cover, there are always gaps in the remotely sensed LST product, which hampers the application of satellite-based LST in data-driven modeling of surface energy and water exchange processes. We explored the suitability of the data interpolating empirical orthogonal functions (DINEOF) method for reconstructing Moderate Resolution Imaging Spectroradiometer (MODIS) LST around Ali on the Tibetan Plateau. To validate the reconstruction accuracy, synthetic clouds are created for both daytime and nighttime scenes. With DINEOF reconstruction, the root mean square error and bias under synthetic clouds in daytime are 4.57 K and -0.0472 K, respectively, and during nighttime are 2.30 K and 0.0045 K, respectively. The DINEOF method recovers the spatial pattern of LST well. Time-series analysis of LST before and after DINEOF reconstruction from 2002 to 2016 shows that the annual and interannual variabilities of LST are well reconstructed by the DINEOF method.
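
    A minimal sketch of DINEOF-style gap filling, assuming the reconstruction can be approximated by iteratively replacing missing values with a truncated-SVD estimate; the cross-validation used by full DINEOF to choose the number of modes is omitted, and the synthetic field, gap fraction and mode count are hypothetical.

      import numpy as np

      def dineof_fill(data, n_modes=3, n_iter=50):
          """Iteratively reconstruct missing values (NaNs) of a space x time matrix
          from a truncated SVD of the progressively filled matrix."""
          filled = data.copy()
          mask = np.isnan(data)
          filled[mask] = np.nanmean(data)            # first guess: global mean
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(filled, full_matrices=False)
              recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
              filled[mask] = recon[mask]             # update only the gaps
          return filled

      # Hypothetical LST-like field: smooth spatio-temporal signal with "cloud" gaps
      rng = np.random.default_rng(6)
      space = np.linspace(0, 1, 100)[:, None]
      time = np.linspace(0, 4 * np.pi, 60)[None, :]
      truth = 290 + 10 * np.sin(time) * np.cos(2 * np.pi * space) + 2 * space
      obs = truth + rng.normal(0, 0.5, truth.shape)
      obs[rng.random(truth.shape) < 0.3] = np.nan    # ~30% cloudy pixels

      filled = dineof_fill(obs)
      gaps = np.isnan(obs)
      rmse = np.sqrt(np.mean((filled[gaps] - truth[gaps]) ** 2))
      print(f"reconstruction RMSE over gaps: {rmse:.2f} K")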

  19. Adsorption of Emerging Munitions Contaminants on Cellulose Surface: A Combined Theoretical and Experimental Investigation.

    PubMed

    Shukla, Manoj K; Poda, Aimee

    2016-06-01

    This manuscript reports results of an integrated theoretical and experimental investigation of the adsorption of two emerging contaminants (DNAN and FOX-7) and the legacy compound TNT on a cellulose surface. Cellulose was modeled as a trimeric form of the linear chain of 1 → 4 linked β-D-glucopyranose units in the (4)C1 chair conformation. Geometries of the modeled cellulose, the munitions compounds and their complexes were optimized at the M06-2X functional level of Density Functional Theory using the 6-31G(d,p) basis set in the gas phase and in water solution. The effect of water solution was modeled using the CPCM approach. The nature of the potential energy surfaces was ascertained through harmonic vibrational frequency analysis. Interaction energies were corrected for basis set superposition error, and the 6-311G(d,p) basis set was used. Molecular electrostatic potential mapping was performed to understand the reactivity of the investigated systems. It was predicted that the adsorbates will be more weakly adsorbed on the cellulose surface in water solution than in the gas phase.

  20. Impact of Uncertainty in the Drop Size Distribution on Oceanic Rainfall Retrievals From Passive Microwave Observations

    NASA Technical Reports Server (NTRS)

    Wilheit, Thomas T.; Chandrasekar, V.; Li, Wanyu

    2007-01-01

    The variability of the drop size distribution (DSD) is one of the factors that must be considered in understanding the uncertainties in the retrieval of oceanic precipitation from passive microwave observations. Here, we have used observations from the Precipitation Radar on the Tropical Rainfall Measuring Mission spacecraft to infer the relationship between the DSD and the rain rate and the variability in this relationship. The impact on passive microwave rain rate retrievals varies with the frequency and rain rate. The total uncertainty for a given pixel can be slightly larger than 10% at the low end (ca. 10 GHz) of frequencies commonly used for this purpose and smaller at higher frequencies (up to 37 GHz). Since the error is not totally random, averaging many pixels, as in a monthly rainfall total, should roughly halve this uncertainty. The uncertainty may be lower at rain rates less than about 30 mm/h, but the lack of sensitivity of the surface reference technique to low rain rates makes it impossible to tell from the present data set.

  1. The Iatroref study: medical errors are associated with symptoms of depression in ICU staff but not burnout or safety culture.

    PubMed

    Garrouste-Orgeas, Maité; Perrin, Marion; Soufir, Lilia; Vesin, Aurélien; Blot, François; Maxime, Virginie; Beuret, Pascal; Troché, Gilles; Klouche, Kada; Argaud, Laurent; Azoulay, Elie; Timsit, Jean-François

    2015-02-01

    Staff behaviours to optimise patient safety may be influenced by burnout, depression and strength of the safety culture. We evaluated whether burnout, symptoms of depression and safety culture affected the frequency of medical errors and adverse events (selected using Delphi techniques) in ICUs. Prospective, observational, multicentre (31 ICUs) study from August 2009 to December 2011. Burnout, depression symptoms and safety culture were evaluated using the Maslach Burnout Inventory (MBI), CES-Depression scale and Safety Attitudes Questionnaire, respectively. Of 1,988 staff members, 1,534 (77.2 %) participated. Frequencies of medical errors and adverse events were 804.5/1,000 and 167.4/1,000 patient-days, respectively. Burnout prevalence was 3 or 40 % depending on the definition (severe emotional exhaustion, depersonalisation and low personal accomplishment; or MBI score greater than -9). Depression symptoms were identified in 62/330 (18.8 %) physicians and 188/1,204 (15.6 %) nurses/nursing assistants. Median safety culture score was 60.7/100 [56.8-64.7] in physicians and 57.5/100 [52.4-61.9] in nurses/nursing assistants. Depression symptoms were an independent risk factor for medical errors. Burnout was not associated with medical errors. The safety culture score had a limited influence on medical errors. Other independent risk factors for medical errors or adverse events were related to ICU organisation (40 % of ICU staff off work on the previous day), staff (specific safety training) and patients (workload). One-on-one training of junior physicians during duties and existence of a hospital risk-management unit were associated with lower risks. The frequency of selected medical errors in ICUs was high and was increased when staff members had symptoms of depression.

  2. Vibration-Induced Errors in MEMS Tuning Fork Gyroscopes with Imbalance.

    PubMed

    Fang, Xiang; Dong, Linxi; Zhao, Wen-Sheng; Yan, Haixia; Teh, Kwok Siong; Wang, Gaofeng

    2018-05-29

    This paper discusses the vibration-induced error in non-ideal MEMS tuning fork gyroscopes (TFGs). Ideal TFGs, which are thought to be immune to vibrations, do not exist, and imbalance between the two gyros of a TFG is an inevitable phenomenon. Three types of fabrication imperfections (i.e., stiffness imbalance, mass imbalance, and damping imbalance) are studied, considering different imbalance ratios. We focus on the coupling types of the two gyros of TFGs in both the drive and sense directions, and the vibration sensitivities of four TFG designs with imbalance are simulated and compared. It is found that non-ideal TFGs with two gyros coupled in both the drive and sense directions (type CC TFGs) are the most insensitive to vibrations with frequencies close to the TFG operating frequencies. However, sense-axis vibrations at the in-phase resonant frequencies of a coupled gyro system produce severe output errors in TFGs with two gyros coupled in the sense direction, which is mainly attributed to the sense-capacitance nonlinearity. With an increasing stiffness coupling ratio of the coupled gyro system, the sensitivity to vibrations at the operating frequencies is reduced, yet the sensitivity to vibrations at the in-phase frequencies is amplified.

  3. Methods for estimating magnitude and frequency of peak flows for natural streams in Utah

    USGS Publications Warehouse

    Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.

    2007-01-01

    Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
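
    As an illustration of how such regional regression equations are typically applied, the sketch below evaluates a log-linear peak-flow equation of the general form Q_T = a·A^b·P^c. The coefficients, predictor variables and error band below are hypothetical placeholders and are not the equations published in the report.

      # Hypothetical regional regression sketch (coefficients are NOT from the report)
      def q100_estimate(drainage_area_mi2, precip_in, a=30.0, b=0.65, c=0.80):
          """Hypothetical 100-year peak-flow estimate [ft^3/s]: Q = a * A^b * P^c."""
          return a * drainage_area_mi2 ** b * precip_in ** c

      def rough_error_band(q_est, se_percent):
          """Crude one-standard-error band implied by a percent standard error of prediction."""
          factor = 1.0 + se_percent / 100.0
          return q_est / factor, q_est * factor

      q = q100_estimate(drainage_area_mi2=85.0, precip_in=22.0)
      low, high = rough_error_band(q, se_percent=60.0)
      print(f"Q100 ~ {q:.0f} ft^3/s (rough band {low:.0f}-{high:.0f} ft^3/s)")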

  4. Effect of Anisotropy on Shape Measurement Accuracy of Silicon Wafer Using Three-Point-Support Inverting Method

    NASA Astrophysics Data System (ADS)

    Ito, Yukihiro; Natsu, Wataru; Kunieda, Masanori

    This paper describes the influences of anisotropy found in the elastic modulus of monocrystalline silicon wafers on the measurement accuracy of the three-point-support inverting method which can measure the warp and thickness of thin large panels simultaneously. Deflection due to gravity depends on the crystal orientation relative to the positions of the three-point-supports. Thus the deviation of actual crystal orientation from the direction indicated by the notch fabricated on the wafer causes measurement errors. Numerical analysis of the deflection confirmed that the uncertainty of thickness measurement increases from 0.168µm to 0.524µm due to this measurement error. In addition, experimental results showed that the rotation of crystal orientation relative to the three-point-supports is effective for preventing wafer vibration excited by disturbance vibration because the resonance frequency of wafers can be changed. Thus, surface shape measurement accuracy was improved by preventing resonant vibration during measurement.

  5. Error and Complexity Analysis for a Collocation-Grid-Projection Plus Precorrected-FFT Algorithm for Solving Potential Integral Equations with LaPlace or Helmholtz Kernels

    NASA Technical Reports Server (NTRS)

    Phillips, J. R.

    1996-01-01

    In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is order n log n nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to order n^(4/3). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.

  6. Surface-roughness considerations for atmospheric correction of ocean color sensors. II: Error in the retrieved water-leaving radiance.

    PubMed

    Gordon, H R; Wang, M

    1992-07-20

    In the algorithm for the atmospheric correction of coastal zone color scanner (CZCS) imagery, it is assumed that the sea surface is flat. Simulations are carried out to assess the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct Sun glitter (either a large solar zenith angle or the sensor tilted away from the specular image of the Sun), the following conclusions appear justified: (1) the error induced by ignoring the surface roughness is less than or similar to 1 CZCS digital count for wind speeds up to approximately 17 m/s, and therefore can be ignored for this sensor; (2) the roughness-induced error is much more strongly dependent on the wind speed than on the wave shadowing, suggesting that surface effects can be adequately dealt with without precise knowledge of the shadowing; and (3) the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness, suggesting that in refining algorithms for future sensors more effort should be placed on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.

  7. Backward-gazing method for measuring solar concentrators shape errors.

    PubMed

    Coquand, Mathieu; Henault, François; Caliot, Cyril

    2017-03-01

    This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver and simultaneously recording images of the sun reflected by the optical surfaces. Simple data processing then allows reconstructing the slope and shape errors of the surfaces. The originality of the method is enforced by the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high-incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to provide better control for real-time sun tracking.
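
    As background to the generalized quad-cell formulas mentioned above, the sketch below shows the classic quad-cell estimate: a spot displacement is inferred from normalized differences of four detector signals. The intensities are hypothetical, and the paper's generalization to high-incidence heliostat geometry is not reproduced here.

      def quad_cell_slopes(I_a, I_b, I_c, I_d):
          """I_a..I_d: intensities in the four quadrants (top-left, top-right,
          bottom-left, bottom-right). Returns normalized x/y offsets in [-1, 1]."""
          total = I_a + I_b + I_c + I_d
          sx = ((I_b + I_d) - (I_a + I_c)) / total   # right minus left
          sy = ((I_a + I_b) - (I_c + I_d)) / total   # top minus bottom
          return sx, sy

      # A spot shifted slightly to the right and up gives positive sx and sy
      print(quad_cell_slopes(I_a=0.9, I_b=1.2, I_c=0.7, I_d=1.0))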

  8. High Precision Metrology on the Ultra-Lightweight W 50.8 cm f/1.25 Parabolic SHARPI Primary Mirror using a CGH Null Lens

    NASA Technical Reports Server (NTRS)

    Antonille, Scott

    2004-01-01

    For potential use on the SHARPI mission, Eastman Kodak has delivered a 50.8cm CA f/1.25 ultra-lightweight UV parabolic mirror with a surface figure error requirement of 6nm RMS. We address the challenges involved in verifying and mapping the surface error of this large lightweight mirror to +/-3nm using a diffractive CGH null lens. Of main concern is removal of large systematic errors resulting from surface deflections of the mirror due to gravity as well as smaller contributions from system misalignment and reference optic errors. We present our efforts to characterize these errors and remove their wavefront error contribution in post-processing as well as minimizing the uncertainty these calculations introduce. Data from Kodak and preliminary measurements from NASA Goddard will be included.

  9. Frequency and Type of Situational Awareness Errors Contributing to Death and Brain Damage: A Closed Claims Analysis.

    PubMed

    Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B

    2017-08-01

    Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% in other claims (P = 0.001), with no significant difference in payment size. Among 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims. Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.

  10. Height-Error Analysis for the FAA-Air Force Replacement Radar Program (FARR)

    DTIC Science & Technology

    1991-08-01

    [Figure 1-7, "Climatology Errors by Month": percent-frequency tables of height error by month (Jan-Dec); the table contents are not recoverable from the extracted text.]

  11. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for the Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
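
    As a rough illustration of the frequency-analysis step described in this abstract, the Python sketch below estimates the beat frequency of a synthetic, exponentially damped LITA-like signal from the peak of its discrete Fourier transform. The sampling rate, beat frequency, and damping constant are invented for demonstration and are not taken from the experiment.

```python
import numpy as np

# Synthetic damped-oscillation "LITA" signal (illustrative parameters only).
fs = 2.0e9                 # sampling rate, Hz (assumed)
t = np.arange(0, 200e-9, 1.0 / fs)
f_beat = 55.0e6            # true beat frequency, Hz (assumed)
tau = 60e-9                # damping time constant, s (assumed)
signal = np.exp(-t / tau) * np.cos(2 * np.pi * f_beat * t)

# Zero-pad and take a DFT; the peak of the magnitude spectrum gives the beat frequency.
n_fft = 1 << 18
spectrum = np.abs(np.fft.rfft(signal, n=n_fft))
freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
f_est = freqs[np.argmax(spectrum)]

print(f"estimated beat frequency: {f_est / 1e6:.2f} MHz "
      f"(relative error {abs(f_est - f_beat) / f_beat:.2%})")
```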

  12. Color-motion feature-binding errors are mediated by a higher-order chromatic representation

    PubMed Central

    Shevell, Steven K.; Wang, Wei

    2017-01-01

    Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004)]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014)]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism. PMID:26974945

  13. Evaluation of complications of root canal treatment performed by undergraduate dental students.

    PubMed

    AlRahabi, Mothanna K

    2017-12-01

    This study evaluated the technical quality of root canal treatment (RCT) and detected iatrogenic errors in an undergraduate dental clinic at the College of Dentistry, Taibah University, Saudi Arabia. Dental records of 280 patients who received RCT between 2013 and 2016 undertaken by dental students were investigated by retrospective chart review. Root canal obturation was evaluated on the basis of the length of obturation being ≤2 mm from the radiographic apex, with uniform radiodensity and good adaptation to root canal walls. Inadequate root canal obturation included cases containing procedural errors such as furcal perforation, ledge, canal transportation, strip perforation, root perforation, instrument separation, voids in the obturation, or underfilling or overfilling of the obturation. In 193 (68.9%) teeth, RCT was adequate and without procedural errors. However, in 87 (31.1%) teeth, RCT was inadequate and contained procedural errors. The frequency of procedural errors in the entire sample was 31.1% as follows: underfilling, 49.9%; overfilling, 24.1%; voids, 12.6%; broken instruments, 9.2%; apical perforation, 2.3%; and root canal transportation, 2.3%. There were no significant differences (p > 0.05) in the type or frequency of procedural errors between the fourth- and fifth-year students. Lower molars (43.1%) and upper incisors (19.2%) exhibited the highest and lowest frequencies of procedural errors, respectively. The technical quality of RCT performed by undergraduate dental students was classified as 'adequate' in 68.9% of the cases. There is a need for improvement in the training of students at the preclinical and clinical levels.

  14. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)

  15. Performance prediction of a synchronization link for distributed aerospace wireless systems.

    PubMed

    Wang, Wen-Qin; Shao, Huaizong

    2013-01-01

    For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With the mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions including oscillator, phase-locked loop, and receiver noise are quantified. The link synchronization performance is predicted by utilizing the knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.

  16. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time- or frequency-domain equalizers. Using an experimental 3-mode dual-polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
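
    The convergence criterion described above can be illustrated with a deliberately simplified, single-channel LMS equalizer whose step size decays over time. The real system is a multi-mode MIMO MMSE equalizer; the channel taps, step-size schedule, and symbol counts below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar channel with memory; the experiment uses a 3-mode, dual-polarization MIMO channel.
h = np.array([0.9, 0.3, -0.1])
symbols = rng.choice([-1.0, 1.0], size=20000)           # BPSK training symbols
received = np.convolve(symbols, h, mode="full")[:symbols.size]
received += 0.02 * rng.standard_normal(symbols.size)    # additive noise

n_taps = 7
w = np.zeros(n_taps)
mu = 0.05                                   # initial step size (assumed)
errors = []
for k in range(n_taps, symbols.size):
    x = received[k - n_taps:k][::-1]
    e = symbols[k] - w @ x                  # output error against the known symbol
    w += mu * e * x                         # LMS update
    errors.append(e * e)
    mu = max(0.002, mu * 0.9995)            # simple step-size decay (illustrative)

# Convergence criterion in the spirit of the paper: time to come within 5% of
# the average output error measured at the end of the run.
target = 1.05 * np.mean(errors[-5000:])
converged_at = next(i for i, _ in enumerate(errors)
                    if np.mean(errors[i:i + 200]) <= target)
print("converged after", converged_at, "symbols")
```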

  17. Consequences of land-cover misclassification in models of impervious surface

    USGS Publications Warehouse

    McMahon, G.

    2007-01-01

    Model estimates of impervious area as a function of land-cover area may be biased and imprecise because of errors in the land-cover classification. This investigation of the effects of land-cover misclassification on impervious surface models that use National Land Cover Data (NLCD) evaluates the consequences of adjusting land-cover within a watershed to reflect uncertainty assessment information. Model validation results indicate that using error-matrix information to adjust land-cover values used in impervious surface models does not substantially improve impervious surface predictions. Validation results indicate that the resolution of the land-cover data (Level I and Level II) is more important in predicting impervious surface accurately than whether the land-cover data have been adjusted using information in the error matrix. Level I NLCD, adjusted for land-cover misclassification, is preferable to the other land-cover options for use in models of impervious surface. This result is tied to the lower classification error rates for the Level I NLCD. © 2007 American Society for Photogrammetry and Remote Sensing.
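
    A minimal sketch of the kind of error-matrix adjustment discussed above: mapped class areas are redistributed according to a confusion (error) matrix from an accuracy assessment. The classes, sample counts, and areas are hypothetical, not NLCD values.

```python
import numpy as np

# Hypothetical 3-class example (e.g., developed, forest, water); all values are invented.
classes = ["developed", "forest", "water"]

# Error (confusion) matrix from an accuracy assessment:
# rows = mapped class, columns = reference (true) class, entries = sample counts.
error_matrix = np.array([
    [80, 15,  5],
    [10, 85,  5],
    [ 5,  5, 90],
], dtype=float)

mapped_area = np.array([120.0, 300.0, 80.0])   # km^2 of each mapped class (assumed)

# Proportion of each mapped class that truly belongs to each reference class.
row_props = error_matrix / error_matrix.sum(axis=1, keepdims=True)

# Adjusted class areas: redistribute the mapped areas using the error matrix.
adjusted_area = mapped_area @ row_props

for name, a_map, a_adj in zip(classes, mapped_area, adjusted_area):
    print(f"{name:10s} mapped {a_map:6.1f} km^2 -> adjusted {a_adj:6.1f} km^2")
```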

  18. Correction of localized shape errors on optical surfaces by altering the localized density of surface or near-surface layers

    DOEpatents

    Taylor, John S.; Folta, James A.; Montcalm, Claude

    2005-01-18

    Figure errors are corrected on optical or other precision surfaces by changing the local density of material in a zone at or near the surface. Optical surface height is correlated with the localized density of the material within the same region. A change in the height of the optical surface can then be caused by a change in the localized density of the material at or near the surface.

  19. Influence of OPD in wavelength-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Tian, Ailing; Liu, Bingcai; Dang, Juanjuan

    2009-12-01

    Phase-shifting interferometry is a powerful tool for high-accuracy optical measurement. It operates by changing the optical path length in the reference or test arm, which requires moving an optical element and becomes problematic when that element is very large and heavy. Wavelength-shifting interferometry was proposed to solve this problem: the phase-shifting angle is produced by changing the wavelength of the optical source, and it is determined by the wavelength and the optical path difference (OPD) between the test and reference wavefronts. The OPD is therefore an important factor in the measurement results. In practice, positional and profile errors of the element under test mean that the OPD, and hence the phase-shifting angle, differs from point to point during wavelength scanning; this introduces phase-shifting angle errors and, in turn, errors in the measured optical surface. To analyze the influence of OPD on surface error, the relation between surface error and OPD was studied, and the relation between phase-shifting error and OPD was established by simulation. An error compensation method was then derived from this analysis; after compensation, the measurement results can be improved to a great extent.

  20. Influence of OPD in wavelength-shifting interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Tian, Ailing; Liu, Bingcai; Dang, Juanjuan

    2010-03-01

    Phase-shifting interferometry is a powerful tool for high-accuracy optical measurement. It operates by changing the optical path length in the reference or test arm, which requires moving an optical element and becomes problematic when that element is very large and heavy. Wavelength-shifting interferometry was proposed to solve this problem: the phase-shifting angle is produced by changing the wavelength of the optical source, and it is determined by the wavelength and the optical path difference (OPD) between the test and reference wavefronts. The OPD is therefore an important factor in the measurement results. In practice, positional and profile errors of the element under test mean that the OPD, and hence the phase-shifting angle, differs from point to point during wavelength scanning; this introduces phase-shifting angle errors and, in turn, errors in the measured optical surface. To analyze the influence of OPD on surface error, the relation between surface error and OPD was studied, and the relation between phase-shifting error and OPD was established by simulation. An error compensation method was then derived from this analysis; after compensation, the measurement results can be improved to a great extent.
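
    The dependence of the phase-shifting angle on OPD can be made concrete with the two-beam relation φ = 2π·OPD/λ, which gives a phase step of approximately Δφ ≈ 2π·OPD·Δλ/λ² for a wavelength shift Δλ. The short sketch below evaluates this relation for a few assumed OPD values; the wavelength and shift are illustrative, not taken from these papers.

```python
import numpy as np

# Phase of a two-beam interferometer: phi = 2*pi*OPD/lambda.
# For a wavelength shift d_lambda, the induced phase step is approximately
#   d_phi ~= 2*pi*OPD*d_lambda / lambda**2,
# so test points with different OPD see different phase-shifting angles.

lam = 632.8e-9            # nominal wavelength, m (assumed He-Ne source)
d_lam = 10e-12            # wavelength shift per step, m (illustrative)

for opd_mm in (5.0, 10.0, 20.0):          # OPDs of different test points (assumed)
    opd = opd_mm * 1e-3
    d_phi = 2 * np.pi * opd * d_lam / lam**2
    print(f"OPD = {opd_mm:5.1f} mm -> phase step = {np.degrees(d_phi):6.2f} deg")
```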

  1. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
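
    As a sketch of the MMSE-FDE step that both channel-estimation stages feed into, the toy example below equalizes one block transmitted through a random frequency-selective channel using the standard MMSE weights W(k) = H*(k)/(|H(k)|² + σ²). Spreading/despreading and the channel-estimation stages themselves are omitted, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

Nc = 256                                   # block size (assumed)
Es_over_N0_dB = 10.0
sigma2 = 10 ** (-Es_over_N0_dB / 10)       # noise variance for unit-energy symbols

# Random frequency-selective channel (8 taps) and its frequency response.
h = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(16)
H = np.fft.fft(h, Nc)

# Transmit a block of QPSK chips through the channel (circular convolution for simplicity).
s = (rng.choice([-1, 1], Nc) + 1j * rng.choice([-1, 1], Nc)) / np.sqrt(2)
r = np.fft.ifft(np.fft.fft(s) * H) + np.sqrt(sigma2 / 2) * (
    rng.standard_normal(Nc) + 1j * rng.standard_normal(Nc))

# MMSE frequency-domain equalization: W(k) = conj(H(k)) / (|H(k)|^2 + sigma^2).
R = np.fft.fft(r)
W = np.conj(H) / (np.abs(H) ** 2 + sigma2)
s_hat = np.fft.ifft(W * R)

print("mean squared error after FDE:", np.mean(np.abs(s_hat - s) ** 2))
```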

  2. Robust 2-Qubit Gates in a Linear Ion Crystal Using a Frequency-Modulated Driving Force

    NASA Astrophysics Data System (ADS)

    Leung, Pak Hong; Landsman, Kevin A.; Figgatt, Caroline; Linke, Norbert M.; Monroe, Christopher; Brown, Kenneth R.

    2018-01-01

    In an ion trap quantum computer, collective motional modes are used to entangle two or more qubits in order to execute multiqubit logical gates. Any residual entanglement between the internal and motional states of the ions results in loss of fidelity, especially when there are many spectator ions in the crystal. We propose using a frequency-modulated driving force to minimize such errors. In simulation, we obtained an optimized frequency-modulated 2-qubit gate that can suppress errors to less than 0.01% and is robust against frequency drifts over ±1 kHz. Experimentally, we have obtained a 2-qubit gate fidelity of 98.3(4)%, a state-of-the-art result for 2-qubit gates with five ions.

  3. A neuroimaging study of conflict during word recognition.

    PubMed

    Riba, Jordi; Heldmann, Marcus; Carreiras, Manuel; Münte, Thomas F

    2010-08-04

    Using functional magnetic resonance imaging, the neural activity associated with error commission and conflict monitoring in a lexical decision task was assessed. In a cohort of 20 native speakers of Spanish, conflict was introduced by presenting words with high and low lexical frequency and pseudo-words with high and low syllabic frequency for the first syllable. Erroneous versus correct responses showed activation in the frontomedial and left inferior frontal cortex. A similar pattern was found for correctly classified words of low versus high lexical frequency and for correctly classified pseudo-words of high versus low syllabic frequency. Conflict-related activations for language materials largely overlapped with error-induced activations. The effect of syllabic frequency underscores the role of sublexical processing in visual word recognition and supports the view that the initial syllable mediates between the letter and word level.

  4. Creating a monthly time series of the potentiometric surface in the Upper Floridan aquifer, Northern Tampa Bay area, Florida, January 2000-December 2009

    USGS Publications Warehouse

    Lee, Terrie M.; Fouad, Geoffrey G.

    2014-01-01

    In Florida’s karst terrain, where groundwater and surface waters interact, a mapping time series of the potentiometric surface in the Upper Floridan aquifer offers a versatile metric for assessing the hydrologic condition of both the aquifer and overlying streams and wetlands. Long-term groundwater monitoring data were used to generate a monthly time series of potentiometric surfaces in the Upper Floridan aquifer over a 573-square-mile area of west-central Florida between January 2000 and December 2009. Recorded groundwater elevations were collated for 260 groundwater monitoring wells in the Northern Tampa Bay area, and a continuous time series of daily observations was created for 197 of the wells by estimating missing daily values through regression relations with other monitoring wells. Kriging was used to interpolate the monthly average potentiometric-surface elevation in the Upper Floridan aquifer over a decade. The mapping time series gives spatial and temporal coherence to groundwater monitoring data collected continuously over the decade by three different organizations, but at various frequencies. Further, the mapping time series describes the potentiometric surface beneath parts of six regionally important stream watersheds and 11 municipal well fields that collectively withdraw about 90 million gallons per day from the Upper Floridan aquifer. Monthly semivariogram models were developed using monthly average groundwater levels at wells. Kriging was used to interpolate the monthly average potentiometric-surface elevations and to quantify the uncertainty in the interpolated elevations. Drawdown of the potentiometric surface within well fields was likely the cause of a characteristic decrease and then increase in the observed semivariance with increasing lag distance. This characteristic made use of the hole effect model appropriate for describing the monthly semivariograms and the interpolated surfaces. Spatial variance reflected in the monthly semivariograms decreased markedly between 2002 and 2003, timing that coincided with decreases in well-field pumping. Cross-validation results suggest that the kriging interpolation may smooth over the drawdown of the potentiometric surface near production wells. The groundwater monitoring network of 197 wells yielded an average kriging error in the potentiometric-surface elevations of 2 feet or less over approximately 70 percent of the map area. Additional data collection within the existing monitoring network of 260 wells and near selected well fields could reduce the error in individual months. Reducing the kriging error in other areas would require adding new monitoring wells. Potentiometric-surface elevations fluctuated by as much as 30 feet over the study period, and the spatially averaged elevation for the entire surface rose by about 2 feet over the decade. Monthly potentiometric-surface elevations describe the lateral groundwater flow patterns in the aquifer and are usable at a variety of spatial scales to describe vertical groundwater recharge and discharge conditions for overlying surface-water features.
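
    A minimal, hand-rolled ordinary-kriging sketch in the spirit of the interpolation described above. It uses an exponential variogram for simplicity (the study fitted hole-effect models), and the well locations and heads are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly-average potentiometric heads (ft) at monitoring wells;
# well coordinates are in miles. All values are invented for illustration.
xy = rng.uniform(0, 20, size=(30, 2))
z = 50 + 0.5 * xy[:, 0] - 0.3 * xy[:, 1] + rng.normal(0, 0.5, 30)

def gamma(h, sill=1.0, a=8.0):
    """Exponential variogram (a simplification; the study used hole-effect models)."""
    return sill * (1.0 - np.exp(-h / a))

def ordinary_krige(xy, z, x0):
    """Ordinary kriging estimate and kriging variance at location x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)          # kriging weights plus Lagrange multiplier
    return w[:n] @ z, w @ b            # estimate, kriging variance

est, var = ordinary_krige(xy, z, np.array([10.0, 10.0]))
print(f"interpolated head: {est:.2f} ft, kriging standard error: {np.sqrt(var):.2f} ft")
```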

  5. Numerical correction of the phase error due to electromagnetic coupling effects in 1D EIT borehole measurements

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Zimmermann, E.; Huisman, J. A.; Treichel, A.; Wolters, B.; van Waasen, S.; Kemna, A.

    2012-12-01

    Spectral Electrical Impedance Tomography (EIT) allows obtaining images of the complex electrical conductivity for a broad frequency range (mHz to kHz). It has recently received increased interest in the field of near-surface geophysics and hydrogeophysics because of the relationships between complex electrical properties and hydrogeological and biogeochemical properties and processes observed in the laboratory with Spectral Induced Polarization (SIP). However, these laboratory results have also indicated that a high phase accuracy is required for surface and borehole EIT measurements because many soils and sediments are only weakly polarizable and show phase angles between 1 and 20 mrad. In the case of borehole EIT measurements, long cables and electrode chains (>10 meters) are typically used, which leads to undesired inductive coupling between the electric loops for current injection and potential measurement and capacitive coupling between the electrically conductive cable shielding and the soil. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurement to the mHz to Hz range. The aim of this study is i) to develop correction procedures for these coupling effects to extend the applicability of EIT to the kHz range and ii) to validate these corrections using controlled laboratory measurements and field measurements. In order to do so, the inductive coupling effect was modeled using electronic circuit models and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 2 mrad in the frequency range up to 10 kHz was achieved. In a field demonstration using a 25 m borehole chain with 8 electrodes with 1 m electrode separation, the corrections were also applied within a 1D inversion of the borehole EIT measurements. The results show that the correction methods increased the measurement accuracy considerably.

  6. Effect of neoclassical toroidal viscosity on error-field penetration thresholds in tokamak plasmas.

    PubMed

    Cole, A J; Hegna, C C; Callen, J D

    2007-08-10

    A model for field-error penetration is developed that includes nonresonant as well as the usual resonant field-error effects. The nonresonant components cause a neoclassical toroidal viscous torque that keeps the plasma rotating at a rate comparable to the ion diamagnetic frequency. The new theory is used to examine resonant error-field penetration threshold scaling in Ohmic tokamak plasmas. Compared to previous theoretical results, we find the plasma is less susceptible to error-field penetration and locking, by a factor that depends on the nonresonant error-field amplitude.

  7. Calibration of misalignment errors in the non-null interferometry based on reverse iteration optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xinmu; Hao, Qun; Hu, Yao; Wang, Shaopu; Ning, Yan; Li, Tengfei; Chen, Shufen

    2017-10-01

    Because it does not need to compensate the whole aberration introduced by the aspheric surface, the non-null test has an advantage over the null test in applicability. However, retrace error, caused by the path difference between the rays reflected from the surface under test (SUT) and the incident rays, is introduced into the measurement and contributes to the residual wavefront aberrations (RWAs) along with surface figure error (SFE), misalignment error, and other influences. Because it is difficult to separate from the RWAs, the misalignment error may remain after measurement, and it is hard to tell whether it has been removed. Studying the removal of misalignment error is therefore a primary task. A brief demonstration of the digital Moiré interferometric technique is presented, and a calibration method for misalignment error in the non-null test, based on a reverse iteration optimization (RIO) algorithm, is addressed. The proposed method operates mostly in the virtual system and requires no accurate adjustment of the real interferometer, which significantly reduces the errors brought by repeated, complicated manual adjustment and thereby improves the accuracy of the aspheric surface test. Simulation verification is presented: the calibration accuracy for position and attitude reaches the order of 10⁻⁵ mm and 0.0056×10⁻⁶ rad, respectively, demonstrating that the influence of misalignment error can be precisely calculated and removed after calibration.

  8. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. Normalized Zernike polynomials are used to describe a smooth, continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relation database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector fed by an axial-mode helical antenna is further conducted to verify the correctness of the proposed method. Finally, the rules governing how surface error distribution affects electromagnetic performance are summarized. The simulation results show that some Zernike terms may decrease the amplitude of the main lobe of the antenna pattern, while others may reduce the pointing accuracy. This work provides a new approach for adjusting a reflector's shape during the manufacturing process.
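
    To illustrate how a deformation surface can be described by normalized Zernike terms, the sketch below evaluates a few low-order terms on the unit disk and reports the resulting RMS surface error. The particular terms and coefficients are invented, not taken from the paper.

```python
import numpy as np

# Evaluate a deformation map built from a few normalized low-order Zernike terms
# on the unit disk. Coefficients and term selection are illustrative only.

def zernike_terms(rho, theta):
    """A few normalized Zernike polynomials (defocus, astigmatism, coma, spherical)."""
    return {
        "defocus":   np.sqrt(3) * (2 * rho**2 - 1),
        "astig_0":   np.sqrt(6) * rho**2 * np.cos(2 * theta),
        "coma_x":    np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),
        "spherical": np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
    }

n = 257
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
mask = rho <= 1.0

coeffs = {"defocus": 0.2, "astig_0": 0.1, "coma_x": 0.05, "spherical": 0.08}  # mm (assumed)
terms = zernike_terms(rho, theta)
surface_error = sum(coeffs[k] * terms[k] for k in coeffs)

rms = np.sqrt(np.mean(surface_error[mask] ** 2))
print(f"RMS surface error inside the aperture: {rms:.3f} mm")
```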

  9. Speech Errors in Progressive Non-Fluent Aphasia

    ERIC Educational Resources Information Center

    Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray

    2010-01-01

    The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…

  10. Investigating the Relationship between Conceptual and Procedural Errors in the Domain of Probability Problem-Solving.

    ERIC Educational Resources Information Center

    O'Connell, Ann Aileen

    The relationships among types of errors observed during probability problem solving were studied. Subjects were 50 graduate students in an introductory probability and statistics course. Errors were classified as text comprehension, conceptual, procedural, and arithmetic. Canonical correlation analysis was conducted on the frequencies of specific…

  11. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
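
    The proportional biases quoted above (roughly 2% per degree of path-angle error near 45°, and 1% per 1% path-length error) can be reproduced with the standard AVM transit-time relation; the geometry, sound speed, and travel times below are illustrative only.

```python
import numpy as np

# AVM line velocity from downstream/upstream travel times:
#   V = L / (2*cos(theta)) * (1/t_down - 1/t_up)
# Errors in the assumed path length L or path angle theta bias V proportionally.

def line_velocity(L, theta_deg, t_down, t_up):
    return L / (2.0 * np.cos(np.radians(theta_deg))) * (1.0 / t_down - 1.0 / t_up)

# Illustrative geometry and travel times (not from the report).
L_true, theta_true = 100.0, 45.0           # path length (m) and angle (deg)
c, v_true = 1480.0, 0.5                    # sound speed and true line velocity, m/s
v_path = v_true * np.cos(np.radians(theta_true))
t_down = L_true / (c + v_path)
t_up = L_true / (c - v_path)

v0 = line_velocity(L_true, theta_true, t_down, t_up)
v_angle_err = line_velocity(L_true, theta_true + 1.0, t_down, t_up)    # 1 deg angle error
v_length_err = line_velocity(L_true * 1.01, theta_true, t_down, t_up)  # 1% length error

print(f"baseline velocity     : {v0:.4f} m/s")
print(f"1 degree angle error  : {100 * (v_angle_err - v0) / v0:+.1f} % bias")
print(f"1 % path-length error : {100 * (v_length_err - v0) / v0:+.1f} % bias")
```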

  12. Evaluation of a scale-model experiment to investigate long-range acoustic propagation

    NASA Technical Reports Server (NTRS)

    Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.

    1987-01-01

    Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones colinear with a continuous-wave type of sound source were measured over a wavelength range from 10 to 160 for a nominal test frequency of 10 kHz. Most tests were made for a hard model surface (plywood), but limited tests were also made for a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed that was caused by disturbances in the propagation medium. Also, there was evidence of extraneous propagation-path contributions to data irregularities at more remote microphones. Sensitivity studies for the hard-surface and microphone indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.

  13. Reduction of Orifice-Induced Pressure Errors

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.

    1987-01-01

    Use of a porous-plug orifice reduces or eliminates errors, induced by the orifice itself, in measuring static pressure on an airfoil surface in wind-tunnel experiments. A piece of sintered metal is press-fitted into the static-pressure orifice so that it matches the surface contour of the model. The porous material reduces the orifice-induced pressure error associated with a conventional orifice of the same or smaller diameter. It also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections, and provides more accurate measurements in regions with very thin boundary layers.

  14. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1991-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS) where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it provides relatively coarse estimates of the frequency and its derivatives, with higher rms estimation errors but a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimates of the phase along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.

  15. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1990-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS) where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it provides relatively coarse estimates of the frequency and its derivatives, with higher rms estimation errors but a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimates of the phase along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.
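
    The coarse-then-fine structure of such an estimator can be sketched with a much simpler stand-in: a DFT peak search for the coarse frequency estimate, followed by a phase-slope fit on the down-mixed residual. The patent's actual stages are a DLS estimator and an extended Kalman filter; everything below is an illustrative simplification with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simplified two-stage frequency estimator: coarse DFT search, then fine refinement.
fs, n = 1.0e6, 4096
f_true = 123_456.7                      # Hz (illustrative)
t = np.arange(n) / fs
x = np.exp(1j * 2 * np.pi * f_true * t)
x += 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Stage 1: coarse estimate from the DFT peak (resolution fs/n, about 244 Hz here).
spec = np.abs(np.fft.fft(x))
f_coarse = np.fft.fftfreq(n, 1 / fs)[np.argmax(spec)]

# Stage 2: mix the coarse estimate out and fit the residual phase slope.
residual = x * np.exp(-1j * 2 * np.pi * f_coarse * t)
phase = np.unwrap(np.angle(residual))
slope = np.polyfit(t, phase, 1)[0]      # rad/s
f_fine = f_coarse + slope / (2 * np.pi)

print(f"coarse: {f_coarse:.1f} Hz, refined: {f_fine:.2f} Hz, true: {f_true} Hz")
```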

  16. Addressing global uncertainty and sensitivity in first-principles based microkinetic models by an adaptive sparse grid approach

    NASA Astrophysics Data System (ADS)

    Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian

    2018-01-01

    In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
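
    As a crude stand-in for the adaptive sparse-grid quadrature, the sketch below propagates bounded, uniformly distributed energy errors through a toy two-step Arrhenius rate model by Monte Carlo sampling. The barrier values and the 0.2 eV error bound are invented, and the real microkinetic model is far more complex.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy turnover frequency (TOF): limited by the slower (higher-barrier) of two
# activated steps, i.e. min_i exp(-E_i / kT).
kB_T = 0.0257                       # eV at roughly 300 K
barriers = np.array([0.8, 1.1])     # nominal activation energies, eV (assumed)

# Error model under a bound: uniform DFT errors on [-0.2, +0.2] eV (assumed).
samples = barriers + rng.uniform(-0.2, 0.2, size=(100_000, 2))
tofs = np.exp(-samples.max(axis=1) / kB_T)

print(f"TOF spans {tofs.min():.2e} .. {tofs.max():.2e} (arbitrary units)")
print(f"log10(TOF) standard deviation: {np.std(np.log10(tofs)):.2f} decades")
```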

  17. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
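
    The regression-and-prediction-error idea can be sketched with an ordinary least-squares quadratic response surface on synthetic data. The paper's method additionally orthogonalizes the modeling functions and selects terms automatically; the factors, ranges, and response coefficients below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fit a quadratic response surface to synthetic "wind tunnel" data by least squares.
alpha = rng.uniform(-4, 10, 200)          # angle of attack, deg (assumed factor)
mach = rng.uniform(0.3, 0.8, 200)         # Mach number (assumed factor)
cl = 0.1 + 0.08 * alpha + 0.2 * mach - 0.002 * alpha**2 + rng.normal(0, 0.01, 200)

X = np.column_stack([np.ones_like(alpha), alpha, mach, alpha**2, mach**2, alpha * mach])
train, test = slice(0, 150), slice(150, None)

coef, *_ = np.linalg.lstsq(X[train], cl[train], rcond=None)
pred = X[test] @ coef

fit_rms = np.sqrt(np.mean((X[train] @ coef - cl[train]) ** 2))
pred_rms = np.sqrt(np.mean((pred - cl[test]) ** 2))
print(f"fit RMS = {fit_rms:.4f}, prediction RMS = {pred_rms:.4f}, "
      f"about {100 * pred_rms / np.mean(np.abs(cl)):.1f}% of the mean response")
```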

  18. Accuracy Evaluation of a 3-Dimensional Surface Imaging System for Guidance in Deep-Inspiration Breath-Hold Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja

    2013-02-01

    Purpose: To investigate the applicability of 3-dimensional (3D) surface imaging for image guidance in deep-inspiration breath-hold radiation therapy (DIBH-RT) for patients with left-sided breast cancer. For this purpose, setup data based on captured 3D surfaces was compared with setup data based on cone beam computed tomography (CBCT). Methods and Materials: Twenty patients treated with DIBH-RT after breast-conserving surgery (BCS) were included. Before the start of treatment, each patient underwent a breath-hold CT scan for planning purposes. During treatment, dose delivery was preceded by setup verification using CBCT of the left breast. 3D surfaces were captured by a surface imaging system concurrently with the CBCT scan. Retrospectively, surface registrations were performed for CBCT to CT and for a captured 3D surface to CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic error, random error, and 95% limits of agreement were calculated. Furthermore, receiver operating characteristic (ROC) analysis was performed. Results: Good correlation between setup errors was found: R² = 0.70, 0.90, and 0.82 in the left-right, craniocaudal, and anterior-posterior directions, respectively. Systematic errors were ≤0.17 cm in all directions. Random errors were ≤0.15 cm. The limits of agreement were -0.34 to 0.48, -0.42 to 0.39, and -0.52 to 0.23 cm in the left-right, craniocaudal, and anterior-posterior directions, respectively. ROC analysis showed that a threshold between 0.4 and 0.8 cm corresponds to promising true positive rates (0.78-0.95) and false positive rates (0.12-0.28). Conclusions: The results support the application of 3D surface imaging for image guidance in DIBH-RT after BCS.
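
    The agreement statistics reported above (group mean, systematic and random errors, 95% limits of agreement) can be computed as in the sketch below, here on synthetic per-fraction setup differences rather than the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic per-fraction setup errors (cm) in one direction, as measured by CBCT and
# by 3D surface imaging, for 20 patients x 5 fractions. All numbers are invented.
n_patients, n_fractions = 20, 5
cbct = rng.normal(0.0, 0.3, size=(n_patients, n_fractions))
surface = cbct + rng.normal(0.05, 0.15, size=cbct.shape)   # small offset plus extra noise

diff = surface - cbct
group_mean = diff.mean()
systematic = diff.mean(axis=1).std(ddof=1)                    # SD of per-patient means
random_err = np.sqrt((diff.std(axis=1, ddof=1) ** 2).mean())  # RMS of per-patient SDs
loa = (group_mean - 1.96 * diff.std(ddof=1), group_mean + 1.96 * diff.std(ddof=1))

print(f"group mean difference   : {group_mean:+.2f} cm")
print(f"systematic error (Sigma): {systematic:.2f} cm")
print(f"random error (sigma)    : {random_err:.2f} cm")
print(f"95% limits of agreement : {loa[0]:+.2f} .. {loa[1]:+.2f} cm")
```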

  19. Segmentation of ECG from Surface EMG Using DWT and EMD: A Comparison Study

    NASA Astrophysics Data System (ADS)

    Shahbakhti, Mohammad; Heydari, Elnaz; Luu, Gia Thien

    2014-10-01

    The electrocardiographic (ECG) signal is a major artifact when recording the surface electromyogram (SEMG). Removing this artifact is an important task before SEMG analysis for biomedical purposes. In this paper, the application of the discrete wavelet transform (DWT) and empirical mode decomposition (EMD) to the elimination of the ECG artifact from SEMG is investigated. The focus of this research is to reach the optimal number of decomposition levels for both techniques using the mean power frequency (MPF). To evaluate the proposed methods, ten simulated and three real ECG-contaminated SEMG signals were tested. The signal-to-noise ratio (SNR) and mean square error (MSE) between the filtered and pure signals are used as the performance indexes. The results suggest that both techniques can remove the ECG artifact from SEMG signals reasonably well; however, DWT performs better and faster on real data.
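
    A minimal DWT-based version of the ECG-removal idea, using PyWavelets on a crude simulated SEMG-plus-ECG mixture. The wavelet, decomposition level, and choice of which bands to zero are assumptions, not the paper's optimized settings.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(7)

# Simulated SEMG (broadband noise) contaminated by a low-frequency "ECG-like" artifact.
fs, dur = 1000, 10                                   # Hz, seconds (assumed)
t = np.arange(0, dur, 1 / fs)
semg = rng.normal(0, 0.3, t.size)
ecg = np.zeros_like(t)
ecg[::fs] = 2.0                                      # crude 1 Hz "R peaks"
ecg = np.convolve(ecg, np.hanning(81), mode="same")  # widen peaks into ECG-like lobes
contaminated = semg + ecg

# DWT filtering: decompose, suppress the lowest-frequency bands where the ECG
# energy concentrates, and reconstruct.
level = 6
coeffs = pywt.wavedec(contaminated, "db4", level=level)
coeffs[0] = np.zeros_like(coeffs[0])   # approximation band
coeffs[1] = np.zeros_like(coeffs[1])   # deepest detail band
cleaned = pywt.waverec(coeffs, "db4")[: t.size]

mse = np.mean((cleaned - semg) ** 2)
snr = 10 * np.log10(np.sum(semg ** 2) / np.sum((cleaned - semg) ** 2))
print(f"MSE = {mse:.4f}, SNR = {snr:.1f} dB")
```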

  20. Opto-thermal analysis of a lightweighted mirror for solar telescope.

    PubMed

    Banyal, Ravinder K; Ravindra, B; Chatterjee, S

    2013-03-25

    In this paper, an opto-thermal analysis of a moderately heated lightweighted solar telescope mirror is carried out using 3D finite element analysis (FEA). A physically realistic heat transfer model is developed to account for the radiative heating and energy exchange of the mirror with surroundings. The numerical simulations show the non-uniform temperature distribution and associated thermo-elastic distortions of the mirror blank clearly mimicking the underlying discrete geometry of the lightweighted substrate. The computed mechanical deformation data is analyzed with surface polynomials and the optical quality of the mirror is evaluated with the help of a ray-tracing software. The thermal print-through distortions are further shown to contribute to optical figure changes and mid-spatial frequency errors of the mirror surface. A comparative study presented for three commonly used substrate materials, namely, Zerodur, Pyrex and Silicon Carbide (SiC) is relevant to vast area of large optics requirements in ground and space applications.
