Factors Governing Surface Form Accuracy In Diamond Machined Components
NASA Astrophysics Data System (ADS)
Myler, J. K.; Page, D. A.
1988-10-01
Manufacturing methods for diamond machined optical surfaces, for application at infrared wavelengths, require that a new set of criteria be recognised for the specification of surface form. Appropriate surface form parameters are discussed with particular reference to an XY cartesian geometry CNC machine. Methods for reducing surface form errors in diamond machining are discussed for certain areas such as tool wear, tool centring, and the fixturing of the workpiece. Examples of achievable surface form accuracy are presented. Traditionally, optical surfaces have been produced by random polishing techniques using polishing compounds and lapping tools. For lens manufacture, the simplest surface which could be created corresponded to a sphere; the sphere is a natural outcome of a random grinding and polishing process. The measurement of surface form accuracy would most commonly be performed using a contact test gauge plate, polished to a sphere of known radius of curvature. QA would simply be achieved using a diffuse monochromatic source and looking for residual deviations between the polished surface and the test plate. The specifications governing the manufacture of surfaces using these techniques would call for the accuracy to which the generated surface should match the test plate, defined as a spherical deviation from the required curvature and a non-spherical astigmatic error. Consequently, optical design software has tolerancing routines which specifically allow the designer to assess the influence of spherical error and astigmatic error on the optical performance. The creation of general aspheric surfaces is not so straightforward using conventional polishing techniques, since the surface profile is non-spherical and is instead well approximated by a power series.
For infrared applications (λ = 8-12 µm), numerically controlled single point diamond turning is an alternative manufacturing technology capable of creating aspheric profiles as well as simple spheres. It is important, however, to realise that a diamond turning process possesses a new set of criteria which limit the accuracy of the surface profile created, corresponding to a completely new set of specifications. The most important factors are: tool centring accuracy, surface waviness, conical form error, and other rotationally symmetric non-spherical errors. The fixturing of the workpiece is very different from that of a conventional lap, since in many cases the diamond machine resembles a conventional lathe geometry where the workpiece rotates at a few thousand RPM. Substrates must be held rigidly for rotation at such speeds, as compared with the more delicate mounting methods for conventional laps. Consequently the workpiece may suffer from other forms of deformation which are non-rotationally symmetric, due to mounting stresses (static deformation) and stresses induced at the speed of rotation (dynamic deformation). The magnitude of each of these contributions to overall form error will be a function of the type of machine, the material, the substrate, and the testing design. The following sections describe each of these effects in more detail, based on experience obtained on a Pneumo Precision MSG325 XY CNC machine. Certain in-process measurement techniques have been devised to minimise and quantify each contribution.
Stochastic Surface Mesh Reconstruction
NASA Astrophysics Data System (ADS)
Ozendi, M.; Akca, D.; Topan, H.
2018-05-01
A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step deals with developing an anisotropic point error model, which is capable of computing the theoretical precisions of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step is focused on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.
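The point-wise filtering step described above can be sketched as follows. This is a minimal illustration, assuming the error model delivers a 3×3 coordinate covariance matrix per point; the function names, the "largest semi-axis" criterion, and the threshold are hypothetical, not the authors' exact quality measure.

```python
import numpy as np

def semi_axes(cov):
    # 1-sigma semi-axis lengths and directions of the error ellipsoid
    # for a 3x3 coordinate covariance matrix (eigendecomposition)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return np.sqrt(eigvals), eigvecs

def filter_by_quality(points, covs, max_semi_axis):
    # hypothetical point-wise quality measure: keep a point only if the
    # largest semi-axis of its error ellipsoid is below the threshold
    kept = [p for p, c in zip(points, covs)
            if semi_axes(c)[0].max() <= max_semi_axis]
    return np.array(kept)

# toy example: one precise point, one noisy point (units: metres)
pts = [[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]]
covs = [np.diag([1e-8, 1e-8, 1e-8]),   # ~0.1 mm std. dev. per axis
        np.diag([1e-4, 1e-4, 1e-4])]   # ~10 mm std. dev. per axis
good = filter_by_quality(pts, covs, max_semi_axis=1e-3)  # keeps 1 point
```

The noisy point's ellipsoid exceeds the threshold and is discarded before triangulation, mirroring the paper's "least errors only" selection.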
A Rapid Method to Achieve Aero-Engine Blade Form Detection
Sun, Bin; Li, Bing
2015-01-01
This paper proposes a rapid method to detect aero-engine blade form, according to the characteristics of an aero-engine blade surface. The method first deduces an inclination error model for free-form surface measurement based on the non-contact laser triangulation principle. A four-coordinate measuring system was then independently developed, a special fixture was designed according to the blade shape features, and a fast measurement path over the blade features was planned. Finally, by using the inclination error model to correct the acquired data, the measurement error caused by surface tilt is compensated. As a result, the measurement error of the laser displacement sensor was less than 10 μm. Experimental verification showed that this method makes full use of the strengths of optical non-contact measurement: fast speed, high precision, and a wide measuring range. Using a standard gauge block as the measurement reference, the coordinate system conversion is simple and practical. The method improves not only the measurement accuracy of the blade surface but also the measurement efficiency, and therefore increases the value of complex surface measurement. PMID:26039420
Fabrication of Φ160 mm convex hyperbolic mirror for remote sensing instrument
NASA Astrophysics Data System (ADS)
Kuo, Ching-Hsiang; Yu, Zong-Ru; Ho, Cheng-Fang; Hsu, Wei-Yao; Chen, Fong-Zhi
2012-10-01
In this study, efficient polishing processes with inspection procedures for a large convex hyperbolic mirror of a Cassegrain optical system are presented. The polishing process combines the techniques of conventional lapping and CNC polishing. We apply the conventional spherical lapping process to quickly remove the sub-surface damage (SSD) layer caused by the grinding process and to obtain the accurate radius of the best-fit sphere (BFS) of the aspheric surface with fine surface texture simultaneously. Thus the material removed in the aspherization process can be minimized and the polishing time for SSD removal can also be reduced substantially. The inspection procedure was carried out using a phase-shift interferometer with CGH and stitching techniques. To acquire the real surface form error of each sub-aperture, the wavefront errors of the reference flat and CGH flat due to the gravity effect of the vertical setup are calibrated in advance. Subsequently, we stitch 10 calibrated sub-aperture surface form errors to establish the whole irregularity of the mirror over its 160 mm diameter for correction polishing. The final form error of the Φ160 mm convex hyperbolic mirror is 0.15 μm PV and 17.9 nm RMS.
Machining process influence on the chip form and surface roughness by neuro-fuzzy technique
NASA Astrophysics Data System (ADS)
Anicic, Obrad; Jović, Srđan; Aksić, Danilo; Skulić, Aleksandar; Nedić, Bogdan
2017-04-01
The main aim of the study was to analyze the influence of six machining parameters on chip shape formation and surface roughness during turning of steel 30CrNiMo8. The three components of the cutting force were used as inputs together with cutting speed, feed rate, and depth of cut. It is crucial for engineers to use optimal machining parameters to get the best results and tight control of the machining process; there is therefore a need to find the machining parameters for an optimal machining procedure. An adaptive neuro-fuzzy inference system (ANFIS) was used to estimate the influence of the inputs on chip shape formation and surface roughness. According to the results, the cutting force in the direction of the depth of cut, i.e. the force component that determines the depth of cut, has the highest influence on the chip form, with a testing error of 0.2562. For surface roughness, the depth of cut has the highest influence, with a testing error of 5.2753 for the corresponding cutting force component. Generally, the depth of cut and the cutting force that provides the depth of cut are the most dominant factors for chip form and surface roughness; any small change in either could drastically affect the chip form or the surface roughness of the working material.
Use of dual coolant displacing media for in-process optical measurement of form profiles
NASA Astrophysics Data System (ADS)
Gao, Y.; Xie, F.
2018-07-01
In-process measurement supports feedback control to reduce workpiece surface form error. Without it, the workpiece surface must be measured offline, causing significant errors in workpiece positioning and reduced productivity. To offer better performance, a new in-process optical measurement method based on the use of dual coolant displacing media is proposed and studied, which uses an air phase and a liquid phase together to resist coolant and achieve in-process measurement. In the proposed new design, coolant is used to replace the previously used clean water, avoiding coolant dilution. Compared with previous methods, the distance between the applicator and the workpiece surface can be relaxed to 1 mm, 4 times larger than before, thus permitting measurement of curved surfaces. Air consumption is up to 1.5 times lower than in the best previously available method. For a sample workpiece with curved surfaces, the relative error of profile measurement under coolant conditions can be as small as 0.1% compared with that under no-coolant conditions. Problems in comparing measured 3D surfaces are discussed. A comparative study between a Bruker Npflex optical profiler and the developed new in-process optical profiler was conducted. For a surface area of 5.5 mm × 5.5 mm, the average measurement error under coolant conditions is only 0.693 µm. In addition, the error due to the new method is only 0.10 µm when compared between coolant and no-coolant conditions. The effect of a thin liquid film on the workpiece surface is discussed. The experimental results show that the new method can successfully solve the coolant dilution problem and is able to accurately measure the workpiece surface whilst it is fully submerged in the opaque coolant. The proposed new method is advantageous and should be very useful for in-process optical form profile measurement in precision machining.
Surface characterization protocol for precision aspheric optics
NASA Astrophysics Data System (ADS)
Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra
2017-10-01
In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes for precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to aim at the delivery of aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their form, figure, and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles, with the aim of identifying sources of error and optimizing the metrology process. The sources of error during profilometry may include: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear aperture and diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor Hobson) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps in optimal aspheric surface production methodology.
Flux Sampling Errors for Aircraft and Towers
NASA Technical Reports Server (NTRS)
Mahrt, Larry
1998-01-01
Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.
NASA Astrophysics Data System (ADS)
Katahira, Yu; Fukuta, Masahiko; Katsuki, Masahide; Momochi, Takeshi; Yamamoto, Yoshihiro
2016-09-01
Recently, improved quality has been required of the aspherical lenses mounted in camera units. Optical lenses in high-volume production are generally made by a molding process using cemented carbide or Ni-P coated steel molds, selected according to the lens material (glass or plastic). Developments in mold production technologies now yield high-quality cut or ground mold surfaces: form errors below 100 nm PV and surface roughness below 1 nm Ra can be achieved on molds. However, requirements now go beyond form error (PV) and surface roughness (Ra) to other surface characteristics. For instance, mid-spatial-frequency undulations on the lens surface can cause distorted shapes at imaging. In this study, we focused on several types of sinuous structures, which can be classified as form errors with respect to the designed surface and which deteriorate optical system performance, and established mold production processes that minimize undulations on the surface. This report describes an analysis process using the power spectral density (PSD) to evaluate micro-undulations on the machined surface quantitatively. It also shows that a grinding process with circumferential velocity control is effective for fabricating large-aperture lenses and can minimize undulations appearing on the outer area of the machined surface, and describes the optical glass lens molding process using a high-precision press machine.
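The PSD analysis of machined-surface undulations mentioned above can be sketched as a one-dimensional profile evaluation. This is a generic FFT periodogram, not the authors' exact procedure: the window choice, normalization, and the synthetic profile are assumptions for illustration.

```python
import numpy as np

def psd_1d(profile, dx):
    # one-sided periodogram of a 1D surface profile
    # profile: height samples (m), dx: sample spacing (m)
    n = len(profile)
    h = profile - profile.mean()
    spec = np.fft.rfft(h * np.hanning(n))     # window reduces leakage
    psd = (np.abs(spec) ** 2) * dx / n        # illustrative scaling
    freqs = np.fft.rfftfreq(n, dx)            # spatial frequencies (1/m)
    return freqs, psd

# synthetic profile: a 5 nm mid-spatial-frequency undulation, 1 mm period
x = np.arange(2048) * 1e-5                    # 10 um sample spacing
prof = 5e-9 * np.sin(2 * np.pi * x / 1e-3)
f, p = psd_1d(prof, 1e-5)
peak_freq = f[np.argmax(p)]                   # near 1/(1 mm) = 1000 1/m
```

A peak in the PSD at the undulation's spatial frequency is what makes such structures quantifiable separately from form error (PV) and roughness (Ra).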
Strategy of restraining ripple error on surface for optical fabrication.
Wang, Tan; Cheng, Haobo; Feng, Yunpeng; Tam, Honyuen
2014-09-10
The influence of ripple error on imaging quality is effectively reduced by restraining the ripple height. A method based on the process parameters and the surface error distribution is designed to suppress the ripple height in this paper. The generating mechanism of the ripple error is analyzed by polishing theory with a uniform removal characteristic. The relation between the processing parameters (removal functions, pitch of path, and dwell time) and the ripple error is discussed through simulations. With these, a strategy for diminishing the error is presented. A final process is designed and demonstrated on K9 work-pieces using the optimized strategy with magnetorheological jet polishing. The form error on the surface is decreased from 0.216λ PV (λ=632.8 nm) and 0.039λ RMS to 0.03λ PV and 0.004λ RMS. The ripple error is restrained well at the same time, the ripple height being less than 6 nm on the final surface. Results indicate that these strategies are suitable for high-precision optical manufacturing.
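The pitch-of-path dependence discussed above can be illustrated with a toy one-dimensional model of uniform-removal polishing: total removal is a superposition of Gaussian removal-function footprints along the raster path, and ripple is the residual non-uniformity. The Gaussian footprint and all parameter values are assumptions, not the paper's removal functions.

```python
import numpy as np

def ripple_height(sigma, pitch, length=10.0, dx=0.001):
    # sum Gaussian removal footprints placed at raster-path positions
    # and report the peak-to-valley of the residual in the middle of
    # the part (edges excluded to avoid roll-off effects); units: mm
    x = np.arange(0.0, length, dx)
    removal = np.zeros_like(x)
    for c in np.arange(0.0, length, pitch):
        removal += np.exp(-0.5 * ((x - c) / sigma) ** 2)
    mid = removal[len(x) // 4 : 3 * len(x) // 4]
    return mid.max() - mid.min()

# a pitch small relative to the removal-spot width leaves far less ripple
r_fine = ripple_height(sigma=0.5, pitch=0.2)
r_coarse = ripple_height(sigma=0.5, pitch=1.0)
assert r_fine < r_coarse
```

This reproduces the qualitative mechanism the paper exploits: shrinking the path pitch (or widening the removal function) suppresses the ripple height left between passes.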
Conversion of radius of curvature to power (and vice versa)
NASA Astrophysics Data System (ADS)
Wickenhagen, Sven; Endo, Kazumasa; Fuchs, Ulrike; Youngworth, Richard N.; Kiontke, Sven R.
2015-09-01
Manufacturing optical components relies on good measurements and specifications. One of the most precise measurements routinely required is the form accuracy. In practice, form deviation from the ideal surface effectively consists of low-frequency errors, where the form error most often accounts for no more than a few undulations across a surface. These types of errors are measured in a variety of ways, including interferometry and tactile methods like profilometry, with the latter often being employed for aspheres and general surface shapes such as freeforms. This paper provides a basis for a correct description of power and radius of curvature tolerances, including best practices and calculation of the power value with respect to the radius deviation (and vice versa) of the surface form. A consistent definition of the sagitta is presented, along with different cases in manufacturing that are of interest to fabricators and designers. The results make clear how the definitions and results should be documented for all measurement setups. Relationships between power and radius of curvature are shown that allow specifying the preferred metric based on final accuracy and measurement method. The results include all necessary equations for conversion, giving optical designers and manufacturers a consistent and robust basis for decision-making. The paper also gives guidance on preferred methods for different scenarios of surface type, required accuracy, and metrology method employed.
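The radius-to-power conversion can be illustrated via the exact spherical sagitta: a radius deviation changes the sag at the semi-aperture, and that sag difference can be expressed in fringes. The fringe convention used here (λ/2 of sag per fringe, as in a test-plate comparison) is one common choice and may differ from the paper's definitions; the numbers are a made-up example.

```python
import math

def sag(R, h):
    # exact sag of a sphere of radius R at semi-aperture h (same units)
    return R - math.sqrt(R * R - h * h)

def power_fringes(R, dR, h, wavelength=632.8e-6):
    # sag difference between nominal and deviated radius, expressed in
    # interference fringes (lambda/2 of sag per fringe); lengths in mm
    dz = sag(R + dR, h) - sag(R, h)
    return 2.0 * dz / wavelength

# example: R = 100 mm, semi-aperture 10 mm, radius error +0.05 mm
n = power_fringes(100.0, 0.05, 10.0)   # negative: longer R, shallower sag
```

The small-sag approximation dz ≈ -dR·h²/(2R²) gives about -0.25 µm of sag here, i.e. roughly -0.8 fringe at 632.8 nm, consistent with the exact computation.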
Rapid fabrication of miniature lens arrays by four-axis single point diamond machining
McCall, Brian; Tkaczyk, Tomasz S.
2013-01-01
A novel method for fabricating lens arrays and other non-rotationally symmetric free-form optics is presented. This is a diamond machining technique using 4 controlled axes of motion – X, Y, Z, and C. As in 3-axis diamond micro-milling, a diamond ball endmill is mounted to the work spindle of a 4-axis ultra-precision computer numerical control (CNC) machine. Unlike 3-axis micro-milling, the C-axis is used to hold the cutting edge of the tool in contact with the lens surface for the entire cut. This allows the feed rates to be doubled compared to the current state of the art of micro-milling while producing an optically smooth surface with very low surface form error and exceptionally low radius error. PMID:23481813
Thin film concentrator panel development
NASA Technical Reports Server (NTRS)
Zimmerman, D. K.
1982-01-01
The development and testing of a rigid panel concept that utilizes a thin film reflective surface for application to a low-cost point-focusing solar concentrator is discussed. It is shown that a thin film reflective surface is acceptable for use on solar concentrators, including 1500°F applications. Additionally, it is shown that a formed steel sheet substrate is a good choice for concentrator panels. The panel has good optical properties, acceptable forming tolerances, an environmentally resistant substrate and stiffeners, and adaptability to production rates ranging from low-volume to mass production. Computer simulations of the concentrator optics were run using the selected reflector panel design. Experimentally determined values for reflector surface specularity and reflectivity, along with dimensional data, were used in the analysis. The simulations provided the intercept factor and net energy into the aperture as a function of aperture size for different surface errors and pointing errors. Point source and Sun source optical tests were also performed.
Transversal Clifford gates on folded surface codes
Moussa, Jonathan E.
2016-10-12
Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.
Zhang, Xiaodong; Zeng, Zhen; Liu, Xianlei; Fang, Fengzhou
2015-09-21
Freeform surfaces are promising as the next generation of optics, but they require high form accuracy for excellent performance. A closed loop of fabrication-measurement-compensation is necessary for improving the form accuracy. It is difficult to perform an off-machine measurement during freeform machining because remounting inaccuracy can result in significant form deviations. On the other hand, on-machine measurement may hide the systematic errors of the machine because the measuring device is placed in situ on the machine. This study proposes a new compensation strategy based on the combination of on-machine and off-machine measurement. The freeform surface is measured off-machine with nanometric accuracy, and the on-machine probe provides an accurate relative position between the workpiece and the machine after remounting. The compensation cutting path is generated according to the calculated relative position and shape errors, avoiding extra manual adjustment or a highly accurate reference-feature fixture. Experimental results verified the effectiveness of the proposed method.
A new multiple air beam approach for in-process form error optical measurement
NASA Astrophysics Data System (ADS)
Gao, Y.; Li, R.
2018-07-01
In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial frequency form error. Optical measurement methods are of the non-contact type and possess the high precision required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation of the workpiece surface. However, the coolant forms an opaque barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant arriving from any direction and in large thickness, i.e. a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed approach, a new in-process form error optical measurement system is developed, and its coolant removal capability and performance are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation up to 0.3011 µm even under a large amount of coolant, with a coolant thickness of 15 mm. This corresponds to a relative uncertainty (2σ) of up to 4.35% with the workpiece surface deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply, and air velocity, the proposed approach improves on the previous single air beam approach by factors of 3.3, 1.3, and 5.3, respectively. The results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.
Precision machining of optical surfaces with subaperture correction technologies MRF and IBF
NASA Astrophysics Data System (ADS)
Schmelzer, Olaf; Feldkamp, Roman
2015-10-01
Precision optical elements are used in a wide range of technical instrumentation. Many optical systems, e.g. semiconductor inspection modules, laser heads for laser material processing, or high-end movie cameras, contain precision optics, including aspherical or freeform surfaces. Critical parameters for such systems are wavefront error, image field curvature, and scattered light. Following these demands, the lens parameters are also critical with respect to power, RMSi of the surface form error, and micro-roughness. How can these requirements be reached? The emphasis of this discussion is on the application of subaperture correction technologies in the fabrication of high-end aspheres and free-forms. The presentation focuses on the technology chain necessary for the production of high-precision aspherical optical components and on the characterization of the applied subaperture finishing tools MRF (magneto-rheological finishing) and IBF (ion beam figuring). These technologies open up the possibility of improving the performance of optical systems.
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1983-01-01
Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
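The dependence on surface rms/wavelength that the model generalizes can be illustrated with the classical Ruze formula for a reflector with uniform random surface error and uniform illumination (the simple baseline case, not the paper's nonuniform-error, F/D-dependent model).

```python
import math

def ruze_gain_loss_db(rms_error, wavelength):
    # classical Ruze formula: aperture efficiency factor
    # eta = exp(-(4*pi*eps/lambda)^2), returned as gain loss in dB
    # (rms_error and wavelength in the same units)
    eta = math.exp(-(4.0 * math.pi * rms_error / wavelength) ** 2)
    return -10.0 * math.log10(eta)

# example: surface rms of lambda/50 costs roughly a quarter dB of gain
loss = ruze_gain_loss_db(1.0, 50.0)
```

Because the exponent scales with (rms/λ)², halving the rms error quarters the exponent, which is why low-sidelobe designs demand a considerably smaller rms/wavelength, as the abstract notes.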
Method of making silicon on insulator material using oxygen implantation
Hite, Larry R.; Houston, Ted; Matloubian, Mishel
1989-01-01
The described embodiments of the present invention provide a semiconductor-on-insulator structure with a semiconductor layer less susceptible to single-event upset (SEU) errors due to radiation. The semiconductor layer is formed by implanting ions which form an insulating layer beneath the surface of a crystalline semiconductor substrate. The remaining crystalline semiconductor layer above the insulating layer provides nucleation sites for forming a crystalline semiconductor layer above the insulating layer. The damage caused by implantation of the ions forming the insulating layer is left unannealed before formation of the semiconductor layer by epitaxial growth. The epitaxial layer thus formed provides superior characteristics for the prevention of SEU errors, in that the carrier lifetime within it is less than the carrier lifetime in epitaxial layers formed on annealed material, while providing adequate semiconductor characteristics.
Quantum error correction in crossbar architectures
NASA Astrophysics Data System (ADS)
Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie
2018-07-01
A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.
Simulation of aspheric tolerance with polynomial fitting
NASA Astrophysics Data System (ADS)
Li, Jing; Cen, Zhaofeng; Li, Xiaotong
2018-01-01
Machining errors change the shape of an aspheric lens, which alters the optical transfer function and degrades image quality. At present there is no universally recognized tolerance standard for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, tolerances obtained by polynomial fitting are allocated to the aspheric surface and imaging is simulated in optical design software. The analysis is based on a set of aspheric imaging systems. An error is generated within a given PV (peak-to-valley) range, expressed in the form of a Zernike polynomial, and added to the aspheric surface as a tolerance term. The MTF of the optical system, obtained through the optical software, serves as the main evaluation index: we evaluate whether the effect of the added error on the system MTF meets the requirements at the current PV value, then change the PV value and repeat the operation until the maximum allowable PV value is obtained. Following actual processing practice, errors of various shapes are considered, such as M-type, W-type, and random errors. The new method provides a reference value for actual free-form surface processing technology.
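The iterative PV search described in this abstract can be sketched as a simple bisection. The `mtf_penalty` function below is a hypothetical stand-in for the optical-software step of adding a Zernike-shaped error of a given PV and re-evaluating the MTF; its linear form and all numbers are illustrative assumptions, not the authors' code.

```python
# Sketch of the maximum-allowable-PV search (illustrative assumptions only).

def mtf_penalty(pv_um):
    """Stand-in for: add a Zernike-shaped error of this PV to the asphere,
    re-run the imaging simulation, and report the MTF degradation.
    Assumed to grow monotonically with PV; linear here for illustration."""
    return 0.12 * pv_um

def max_allowable_pv(mtf_budget, pv_lo=0.0, pv_hi=10.0, tol=1e-4):
    """Bisect for the largest PV whose MTF degradation stays within budget."""
    while pv_hi - pv_lo > tol:
        mid = 0.5 * (pv_lo + pv_hi)
        if mtf_penalty(mid) <= mtf_budget:
            pv_lo = mid   # still acceptable: try a larger error
        else:
            pv_hi = mid   # too much degradation: shrink the interval
    return pv_lo

print(round(max_allowable_pv(0.06), 3))  # → 0.5 for this toy penalty
```

In practice each `mtf_penalty` evaluation is an optical-software run, so the monotonicity assumption is what makes a bisection (rather than an exhaustive sweep) defensible.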
Modeling and simulation for fewer-axis grinding of complex surface
NASA Astrophysics Data System (ADS)
Li, Zhengjian; Peng, Xiaoqiang; Song, Ci
2017-10-01
As the basis of fewer-axis grinding of complex surface, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, and then coordinate and normal vector of the wheel profile could be calculated. Through normal vector matching at the cutter contact point and the coordinate system transformation, the grinding mathematical model was established to work out the coordinate of the cutter location point. Based on the model, interference analysis was simulated to find out the right position and posture of workpiece for grinding. Then positioning errors of the workpiece including the translation positioning error and the rotation positioning error were analyzed respectively, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.
Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo
NASA Astrophysics Data System (ADS)
Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao
2017-10-01
Freeform surfaces are increasingly used in optical systems because the added degrees of freedom improve imaging and overall optical performance. Producing a freeform optic requires integrating freeform optical design, precision manufacture, freeform metrology, and a compensation method that corrects the form deviation introduced by the production process; this integration provides more flexibility and better performance. This paper focuses on the fabrication and correction of the freeform surface. In this study, multi-axis ultra-precision machining is used to improve the quality of the freeform surface: the machine is equipped with a positioning C-axis and offers CXZ machining, also called the slow tool servo (STS) function. The compensation method based on Zernike polynomials is successfully verified; it corrects the form deviation of the freeform surface. Finally, the freeform surface is measured with an Ultrahigh Accurate 3D Profilometer (UA3P), and the form error is compensated using Zernike polynomial fitting to improve the form accuracy.
Land Surface Temperature Measurements from EOS MODIS Data
NASA Technical Reports Server (NTRS)
Wan, Zhengming
1996-01-01
We have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials over wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NEΔT) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4-0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 micrometer IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2-3 K. Several issues related to the day/night LST algorithm (uncertainties in the day/night registration, in surface emissivity changes caused by dew occurrence, and in cloud cover) have been investigated. The LST algorithms have been validated with MODIS Airborne Simulator (MAS) data and ground-based measurement data from two field campaigns conducted in Railroad Valley playa, NV, in 1995 and 1996. The MODIS LST version 1 software has been delivered.
Some Insights of Spectral Optimization in Ocean Color Inversion
NASA Technical Reports Server (NTRS)
Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert
2011-01-01
In the past decades various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of the approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface, to justify the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, impacts of such differences on the error surface as well as on the retrievals are also presented.
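The error-surface idea above can be illustrated with a toy two-variable inversion: generate a "measured" reflectance spectrum from known absorption and backscattering values, then map the error function over a grid of the two variables and confirm its minimum falls at the known values. The spectral shapes and coefficients below are invented for illustration; they are not the bio-optical models used in the paper.

```python
import numpy as np

# Toy forward model: Rrs ~ g * bb / (a + bb), at a few wavelengths (nm).
wl = np.array([412., 443., 490., 555.])

def rrs_model(aph440, bbp440):
    aw  = np.array([0.005, 0.007, 0.015, 0.060])   # assumed water absorption
    bbw = np.array([0.003, 0.002, 0.0015, 0.001])  # assumed water backscatter
    aph = aph440 * np.exp(-0.015 * (wl - 440.))    # assumed spectral shape
    bbp = bbp440 * (443. / wl)                     # assumed spectral shape
    bb  = bbw + bbp
    return 0.089 * bb / (aw + aph + bb)

# "Measured" spectrum from known values, so the error-surface minimum
# sits at (aph440, bbp440) = (0.05, 0.01) by construction.
rrs_meas = rrs_model(0.05, 0.01)

def err(aph440, bbp440):
    return np.sum((rrs_model(aph440, bbp440) - rrs_meas) ** 2)

# Map the error surface on a coarse grid and locate its minimum.
aph_grid = np.linspace(0.01, 0.10, 46)
bbp_grid = np.linspace(0.002, 0.02, 46)
E = np.array([[err(a, b) for b in bbp_grid] for a in aph_grid])
i, j = np.unravel_index(E.argmin(), E.shape)
print(round(aph_grid[i], 3), round(bbp_grid[j], 4))  # → 0.05 0.01
```

Inspecting `E` directly (rather than only running a minimizer) is the point of the abstract: whether the surface has a single clean basin determines whether the numerical solution is trustworthy.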
Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.
Song, Ci; Dai, Yifan; Peng, Xiaoqiang
2010-07-01
Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two methods in a single optimization by the use of a constrained nonlinear optimization model, which regards both the two-norm of the surface residual error and the dwell-time gradient as an objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the requirement of velocity and the limitations of accelerations or decelerations. Indeed, the model and algorithm can also apply to other computer-controlled subaperture methods.
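A minimal 1-D sketch of the combined objective described above (the two-norm of the surface residual plus a dwell-time-gradient penalty, with non-negative dwell times) can be written as projected gradient descent. The influence function, error map, and penalty weight below are synthetic assumptions, and the paper's actual method is a constrained nonlinear optimization, not this simple iteration.

```python
import numpy as np

n = 60
x = np.arange(n)
error_map = 50.0 * np.exp(-((x - 30) / 10.0) ** 2)    # nm, synthetic
influence = np.exp(-(np.arange(-5, 6) / 2.0) ** 2)    # removal per second, assumed

# Build the removal matrix A so that (A @ t) is the material removed
# for dwell-time vector t (a truncated convolution).
A = np.zeros((n, n))
for i in range(n):
    for k, v in enumerate(influence):
        j = i + k - 5
        if 0 <= j < n:
            A[i, j] = v

D = np.diff(np.eye(n), axis=0)   # discrete gradient operator on t
lam = 0.1                        # weight on dwell-time smoothness (machine dynamics)

# Minimise ||A t - e||^2 + lam ||D t||^2 subject to t >= 0
# by projected gradient descent with step 1/L.
t = np.zeros(n)
step = 1.0 / np.linalg.norm(A.T @ A + lam * D.T @ D, 2)
for _ in range(2000):
    grad = A.T @ (A @ t - error_map) + lam * (D.T @ (D @ t))
    t = np.maximum(t - step * grad, 0.0)   # dwell time cannot be negative

residual = np.linalg.norm(A @ t - error_map) / np.linalg.norm(error_map)
print(t.min() >= 0.0)  # prints True: the constraint is enforced by projection
```

The gradient penalty is what couples the numerical optimum to machine dynamics: large dwell-time gradients would demand accelerations the polishing machine cannot deliver.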
Design of an off-axis visual display based on a free-form projection screen to realize stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong
2017-10-01
A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A method based on a point cloud is proposed for the design of the free-form surface, and the design of the point cloud is controlled by a program written in the macro-language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error of visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1′, which meets the resolution requirement of the human eye.
Improving Global Net Surface Heat Flux with Ocean Reanalysis
NASA Astrophysics Data System (ADS)
Carton, J.; Chepurin, G. A.; Chen, L.; Grodsky, S.
2017-12-01
This project addresses the current level of uncertainty in surface heat flux estimates. Time mean surface heat flux estimates provided by atmospheric reanalyses differ by 10-30 W/m2. They are generally unbalanced globally, and have been shown by ocean simulation studies to be incompatible with ocean temperature and velocity measurements. Here a method is presented 1) to identify the spatial and temporal structure of the underlying errors and 2) to reduce them by exploiting hydrographic observations and the analysis increments produced by an ocean reanalysis using sequential data assimilation. The method is applied to fluxes computed from daily state variables obtained from three widely used reanalyses: MERRA2, ERA-Interim, and JRA-55, during an eight year period, 2007-2014. For each of these, seasonal heat flux errors/corrections are obtained. In a second set of experiments the heat fluxes are corrected and the ocean reanalysis experiments are repeated. This second round of experiments shows that the time mean error in the corrected fluxes is reduced to within ±5 W/m2 over the interior subtropical and midlatitude oceans, with the most significant changes occurring over the Southern Ocean. The global heat flux imbalance of each reanalysis is reduced to within a few W/m2 with this single correction. Encouragingly, the corrected forms of the three sets of fluxes are also shown to converge. In the final discussion we present experiments beginning with a modified form of the ERA-Interim reanalysis, produced by the DAKKAR program, in which state variables have been individually corrected based on independent measurements. Finally, we discuss the separation of flux error from model error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de
We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. From our pixel-by-pixel analysis of the measured dust surface density and dust temperature we find a worrisome error spread, especially close to star formation sites and in low-density regions, where for “contaminated” pixels the surface densities can be under/overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background, typical of the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as χ² values can no longer be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase.
In upcoming papers (Paper III; Paper IV) of this series we test the reliability of measured star formation rates with direct and indirect techniques.
Mathematical simulation of bearing ring grinding process
NASA Astrophysics Data System (ADS)
Koltunov, I. I.; Gorbunova, T. N.; Tumanova, M. B.
2018-03-01
The paper suggests the method of forming a solid finite element model of the bearing ring. Implementation of the model allowed one to evaluate the influence of the inner cylindrical surface grinding scheme on the ring shape error.
Yang, Rui; Tong, Juxiu; Hu, Bill X; Li, Jiayun; Wei, Wenshuo
2017-06-01
Agricultural non-point source pollution is a major factor in surface water and groundwater pollution, especially nitrogen (N) pollution. In this paper, an experiment was conducted in a direct-seeded paddy field under traditional continuously flooded irrigation (CFI). The water movement and N transport and transformation were simulated via the Hydrus-1D model, and the model was calibrated using field measurements. The model had a total water balance error of 0.236 cm and a relative error (error/total input water) of 0.23%. For the solute transport model, the N balance error and relative error (error/total input N) were 0.36 kg ha⁻¹ and 0.40%, respectively. The study results indicate that the plow pan plays a crucial role in vertical water movement in paddy fields. Water was mainly lost through surface runoff and underground drainage, with proportions of total input water of 32.33 and 42.58%, respectively. The water productivity in the study was 0.36 kg m⁻³. The simulated N concentration results revealed that ammonia was the main form in rice uptake (95% of total N uptake), and its concentration was much larger than that of nitrate under CFI. Denitrification and volatilization were the main losses, with proportions of total consumption of 23.18 and 14.49%, respectively. Leaching (10.28%) and surface runoff loss (2.05%) were the main losses of N pushed out of the system by water. Hydrus-1D simulation was an effective method to predict water flow and N concentrations in the three different forms. The study provides results that could be used to guide water and fertilization management, and field results for future numerical studies of water flow and N transport and transformation.
Analytical skin friction and heat transfer formula for compressible internal flows
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.; Tattar, Marc J.
1994-01-01
An analytic, closed-form friction formula for turbulent, internal, compressible, fully developed flow was derived by extending the incompressible law-of-the-wall relation to compressible cases. The model is capable of analyzing heat transfer as a function of constant surface temperatures and surface roughness as well as analyzing adiabatic conditions. The formula reduces to Prandtl's law of friction for adiabatic, smooth, axisymmetric flow. In addition, the formula reduces to the Colebrook equation for incompressible, adiabatic, axisymmetric flow with various roughnesses. Comparisons with available experiments show that the model averages roughly 12.5 percent error for adiabatic flow and 18.5 percent error for flow involving heat transfer.
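In the incompressible, adiabatic, rough-wall limit mentioned above the formula reduces to the Colebrook relation. A minimal fixed-point solver for the Darcy friction factor in that standard textbook form (not the paper's compressible formula) looks like this:

```python
import math

# Colebrook relation for the Darcy friction factor f:
#   1/sqrt(f) = -2 log10( (eps/D)/3.7 + 2.51 / (Re sqrt(f)) )
# solved by simple fixed-point iteration from a typical turbulent guess.

def colebrook(re, rel_roughness, f0=0.02, iters=50):
    f = f0
    for _ in range(iters):
        rhs = -2.0 * math.log10(rel_roughness / 3.7
                                + 2.51 / (re * math.sqrt(f)))
        f = 1.0 / rhs ** 2      # invert 1/sqrt(f) = rhs
    return f

f = colebrook(re=1e5, rel_roughness=1e-4)
print(round(f, 4))
```

With `rel_roughness = 0` the same iteration recovers the smooth-pipe (Prandtl-type) law, mirroring the limiting behavior the abstract claims for the full compressible formula.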
Reliable and accurate extraction of Hamaker constants from surface force measurements.
Miklavcic, S J
2018-08-15
A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version assuming independent measurements of force and separation are subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse square distance dependent van der Waals force.
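For the inverse-square van der Waals force mentioned above, taking the common sphere-plane form F(D) = −A·R/(6D²), the "standard" least-squares extraction that the paper compares against reduces to a one-parameter closed-form fit. The sketch below uses synthetic, noise-free data and assumed values for the probe radius and Hamaker constant; it illustrates the baseline OLS approach (which treats separations as error-free), not the paper's improved formula.

```python
import numpy as np

R = 10e-6                                  # probe radius, m (assumed)
A_true = 1.0e-20                           # Hamaker constant, J (assumed)
D = np.linspace(2e-9, 20e-9, 40)           # separations, m
F = -A_true * R / (6.0 * D ** 2)           # synthetic noise-free forces

# Writing F = A * x with regressor x = -R/(6 D^2), the ordinary
# least-squares estimate of A has the closed form sum(Fx)/sum(x^2).
x = -R / (6.0 * D ** 2)
A_fit = np.sum(F * x) / np.sum(x * x)

print(abs(A_fit - A_true) / A_true < 1e-9)  # prints True on noise-free data
```

On real data with separation errors this OLS estimate becomes biased, which is precisely the failure mode the paper's formula is designed to avoid.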
NASA Astrophysics Data System (ADS)
Dürsteler, Juan Carlos
2016-12-01
A review of the use of aspherics in the last decades, understood in a broad sense as encompassing single-vision lenses with conicoid surfaces as well as free-form and progressive addition lenses (PALs), is provided. The appearance of conicoid surfaces to correct aphakia, and later to provide thinner and more aesthetically appealing plus lenses, together with the introduction of PALs and free-form surfaces, has shaped the advances in spectacle lenses in the last three decades. This document considers the main target optical aberrations, the idiosyncrasy of single lenses for correction of refractive errors, and the restrictions and particularities of PAL design and their links to vision science and perception.
Adaptive optics based non-null interferometry for optical free form surfaces test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Zhou, Sheng; Li, Jingsong; Yu, Benli
2018-03-01
An adaptive optics based non-null interferometry (ANI) method is proposed for testing optical free form surfaces, in which an open-loop deformable mirror (DM) is employed as a reflective compensator to flexibly compensate various low-order aberrations. The residual wavefront aberration is treated by the multi-configuration ray tracing (MCRT) algorithm, which traces rays simultaneously through multiple system models, each with a different DM surface deformation. With the MCRT algorithm, the final figure error can be extracted, together with correction of the surface misalignment aberration, after the initial system calibration. Flexible, high-accuracy testing of free form surfaces is thus achieved without an auxiliary device for monitoring the DM deformation. Experiments on a bi-conic surface and a paraboloidal surface, using a highly stable ALPAO DM88, demonstrated the feasibility, repeatability and high accuracy of the ANI; the final test result for the paraboloidal surface was better than λ/20 PV. This is a successful step in flexible optical free form surface metrology and has considerable potential for future application as DM technology develops.
NASA Astrophysics Data System (ADS)
Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul
2016-07-01
Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low order errors as well as quilting/print-through errors left in light-weighted optics by conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever expanding assortment of optical geometries, such as planos, spheres, on and off axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5-2,000mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or free form, but more commonly for figure corrections, achieving figure errors as low as 1nm RMS with careful metrology setups. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size.
Furthermore, a novel MRF fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low removal rate fluid required for fine figure correction of mid-spatial frequency errors. This novel MRF fluid is able to achieve <4Å RMS on Nickel-plated Aluminum and even <1.5Å RMS roughness on Silicon, Fused Silica and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors as well as smooth surface roughness 'for free' all in one step. In this paper we will discuss recent advancements in MRF technology and the ability to meet requirements for precision optics in low, mid and high spatial frequency regimes and how improved MRF performance addresses the need for achieving tight specifications required for astronomical optics.
Inflatable antenna for earth observing systems
NASA Astrophysics Data System (ADS)
Wang, Hong-Jian; Guan, Fu-ling; Xu, Yan; Yi, Min
2010-09-01
This paper describes the mechanical design, dynamic analysis, and deployment demonstration of the antenna, and the photogrammetric measurement of the RMS surface error of the inflatable antenna; possible errors resulting from the measurement are also analysed. TICRA's GRASP software is used to predict the inflatable antenna pattern from the coordinates of 460 points on the parabolic surface, and the final results verified the whole design process.
Analysis technique for controlling system wavefront error with active/adaptive optics
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Time-dependent grid adaptation for meshes of triangles and tetrahedra
NASA Technical Reports Server (NTRS)
Rausch, Russ D.
1993-01-01
This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
NASA Astrophysics Data System (ADS)
Güttler, I.
2012-04-01
Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave energy flux (SNS) are detected in simulations of RegCM at 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and the other variables to the GEWEX-SRB 3.0 dataset. Most systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of albedo errors is primarily confined to north Africa, where e.g. underestimation of albedo in JJA is consistent with associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of PBL scheme and to various parameters in the PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs over Europe with mixed success when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over the whole domain. Nevertheless, improvements in T2m can be found in e.g. north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work in customizing the PBL scheme are suggested.
NASA Astrophysics Data System (ADS)
Tan, Jiubin; Qiang, Xifu; Ding, Xuemei
1991-08-01
Optical sensors have two notable advantages in modern precision measurement. One is that they can be used in nondestructive measurement, because the sensors need not touch the surfaces of workpieces during measurement. The other is that they strongly resist electromagnetic interference, vibrations, and noise, so they are suitable for use at machining sites. But the drift of light intensity and the change of the reflection coefficient at different measuring positions of a workpiece may greatly influence measured results. To solve the problem, a spectroscopic differential characteristic compensating method is put forward. The method can be used effectively not only to compensate the measuring errors resulting from the drift of light intensity but also to eliminate the influence on measured results caused by the change of the reflection coefficient. The article also analyzes the possibility of, and the means of, separating data errors of a clinical measuring system for form and position errors of circular workpieces.
Novel 3-D free-form surface profilometry for reverse engineering
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Huang, Zhi-Xue
2005-01-01
This article proposes an innovative 3-D surface contouring approach for automatic and accurate free-form surface reconstruction using a sensor integration concept. The study addresses a critical problem in accurate measurement of free-form surfaces by developing an automatic reconstruction approach. Unacceptable measuring accuracy is mainly due to errors arising from inadequate measuring strategies, which end in inaccurate digitised data and costly post-data processing in Reverse Engineering (RE). This article thus aims to develop automatic digitising strategies that ensure surface reconstruction efficiency as well as accuracy. The developed approach consists of two main stages, namely rapid shape identification (RSI) and automated laser scanning (ALS), for completing 3-D surface profilometry. The approach effectively utilises on-line geometric information to evaluate the degree of satisfaction of user-defined digitising accuracy under a triangular topological patch. An industrial case study was used to demonstrate the feasibility of the approach.
New method to control form and texture on industrially-sized lenses
NASA Astrophysics Data System (ADS)
Walker, D. D.
2003-05-01
This paper provides a progress report on the development of the Precessions™ polishing process. This is a new small-tool polishing technique for producing aspheric forms and correcting spherical forms. Precessions polishing has been developed by Zeeko Ltd in collaboration with the Optical Science Laboratory at University College London and Loh Optikmaschinen. The Zeeko/Loh All machine (see figure below) has a capacity of 200mm diameter, and is targeted at industrial lenses and mirrors. The baseline of the Precessions™ process is a sub-diameter physical tool working the surface with a polishing slurry. Position and orientation of the tooling are controlled by a 7-axis CNC polishing machine that has been custom-designed for the purpose. The tool comprises an inflated, bulged, rubber membrane (the 'bonnet'), covered with one of the usual proprietary flexible polishing surfaces familiar to opticians. The membrane moulds itself around the local asphere, keeping good contact everywhere. It is spun about its axis to give high removal rates, and attacks the surface of the part working on the side of the bulged surface, rather than in the classical pole-down configuration. The contact area and polishing pressure can be varied independently by changing the degree to which the bonnet is compressed and the internal fluid pressure. The rotation axis is precessed around the local normal to the part, which averages surface texture and achieves a near-Gaussian tool removal profile (the 'influence function'). For axially-symmetric parts, the part is rotated and the tool moved radially, thereby creating a spiral tool-path. An off-line software application analyses i) the surface error-profile, and ii) experimental data on the tool influence functions for different spot-sizes. An iterative numerical optimisation method is then used to compute the dwell-time and spot-size for each zone of the spiral on the surface, to rectify the form error.
A new approach to the form and position error measurement of the auto frame surface based on laser
NASA Astrophysics Data System (ADS)
Wang, Hua; Li, Wei
2013-03-01
The auto frame is a very large workpiece, up to 12 meters long and 2 meters wide, so measuring it by independent manual operation is inconvenient and cannot be automated. In this paper we propose a new approach to reconstruct the 3D model of a large workpiece, especially the auto truck frame, based on multiple pulsed lasers, for the purpose of measuring form and position errors. In each concerned area, only one high-speed camera and two lasers are needed. The approach is fast, high-precision, and economical.
NASA Astrophysics Data System (ADS)
Li, Xingchang; Zhang, Zhiyu; Hu, Haifei; Li, Yingjie; Xiong, Ling; Zhang, Xuejun; Yan, Jiwang
2018-04-01
On-machine measurements can improve the form accuracy of optical surfaces in single-point diamond turning applications; however, commercially available linear variable differential transformer sensors are inaccurate and can potentially scratch the surface. We present an on-machine measurement system based on capacitive displacement sensors for high-precision optical surfaces. In the proposed system, a position-trigger measurement method was developed to ensure strict correspondence between the measurement points and the measurement data, with no intervening time delay. In addition, a double-sensor measurement was proposed to reduce electrical signal noise during spindle rotation. Using the proposed system, a repeatability of 80 nm peak-to-valley (PV) and 8 nm root mean square (RMS) was achieved, based on analysis of four successive measurement results. An accuracy of 109 nm PV and 14 nm RMS was obtained by comparison with an interferometer measurement. An aluminum spherical mirror with a diameter of 300 mm was fabricated, and the measured form error after one compensation cut decreased to 254 nm PV and 52 nm RMS. These results confirm that the measurements of the surface form errors were successfully used to modify the cutting tool path during the compensation cut, thereby making the diamond turning process more deterministic. In addition, the results show that the noise level was significantly reduced with the reference sensor, even at a high rotational speed.
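The double-sensor idea is common-mode rejection: a reference sensor sees the spindle-induced disturbance but not the surface, so subtracting it from the probe signal cancels the shared noise. A minimal numpy sketch with synthetic signals (all amplitudes and the noise model are assumptions for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
theta = np.linspace(0, 20 * np.pi, n)            # ten spindle revolutions

surface = 100e-9 * np.sin(theta / 10)            # slow form signature, m (assumed)
spindle_noise = 50e-9 * np.sin(7 * theta) + 20e-9 * rng.standard_normal(n)

probe = surface + spindle_noise                  # sensor facing the workpiece
reference = spindle_noise + 5e-9 * rng.standard_normal(n)  # sensor facing a reference flat

# Common-mode rejection: the shared spindle disturbance cancels
corrected = probe - reference

print(np.std(probe - surface), np.std(corrected - surface))
```

The residual error of the corrected signal is set by the reference sensor's own noise floor rather than by the much larger spindle disturbance.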
Weak charge form factor and radius of 208Pb through parity violation in electron scattering
Horowitz, C. J.; Ahmed, Z.; Jen, C. -M.; ...
2012-03-26
We use distorted-wave electron scattering calculations to extract the weak charge form factor F_W(q̄), the weak charge radius R_W, and the point neutron radius R_n of 208Pb from the PREX parity-violating asymmetry measurement. The form factor is the Fourier transform of the weak charge density at the average momentum transfer q̄ = 0.475 fm⁻¹. We find F_W(q̄) = 0.204 ± 0.028(exp) ± 0.001(model). We use the Helm model to infer the weak radius from F_W(q̄). We find R_W = 5.826 ± 0.181(exp) ± 0.027(model) fm. Here the exp error includes PREX statistical and systematic errors, while the model error describes the uncertainty in R_W from uncertainties in the surface thickness σ of the weak charge density. The weak radius is larger than the charge radius, implying a 'weak charge skin' where the surface region is relatively enriched in weak charges compared to (electromagnetic) charges. We extract the point neutron radius R_n = 5.751 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm from R_W. Here there is only a very small error (strange) from possible strange quark contributions. We find R_n to be slightly smaller than R_W because of the nucleon's size. As a result, we find a neutron skin thickness of R_n - R_p = 0.302 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm, where R_p is the point proton radius.
Model-based phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A model-based phase-shifting interferometer (MPI) is developed in which a novel calculation technique replaces the traditional complicated system structure, achieving versatile, high-precision, quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres of Zygo interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing reconstruction (ROR) method is employed for retrace error correction in the non-null test, as well as for figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a Zygo interferometer (null tests), demonstrating the high accuracy of the MPI. With such accuracy and flexibility, the MPI holds large potential for modern optical shop testing.
Highly accurate surface maps from profilometer measurements
NASA Astrophysics Data System (ADS)
Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.
2013-04-01
Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible from a single trace. We show a method to produce an accurate 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only, the first 36 Zernike terms. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small degree, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements: the part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
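The core of the reconstruction is a least-squares fit of low-order Zernike terms to height samples collected along rotated diametral traces. A minimal sketch with a few Cartesian-form Zernike-like terms and six synthetic traces (the basis choice and coefficients are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Sample points along 6 diametral traces (0°, 30°, ..., 150°), unit-disc coordinates
angles = np.deg2rad(np.arange(0, 180, 30))
r = np.linspace(-1, 1, 41)
x = np.concatenate([r * np.cos(a) for a in angles])
y = np.concatenate([r * np.sin(a) for a in angles])

def basis(x, y):
    # A few low-order Zernike-like terms in Cartesian form
    return np.column_stack([
        np.ones_like(x),           # piston
        x, y,                      # tilt
        2 * (x**2 + y**2) - 1,     # defocus
        x**2 - y**2, 2 * x * y,    # astigmatism
    ])

true_c = np.array([0.0, 0.0, 0.0, 50.0, 20.0, -10.0])   # nm, synthetic surface
z = basis(x, y) @ true_c

# Least-squares fit of the coefficients from the sparse trace samples
c, *_ = np.linalg.lstsq(basis(x, y), z, rcond=None)
print(np.round(c, 6))
```

With six distinct trace angles the design matrix is full rank for these terms, so the coefficients are recovered; the paper's sensitivity to angular positioning and centering enters through errors in the assumed (x, y) sample locations.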
Optical microtopographic inspection of asphalt pavement surfaces
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Freitas, E. F.; Torres, H.; Cerezo, V.
2017-08-01
Microtopographic and rugometric characterization of surfaces is routinely and effectively performed non-invasively by a number of different optical methods. Rough surfaces are also inspected using optical profilometers and microtopographers. The characterization of road asphalt pavement surfaces, produced in different ways and compositions, is fundamental for economic and safety reasons. Because they have complex structures, with topography spanning different ranges of form error and roughness, asphalt pavement surfaces are difficult to inspect non-invasively. In this communication we report on the optical non-contact rugometric characterization of the surface of different types of road pavements performed at the Microtopography Laboratory of the Physics Department of the University of Minho.
Program documentation: Surface heating rate of thin skin models (THNSKN)
NASA Technical Reports Server (NTRS)
Mcbryde, J. D.
1975-01-01
Program THNSKN computes the mean heating rate at a maximum of 100 locations on the surface of thin skin transient heating rate models. Output is printed in tabular form and consists of time history tabulation of temperatures, average temperatures, heat loss without conduction correction, mean heating rate, least squares heating rate, and the percent standard error of the least squares heating rates. The input tape used is produced by the program EHTS03.
Surface inspection system for carriage parts
NASA Astrophysics Data System (ADS)
Denkena, Berend; Acker, Wolfram
2006-04-01
Quality standards are very high in carriage manufacturing because the visual impression of quality is highly relevant to the customer's purchase decision. In carriage parts, even very small dents can be visible on the varnished and polished surface by observing reflections. The industrial demand is to detect these form errors on the unvarnished part. To meet the requirements, a stripe projection system for automatic recognition of waviness and form errors is introduced. It is based on a modified stripe projection method using a high-resolution line scan camera. Particular emphasis is put on achieving a short measuring time and a high depth resolution, aiming at reliable automatic recognition of dents and waviness of 10 μm on large curved surfaces of approximately 1 m width. The resulting point cloud needs to be filtered in order to detect dents, so a spatial filtering technique is used. This works well on smoothly curved surfaces if the frequency parameters are well defined. On more complex parts like mudguards, the method is restricted by the fact that frequencies near the defined dent frequencies occur within the surface as well. To allow analysis of complex parts, the system is currently being extended to include 3D CAD models in the inspection process. For smoothly curved surfaces, the measuring speed of the prototype is mainly limited by the amount of light produced by the stripe projector. For complex surfaces, the measuring speed is limited by the time-consuming matching process. Currently, development focuses on improving the measuring speed.
NASA Astrophysics Data System (ADS)
Schröder, Markus; Meyer, Hans-Dieter
2017-08-01
We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
Photorealistic ray tracing to visualize automobile side mirror reflective scenes.
Lee, Hocheol; Kim, Kyuman; Lee, Gang; Lee, Sungkoo; Kim, Jingu
2014-10-20
We describe an interactive visualization procedure for determining the optimal surface of a special automobile side mirror, thereby removing the blind spot, without the need for feedback from the error-prone manufacturing process. If the horizontally progressive curvature distributions are set to the semi-mathematical expression for a free-form surface, the surface point set can then be derived through numerical integration. This is then converted to a NURBS surface while retaining the surface curvature. Then, reflective scenes from the driving environment can be virtually realized using photorealistic ray tracing, in order to evaluate how these reflected images would appear to drivers.
Sensitivity analysis of brain morphometry based on MRI-derived surface models
NASA Astrophysics Data System (ADS)
Klein, Gregory J.; Teng, Xia; Schoenemann, P. T.; Budinger, Thomas F.
1998-07-01
Quantification of brain structure is important for evaluating changes in brain size with growth and aging and for characterizing neurodegenerative disorders. Previous quantification efforts using ex vivo techniques suffered considerable error due to shrinkage of the cerebrum after extraction from the skull, deformation of slices during sectioning, and numerous other factors. In vivo imaging studies of brain anatomy avoid these problems and allow repeated studies following the progression of brain structure changes due to disease or natural processes. We have developed a methodology for obtaining triangular mesh models of the cortical surface from MRI brain datasets. The cortex is segmented from non-brain tissue using a 2D region-growing technique combined with occasional manual edits. Once segmented, thresholding and image morphological operations (erosions and openings) are used to expose the regions between adjacent surfaces in deep cortical folds. A 2D region-following procedure is then used to find a set of contours outlining the cortical boundary on each slice. The contours on all slices are tiled together to form a closed triangular mesh model approximating the cortical surface. This model can be used for calculation of cortical surface area and volume, as well as other parameters of interest. Except for the initial segmentation of the cortex from the skull, the technique is automatic and requires only modest computation time on modern workstations. Though the use of image data avoids many of the pitfalls of ex vivo and sectioning techniques, our MRI-based technique is still vulnerable to errors that may impact the accuracy of estimated brain structure parameters. Potential inaccuracies include segmentation errors due to incorrect thresholding, missed deep sulcal surfaces, falsely segmented holes due to image noise, and surface tiling artifacts.
The focus of this paper is the characterization of these errors and how they affect measurements of cortical surface area and volume.
An assessment technique for computer-socket manufacturing
Sanders, Joan; Severance, Michael
2015-01-01
An assessment strategy is presented for testing the quality of carving and forming of individual computer aided manufacturing facilities. The strategy is potentially useful to facilities making sockets and companies marketing manufacturing equipment. To execute the strategy, an evaluator fabricates a collection of test models and sockets using the manufacturing suite under evaluation, and then measures their shapes using scanning equipment. Overall socket quality is assessed by comparing socket shapes with electronic file shapes. Then model shapes are compared with electronic file shapes to characterize carving performance. Socket shapes are compared with model shapes to characterize forming performance. The mean radial error (MRE), which is the average difference in radii between the two shapes being compared, provides insight into sizing quality. Inter-quartile range (IQR), the range of radial error for the best matched half of the points on the surfaces being compared, provides insight into shape quality. By determining MRE and IQR for carving and forming separately, the source(s) of socket shape error may be pinpointed. The developed strategy may provide a useful tool to the prosthetics community and industry to help identify problems and limitations in computer aided manufacturing and insight into appropriate modifications to overcome them. PMID:21938663
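The two metrics above are simple to compute once both shapes are sampled at corresponding points: MRE is the mean of the radial differences, and the IQR-based shape metric is the spread of the radial error over the middle half of the points. A minimal sketch with synthetic radii (the numbers are illustrative; the paper's IQR is described as the range for the best-matched half of the points, approximated here by the standard inter-quartile range):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cross-section radii at 360 corresponding surface points, mm
r_model  = 50.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 360))
r_socket = r_model + 0.3 + 0.05 * rng.standard_normal(360)   # oversized + shape noise

err = r_socket - r_model
mre = err.mean()                                        # sizing quality: ~0.3 mm oversize
iqr = np.percentile(err, 75) - np.percentile(err, 25)   # shape quality: spread of error

print(round(mre, 3), round(iqr, 3))
```

Computing the pair model-vs-file and socket-vs-model, as the strategy prescribes, separates carving error from forming error.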
NASA Astrophysics Data System (ADS)
Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao
2011-05-01
According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a genetic algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the equation of the surface are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through GA optimization. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of an Archimedes helicoid surface and satisfies the evaluation standard of the minimum zone method. It can be applied to the measured profile error data of complex surfaces obtained by coordinate measuring machines (CMMs).
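The minimum zone idea is to choose the reference-surface parameters that minimise the peak-to-valley deviation (not the squared deviation), which is why a global optimiser such as a GA is used. A toy sketch of the same principle on a 2-D line profile rather than a helicoid (the GA operators and all values here are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic measured profile: nominal line plus a bounded form deviation
x = np.linspace(0, 10, 100)
y = 0.5 * x + 1.0 + 0.02 * np.sin(3 * x)

def zone_width(params):
    a, b = params
    d = y - (a * x + b)
    return d.max() - d.min()          # minimum-zone objective: peak-to-valley deviation

# Tiny genetic algorithm over (slope, intercept): truncation selection + Gaussian mutation
pop = rng.uniform([-1.0, -5.0], [2.0, 5.0], size=(60, 2))
for _ in range(200):
    fitness = np.array([zone_width(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]                       # keep the best 20
    children = parents[rng.integers(0, 20, 40)] + 0.01 * rng.standard_normal((40, 2))
    pop = np.vstack([parents, children])                          # elitist replacement

best = pop[np.argmin([zone_width(p) for p in pop])]
print(round(zone_width(best), 4))
```

For this profile the ideal minimum-zone width is 0.04 (the peak-to-valley of the sinusoidal deviation), reached when the candidate slope matches the nominal 0.5; a least-squares fit minimises a different objective and generally gives a wider zone.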
1982-09-30
... differential wheel wear and thereby prevent form errors. Wheel sharpness can also be monitored to ...
Nomenclature: Vw = work surface speed (m/s); Nw = work speed (RPM); Vs = wheel surface speed (m/s); Ns = wheel speed (RPM); Vt = traverse speed (m/s); Dw = work diameter (mm); Ds = wheel diameter (mm); z = dress lead (μm/rev); c = 2 × diamond depth-of-dress (μm); d ...
Figure correction of a metallic ellipsoidal neutron focusing mirror
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jiang, E-mail: jiang.guo@riken.jp; Yamagata, Yutaka; Morita, Shin-ya
2015-06-15
An increasing number of neutron focusing mirrors is being adopted in neutron scattering experiments in order to provide high fluxes at sample positions, reduce measurement time, and/or increase statistical reliability. To realize a small focusing spot and high beam intensity, mirrors with both high form accuracy and low surface roughness are required. To achieve this, we propose a new figure correction technique to fabricate a two-dimensional neutron focusing mirror made with electroless nickel-phosphorus (NiP) by effectively combining ultraprecision shaper cutting and fine polishing. An arc envelope shaper cutting method is introduced to generate high form accuracy, while a fine polishing method, in which the material is removed effectively without losing profile accuracy, is developed to reduce the surface roughness of the mirror. High form accuracy in the minor-axis and the major-axis is obtained through tool profile error compensation and corrective polishing, respectively, and low surface roughness is acquired under a low polishing load. As a result, an ellipsoidal neutron focusing mirror is successfully fabricated with high form accuracy of 0.5 μm peak-to-valley and low surface roughness of 0.2 nm root-mean-square.
Efficient machining of ultra precise steel moulds with freeform surfaces
NASA Astrophysics Data System (ADS)
Bulla, B.; Robertson, D. J.; Dambon, O.; Klocke, F.
2013-09-01
Ultra precision diamond turning of hardened steel to produce optical quality surfaces can be realized by applying an ultrasonic assisted process. With this technology, optical moulds typically used for injection moulding can be machined directly from steel, without the requirement to overcoat the mould with a diamond-machinable material such as nickel phosphorus. This has the advantages of increasing the mould tool lifetime and reducing manufacturing costs by dispensing with the relatively expensive plating process. This publication presents results we have obtained for generating freeform moulds in hardened steel by means of ultrasonic assisted diamond turning with a vibration frequency of 80 kHz. To provide a baseline with which to characterize the system performance, we perform plane cutting experiments on steel alloys with different compositions. The baseline machining results provide information on the surface roughness and on tool wear caused during machining, which we relate to material composition. Moving on to freeform surfaces, we present a theoretical background for defining the machine program parameters for generating free forms by applying slow slide servo machining techniques. A solution for optimal part generation is introduced which forms the basis for the freeform machining experiments. The entire process chain, from the raw material through to ultra precision machining, is presented, with emphasis on maintaining surface alignment when moving a component from CNC pre-machining to final machining using ultrasonic assisted diamond turning. The freeform moulds are qualified on the basis of surface roughness measurements and a form error map comparing the machined surface with the originally defined surface. These experiments demonstrate the feasibility of efficient freeform machining applying ultrasonic assisted diamond turning of hardened steel.
Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook
2017-06-20
The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. Large-scale surface figure error is controlled by a guided removal process, such as computer-controlled optical surfacing. Smaller-scale surface errors are controlled by polishing process parameters. Surface errors with periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast in large optical systems. Conventionally, micro surface roughness is often given as the root mean square over a high spatial frequency range, based on errors within a 0.5×0.5 mm local surface map of 500×500 pixels. This surface specification is not adequate to fully describe the characteristics of advanced optical systems. The process for controlling and minimizing mid- to high-spatial frequency surface errors with periods of up to ∼2-3 mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. The surface PSD was then systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
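A surface PSD of the kind used above can be estimated from a profile trace with an FFT; periodic fabrication signatures appear as peaks at their spatial frequency. A minimal sketch on a synthetic trace with a 2 mm-period ripple (sampling, amplitudes, and the one-sided scaling are illustrative assumptions, not the paper's metrology):

```python
import numpy as np

# Synthetic surface trace: 2 mm-period polishing ripple plus random roughness
dx = 0.01e-3                                   # 10 um sample spacing, m (assumed)
x = np.arange(4096) * dx
z = 5e-9 * np.sin(2 * np.pi * x / 2e-3) \
    + 1e-9 * np.random.default_rng(0).standard_normal(x.size)

# Windowed FFT -> one-sided power spectral density vs spatial frequency
Z = np.fft.rfft(z * np.hanning(z.size))
freq = np.fft.rfftfreq(z.size, d=dx)           # spatial frequency, 1/m
psd = (np.abs(Z) ** 2) * dx / z.size           # scaling illustrative

peak = freq[np.argmax(psd[1:]) + 1]            # dominant non-DC component
print(1.0 / peak)                              # recovered ripple period, m
```

The recovered period lands in the frequency bin nearest 500 m⁻¹, i.e. about 2 mm; relating such peaks to grinding and polishing parameters is the paper's subject.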
Effects of Reynolds number on orifice induced pressure error
NASA Technical Reports Server (NTRS)
Plentovich, E. B.; Gloss, B. B.
1982-01-01
Data previously reported for orifice-induced pressure errors are extended to the case of higher Reynolds number flows, and a remedy is presented in the form of a porous metal plug for the orifice. Test orifices with apertures 0.330, 0.660, and 1.321 cm in diameter were fabricated on a flat plate for trials in the NASA Langley wind tunnel at Mach numbers 0.40-0.72. A boundary layer survey rake was also mounted on the flat plate to allow measurement of the total boundary layer pressures at the orifices. At the high Reynolds number flows studied, the orifice-induced pressure error was found to be a function of the ratio of the orifice diameter to the boundary layer thickness. The error was effectively eliminated by the insertion of a porous metal disc set flush with the orifice outer surface.
Chromatic dispersive confocal technology for intra-oral scanning: first in-vitro results
NASA Astrophysics Data System (ADS)
Ertl, T.; Zint, M.; Konz, A.; Brauer, E.; Hörhold, H.; Hibst, R.
2015-02-01
Various test objects, plaster models, partially equipped with extracted teeth, and pig jaws representing various clinical situations of tooth preparations were used for in-vitro scanning tests with an experimental intra-oral scanning system based on chromatic-dispersive confocal technology. Scanning results were compared against data sets of the same objects captured by an industrial μCT measuring system. Compared to the μCT data, an average error of 18-30 μm was achieved for a single-tooth scan area, and an error of less than 40-60 μm was measured over the restoration plus the neighboring teeth and pontic areas, up to 7 units. The mean error for a full jaw is within 100-140 μm. The length error for a 3-4 unit bridge situation, from contact point to contact point, is below 100 μm, and excellent interproximal surface coverage and prep margin clarity were achieved.
Fiber-optic projected-fringe digital interferometry
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Beheim, Glenn
1990-01-01
A phase-stepped projected-fringe interferometer was developed which uses a closed-loop fiber-optic phase-control system to make very accurate surface profile measurements. The closed-loop phase-control system greatly reduces phase-stepping error, which is frequently the dominant source of error in digital interferometers. Two beams emitted from a fiber-optic coupler are combined to form an interference fringe pattern on a diffusely reflecting object. Reflections off the fibers' output faces are used to create a phase-indicating signal for the closed-loop optical phase controller. The controller steps the phase difference between the two beams by pi/2 radians in order to determine the object's surface profile using a solid-state camera and a computer. The system combines the ease of alignment and automated data reduction of phase-stepping projected-fringe interferometry with the greatly improved phase-stepping accuracy of our closed-loop phase controller. The system is demonstrated by measuring the profile of a plate containing several convex surfaces whose heights range from 15 to 25 microns.
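With pi/2 phase steps, the standard four-bucket formula recovers the wrapped phase from four intensity frames, independently of the bias and modulation of the fringes; this is why phase-step accuracy, which the closed loop controls, dominates the error budget. A minimal numpy sketch on synthetic fringes (a generic phase-shifting demonstration, not the paper's processing chain):

```python
import numpy as np

# Synthetic object phase across one image row, kept inside (-pi, pi) to avoid unwrapping
phi = np.linspace(-0.9 * np.pi, 0.9 * np.pi, 100)
bias, mod = 2.0, 1.0

# Four intensity frames at phase steps of 0, pi/2, pi, 3*pi/2
I1 = bias + mod * np.cos(phi)
I2 = bias + mod * np.cos(phi + np.pi / 2)
I3 = bias + mod * np.cos(phi + np.pi)
I4 = bias + mod * np.cos(phi + 3 * np.pi / 2)

# Four-bucket formula: I4 - I2 = 2*mod*sin(phi), I1 - I3 = 2*mod*cos(phi)
phi_rec = np.arctan2(I4 - I2, I1 - I3)
print(np.allclose(phi_rec, phi))   # -> True
```

Errors in the actual step size (e.g. pi/2 plus a miscalibration) couple directly into phi_rec, which is the error source the fiber-optic closed loop suppresses.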
Verification of micro-scale photogrammetry for smooth three-dimensional object measurement
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard
2017-05-01
By using sub-millimetre laser speckle pattern projection we show that photogrammetry systems are able to measure smooth three-dimensional objects with surface height deviations less than 1 μm. The projection of laser speckle patterns allows correspondences on the surface of smooth spheres to be found, and as a result, verification artefacts with low surface height deviations were measured. A combination of VDI/VDE and ISO standards were also utilised to provide a complete verification method, and determine the quality parameters for the system under test. Using the proposed method applied to a photogrammetry system, a 5 mm radius sphere was measured with an expanded uncertainty of 8.5 μm for sizing errors, and 16.6 μm for form errors with a 95 % confidence interval. Sphere spacing lengths between 6 mm and 10 mm were also measured by the photogrammetry system, and were found to have expanded uncertainties of around 20 μm with a 95 % confidence interval.
NASA Astrophysics Data System (ADS)
Kunimura, Shinsuke; Ohmori, Hitoshi
We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result for a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal bonded abrasive wheel, then a metal-resin bonded abrasive wheel, followed by a conductive rubber bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively, of this process. Flatness over the whole surface was improved by the first and second steps. After the third step, the peak-to-valley (PV) and root mean square (rms) values in a 0.72 × 0.54 mm² area of the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro roughness were efficiently reduced by ELID grinding using the above three kinds of abrasive wheels because of the high removal rate of ELID grinding, and the residual small irregularities were reduced by a short period of MRF. This process makes it possible to produce flat and smooth surfaces in several hours.
Estimations of ABL fluxes and other turbulence parameters from Doppler lidar data
NASA Technical Reports Server (NTRS)
Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn
1989-01-01
Techniques for extracting boundary layer parameters from measurements of a short-pulse CO2 Doppler lidar are described. The measurements are those collected during the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE). By continuously operating the lidar for about an hour, stable statistics of the radial velocities can be extracted. Assuming that the turbulence is horizontally homogeneous, the mean wind, its standard deviations, and the momentum fluxes were estimated. Spectral analysis of the radial velocities is also performed, from which, by examining the amplitude of the power spectrum in the inertial range, the kinetic energy dissipation was deduced. Finally, using the statistical form of the Navier-Stokes equations, the surface heat flux is derived as the residual balance between the vertical gradient of the third moment of the vertical velocity and the kinetic energy dissipation. Combining many measurements would normally reduce the error, provided the errors are unbiased and uncorrelated. The nature of some of the algorithms, however, is such that biased and correlated errors may be generated even though the raw measurements are not. Data processing procedures were developed that eliminate bias and minimize error correlation. Once bias and error correlations are accounted for, the large sample size is shown to reduce the errors substantially. The principal features of the derived turbulence statistics for two case studies are presented.
Ingressive Speech Errors: A Service Evaluation of Speech-Sound Therapy in a Child Aged 4;6
ERIC Educational Resources Information Center
Hrastelj, Laura; Knight, Rachael-Anne
2017-01-01
Background: A pattern of ingressive substitutions for word-final sibilants can be identified in a small number of cases in child speech disorder, with growing evidence suggesting it is a phonological difficulty, despite the unusual surface form. Phonological difficulty implies a problem with the cognitive process of organizing speech into sound…
Spatio-temporal representativeness of ground-based downward solar radiation measurements
NASA Astrophysics Data System (ADS)
Schwarz, Matthias; Wild, Martin; Folini, Doris
2017-04-01
Surface solar radiation (SSR) is most directly observed with ground-based pyranometer measurements. Besides measurement uncertainties arising from the pyranometer instrument itself, errors attributed to the limited spatial representativeness of single-site observations for their large-scale surroundings must also be taken into account when using such measurements for energy balance studies. In this study, the spatial representativeness of 157 homogeneous European downward surface solar radiation time series from the Global Energy Balance Archive (GEBA) and the Baseline Surface Radiation Network (BSRN) was examined for the period 1983-2015, using the high-resolution (0.05°) surface solar radiation data set from the Satellite Application Facility on Climate Monitoring (CM-SAF SARAH) as a proxy for the spatiotemporal variability of SSR. By correlating deseasonalized monthly SSR time series from surface observations against single collocated satellite-derived SSR time series, a mean spatial correlation pattern was calculated and validated against purely observational patterns. Correlations generally decrease with increasing distance from the station, with high correlations (R² = 0.7) in proximity to the observational sites (±0.5°). When correlating surface observations against time series from spatially averaged satellite-derived SSR data (thereby simulating coarser and coarser grids), very high correspondence between sites and the collocated pixels was found for pixel sizes up to several degrees. Moreover, special focus was put on quantifying the errors which arise from spatial sampling when estimating the temporal variability and trends of a larger region from a single surface observation site. For 15-year trends on a 1° grid, errors due to spatial sampling on the order of half the measurement uncertainty of monthly mean values were found.
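Deseasonalizing before correlating matters because the shared annual cycle would otherwise inflate the correlation regardless of how well the anomalies agree. A minimal sketch on synthetic monthly series (the variance split between seasonal cycle, shared anomaly, and site noise is an assumption for illustration, not GEBA/SARAH data):

```python
import numpy as np

rng = np.random.default_rng(0)
months = 396                                   # 33 years of monthly values, 1983-2015

season = 80 * np.sin(2 * np.pi * np.arange(months) / 12)   # W/m^2, annual cycle
common = 10 * rng.standard_normal(months)                  # shared anomaly signal

site      = season + common + 3 * rng.standard_normal(months)
satellite = season + common + 3 * rng.standard_normal(months)

def deseasonalize(ts):
    clim = ts.reshape(-1, 12).mean(axis=0)     # monthly climatology
    return ts - np.tile(clim, ts.size // 12)

r_raw  = np.corrcoef(site, satellite)[0, 1]
r_anom = np.corrcoef(deseasonalize(site), deseasonalize(satellite))[0, 1]
print(round(r_raw, 2), round(r_anom, 2))
```

The raw correlation is dominated by the annual cycle and stays near 1 even for poorly matched sites; the anomaly correlation is the quantity that actually decays with distance in the study.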
Evaluation of alignment error of micropore X-ray optics caused by hot plastic deformation
NASA Astrophysics Data System (ADS)
Numazawa, Masaki; Ishi, Daiki; Ezoe, Yuichiro; Takeuchi, Kazuma; Terada, Masaru; Fujitani, Maiko; Ishikawa, Kumi; Nakajima, Kazuo; Morishita, Kohei; Ohashi, Takaya; Mitsuda, Kazuhisa; Nakamura, Kasumi; Noda, Yusuke
2018-06-01
We report on the evaluation and characterization of micro-electromechanical system (MEMS) X-ray optics produced by silicon dry etching and hot plastic deformation. Sidewalls of micropores formed by etching through a silicon wafer are used as X-ray reflecting mirrors. The wafer is deformed into a spherical shape to focus parallel incidence X-rays. We quantitatively evaluated a mirror alignment error using an X-ray pencil beam (Al Kα line at 1.49 keV). The deviation angle caused only by the deformation was estimated from angular shifts of the X-ray focusing point before and after the deformation to be 2.7 ± 0.3 arcmin on average within the optics. This gives an angular resolution of 12.9 ± 1.4 arcmin in half-power diameter (HPD). The surface profile of the deformed optics measured using a NH-3Ns surface profiler (Mitaka Kohki) also indicated that the resolution was 11.4 ± 0.9 arcmin in HPD, suggesting that we can simply evaluate the alignment error caused by the hot plastic deformation.
The measurement of an aspherical mirror by three-dimensional nanoprofiler
NASA Astrophysics Data System (ADS)
Tokuta, Yusuke; Okita, Kenya; Okuda, Kohei; Kitayama, Takao; Nakano, Motohiro; Nakatani, Shun; Kudo, Ryota; Yamamura, Kazuya; Endo, Katsuyoshi
2015-09-01
Aspherical optical elements with high accuracy are important in several fields such as third-generation synchrotron radiation and extreme-ultraviolet lithography. The demand for measurement methods for aspherical or free-form surfaces with nanometer resolution is therefore rising. Our purpose is to develop a non-contact profiler that measures free-form surfaces directly with a figure-error repeatability of less than 1 nm PV. To achieve this we have developed a three-dimensional nanoprofiler that traces the normal vectors of the sample surface. The measurement principle is based on the straightness of laser light and the accuracy of a rotational goniometer. The machine consists of four rotational stages, one translational stage, and an optical head that holds a quadrant photodiode (QPD) and a laser head at optically equal positions. In this measurement method we align the incident beam with the reflected beam by controlling the five stages, and determine the normal vectors and the coordinates of the surface from the signals of the goniometers, the translational stage, and the QPD. A three-dimensional figure is then obtained from the normal vectors and coordinates by a reconstruction algorithm. To evaluate the performance of the machine we measured a concave aspherical mirror ten times, calculated the measurement repeatability from the ten results, and evaluated the measurement uncertainty by comparing the result with that measured by an interferometer. The repeatability of the measurement was 2.90 nm (σ) and the difference between the two profiles was ±20 nm. We conclude that the two profiles are consistent, considering the systematic errors of each machine.
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2014-01-20
As a further application investigation of fixed abrasive diamond pellets (FADPs), this work demonstrates their capability for diminishing mid-spatial-frequency errors (MSFEs, i.e., periodic small-scale structure) on optical surfaces. Benefiting from its high surface rigidity, the FADP tool has a natural smoothing effect on periodic small errors. Compared with the previous design, the proposed new tool conforms better to aspherical surfaces because the pellets are mutually separated and bonded to a steel plate with an elastic backing of silicone rubber adhesive. Moreover, a unicursal Peano-like path is presented for improving MSFEs, which enhances the multidirectionality and uniformity of the tool's motion. Experiments were conducted to validate the effectiveness of FADPs for diminishing MSFEs. In the lapping of a Φ=420 mm Zerodur paraboloid workpiece, the grinding ripples were quickly diminished (210 min), as confirmed by visual inspection, profile metrology, and power spectral density (PSD) analysis; the RMS was reduced from 4.35 to 0.55 μm. In the smoothing of a Φ=101 mm fused silica workpiece, MSFEs were clearly improved, as seen in the surface form maps, interferometric fringe patterns, and PSD analysis; the mid-spatial-frequency RMS was diminished from 0.017λ to 0.014λ (λ=632.8 nm).
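The PSD analysis used above to track ripple removal can be sketched for a 1-D profile. The profile, ripple amplitude, and smoothing factor below are invented for illustration; only the compute-PSD-and-compare-RMS workflow reflects the abstract.

```python
# One-sided PSD of a 1-D surface profile, used to locate a periodic ripple
# and confirm its attenuation after smoothing. Illustrative parameters only.
import numpy as np

def rms(profile):
    p = np.asarray(profile, dtype=float)
    p = p - p.mean()
    return np.sqrt(np.mean(p ** 2))

def psd(profile, dx):
    """One-sided power spectrum of a profile sampled at spacing dx (metres)."""
    p = np.asarray(profile, dtype=float) - np.mean(profile)
    n = p.size
    spec = np.abs(np.fft.rfft(p)) ** 2 * dx / n
    freqs = np.fft.rfftfreq(n, d=dx)          # spatial frequency, cycles/m
    return freqs, spec

dx = 0.1e-3                                   # 0.1 mm sampling
x = np.arange(4096) * dx
ripple = 2.0e-6 * np.sin(2 * np.pi * x / 2e-3)  # 2 um ripple, 2 mm period
before = ripple
after = 0.25 * ripple                         # smoothing attenuates the ripple

f_b, s_b = psd(before, dx)
peak_freq = f_b[np.argmax(s_b)]               # near 1/(2 mm) = 500 cycles/m
```

The PSD peak pins the ripple to a spatial frequency (here the grinding-ripple period), while the RMS before/after quantifies the smoothing gain.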
Air and smear sample calculational tool for Fluor Hanford Radiological control
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAUMANN, B.L.
2003-07-11
A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and for smear counting as outlined in HNF-13536, Section 5.2.7, ''Analyzing Air and Smear Samples''. This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwriting and calculation errors by using an electronic form for documenting and calculating workplace air samples. Current expectations are that RCTs will collect an air-sample filter or perform a smear for surface contamination, survey the filter for gross alpha and beta/gamma radioactivity, and then use either a hand-calculation method or a calculator to determine the activity on the filter from the gross counts. The electronic form will allow the RCT, with a few keystrokes, to document the individual's name, payroll number, gross counts, and instrument identifiers, and to produce an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) while at the same time documenting the air sample.
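The kind of calculation the spreadsheet automates can be sketched as follows. The net-rate/efficiency/volume structure is the standard air-sample calculation; the specific factors, units, and limits in HNF-13536 may differ, and all numeric inputs below are invented.

```python
# Minimal air-sample activity-concentration calculation (dpm per cubic metre).
# A sketch of the generic formula, not the procedure codified in HNF-13536.

def net_count_rate(gross_counts, count_time_min, bkg_rate_cpm):
    """Net count rate in counts per minute (gross rate minus background)."""
    return gross_counts / count_time_min - bkg_rate_cpm

def air_concentration_dpm_per_m3(gross_counts, count_time_min, bkg_rate_cpm,
                                 detector_efficiency, flow_lpm, sample_time_min):
    """Airborne activity concentration: net rate / efficiency / sampled volume."""
    net_cpm = net_count_rate(gross_counts, count_time_min, bkg_rate_cpm)
    activity_dpm = net_cpm / detector_efficiency
    volume_m3 = flow_lpm * sample_time_min / 1000.0   # litres -> cubic metres
    return activity_dpm / volume_m3

conc = air_concentration_dpm_per_m3(
    gross_counts=600, count_time_min=1.0, bkg_rate_cpm=50.0,
    detector_efficiency=0.25, flow_lpm=60.0, sample_time_min=10.0)
```

Encoding the formula once, as the spreadsheet does, removes the per-sample arithmetic the RCT would otherwise repeat by hand.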
Worthwhile optical method for free-form mirrors qualification
NASA Astrophysics Data System (ADS)
Sironi, G.; Canestrari, R.; Toso, G.; Pareschi, G.
2013-09-01
We present an optical method for free-form mirror qualification developed by the Italian National Institute for Astrophysics (INAF) in the context of the ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) Project, which includes, among its items, the design, development, and installation of a dual-mirror telescope prototype for the Cherenkov Telescope Array (CTA) observatory. The primary mirror panels of the telescope prototype are free-form concave mirrors with an accuracy of a few microns required on the shape error. The developed technique is based on the synergy between a Ronchi-like optical test performed on the reflecting surface and the image that a perfect optic would generate in the same configuration, obtained by means of the TraceIT proprietary ray-tracing code. This deflectometry test allows reconstruction of the slope error map, which the TraceIT code can process to evaluate the measured mirror's optical performance at the telescope focus. The advantage of the proposed method is that it replaces the use of a 3D coordinate measuring machine, reducing production time and costs and offering the possibility of evaluating the mirror image quality at the focus on site. In this paper we report the measuring concept and compare the obtained results to those obtained by processing the shape error acquired with a 3D coordinate measuring machine.
Continuous-wave ultrasound reflectometry for surface roughness imaging applications
Kinnick, R. R.; Greenleaf, J. F.; Fatemi, M.
2009-01-01
Background Measurement of surface roughness irregularities resulting from various sources such as manufacturing processes, surface damage, and corrosion is an important indicator of product quality in many nondestructive testing (NDT) industries. Many techniques exist; however, because of their qualitative, time-consuming, and direct-contact modes, it is of some importance to work out new experimental methods and efficient tools for quantitative estimation of surface roughness. Objective and Method Here we present continuous-wave ultrasound reflectometry (CWUR) as a novel nondestructive modality for imaging and measuring surface roughness in a non-contact mode. In CWUR, voltage variations due to phase shifts in the reflected ultrasound waves are recorded and processed to form an image of surface roughness. Results An acrylic test block with surface irregularities ranging from 4.22 μm to 19.05 μm, as measured by a coordinate measuring machine (CMM), is scanned by an ultrasound transducer having a diameter of 45 mm, a focal distance of 70 mm, and a central frequency of 3 MHz. The CWUR technique shows very good agreement with the CMM results, inasmuch as the maximum average percent error is around 11.5%. Conclusion The images obtained here demonstrate that CWUR may be used as a powerful noncontact, quantitative tool for nondestructive inspection and imaging of surface irregularities at the micron level with an average error of less than 11.5%. PMID:18664399
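The phase-to-height relation underlying CWUR can be written down directly: a reflected continuous wave acquires a round-trip phase shift of 4π·Δh/λ for a surface height step Δh. The coupling medium and sound speed below are assumptions for illustration (water at 3 MHz), not details taken from the paper's setup.

```python
# Round-trip phase shift <-> surface height for continuous-wave reflectometry.
# Assumed water coupling (c ~ 1480 m/s) at the 3 MHz centre frequency.
import math

def height_from_phase(delta_phi_rad, wavelength_m):
    """Surface height change from a round-trip (reflection) phase shift."""
    return delta_phi_rad * wavelength_m / (4.0 * math.pi)

c_water = 1480.0            # m/s, assumed coupling medium
f = 3.0e6                   # Hz, transducer centre frequency
wavelength = c_water / f    # ~0.49 mm

# phase shift produced by the largest irregularity on the test block (19.05 um):
dphi = 4.0 * math.pi * 19.05e-6 / wavelength
```

Since the acoustic wavelength (~0.5 mm) is much larger than the micron-scale steps, the phase shift stays well within one fringe and maps to height without ambiguity.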
Panel positioning error and support mechanism for a 30-m THz radio telescope
NASA Astrophysics Data System (ADS)
Yang, De-Hua; Okoh, Daniel; Zhou, Guo-Hua; Li, Ai-Hua; Li, Guo-Ping; Cheng, Jing-Quan
2011-06-01
A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active-surface panel positioning errors with respect to optical performance in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors (piston, tip, tilt, radial, azimuthal, and twist displacements) were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrated the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. The sensitivity analysis revealed that the piston and tilt/tip errors are dominant, while the other rigid errors are much less important. Furthermore, guided by these results, we conceived an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction of all six rigid panel positioning errors in an economically feasible way.
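The Monte Carlo step above can be sketched with the Ruze-type relation S = exp(-(4πε/λ)²), where ε is the reflector surface rms error (the factor 4π, rather than 2π, accounts for the wavefront error doubling on reflection). Panel count and error level below are illustrative, not the telescope's actual budget; only piston errors are drawn, whereas the study treats all six rigid motions.

```python
# Monte Carlo sketch: random panel piston errors -> surface rms -> Strehl
# ratio via Ruze's formula. Illustrative panel count and error magnitude.
import numpy as np

def strehl_from_surface_rms(eps_rms, wavelength):
    """Ruze-type Strehl ratio for reflector surface rms error eps_rms."""
    return np.exp(-(4.0 * np.pi * eps_rms / wavelength) ** 2)

rng = np.random.default_rng(42)
wavelength = 200e-6                  # 200 um operating wavelength
n_panels, n_trials = 600, 200

strehls = []
for _ in range(n_trials):
    piston = rng.normal(scale=5e-6, size=n_panels)   # 5 um rms piston errors
    strehls.append(strehl_from_surface_rms(piston.std(), wavelength))
mean_strehl = float(np.mean(strehls))
```

With 5 μm rms piston at λ = 200 μm the Strehl ratio stays near 0.9, which illustrates why sub-wavelength panel control is feasible at THz wavelengths but the dominant terms (piston, tip/tilt) must still be actively corrected.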
The Stokes problem for the ellipsoid using ellipsoidal kernels
NASA Technical Reports Server (NTRS)
Zhu, Z.
1981-01-01
A brief review of Stokes' problem for the ellipsoid as a reference surface is given. Another solution of the problem using an ellipsoidal kernel, which represents an iterative form of Stokes' integral, is suggested, with a relative error of the order of the flattening. Rapp's method is studied in detail and procedures for improving its convergence are discussed.
Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals
NASA Technical Reports Server (NTRS)
Dempsey, Brian Paul
1997-01-01
Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%; the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 C, i.e., when the jet velocity is in excess of 100 m/s. Parameters investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data are presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for the improvement of the experimental technique are discussed.
MODIS Aerosol Optical Depth Bias Adjustment Using Machine Learning Algorithms
NASA Technical Reports Server (NTRS)
Albayrak, Arif; Wei, Jennifer; Petrenko, Maksym; Lary, David; Leptoukh, Gregory
2011-01-01
To monitor the Earth's atmosphere and surface changes, satellite-based instruments collect continuous data. While some of these data are used directly, others, such as aerosol properties, are retrieved indirectly from the observations. While retrieved variables (RVs) form very powerful products, they do not come without obstacles. Different satellite viewing geometries, calibration issues, dynamically changing atmospheric and surface conditions, and complex interactions between observed entities and their environment affect them greatly, resulting in random and systematic errors in the final products.
NASA Technical Reports Server (NTRS)
Rummel, R.
1975-01-01
Integral formulas in the parameter domain are used instead of a representation by spherical harmonics. The neglected regions will cause a truncation error. The application of the discrete form of the integral equations connecting the satellite observations with surface gravity anomalies is discussed in comparison with the least squares prediction method. One critical point of downward continuation is the proper choice of the boundary surface. Practical feasibilities are in conflict with theoretical considerations. The properties of different approaches for this question are analyzed.
Image defects from surface and alignment errors in grazing incidence telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.
1989-01-01
The rigid body motions and low-frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn, in which the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expansion correspond to rigid body motions (decenter and tilt) and low-spatial-frequency surface errors of the mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximate first-order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes to rigid body motions and surface deformations. The rms spot diameters calculated from this theory and from the OSAC ray-tracing code agree very well. The theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
New fabrication method for an ellipsoidal neutron focusing mirror with a metal substrate.
Guo, Jiang; Takeda, Shin; Morita, Shin-ya; Hino, Masahiro; Oda, Tatsuro; Kato, Jun-ichi; Yamagata, Yutaka; Furusaka, Michihiro
2014-10-06
We propose an ellipsoidal neutron focusing mirror using a metal substrate made of electroless nickel-phosphorus (NiP) plated material for the first time. Electroless NiP has great advantages for realizing an ellipsoidal neutron mirror because of its amorphous structure, good machinability, and relatively large critical angle of total reflection for neutrons. We manufactured the mirror by combining ultrahigh-precision cutting and fine polishing to achieve high form accuracy and low surface roughness. The form accuracy of the mirror was estimated to be 5.3 μm P-V and 0.8 μm P-V in the minor-axis and major-axis directions respectively, while the surface roughness was reduced to 0.2 nm rms. The effect of form error on focusing spot size was evaluated using a laser beam, and the focusing performance of the mirror was verified by neutron experiments.
NASA Astrophysics Data System (ADS)
Xu, B.
2017-12-01
Interferometric synthetic aperture radar (InSAR) has the advantages of high spatial resolution, which enables measurement of line-of-sight (LOS) surface displacements with nearly complete spatial continuity, and a satellite's perspective, which permits viewing large areas of the Earth's surface quickly and efficiently. However, using InSAR to observe long-wavelength, small-magnitude deformation signals is still significantly limited by various unmodeled error sources, e.g., atmospheric delays, orbit-induced errors, and digital elevation model (DEM) errors. Independent component analysis (ICA) is a probabilistic method for separating linearly mixed signals generated by different underlying physical processes. The signal sources that form the interferograms are statistically independent in both space and time and can thus be separated by an ICA approach. The seismic behavior of the Los Angeles Basin is active, and the basin has experienced numerous moderate to large earthquakes since the early Pliocene. Hence, understanding seismotectonic deformation in the Los Angeles Basin is important for analyzing seismic behavior. Compared with tectonic deformation, nontectonic deformation due to groundwater and oil extraction may be mainly responsible for the surface deformation in the Los Angeles Basin. Using the small baseline subset (SBAS) InSAR method, we extracted the surface deformation time series in the Los Angeles Basin over a time span of 7 years (September 27, 2003 to September 25, 2010). We then successfully separated the atmospheric noise from the InSAR time series and detected different processes caused by different mechanisms.
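The ICA separation step can be sketched on synthetic data: mix two statistically independent series (a seasonal, atmosphere-like oscillation and a trend, subsidence-like signal) at two "pixels" and recover them with FastICA. This is illustrative only, not the SBAS products from the study, and the mixing matrix is invented.

```python
# Separate two independent time series from their linear mixtures with
# FastICA, mimicking the InSAR source-separation idea on toy data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 7, 84)                        # 7 years, monthly sampling
s1 = np.sin(2 * np.pi * t)                       # seasonal (atmosphere-like)
s2 = 0.5 * t + 0.1 * rng.normal(size=t.size)     # trend (subsidence-like)
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6],                        # invented mixing at two pixels
              [0.4, 1.0]])
X = S @ A.T                                      # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)

# match recovered components to true sources by absolute correlation
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
```

ICA returns components up to sign and scale, so recovery is checked by absolute correlation rather than direct equality; in the InSAR setting the same ambiguity is resolved by physical interpretation of each component's spatial pattern.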
Estimating pore and cement volumes in thin section
Halley, R.B.
1978-01-01
Point count estimates of pore, grain, and cement volumes from thin sections are inaccurate, often by more than 100 percent, even though they may be surprisingly precise (reproducibility ±3 percent). Errors are produced by: 1) inclusion of submicroscopic pore space within solid volume, and 2) edge effects caused by grain curvature within a 30-micron-thick thin section. Submicroscopic porosity may be measured by various physical tests or may be visually estimated from scanning electron micrographs. Edge error takes the form of an envelope around grains and increases with decreasing grain size and sorting, increasing grain irregularity, and tighter grain packing. Cements are greatly involved in edge error because of their position at grain peripheries and their generally small grain size. Edge error is minimized by methods which reduce the thickness of the sample viewed during point counting. Methods which effectively reduce thickness include use of ultra-thin thin sections or acetate peels, point counting in reflected light, or carefully focusing and counting on the upper surface of the thin section.
Problems in determining the surface density of the Galactic disk
NASA Technical Reports Server (NTRS)
Statler, Thomas S.
1989-01-01
A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.
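The quoted error budget adds in quadrature, and the stated contributions can be checked in two lines. The contributions (~25%, ~15%, ~10%) and the sample-size scaling (√3 reduction of the sampling term) are taken from the abstract above.

```python
# Quadrature combination of the surface-density error budget quoted above.
import math

def quadrature(*errors):
    """Root-sum-square combination of independent fractional errors."""
    return math.sqrt(sum(e * e for e in errors))

total = quadrature(0.25, 0.15, 0.10)                  # ~0.31, the quoted ">=30%"
reduced = quadrature(0.25 / math.sqrt(3), 0.15, 0.10)  # ~0.23, near the quoted 20%
```

Tripling the velocity sample shrinks only the 25% sampling term (by √3), which is why the combined error falls to roughly the 20 percent level rather than by the full factor.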
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Clayson, C. A.
2012-01-01
Residual forcing necessary to close the mixed layer temperature budget (MLTB) on seasonal time scales is largest in regions of strongest surface heat flux forcing. Identifying the dominant source of error - surface heat flux error, mixed layer depth estimation, or ocean dynamical forcing - remains a challenge in the eastern tropical oceans, where ocean processes are very active. Improved sub-surface observations are necessary to better constrain errors. 1. Mixed layer depth evolution is critical to the seasonal evolution of mixed layer temperatures: it determines the inertia of the mixed layer and scales the sensitivity of the MLTB to errors in surface heat flux and ocean dynamical forcing. This role produces timing impacts for errors in SST prediction. 2. Errors in the MLTB are larger than the historical 10 W m-2 target accuracy. In some regions a larger error can be tolerated if the goal is to resolve the seasonal SST cycle.
A surface code quantum computer in silicon
Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.
2015-01-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310
DFT-GGA errors in NO chemisorption energies on (111) transition metal surfaces
NASA Astrophysics Data System (ADS)
Huang, Xu; Mason, Sara E.
2014-03-01
We investigate whether well-known DFT-GGA errors in predicting the chemisorption energy (Echem) of CO on transition metal surfaces manifest in analogous NO chemisorption systems. While widely investigated in the case of CO/metal, analogous DFT-GGA errors have long been claimed to be absent in NO/metal chemisorption. Here, we provide theoretical evidence of systematic enhanced back-donation in NO/metal chemisorption at the DFT-GGA level. We use electronic structure analysis to show that the partially filled molecular NO 2π* orbital rehybridizes with the transition metal d-band to form new bonding and anti-bonding states. We relate the back-donation charge transfer associated with chemisorption to the promotion of an electron from the 5σ orbital to the 2π* orbital in the gas-phase NO G2Σ- ← X2Π excitation. We establish linear relationships between Echem and ΔEG ← X and formulate an Echem correction scheme in the style of Mason et al. [Physical Review B 69, 161401(R)]. We apply the NO Echem correction method to the (111) surfaces of Pt, Pd, Rh, and Ir, with NO chemisorption modeled at a coverage of 0.25 ML. We note that the slope of Echem vs. ΔEG ← X and the dipole moment depend strongly on adsorption site for each metal, and we construct an approximate correction scheme which we test using NO/Pt(100) chemisorption.
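The correction scheme's arithmetic amounts to a linear fit of Echem against the calculated excitation energy, evaluated at a reference excitation energy. The sketch below shows only that structure; every number in it is a placeholder, not a value from this paper or from Mason et al.

```python
# Linear-relationship correction sketch: fit Echem vs. calculated excitation
# energy, then evaluate at a reference (e.g. experimental) excitation energy.
# All data points and the reference value are invented placeholders.
import numpy as np

def linear_fit(x, y):
    """Least-squares slope and intercept for y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return a, b

# hypothetical Echem values (eV) vs. corresponding calculated excitation
# energies (eV) across several functionals/setups:
dE_calc = np.array([5.0, 5.4, 5.8, 6.2])
echem = np.array([-2.10, -1.95, -1.80, -1.65])

slope, intercept = linear_fit(dE_calc, echem)
dE_ref = 6.5                                   # hypothetical reference value
echem_corrected = slope * dE_ref + intercept
```

The fitted slope plays the role the paper attributes to site dependence: a different adsorption site would give a different slope, hence the need for a site-resolved correction.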
NASA Astrophysics Data System (ADS)
Liu, Dan; Fu, Xiu-hua; Jia, Zong-he; Wang, Zhe; Dong, Huan
2014-08-01
In high-energy laser test systems, higher requirements are placed on the surface profile and finish of optical elements. Taking a focusing aspherical Zerodur lens with a diameter of 100 mm as an example, the surface profile and surface quality of the lens were investigated using a combination of CNC and classical machining methods. With profilometer and high-power microscope measurements as a guide, and through testing and simulation analysis, the process parameters were improved continually during manufacturing. Mid- and high-frequency errors were trimmed and improved so that the surface form gradually converged to the required accuracy. The experimental results show that the final surface accuracy is less than 0.5 μm and the surface finish is □, which fulfils the accuracy requirements of an aspherical focusing lens in an optical system.
Evans, Drew R; Craig, Vincent S J
2006-03-23
Cantilever beams, both microscopic and macroscopic, are used as sensors in a great variety of applications. An optical lever system is commonly employed to determine the deflection, and thereby the profile, of the cantilever under load. The sensitivity of the optical lever must be calibrated, usually by applying a known load or deflection to the free end of the cantilever. When the sensing operation involves a different type of load, or a combination of loading types, the calibration and the deflection values derived from it become invalid. Here we develop a master equation that permits the true deflection of the cantilever to be obtained simply from the measurement of the apparent deflection, for uniformly distributed loadings and end-moment loadings. These loadings are relevant to the uniform adsorption or application of material to the cantilever, or the application of a surface stress to the cantilever, and should assist experimentalists using the optical lever, such as in the atomic force microscope, to measure cantilever deflections in a great variety of sensing applications. We then apply this treatment to the experimental evaluation of surface stress. Three forms of Stoney's equation that relate the apparent deflection to the surface stress, each valid for both macroscopic and microscopic experiments, are derived. Analysis of the errors arising from incorrect modeling of the loading conditions of the cantilever currently applied in experiments is also presented. It is shown that the reported literature values for surface stress in microscopic experiments are typically 9% smaller than their true value. For macroscopic experiments, we demonstrate that the added mass of the film or coating generally dominates the measured deflection and must be accounted for accurately if surface stress measurements are to be made. Further, the reported measurements generally use a form of Stoney's equation that is in error, resulting in an overestimation of surface stress by a factor >5.
Error reduction in three-dimensional metrology combining optical and touch probe data
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2010-08-01
Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error from the reference frame transformation. These error sources can influence the calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and how to minimize the error impact on calculated quantities. Ultimately, we must ensure that the statistical error from these procedures is minimized and within acceptance criteria.
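The fit-then-extrapolate step above can be sketched with a least-squares quadratic surface. The point cloud, surface shape, and noise level below are synthetic stand-ins for the CMM insole data; the point is that coefficients fitted inside the data support are then evaluated beyond it, which is exactly where the sensitivity discussed above arises.

```python
# Fit z = f(x, y) (quadratic) to noisy "insole" points by least squares,
# then evaluate the surface beyond the data support. Synthetic data only.
import numpy as np

def fit_quadratic_surface(pts):
    """Fit z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 to Nx3 points."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_surface(coeffs, x, y):
    return (coeffs[0] + coeffs[1] * x + coeffs[2] * y
            + coeffs[3] * x * x + coeffs[4] * x * y + coeffs[5] * y * y)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z_true = 0.5 + 0.1 * x - 0.2 * y + 0.05 * x * x       # "insole" shape
pts = np.c_[x, y, z_true + rng.normal(scale=1e-4, size=x.size)]

c = fit_quadratic_surface(pts)
z_extrap = eval_surface(c, 1.5, 0.0)    # extrapolated beyond the data range
```

Because the quadratic terms amplify coefficient noise outside the fitted region, the extrapolation error grows with distance from the data, which is why the paper treats polynomial choice and extrapolation as a distinct error source.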
Evaluation and testing of image quality of the Space Solar Extreme Ultraviolet Telescope
NASA Astrophysics Data System (ADS)
Peng, Jilong; Yi, Zhong; Zhou, Shuhong; Yu, Qian; Hou, Yinlong; Wang, Shanshan
2018-01-01
For the space solar extreme ultraviolet telescope, the star point test cannot be performed in the x-ray band (19.5 nm) because no sufficiently bright light source exists. In this paper, the point spread function of the optical system is calculated to evaluate the imaging performance of the telescope system. Taking into account the actual processing surface errors, such as those from small-grinding-head processing and magnetorheological processing, the optical design software Zemax and the data analysis software Matlab are used to directly calculate the point spread function of the space solar extreme ultraviolet telescope. Matlab code is written to generate the required surface error grid data. These surface error data are loaded onto the specified surface of the telescope system using Dynamic Data Exchange (DDE) to connect Zemax and Matlab. Because different processing methods lead to surface errors of different size, distribution, and spatial frequency, their impact on imaging also differs. Therefore, the characteristics of the surface errors of different machining methods are studied; combining each error's position in the optical system with simulation of its influence on image quality is of great significance for reasonably choosing the processing technology. Additionally, we have analyzed the relationship between surface error and image quality evaluation. To ensure that the final machined mirror meets the image quality requirements, one or several evaluation methods for the surface error should be chosen according to its spatial frequency characteristics.
Forming maps of targets having multiple reflectors with a biomimetic audible sonar.
Kuc, Roman
2018-05-01
A biomimetic audible sonar mimics human echolocation by emitting clicks and sensing echoes binaurally to investigate the limitations in acoustic mapping of 2.5-dimensional targets. A monaural sonar that provides only echo time-of-flight values produces biased maps that lie outside the target surfaces. Reflector bearing estimates derived from the first echoes detected by a binaural sonar are employed to form unbiased maps. Multiple echoes from a target introduce phantom-reflector (PR) artifacts into its map because later echoes are produced by reflectors at bearings different from those determined from the first echoes. In addition, overlapping echoes interfere to produce bearing errors. Addressing the causes of these bearing errors motivates a processing approach that employs template matching to extract valid echoes. Interfering echoes can mimic a valid echo and also form PR artifacts. These artifacts are eliminated by recognizing the bearing fluctuations that characterize echo interference. Removing PR artifacts produces a map that resembles the physical target shape to within the resolution capabilities of the sonar. The remaining differences between the target shape and the final map are void artifacts caused by invalid or missing echoes.
Design and tolerance analysis of a transmission sphere by interferometer model
NASA Astrophysics Data System (ADS)
Peng, Wei-Jei; Ho, Cheng-Fong; Lin, Wen-Lung; Yu, Zong-Ru; Huang, Chien-Yao; Hsu, Wei-Yao
2015-09-01
The design of a 6-in, f/2.2 transmission sphere for Fizeau interferometry is presented in this paper. To predict the actual performance during the design phase, we build an interferometer model with tolerance analysis in Zemax. Evaluating focused imaging alone is not enough for a double-pass optical system, so we study an interferometer model that includes the system error and the wavefronts reflected from the reference surface and the tested surface. First, we generate a deformation map of the tested surface. Using multiple configurations in Zemax, we obtain the test wavefront and the reference wavefront reflected from the tested surface and the reference surface of the transmission sphere, respectively. Following the theory of interferometry, we subtract the two wavefronts to acquire the phase of the tested surface. Zernike polynomials are applied to convert the map from phase to sag and to remove piston, tilt, and power. The restored map is the same as the original map because no system error exists. Second, perturbed tolerances, including lens fabrication and assembly, are considered. The system error arises because the test and reference beams are no longer perfectly common-path, and the restored map becomes inaccurate once the system error is added. Although the system error can be subtracted by calibration, it should still be controlled within a small range to avoid calibration error. Generally, the reference wavefront error, including the system error and the irregularity of the reference surface of a 6-in transmission sphere, must be within peak-to-valley (PV) 0.1 λ (λ = 0.6328 μm), which is not easy to achieve. Consequently, it is necessary to predict the value of the system error before manufacture. Finally, a prototype is developed and tested against a reference surface with PV 0.1 λ irregularity.
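The piston/tilt/power removal step described in this abstract is a standard least-squares fit of the first Zernike terms. A minimal sketch on a synthetic phase map (the map, grid size, and the trefoil-like residual are illustrative assumptions, not the paper's data):

```python
import numpy as np

N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
mask = R2 <= 1.0

# First Zernike terms on the unit disk: piston, x-tilt, y-tilt, power (defocus).
basis = np.stack([np.ones_like(X), X, Y, 2 * R2 - 1], axis=-1)

# Synthetic measured phase: piston/tilt/power plus a residual "figure" term.
residual_true = 0.05 * (X**3 - 3 * X * Y**2)        # trefoil-like figure error
phase = 0.7 + 0.3 * X - 0.2 * Y + 0.4 * (2 * R2 - 1) + residual_true

A = basis[mask]
coeffs, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
residual = phase[mask] - A @ coeffs    # map with piston, tilt, and power removed
```

Because the trefoil term is orthogonal to piston, tilt, and power over the disk, the fit recovers the injected low-order coefficients and leaves the figure error untouched, which is exactly the property the restored-map comparison relies on.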
Remote sensing of ocean currents
NASA Technical Reports Server (NTRS)
Goldstein, R. M.; Zebker, H. A.; Barnett, T. P.
1989-01-01
A method of remotely measuring near-surface ocean currents with a synthetic aperture radar (SAR) is described. The apparatus consists of a single SAR transmitter and two receiving antennas. The phase difference between SAR image scenes obtained from the antennas forms an interferogram that is directly proportional to the surface current. The first field test of this technique against conventional measurements gives estimates of mean currents accurate to order 20 percent, that is, root-mean-square errors of 5 to 10 centimeters per second in mean flows of 27 to 56 centimeters per second. If the full potential of the method could be realized with spacecraft, then it might be possible to routinely monitor the surface currents of the world's oceans.
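The phase-to-current relation this abstract describes can be sketched with the standard along-track interferometry formula. The wavelength and effective time lag below are illustrative assumptions, not the experiment's parameters, and the exact proportionality constant depends on the transmit/receive geometry (a single transmitter with two receivers halves the effective lag):

```python
import numpy as np

lam = 0.235     # radar wavelength in m (L-band; assumed)
tau = 0.05      # effective time lag between the two antenna views, s (assumed)

def phase_to_velocity(phi):
    """Radial surface velocity (m/s) from interferogram phase (rad),
    using the common two-way relation phi = 4*pi*v*tau/lam."""
    return lam * np.asarray(phi) / (4 * np.pi * tau)

phi = np.array([0.1, 0.5, 1.0])     # example interferogram phase samples
v = phase_to_velocity(phi)
```

The linearity of this relation is what makes the interferogram "directly proportional to the surface current" as stated above.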
Water Level Prediction of Lake Cascade Mahakam Using Adaptive Neural Network Backpropagation (ANNBP)
NASA Astrophysics Data System (ADS)
Mislan; Gaffar, A. F. O.; Haviluddin; Puspitasari, N.
2018-04-01
Information on natural hazards and flood events is indispensable for prevention and mitigation. One cause of flooding is rising water in the areas around a lake, so forecasting the lake surface water level is required to anticipate floods. The purpose of this paper is to implement a computational intelligence method, Adaptive Neural Network Backpropagation (ANNBP), to forecast water levels in the Lake Cascade Mahakam. In the experiments, the performance of ANNBP, evaluated with the mean square error (MSE) and the mean absolute percentage error (MAPE), indicated that the lake water level predictions were accurate; in other words, the computational intelligence method can produce good accuracy. A hybrid, optimized computational intelligence approach is the focus of future work.
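The two accuracy measures named in this abstract are easy to state precisely; a plain NumPy sketch, with made-up water-level values rather than Lake Cascade Mahakam data:

```python
import numpy as np

def mse(actual, predicted):
    """Mean square error."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean((a - p) ** 2)

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((a - p) / a))

observed = [3.2, 3.5, 3.1, 2.9]    # water levels in m (illustrative)
forecast = [3.0, 3.6, 3.2, 2.8]
```

MSE penalizes large misses quadratically, while MAPE reports a scale-free percentage, which is why forecasting papers commonly quote both.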
Radial orbit error reduction and sea surface topography determination using satellite altimetry
NASA Technical Reports Server (NTRS)
Engelis, Theodossios
1987-01-01
A method is presented in satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is made using the linearized Lagrangian perturbation theory. Secular and second order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.
Comparative study of solar optics for paraboloidal concentrators
NASA Technical Reports Server (NTRS)
Wen, L.; Poon, P.; Carley, W.; Huang, L.
1979-01-01
Different analytical methods for computing the flux distribution on the focal plane of a paraboloidal solar concentrator are reviewed. An analytical solution in algebraic form is also derived for an idealized model. The effects resulting from using different assumptions in the definition of optical parameters used in these methodologies are compared and discussed in detail. These parameters include solar irradiance distribution (limb darkening and circumsolar), reflector surface specular spreading, surface slope error, and concentrator pointing inaccuracy. The type of computational method selected for use depends on the maturity of the design and the data available at the time the analysis is made.
Partial compensation interferometry measurement system for parameter errors of conicoid surface
NASA Astrophysics Data System (ADS)
Hao, Qun; Li, Tengfei; Hu, Yao; Wang, Shaopu; Ning, Yan; Chen, Zhuo
2018-06-01
Surface parameters, such as vertex radius of curvature and conic constant, are used to describe the shape of an aspheric surface. Surface parameter errors (SPEs) are deviations affecting the optical characteristics of an aspheric surface. Precise measurement of SPEs is critical in the evaluation of optical surfaces. In this paper, a partial compensation interferometry measurement system for SPE of a conicoid surface is proposed based on the theory of slope asphericity and the best compensation distance. The system is developed to measure the SPE-caused best compensation distance change and SPE-caused surface shape change and then calculate the SPEs with the iteration algorithm for accuracy improvement. Experimental results indicate that the average relative measurement accuracy of the proposed system could be better than 0.02% for the vertex radius of curvature error and 2% for the conic constant error.
Fixing Stellarator Magnetic Surfaces
NASA Astrophysics Data System (ADS)
Hanson, James D.
1999-11-01
Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
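The SVD step described in this abstract has a compact linear-algebra form: given a sensitivity matrix mapping correction "knobs" to island sizes at the rational surfaces, the minimal correction that cancels the error-field islands is the pseudoinverse (minimum-norm least-squares) solution. The matrix and island sizes below are illustrative stand-ins, not stellarator data:

```python
import numpy as np

# Sensitivity of 2 resonant island sizes to 4 correction-coil knobs (assumed).
A = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.3, 1.2, 0.4, 0.2]])
b = np.array([0.8, -0.5])        # island sizes produced by the error field

# Minimum-norm knob settings that cancel the islands: x = pinv(A) @ (-b).
# numpy's pinv is computed via the SVD, mirroring the approach in the text.
x = np.linalg.pinv(A) @ (-b)

residual_islands = b + A @ x     # should vanish at the rational surfaces
```

With more knobs than targeted islands the system is underdetermined, and the SVD picks the smallest correction among all that work, which is the "minimal corrections necessary to eliminate islands" criterion.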
Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)
NASA Technical Reports Server (NTRS)
Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell, M.
2006-01-01
Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.
NASA Astrophysics Data System (ADS)
Cong, Wang; Xu, Lingdi; Li, Ang
2017-10-01
Large aspheric surfaces, which deviate from the spherical, are widely used in a variety of optical systems. Compared with spherical surfaces they offer many advantages, such as improved image quality, aberration correction, an expanded field of view, an increased effective distance, and a more compact, lightweight optical system. With the rapid development of space optics in particular, space sensors require higher resolution and larger viewing angles, and aspheric surfaces are becoming essential components of such optical systems. After coarse grinding, the aspheric surface profile error is about tens of microns [1]. To achieve the final surface accuracy requirement, the aspheric surface must be corrected quickly, and high-precision testing is the basis for rapid convergence of the surface error. There are many methods of aspheric surface testing [2], including geometric ray testing, Hartmann testing, the Ronchi test, the knife-edge method, direct profilometry, and interferometry, but each has its disadvantages [6]. In recent years, measurement has become one of the main factors restricting the development of aspheric surface processing. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it has drawbacks such as large detection error and low repeatability in measuring coarse-ground aspheric surfaces, which seriously reduces convergence efficiency during aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration, and removal method based on real-time monitoring of the calibration mirror position, probe correction, selection of the measurement mode, and development of a measurement point distribution program.
Verified on real engineering examples, this method improves the nominal measurement accuracy of the industrial-grade CMM from a PV value of 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and confirms the correctness of the method. This paper also investigates the error detection and operation control method, the error calibration of the CMM, and the random error calibration of the CMM.
NASA Technical Reports Server (NTRS)
Diak, George R.; Stewart, Tod R.
1989-01-01
A method is presented for evaluating the fluxes of sensible and latent heating at the land surface, using satellite-measured surface temperature changes in a composite surface layer-mixed layer representation of the planetary boundary layer. The basic prognostic model is tested by comparison with synoptic station information at sites where surface evaporation climatology is well known. The remote sensing version of the model, using satellite-measured surface temperature changes, is then used to quantify the sharp spatial gradient in surface heating/evaporation across the central United States. An error analysis indicates that perhaps five levels of evaporation are recognizable by these methods and that the chief cause of error is the interaction of errors in the measurement of surface temperature change with errors in the assignment of surface roughness character. Finally, two new potential methods for remote sensing of the land-surface energy balance are suggested which will rely on space-borne instrumentation planned for the 1990s.
Method of surface error visualization using laser 3D projection technology
NASA Astrophysics Data System (ADS)
Guo, Lili; Li, Lijuan; Lin, Xuezhu
2017-10-01
In the manufacture of large components in the aerospace, automotive, and shipbuilding industries, important molds or stamped metal plates require precisely formed surfaces, which usually need to be verified and, if necessary, corrected and reprocessed. To make correction of the machined surface more convenient, this paper proposes a method based on a laser 3D projection system that uses terrain-style contour lines to display, directly on the measured surface, the deviation between the measured data and the theoretical CAD model. First, the machined surface is measured to obtain point cloud data, from which a triangular mesh is formed. Second, through coordinate transformation, the point cloud data are registered to the theoretical model and the three-dimensional deviation is calculated; according to the sign (positive or negative) and magnitude of the deviation, color-coded deviation bands represent the three-dimensional deviation. Then, three-dimensional contour lines are drawn to represent each deviation band, creating the projection files. Finally, the projection files are imported into the laser projector, and the contour lines are projected 1:1 onto the machined surface in the form of laser beams. By comparing the full-color 3D deviation map with the projected graphics, the operator can locate errors and make quantitative corrections to meet the machining precision requirements. The method clearly displays the trend of the machined surface deviation.
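The banding step this abstract relies on, turning signed deviations into discrete contour bands, can be sketched simply. The band width and sample deviations are illustrative assumptions, not the paper's tolerances:

```python
import numpy as np

def deviation_bands(deviation_mm, band_width_mm=0.5):
    """Assign each signed point-to-CAD deviation to an integer band index.
    Band 0 covers points within half a band of the nominal surface; positive
    bands are material excess, negative bands are material deficit."""
    return np.round(np.asarray(deviation_mm) / band_width_mm).astype(int)

dev = np.array([-1.2, -0.4, 0.1, 0.6, 1.4])   # measured-minus-CAD, mm
bands = deviation_bands(dev)
```

Each distinct band index then becomes one projected contour line, so the laser overlay reads like a topographic map of the surface error.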
Prescribing Errors Involving Medication Dosage Forms
Lesar, Timothy S
2002-01-01
CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138
The DiskMass Survey. II. Error Budget
NASA Astrophysics Data System (ADS)
Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas
2010-06-01
We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*_disk), and disk maximality (F*_disk,max ≡ V*_disk,max / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry or determine the roughness coefficient needed for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly affect the accuracy of the discharge estimate. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope, and width, together with mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with instrument error. This experiment shows how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm: the discharge error budget is almost completely dominated by them, with 81% of the variance error explained by uncertainties in bathymetry and roughness. We then show how errors in the water surface, slope, and width observations influence the accuracy of discharge estimates. There is significant sensitivity to water surface, slope, and width errors because the bathymetry and roughness estimates are themselves sensitive to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase in bathymetry and roughness errors.
Increasing the slope error above 1.5 cm/km leads to significant degradation through direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above two experiments are based on AirSWOT scenarios; in addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
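Why unknown bathymetry and roughness dominate the budget can be seen by propagating their uncertainty through Manning's equation, used here as a simple stand-in for the full Bayesian MCMC algorithm; the reach geometry and uncertainty levels are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

def manning_discharge(n, A, W, S):
    """Q = (1/n) A (A/W)^(2/3) S^(1/2); hydraulic radius ~ A/W for a wide channel."""
    return (1.0 / n) * A * (A / W) ** (2.0 / 3.0) * np.sqrt(S)

W, S = 250.0, 1e-4                 # channel width (m) and slope (illustrative)
A_true, n_true = 1000.0, 0.03      # cross-section area (m^2) and roughness

# Sample plausible uncertainty in the unobserved parameters.
A_s = rng.normal(A_true, 150.0, 10000)     # ~15% bathymetry uncertainty
n_s = rng.normal(n_true, 0.005, 10000)     # ~17% roughness uncertainty
Q = manning_discharge(n_s, A_s, W, S)
Q_true = manning_discharge(n_true, A_true, W, S)

rel_spread = np.std(Q) / Q_true            # fractional discharge uncertainty
```

Because Q scales as A^(5/3) and 1/n, moderate parameter uncertainty inflates into a large discharge spread, consistent with the 81% variance share reported above.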
NASA Astrophysics Data System (ADS)
Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.
2018-04-01
Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources of the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT), and algorithm error dominate the Rn error, with error contributions of -20, 15, 10, and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer when cloud is abundant. For NLW, because of the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. AT strongly influences the NLW error in southern China because the AT error there is large. Total precipitable water has only a weak influence on the Rn error, even though the algorithm is highly sensitive to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be reduced.
Compact Assumption Applied to the Monopole Term of Farassat's Formulations
NASA Technical Reports Server (NTRS)
Lopes, Leonard V.
2015-01-01
Farassat's formulations provide an acoustic prediction at an observer location provided a source surface, including motion and flow conditions. This paper presents compact forms for the monopole term of several of Farassat's formulations. When the physical surface is elongated, such as the case of a high aspect ratio rotorcraft blade, compact forms can be derived which are shown to be a function of the blade cross sectional area by reducing the computation from a surface integral to a line integral. The compact forms of all formulations are applied to two example cases: a short span wing with constant airfoil cross section moving at three forward flight Mach numbers and a rotor at two advance ratios. Acoustic pressure time histories and power spectral densities of monopole noise predicted from the compact forms of all the formulations at several observer positions are shown to compare very closely to the predictions from their non-compact counterparts. A study on the influence of rotorcraft blade shape on the high frequency portion of the power spectral density shows that there is a direct correlation between the aspect ratio of the airfoil and the error incurred by using the compact form. Finally, a prediction of pressure gradient from the non-compact and compact forms of the thickness term of Formulation G1A shows that using the compact forms results in a 99.6% improvement in computation time, which will be critical when noise is incorporated into a design environment.
NASA Astrophysics Data System (ADS)
Li, Ping; Jin, Tan; Guo, Zongfu; Lu, Ange; Qu, Meina
2016-10-01
High-efficiency machining of large precision optical surfaces is a challenging task for researchers and engineers worldwide. Higher form accuracy and lower subsurface damage help to significantly reduce the cycle time of the subsequent polishing process, save production cost, and provide a strong enabling technology to support large-telescope and laser-fusion projects. In this paper, employing an infeed grinding (IG) mode with a rotary table and a cup wheel, a multi-stage grinding process chain, and precision compensation technology, a Φ300 mm diameter plano mirror is ground on the Schneider Surfacing Center SCG 600, which delivers a new level of quality and accuracy when grinding such large flats. Results show a PV form error of Pt < 2 μm, surface roughness Ra < 30 nm and Rz < 180 nm, subsurface damage < 20 μm, and material removal rates of up to 383.2 mm3/s.
The mean sea surface height and geoid along the Geosat subtrack from Bermuda to Cape Cod
NASA Astrophysics Data System (ADS)
Kelly, Kathryn A.; Joyce, Terrence M.; Schubert, David M.; Caruso, Michael J.
1991-07-01
Measurements of near-surface velocity and concurrent sea level along an ascending Geosat subtrack were used to estimate the mean sea surface height and the Earth's gravitational geoid. Velocity measurements were made on three traverses of a Geosat subtrack within 10 days, using an acoustic Doppler current profiler (ADCP). A small bias in the ADCP velocity was removed by considering a mass balance for two pairs of triangles for which expendable bathythermograph measurements were also made. Because of the large curvature of the Gulf Stream, the gradient wind balance was used to estimate the cross-track component of geostrophic velocity from the ADCP vectors; this component was then integrated to obtain the sea surface height profile. The mean sea surface height was estimated as the difference between the instantaneous sea surface height from ADCP and the Geosat residual sea level, with mesoscale errors reduced by low-pass filtering. The error estimates were divided into a bias, tilt, and mesoscale residual; the bias was ignored because profiles were only determined within a constant of integration. The calculated mean sea surface height estimate agreed with an independent estimate of the mean sea surface height from Geosat, obtained by modeling the Gulf Stream as a Gaussian jet, within the expected errors in the estimates: the tilt error was 0.10 m, and the mesoscale error was 0.044 m. To minimize mesoscale errors in the estimate, the along-track geoid estimate was computed as the difference between the mean sea level from the Geosat Exact Repeat Mission and an estimate of the mean sea surface height, rather than as the difference between instantaneous profiles of sea level and sea surface height. In the critical region near the Gulf Stream the estimated error reduction using this method was about 0.07 m. Differences between the geoid estimate and a gravimetric geoid were not within the expected errors: the rms mesoscale difference was 0.24 m.
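The integration step this abstract describes, turning a cross-track velocity profile into a sea surface height profile, can be sketched with the simpler geostrophic balance f v = g dh/dx (the abstract's gradient wind balance adds a curvature correction). The Gaussian-jet velocity profile and latitude are illustrative stand-ins for the ADCP data, and the constant of integration is left at zero, as noted above:

```python
import numpy as np

g = 9.81                                       # gravity, m/s^2
f = 2 * 7.292e-5 * np.sin(np.deg2rad(38.0))    # Coriolis parameter near 38N

x = np.linspace(0, 400e3, 401)                 # along-track distance, m
v = 1.0 * np.exp(-((x - 200e3) / 50e3) ** 2)   # Gaussian jet, 1 m/s peak (assumed)

# h(x) = (f/g) * integral of v dx, via a cumulative trapezoid rule.
h = (f / g) * np.concatenate(
    ([0.0], np.cumsum((v[1:] + v[:-1]) / 2 * np.diff(x))))
ssh_drop = h[-1] - h[0]                        # total height change across the jet
```

A 1 m/s Gulf Stream-like jet about 100 km wide produces close to a meter of sea surface height change, which is why altimetric residuals and in situ velocities can be combined as the abstract describes.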
A new polishing process for large-aperture and high-precision aspheric surface
NASA Astrophysics Data System (ADS)
Nie, Xuqing; Li, Shengyi; Dai, Yifan; Song, Ci
2013-07-01
High-precision aspheric surfaces are hard to achieve because of mid-spatial-frequency (MSF) error in the finishing step. The influence of MSF error is studied through simulations and experiments. In this paper, a new polishing process based on magnetorheological finishing (MRF), smooth polishing (SP), and ion beam figuring (IBF) is proposed, and a 400 mm aperture parabolic surface is polished with it. SP is applied after rough machining to control the MSF error. In the middle finishing step, most of the low-spatial-frequency error is removed rapidly by MRF, and the MSF error is then restricted by SP; finally, IBF is used to finish the surface. The surface accuracy is improved from an initial 37.691 nm (rms, 95% aperture) to a final 4.195 nm. The results show that the new polishing process is effective for manufacturing large-aperture, high-precision aspheric surfaces.
NASA Astrophysics Data System (ADS)
Porto da Silveira, I.; Zuidema, P.; Kirtman, B. P.
2017-12-01
The rugged topography of the Andes Cordillera along with strong coastal upwelling, strong sea surface temperatures (SST) gradients and extensive but geometrically-thin stratocumulus decks turns the Southeast Pacific (SEP) into a challenge for numerical modeling. In this study, hindcast simulations using the Community Climate System Model (CCSM4) at two resolutions were analyzed to examine the importance of resolution alone, with the parameterizations otherwise left unchanged. The hindcasts were initialized on January 1 with the real-time oceanic and atmospheric reanalysis (CFSR) from 1982 to 2003, forming a 10-member ensemble. The two resolutions are (0.1o oceanic and 0.5o atmospheric) and (1.125o oceanic and 0.9o atmospheric). The SST error growth in the first six days of integration (fast errors) and those resulted from model drift (saturated errors) are assessed and compared towards evaluating the model processes responsible for the SST error growth. For the high-resolution simulation, SST fast errors are positive (+0.3oC) near the continental borders and negative offshore (-0.1oC). Both are associated with a decrease in cloud cover, a weakening of the prevailing southwesterly winds and a reduction of latent heat flux. The saturated errors possess a similar spatial pattern, but are larger and are more spatially concentrated. This suggests that the processes driving the errors already become established within the first week, in contrast to the low-resolution simulations. These, instead, manifest too-warm SSTs related to too-weak upwelling, driven by too-strong winds and Ekman pumping. Nevertheless, the ocean surface tends to be cooler in the low-resolution simulation than the high-resolution due to a higher cloud cover. Throughout the integration, saturated SST errors become positive and could reach values up to +4oC. These are accompanied by upwelling dumping and a decrease in cloud cover. 
High- and low-resolution models presented notable differences in how SST error variability drove atmospheric changes, especially because the high-resolution model is sensitive to upwelling regions. This allows the model to resolve cloud heights and establish different radiative feedbacks.
Measurement of aspheric mirror by nanoprofiler using normal vector tracing
NASA Astrophysics Data System (ADS)
Kitayama, Takao; Shiraji, Hiroki; Yamamura, Kazuya; Endo, Katsuyoshi
2016-09-01
Aspheric or free-form optics with high accuracy are necessary in many fields such as third-generation synchrotron radiation and extreme-ultraviolet lithography. The demand for measurement methods that can characterize aspheric or free-form surfaces with nanometer accuracy is therefore increasing. The purpose of our study is to develop a non-contact technology that measures aspheric or free-form surfaces directly and with high repeatability. To achieve this we have developed a three-dimensional nanoprofiler that detects the normal vectors of the sample surface. The measurement principle is based on the straightness of laser light and the accurate motion of rotational goniometers. The machine consists of four rotational stages, one translational stage and an optical head containing a quadrant photodiode (QPD) and a laser source. In this method, we make the reflected beam coincide with the incident beam by controlling the five stages, and determine the normal vectors and coordinates of the surface from the signals of the goniometers, the translational stage and the QPD. A three-dimensional figure is then obtained from the normal vectors and their coordinates by a surface reconstruction algorithm. To evaluate the performance of the machine we measured a concave aspheric mirror 150 mm in diameter and succeeded in profiling the full 150 mm diameter area. We also observed the influence of the machine's systematic errors; this influence was simulated and subtracted from the measurement result.
NASA Technical Reports Server (NTRS)
Aires, F.; Prigent, C.; Rossow, W. B.; Rothstein, M.; Hansen, James E. (Technical Monitor)
2000-01-01
The analysis of microwave observations over land to determine atmospheric and surface parameters is still limited by the complexity of the inverse problem. Neural network techniques have already proved successful as the basis of efficient retrieval methods for non-linear cases; however, first-guess estimates, which are used in variational methods to avoid problems of solution non-uniqueness or other forms of solution irregularity, have up to now not been used with neural network methods. In this study, a neural network approach is developed that uses a first guess. Conceptual bridges are established between the neural network and variational methods. The new neural method retrieves the surface skin temperature, the integrated water vapor content, the cloud liquid water path and the microwave surface emissivities between 19 and 85 GHz over land from SSM/I observations. Retrieving all these quantities in parallel improves the results for consistency reasons. A database to train the neural network is calculated with a radiative transfer model and a global collection of coincident surface and atmospheric parameters extracted from the National Centers for Environmental Prediction reanalysis, from the International Satellite Cloud Climatology Project data and from previously calculated microwave emissivity atlases. The results of the neural network inversion are very encouraging. The r.m.s. error of the surface temperature retrieval over the globe is 1.3 K in clear-sky conditions and 1.6 K in cloudy scenes. Water vapor is retrieved with an r.m.s. error of 3.8 kg/sq m in clear conditions and 4.9 kg/sq m in cloudy situations. The r.m.s. error in cloud liquid water path is 0.08 kg/sq m. The surface emissivities are retrieved with an accuracy of better than 0.008 in clear conditions and 0.010 in cloudy conditions.
Microwave land surface temperature retrieval presents a very attractive complement to infrared estimates in cloudy areas: a time record of land surface temperature will be produced.
Nonlinear analysis and dynamic compensation of stylus scanning measurement with wide range
NASA Astrophysics Data System (ADS)
Hui, Heiyang; Liu, Xiaojun; Lu, Wenlong
2011-12-01
Surface topography is an important geometrical feature of a workpiece that influences its quality and functions such as friction, wear, lubrication and sealing. Precision measurement of surface topography is fundamental for characterizing and assuring product quality. The stylus scanning technique is a widely used method for surface topography measurement, and it is also regarded as the international standard method for 2-D surface characterization. Usually surface topography, including the primary profile, waviness and roughness, can be measured precisely and efficiently by this method. However, when the stylus scanning method is used to measure curved surface topography, nonlinear error is unavoidable for two reasons: the horizontal position of the actual measured point differs from the given sampling point, and the transformation from the vertical displacement of the stylus tip to the angular displacement of the stylus arm is nonlinear. This error increases with the measuring range. In this paper, a wide-range stylus scanning measurement system based on the cylindrical grating interference principle is constructed, the origins of the nonlinear error are analyzed, an error model is established and a solution to decrease the nonlinear error is proposed, through which the error of the collected data is dynamically compensated.
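The arm-pivot nonlinearity described above can be illustrated with a minimal sketch; the arm length, angle and small-angle readout model below are assumptions for illustration, not the paper's system:

```python
import numpy as np

# Hypothetical geometry: a stylus arm of length L pivots, and the sensor
# reads the arm angle theta. A naive gauge assumes z = L * theta.
L = 20.0  # mm, assumed arm length

def naive_height(theta):
    # linear (small-angle) approximation used by an uncompensated gauge
    return L * theta

def compensated(theta):
    # exact kinematics of a pivoting arm: vertical and horizontal
    # displacement of the stylus tip for arm angle theta (radians)
    z = L * np.sin(theta)           # true vertical displacement
    dx = L * (1.0 - np.cos(theta))  # horizontal shift of the contact point
    return z, dx

theta = np.deg2rad(5.0)             # a 5 degree arm deflection
z_naive = naive_height(theta)
z_true, dx = compensated(theta)
err = z_naive - z_true              # nonlinearity error; grows with range
```

Both error sources named in the abstract appear here: `err` is the vertical nonlinearity, and `dx` is the shift of the actual measured point away from the intended sampling position.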
Spiral-bevel geometry and gear train precision
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Coy, J. J.
1983-01-01
A new approach to the determination of surface principal curvatures and directions is proposed. Direct relationships between the principal curvatures and directions of the tool surface and those of the generated gear surface are obtained. The principal curvatures and directions of the gear-tooth surface are thus obtained without using the complicated equations of these surfaces. A general theory of the train kinematical errors caused by manufacturing and assembly errors is discussed. Two methods for the determination of the train kinematical errors are worked out: (1) with the aid of a computer, and (2) with an approximate method. Results from noise and vibration measurements conducted on a helicopter transmission are used to illustrate the principles contained in the theory of kinematic errors.
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing the K validation error surfaces, from which the global minimum CV error of CS-SVM can be found. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results show not only that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids of grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also that CV-SES requires less running time.
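The final superposition step can be sketched abstractly. The per-fold error surfaces below are synthetic stand-ins (in the paper they are fitted exactly from the bi-parameter solution surface of CS-SVM); only the averaging and global-minimum search are illustrated:

```python
import numpy as np

# Superpose K per-fold validation-error surfaces over a grid of the two
# regularization parameters (C+, C-) and locate the global minimum.
rng = np.random.default_rng(0)
K = 5
c_pos = np.logspace(-2, 2, 50)   # grid over the positive-class parameter
c_neg = np.logspace(-2, 2, 50)   # grid over the negative-class parameter
CP, CN = np.meshgrid(c_pos, c_neg, indexing="ij")

def fold_error_surface():
    # synthetic bowl-shaped validation error with fold-to-fold noise
    bowl = (np.log10(CP) - 0.5) ** 2 + (np.log10(CN) + 0.3) ** 2
    return bowl + 0.1 * rng.standard_normal(CP.shape)

cv_error = sum(fold_error_surface() for _ in range(K)) / K  # superposition
i, j = np.unravel_index(np.argmin(cv_error), cv_error.shape)
best_c_pos, best_c_neg = c_pos[i], c_neg[j]  # global-minimum CV parameters
```

A plain grid search would evaluate a model at every grid cell; the point of the solution-surface construction is that the error surfaces are exact over the whole parameter plane rather than sampled.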
Evaluation of deflectometry for E-ELT optics.
NASA Astrophysics Data System (ADS)
Sironi, G.; Canestrari, R.; Civitani, M. M.
A deflectometric facility was developed at the Italian National Institute for Astrophysics (INAF-OAB) in the context of the ASTRI project to characterize free-form segments for Cherenkov optics. The test works as an inverse Ronchi test in combination with a ray-tracing code: the surface under test is illuminated by a known light pattern and the pattern warped by local surface errors is observed. Knowing the geometry of the system, it is possible to retrieve the surface normal vectors. This contribution presents an analysis of the upgrades and configuration modifications required to allow the use of deflectometry in the realization of optical components suitable for the European Extremely Large Telescope and, as a specific case, to support the manufacturing of the Multi-conjugate Adaptive Optics Relay (MAORY) module.
Linearizing feedforward/feedback attitude control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1991-01-01
An approach to attitude control theory is introduced in which a linear form is postulated for the closed-loop rotation error dynamics, and the exact control law required to realize it is then derived. The nonminimal (four-component) quaternion form is used for attitude because it is globally nonsingular, but the minimal (three-component) quaternion form is used for attitude error because it has no nonlinear constraints to prevent the rotational error dynamics from being linearized; the definition of the attitude error is based on quaternion algebra. This approach produces an attitude control law that linearizes the closed-loop rotational error dynamics exactly, without any attitude singularities, even if the control errors become large.
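A minimal sketch of the quaternion algebra involved, assuming the [w, x, y, z] Hamilton convention (the paper's exact conventions may differ): the four-component error quaternion is formed from the commanded and actual attitudes, and its three-component vector part serves as the minimal attitude error.

```python
import numpy as np

def quat_mult(p, q):
    # Hamilton product of quaternions [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def attitude_error(q_cmd, q_act):
    # four-component error quaternion, then its minimal
    # three-component vector part (no constraint to linearize around)
    q_err = quat_mult(quat_conj(q_cmd), q_act)
    if q_err[0] < 0:          # keep the short rotation
        q_err = -q_err
    return q_err[1:]          # ~ half the rotation-error vector for small errors

# actual attitude rotated 0.02 rad about the body x axis from the command
half = 0.01
q_act = np.array([np.cos(half), np.sin(half), 0.0, 0.0])
q_cmd = np.array([1.0, 0.0, 0.0, 0.0])
e = attitude_error(q_cmd, q_act)
```

The four-component representation stays singularity-free globally, while the extracted three-component error is unconstrained and can obey linear closed-loop dynamics.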
NASA Astrophysics Data System (ADS)
Gómez-Pedrero, José A.; Rodríguez-Ibañez, Diego; Alonso, José; Quirgoa, Juan A.
2015-09-01
With the advent in recent years of techniques devised for the mass production of optical components with surfaces of arbitrary form (also known as free-form surfaces), the parallel development of measuring systems adapted to this new kind of surface constitutes a real necessity for the industry. Profilometry is one of the preferred methods for assessing the quality of a surface, and is widely employed in the optical fabrication industry for the quality control of its products. In this work, we present the design, development and assembly of a new profilometer with five axes of motion, specifically suited to the measurement of medium-size (up to 150 mm in diameter) free-form optical surfaces with sub-micrometer accuracy and low measuring times. The apparatus comprises three linear motorized X, Y, Z positioners plus an additional angular positioner and a tilt positioner, employed to accurately locate the surface to be measured and the probe, which can be mechanical or optical, the optical probe being a confocal sensor based on chromatic aberration. Both probes guarantee an accuracy better than one micrometer in the determination of the surface height, thus ensuring an accuracy in the surface curvatures of the order of 0.01 D or better. An original calibration procedure based on the measurement of a precision sphere has been developed in order to correct the perpendicularity error between the axes of the linear positioners. To reduce the measuring time of the profilometer, custom electronics based on an Arduino™ controller have been designed and produced to synchronize the five motorized positioners with the optical and mechanical probes, so that a medium-size surface (around 10 cm in diameter) with a dynamic range in curvature of around 10 D can be measured in less than 300 seconds (using three axes) while keeping the height and curvature resolution at the figures mentioned above.
Nucleation theory - Is replacement free energy needed? [error analysis of capillary approximation]
NASA Technical Reports Server (NTRS)
Doremus, R. H.
1982-01-01
It has been suggested that the classical theory of nucleation of a liquid from its vapor, as developed by Volmer and Weber (1926), needs modification with a factor referred to as the replacement free energy, and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.
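For reference, the classical quantities under discussion can be written in their standard textbook form (not quoted from the paper):

```latex
% Reversible work to form a critical nucleus (Gibbs) and the resulting
% classical (Volmer-Weber type) nucleation rate.
\[
  \Delta G^{*} \;=\; \frac{16\pi\,\sigma^{3} v_{l}^{2}}
                          {3\,\bigl(k_{B}T\,\ln S\bigr)^{2}},
\qquad
  J \;=\; K \exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right).
\]
% sigma: surface tension; v_l: molecular volume of the liquid;
% S = p/p_e: supersaturation ratio; K: kinetic prefactor set by the
% collision rate of gas molecules with the nucleus surface.
```

The dispute summarized above concerns whether the prefactor K must carry an extra "replacement free energy" factor; the abstract's conclusion is that it need not.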
Development of a digital automatic control law for steep glideslope capture and flare
NASA Technical Reports Server (NTRS)
Halyo, N.
1977-01-01
A longitudinal digital guidance and control law for steep glideslopes using MLS (Microwave Landing System) data is developed for CTOL aircraft using modern estimation and control techniques. The control law covers the final approach phases of glideslope capture, glideslope tracking, and flare to touchdown for automatic landings under adverse weather conditions. The control law uses a constant gain Kalman filter to process MLS and body-mounted accelerometer data to form estimates of flight path errors and wind velocities including wind shear. The flight path error estimates and wind estimates are used for feedback in generating control surface commands. Results of a digital simulation of the aircraft dynamics and the guidance and control law are presented for various wind conditions.
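The estimator structure can be sketched as a constant-gain Kalman filter fusing a noisy MLS-like position measurement with accelerometer propagation. All states, gains and noise levels below are illustrative assumptions, not the paper's design values:

```python
import numpy as np

# Minimal constant-gain Kalman filter: track vertical path deviation
# (position, rate) from a noisy position fix plus an accelerometer input.
dt = 0.05                               # sample period (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])   # position/rate propagation model
B = np.array([0.5 * dt**2, dt])         # accelerometer input mapping
H = np.array([1.0, 0.0])                # MLS-like sensor measures position only
K = np.array([0.12, 0.3])               # fixed (precomputed) Kalman gain

def step(x, accel, z):
    x_pred = F @ x + B * accel            # propagate with accelerometer
    return x_pred + K * (z - H @ x_pred)  # correct with position measurement

rng = np.random.default_rng(1)
x = np.zeros(2)                         # estimate starts at zero
true_pos, true_vel = 5.0, -0.5          # true path error and rate
for _ in range(400):
    true_pos += true_vel * dt
    z = true_pos + 0.3 * rng.standard_normal()  # noisy position fix
    x = step(x, 0.0, z)                 # estimate converges to the truth
```

Because the gain is fixed rather than recomputed each step, the filter is cheap enough for a flight computer; the estimated states would feed the control-surface command generation described above.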
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis while accounting for the uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, in which an explicit performance function for FORM is developed from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge at Sumela Monastery, Turkey. The accuracy with which the developed performance function represents the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of a 24% error in accuracy.
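Once a response surface supplies an explicit performance function, the FORM step can be sketched with the standard Hasofer-Lind-Rackwitz-Fiessler iteration. The linear performance function and variable statistics below are illustrative placeholders, not the Sumela wedge model:

```python
import numpy as np
from math import erf, sqrt

# Illustrative statistics for two independent normal input variables
mu = np.array([35.0, 100.0])    # means (e.g. cohesion-like, friction-like)
sd = np.array([5.0, 15.0])      # standard deviations

def g(x):
    # explicit performance function (failure when g < 0), as would be
    # delivered by a response-surface fit to numerical simulations
    return 2.0 * x[0] + 0.5 * x[1] - 100.0

def form_beta(g, mu, sd, iters=50):
    u = np.zeros_like(mu)       # start at the mean in standard-normal space
    for _ in range(iters):
        x = mu + sd * u
        eps = 1e-6
        # numerical gradient of g with respect to the standard variables u
        grad = np.array([
            (g(x + np.eye(len(x))[i] * sd[i] * eps) - g(x)) / eps
            for i in range(len(x))
        ])
        # HL-RF update: project onto the linearized limit state g = 0
        u = (grad @ u - g(x)) / (grad @ grad) * grad
    return np.linalg.norm(u)    # reliability index beta

beta = form_beta(g, mu, sd)
pf = 0.5 * (1 - erf(beta / sqrt(2)))   # Pf = Phi(-beta)
```

The efficiency claim in the abstract comes from replacing thousands of MCS samples of the numerical model with a handful of these cheap iterations on the fitted performance function.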
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, H. -Y.; Klein, S. A.; Xie, S.
Many weather forecasting and climate models simulate a warm surface air temperature (T2m) bias over mid-latitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multi-model intercomparison project (CAUSES: Clouds Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T2m bias using a short-term hindcast approach, with observations mainly from the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site during the period of April to August 2011. The present study examines the contribution of surface energy budget errors to the bias. All participating models simulate higher net shortwave and longwave radiative fluxes at the surface, but there is no consistency in the signs of the biases in latent and sensible heat fluxes over the central U.S. and ARM SGP. Nevertheless, biases in net shortwave and downward longwave fluxes, as well as in surface evaporative fraction (EF), are the main contributors to the T2m bias. Radiation biases are largely affected by cloud simulations, while EF is affected by soil moisture, modulated by seasonally accumulated precipitation and evaporation. An approximate equation is derived to further quantify the magnitudes of the radiation and EF contributions to the T2m bias. Our analysis suggests that radiation errors are always an important source of T2m error for long-term climate runs, with EF errors of equal or lesser importance. However, for the short-term hindcasts, EF errors are more important provided a model has a substantial EF bias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muslimov, A. E., E-mail: amuslimov@mail.ru; Butashin, A. V.; Kanevsky, V. M.
The (001) cleavage surface of a vanadium pentoxide (V{sub 2}O{sub 5}) crystal has been studied by scanning tunneling microscopy (STM). It is shown that the surface is not reconstructed; the STM image allows the geometric lattice parameters to be determined with high accuracy. The nanostructure formed on the (001) cleavage surface of the crystal consists of atomically smooth steps with a height that is a multiple of the unit-cell parameter c = 4.37 Å. The V{sub 2}O{sub 5} crystal cleavages can be used as references in the calibration of a scanning tunneling microscope under atmospheric conditions, both along the (x, y) surface and normal to the sample surface (along the z axis). It is found that the terrace surface is not perfectly atomically smooth; its roughness is estimated to be ~0.5 Å. This circumstance may introduce an additional error into the microscope calibration along the z coordinate.
Influence of Layup Sequence on the Surface Accuracy of Carbon Fiber Composite Space Mirrors
NASA Astrophysics Data System (ADS)
Yang, Zhiyong; Liu, Qingnian; Zhang, Boming; Xu, Liang; Tang, Zhanwen; Xie, Yongjie
2018-04-01
The layup sequence is directly related to the stiffness and deformation resistance of a composite space mirror, and errors caused by the layup sequence can evidently affect the surface precision of composite mirrors. Variation of the layup sequence, with the total thickness of the composite space mirror held constant, changes the surface form of the mirror, which is the focus of our study. In our research, the influence of varied quasi-isotropic stacking sequences and random angular deviations on the surface accuracy of composite space mirrors was investigated through finite element analyses (FEA). We established a simulation model of the studied concave mirror with a 500 mm diameter, and the essential factors of layup sequences and random angular deviations on different plies were discussed. Five guiding findings are described in this study. Increasing the total number of plies, optimizing the stacking sequence and keeping the consistency of ply alignment during ply placement are effective ways to improve the surface accuracy of a composite mirror.
HD 140283: A STAR IN THE SOLAR NEIGHBORHOOD THAT FORMED SHORTLY AFTER THE BIG BANG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, Howard E.; Nelan, Edmund P.; VandenBerg, Don A.
HD 140283 is an extremely metal-deficient and high-velocity subgiant in the solar neighborhood, having a location in the Hertzsprung-Russell diagram where absolute magnitude is most sensitive to stellar age. Because it is bright, nearby, unreddened, and has a well-determined chemical composition, this star avoids most of the issues involved in age determinations for globular clusters. Using the Fine Guidance Sensors on the Hubble Space Telescope, we have measured a trigonometric parallax of 17.15 ± 0.14 mas for HD 140283, with an error one-fifth of that determined by the Hipparcos mission. Employing modern theoretical isochrones, which include effects of helium diffusion, revised nuclear reaction rates, and enhanced oxygen abundance, we use the precise distance to infer an age of 14.46 ± 0.31 Gyr. The quoted error includes only the uncertainty in the parallax, and is for adopted surface oxygen and iron abundances of [O/H] = -1.67 and [Fe/H] = -2.40. Uncertainties in the stellar parameters and chemical composition, especially the oxygen content, now contribute more to the error budget for the age of HD 140283 than does its distance, increasing the total uncertainty to about ±0.8 Gyr. Within the errors, the age of HD 140283 does not conflict with the age of the Universe, 13.77 ± 0.06 Gyr, based on the microwave background and Hubble constant, but it must have formed soon after the big bang.
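The quoted parallax fixes the star's distance directly; a quick first-order propagation using d = 1/p (values taken from the abstract):

```python
# Distance implied by the HST parallax of HD 140283,
# with first-order error propagation for d = 1/p.
p_mas = 17.15                      # parallax, milliarcseconds
dp_mas = 0.14                      # 1-sigma parallax uncertainty
d_pc = 1000.0 / p_mas              # distance in parsecs (p in mas)
dd_pc = d_pc * dp_mas / p_mas      # sigma_d ~ d * sigma_p / p
# roughly 58.3 +/- 0.5 pc
```

The ~0.8% distance uncertainty is what allows the parallax term in the age error budget to drop below the abundance terms.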
NASA Astrophysics Data System (ADS)
Korwin-Edson, Michelle Lynn
Previous works have shown that cells proliferate differently depending on the chemistry of the glass on which they are growing. Since proteins form the bonds between cells and glass, the hypothesis of this study is that proteins can distinguish between surface chemical variations of glass. This hypothesis was examined through the use of various silica forms, a few select proteins, four surface treatment procedures, and a variety of characterization techniques. The silica forms included amorphous slides, cane, fiber, microspheres, fumed silica and quartz crystal terminals. The proteins selected were human serum albumin, mouse Immunoglobulin G (IgG), streptavidin, antimouse IgG, and biotin. The surface treatments used to bring about chemical variation on the silica surface were HF acid etching, ethanol cleaning, water plasma treatments, and 1000°C heat treatments. The characterization techniques encompassed both traditional materials techniques and biological methods: atomic force microscopy (AFM), chemical force microscopy (CFM), glancing incidence X-ray analysis (GIXA), fluorescence spectrometry, polyacrylamide gel electrophoresis (SDS-PAGE), and the bicinchoninic acid (BCA) assay. The main goal of this project was to determine the feasibility of these techniques for using proteins as glass surface probes. Proteins were adsorbed to all of the various forms and their binding ability was studied either by stripping off the proteins and quantifying them, or by deductive reasoning through the use of "depleted" protein solutions. Fluorimetry and the BCA assay both utilized the depleted solutions, but the high error associated with this protocol was prohibitive. SDS-PAGE with streptavidin was very difficult due to staining problems; the IgG proteins, however, could be quantified with some success. 
GIXA showed that the protein layer thickness is monolayer in nature, which agrees well with the AFM fluid tapping data on protein height, but in addition showed features on the order of ten-protein agglomerations. CFM is by far the most promising technique for utilizing proteins as surface probes. Functionalized tips of -COOH, streptavidin and -CH3 are able to discern between surface treatments, but not between forms. A general conclusion is that adhesion forces are greatest for -COOH, then streptavidin, and least for -CH3.
NASA Astrophysics Data System (ADS)
Stimson, J.; Docker, P.; Ward, M.; Kay, J.; Chapon, L.; Diaz-Moreno, S.
2017-12-01
The work detailed here describes how a novel approach has been applied to overcome the challenging task of cryo-cooling the first monochromator crystals of many of the world's synchrotrons' more demanding beam lines. The beam line configuration investigated in this work requires the crystal to diffract 15 W of 4-34 keV X-rays and to dissipate the additional 485 W of redundant X-ray power without significant deformation of the crystal surface. In this case the beam footprint is 25 mm by 25 mm on a crystal surface measuring 38 mm by 25 mm, which must maintain a radius of curvature of more than 50 km. Currently the crystal is clamped between two copper heat exchangers through which LN2 flows. Two conditions must be met simultaneously in this scenario: the crystal needs to be clamped strongly enough to prevent thermal deformation from developing, whilst being loose enough not to mechanically deform the diffracting surface. An additional source of error arises because the configuration is assembled by hand, introducing human error into the assembly procedure. The new approach explores making the first crystal cylindrical, with a sleeve heat exchanger. By manufacturing the copper sleeve to be slightly larger than the silicon crystal at room temperature, the sleeve can be slid over the silicon and, when cooled, will form an interference fit. This has the additional advantage that the crystal and its heat exchanger become a single entity and will always perform the same way each time it is used, eliminating error due to assembly. Various fits have been explored to investigate the associated crystal surface deformations under such a regime.
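The sizing logic of such a fit can be sketched with a back-of-envelope calculation. The integrated contractions and the crystal diameter below are assumed textbook-order values for copper and silicon between room temperature and 77 K, not figures from the paper:

```python
# Why a copper sleeve grips a silicon crystal on cooling: copper
# contracts far more than silicon between 293 K and 77 K.
d_crystal = 25.0e-3          # m, assumed crystal diameter at room temp
shrink_cu = 0.0030           # assumed fractional contraction of copper, 293->77 K
shrink_si = 0.0002           # assumed fractional contraction of silicon, 293->77 K

# the sleeve bore closes on the crystal by the differential contraction,
# so a warm assembly gap smaller than this still yields a cold grip
differential = (shrink_cu - shrink_si) * d_crystal   # ~70 micrometres
clearance_warm = 0.5 * differential                  # chosen assembly slide fit
interference_cold = differential - clearance_warm    # residual grip when cold
```

The "various fits" explored in the paper amount to trading `clearance_warm` (ease of assembly) against `interference_cold` (clamping stress on the diffracting surface).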
NASA Astrophysics Data System (ADS)
Rajesh, P. V.; Pattnaik, S.; Mohanty, U. C.; Rai, D.; Baisya, H.; Pandey, P. C.
2017-12-01
Monsoon depressions (MDs) contribute a large fraction of the total rainfall during the Indian summer monsoon season. In this study, the impact of a high-resolution land state is addressed by assessing the evolution of inland-moving depressions formed over the Bay of Bengal using a mesoscale modeling system. The improved land state is generated using the High Resolution Land Data Assimilation System with the Noah-MP land-surface model. Verification of soil moisture against Soil Moisture and Ocean Salinity (SMOS) data and of soil temperature against tower observations demonstrates promising results. Incorporating the high-resolution land state yielded the lowest root mean squared errors and higher correlation coefficients in the surface and mid-tropospheric parameters. Rainfall forecasts reveal that the simulations are spatially and quantitatively in accordance with observations and provide better skill scores. The improved land surface characteristics have brought about a realistic evolution of surface and mid-tropospheric parameters, vorticity and moist static energy, facilitating accurate MD dynamics in the model. A composite moisture budget analysis reveals that surface evaporation is negligible compared to the moisture flux convergence of water vapor, which supplies moisture to the MDs over land. The high temporal correlation between rainfall and moisture convergence suggests that a realistic representation of the land state helps restructure the moisture inflow into the system through a rainfall-moisture convergence feedback.
Quasi-static shape adjustment of a 15 meter diameter space antenna
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Herstrom, Catherine L.; Edighoffer, Harold H.
1987-01-01
A 15 meter diameter Hoop-Column antenna has been analyzed and tested to study shape adjustment of the reflector surface. The Hoop-Column antenna concept employs pretensioned cables and mesh to produce a paraboloidal reflector surface. Fabrication errors and thermal distortions may significantly reduce surface accuracy and consequently degrade electromagnetic performance. Thus, the ability to adjust the surface shape is desirable. The shape adjustment algorithm consisted of finite element and least squares error analyses to minimize the surface distortions. Experimental results verified the analysis. Application of the procedure resulted in a reduction of surface error by 38 percent. Quasi-static shape adjustment has the potential for on-orbit compensation for a variety of surface shape distortions.
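The least-squares adjustment step can be sketched as follows. The influence matrix and distortion data are synthetic, standing in for the antenna's finite element model and measured surface errors:

```python
import numpy as np

# Given an influence matrix S (surface displacement at each measurement
# point per unit adjustment of each control cable, e.g. from a finite
# element model) and a measured distortion e, choose cable adjustments a
# minimizing the residual ||e + S @ a||.
rng = np.random.default_rng(2)
n_points, n_cables = 200, 12
S = rng.standard_normal((n_points, n_cables))           # influence coefficients
a_true = rng.standard_normal(n_cables)                  # hidden distortion source
e = -S @ a_true + 0.05 * rng.standard_normal(n_points)  # measured surface error

a, *_ = np.linalg.lstsq(S, -e, rcond=None)              # optimal adjustments
rms_before = np.sqrt(np.mean(e**2))
rms_after = np.sqrt(np.mean((e + S @ a) ** 2))          # residual after adjustment
```

With only 12 actuators against 200 measurement points the correction cannot be perfect, which is consistent with the partial (38 percent in the experiment above) error reduction reported.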
Development of a 3-D Pen Input Device
2008-09-01
…navigation frame of a unistroke, which can be written on any surface or in the air, while correcting integration errors from the measurements of the IMU (Inertial Measurement Unit)…
Multiresolution molecular mechanics: Surface effects in nanoscale materials
NASA Astrophysics Data System (ADS)
Yang, Qingcheng; To, Albert C.
2017-05-01
Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015), [57]) is applied to capture surface effects in nanosized structures by designing a surface summation rule SRS within the framework of MMM. Combined with the previously proposed bulk summation rule SRB, the MMM summation rule SRMMM is completed. SRS and SRB are consistently formulated within SRMMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SRMMM lies in the fact that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, positions and weights of the quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface region differs from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonds. As such, SRS and SRB are employed for the surface and bulk domains, respectively. Two- and three-dimensional numerical examples using 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes, respectively, are employed to verify and validate the proposed approach. It is shown that MMM with SRMMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SRMMM with high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem that considers surface effects. In addition, the sampling error introduced with SRMMM, which is analogous to the numerical integration error with quadrature rules in FEM, is very small.
Differential Geometry Applied To Least-Square Error Surface Approximations
NASA Astrophysics Data System (ADS)
Bolle, Ruud M.; Sabbah, Daniel
1987-08-01
This paper focuses on the extraction of the parameters of individual surfaces from noisy depth maps. The basis for this is least-squares-error polynomial approximations to the range data and the curvature properties that can be computed from these approximations. The curvature properties are derived using the invariants of the Weingarten map evaluated at the origin of local coordinate systems centered at the range points. The Weingarten map is a well-known concept in differential geometry; a brief treatment of the differential geometry pertinent to surface curvature is given. We use the curvature properties of the approximations to extract certain surface parameters. We then show that curvature properties alone are not enough to obtain all the parameters of the surfaces; higher-order properties (information about the change of curvature) are needed to obtain full parametric descriptions. This surface parameter estimation problem arises in the design of a vision system to recognize 3D objects whose surfaces are composed of planar patches and patches of quadrics of revolution (quadrics that are also surfaces of revolution). A significant portion of man-made objects can be modeled using these surfaces. The actual process of recognition and parameter extraction is framed as a set of stacked parameter space transforms. The transforms are "stacked" in the sense that any one transform computes only a partial geometric description that forms the input to the next transform. Readers interested in the organization and control of the recognition and parameter extraction process are referred to [Sabbah86]; this paper briefly touches upon the organization but concentrates mainly on the geometrical aspects of parameter extraction.
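The curvature computation at the origin of a local frame can be sketched for the simplest case, zero slope at the origin, where the Weingarten map reduces to the Hessian of the fitted quadratic. The data below are synthetic:

```python
import numpy as np

# Fit a least-squares quadratic to local depth data in a frame centered
# at the point of interest, then read the principal curvatures from the
# shape operator (Weingarten map) at the origin.
xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
x, y = xs.ravel(), ys.ravel()
z = 0.5 * (0.8 * x**2 + 0.2 * y**2)     # patch with principal curvatures 0.8, 0.2

# least-squares fit z ~ a x^2 + b x y + c y^2
A = np.column_stack([x**2, x * y, y**2])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

# with zero gradient at the origin, the Weingarten map is the Hessian
W = np.array([[2 * a, b], [b, 2 * c]])
k1, k2 = np.sort(np.linalg.eigvalsh(W))  # principal curvatures (invariants)
```

The invariants of W (its eigenvalues, trace and determinant, i.e. principal, mean and Gaussian curvature) are exactly the quantities the paper uses to separate planar patches (both curvatures zero) from quadrics of revolution.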
Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas
2016-09-01
An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model, we combine the best features of each source of information: the complete spatial and temporal coverage provided by models with the closer representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. Producing an analysis requires knowledge of the observation and model errors, as well as their spatial correlations. This paper is devoted to the development of methods for estimating these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact-support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood (ML) method, and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and estimate both error variances and the correlation length where both are non-uniform. We show that a local version of the HL method can accurately capture the error variances and correlation length at each observation site, provided that the spatial variability is not too strong. However, the operational objective analysis requires only a single, globally valid correlation length. We examine whether any statistic of the local HL correlation lengths could serve as a useful estimate, or whether other global estimation methods such as the global HL, ML, or [Formula: see text] methods should be used. We find, in both 1D simulation and with real data, that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. 
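A minimal sketch (not the operational Canadian system) of the Hollingsworth-Lönnberg idea: bin innovation (observation-minus-background) covariances by station separation, fit an assumed exponential correlation model by log-linear least squares, and read the background error variance off the extrapolated zero-separation intercept; the gap between that intercept and the total innovation variance is attributed to observation error. The exponential form and all names are assumptions for illustration:

```python
import numpy as np

def hl_fit(r, cov, innov_var):
    # r: bin separations (r > 0); cov: binned innovation covariances at those
    # separations; innov_var: total innovation variance at zero separation.
    # Fit cov(r) = sigma_b^2 * exp(-r / L) by log-linear least squares.
    y = np.log(cov)
    A = np.column_stack([np.ones_like(r), -r])
    (ln_sb2, inv_L), *_ = np.linalg.lstsq(A, y, rcond=None)
    sigma_b2 = np.exp(ln_sb2)      # background error variance (intercept)
    L = 1.0 / inv_L                # correlation length-scale
    sigma_o2 = innov_var - sigma_b2  # intercept gap = observation error variance
    return sigma_b2, sigma_o2, L
```

In practice the fit is done per station (local HL) or over all pairs (global HL); the paper's point is that only the latter kind of single, globally valid length-scale can feed the operational analysis.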
This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants that are obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of the air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component to the analysis scheme.
Wei, Xiaobo; Liu, Mengjiao; Ding, Yun; Li, Qilin; Cheng, Changhai; Zong, Xian; Yin, Wenming; Chen, Jie; Gu, Wendong
2018-05-08
Breast-conserving surgery (BCS) plus postoperative radiotherapy has become the standard treatment for early-stage breast cancer. The aim of this study was to compare the setup accuracy of optical surface imaging by the Sentinel system with the cone-beam computerized tomography (CBCT) imaging currently used in our clinic for patients who received BCS. Two optical surface scans were acquired, before and immediately after couch movement correction. The correlation between the setup errors determined by the initial optical surface scan and by CBCT was analyzed. The deviation of the second optical surface scan from the reference planning CT was considered an estimate of the residual errors of the new method for patient setup correction. The consequences in terms of the necessary planning target volume (PTV) margins were evaluated for treatment sessions without setup correction applied. We analyzed 145 scans in 27 patients treated for early-stage breast cancer. The setup errors of skin-marker-based patient alignment determined by optical surface scan and by CBCT were correlated, and the residual setup errors determined by the optical surface scan after couch movement correction were reduced. Optical surface imaging provides a convenient method for improving setup accuracy for breast cancer patients without additional imaging dose.
NASA Technical Reports Server (NTRS)
Li, Zhanqing; Whitlock, Charles H.; Charlock, Thomas P.
1995-01-01
Global sets of surface radiation budget (SRB) data have been obtained from satellite programs. These satellite-based estimates need validation against ground-truth observations. This study validates the estimates of monthly mean surface insolation contained in two satellite-based SRB datasets against surface measurements made at worldwide radiation stations from the Global Energy Balance Archive (GEBA). One dataset was developed from the Earth Radiation Budget Experiment (ERBE) using the algorithm of Li et al. (ERBE/SRB), and the other from the International Satellite Cloud Climatology Project (ISCCP) using the algorithms of Pinker and Laszlo and of Staylor (GEWEX/SRB). Since the ERBE/SRB data contain the surface net solar radiation only, the values of surface insolation were derived by making use of the surface albedo data contained in the GEWEX/SRB product. The resulting surface insolation has a bias error near zero and a root-mean-square error (RMSE) between 8 and 28 W/sq m. The RMSE is mainly associated with poor representation of surface observations within a grid cell. When the number of surface observations is sufficient, the random error is estimated to be about 5 W/sq m with present satellite-based estimates. In addition to demonstrating the strength of the retrieval method, the small random error demonstrates how well ERBE derives the monthly mean fluxes at the top of the atmosphere (TOA). A larger scatter is found for the comparison of transmissivity than for that of insolation. Month-to-month comparison of insolation reveals a weak seasonal trend in bias error with an amplitude of about 3 W/sq m. For the insolation data from the GEWEX/SRB, larger bias errors of 5-10 W/sq m are evident, with stronger seasonal trends and almost identical RMSEs.
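The bias and RMSE statistics used in validations like this one can be computed as follows (a generic sketch; names are illustrative, and the inputs would be collocated monthly means in W/sq m):

```python
import numpy as np

def bias_rmse(satellite, ground):
    # Bias: mean of (satellite - ground); RMSE: root of the mean squared difference.
    d = np.asarray(satellite, dtype=float) - np.asarray(ground, dtype=float)
    return d.mean(), np.sqrt((d**2).mean())
```

A bias near zero with a nonzero RMSE, as reported above, indicates scatter without systematic offset.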
Generation of a crowned pinion tooth surface by a surface of revolution
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Zhang, J.; Handschuh, R. F.
1988-01-01
A method of generating crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc sec for the numerical examples). Tooth contact analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and determine the bearing contact.
Pillay, Sara B.; Humphries, Colin J.; Gross, William L.; Graves, William W.; Book, Diane S.
2016-01-01
Patients with surface dyslexia have disproportionate difficulty pronouncing irregularly spelled words (e.g. pint), suggesting impaired use of lexical-semantic information to mediate phonological retrieval. Patients with this deficit also make characteristic ‘regularization’ errors, in which an irregularly spelled word is mispronounced by incorrect application of regular spelling-sound correspondences (e.g. reading plaid as ‘played’), indicating over-reliance on sublexical grapheme–phoneme correspondences. We examined the neuroanatomical correlates of this specific error type in 45 patients with left hemisphere chronic stroke. Voxel-based lesion–symptom mapping showed a strong positive relationship between the rate of regularization errors and damage to the posterior half of the left middle temporal gyrus. Semantic deficits on tests of single-word comprehension were generally mild, and these deficits were not correlated with the rate of regularization errors. Furthermore, the deep occipital-temporal white matter locus associated with these mild semantic deficits was distinct from the lesion site associated with regularization errors. Thus, in contrast to patients with surface dyslexia and semantic impairment from anterior temporal lobe degeneration, surface errors in our patients were not related to a semantic deficit. We propose that these patients have an inability to link intact semantic representations with phonological representations. The data provide novel evidence for a post-semantic mechanism mediating the production of surface errors, and suggest that the posterior middle temporal gyrus may compute an intermediate representation linking semantics with phonology. PMID:26966139
2015-02-01
Investigating Surface Bias Errors in the Weather Research and Forecasting (WRF) Model Using a Geographic Information System (GIS), by Jeffrey A Smith, Theresa A Foley, John W Raby, and Brian Reen. ARL-TR-7212, US Army Research Laboratory, February 2015.
On the effect of surface emissivity on temperature retrievals. [for meteorology
NASA Technical Reports Server (NTRS)
Kornfield, J.; Susskind, J.
1977-01-01
The paper is concerned with errors in temperature retrieval caused by incorrectly assuming that surface emissivity is equal to unity. An error equation that applies to present-day atmospheric temperature sounders is derived, and the bias errors resulting from various emissivity discrepancies are calculated. A model of downward flux is presented and used to determine the effective downward flux. In the 3.7-micron region of the spectrum, emissivities of 0.6 to 0.9 have been observed over land. At a surface temperature of 290 K, if the true emissivity is 0.6 and unit emissivity is assumed, the error would be approximately 11 C. In the 11-micron region, the maximum deviation of the surface emissivity from unity was 0.05.
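The quoted error of roughly 11 C can be reproduced from the Planck function alone, neglecting the reflected downward flux (a simplification relative to the paper's full error equation; function names are illustrative):

```python
import numpy as np

C1 = 1.191042e8   # first radiation constant, W m^-2 sr^-1 um^4
C2 = 1.4387752e4  # second radiation constant, um K

def planck(lam_um, T):
    # Spectral radiance of a blackbody at wavelength lam_um (microns)
    return C1 / (lam_um**5 * (np.exp(C2 / (lam_um * T)) - 1.0))

def inv_planck(lam_um, L):
    # Brightness temperature corresponding to radiance L
    return C2 / (lam_um * np.log(C1 / (lam_um**5 * L) + 1.0))

def retrieval_bias(lam_um, T_true, emissivity):
    # Radiance actually emitted (downward-flux reflection neglected)
    L = emissivity * planck(lam_um, T_true)
    # Temperature retrieved under the (wrong) assumption emissivity = 1
    return inv_planck(lam_um, L) - T_true
```

With these constants, `retrieval_bias(3.7, 290.0, 0.6)` gives about -10.6 K, consistent with the approximately 11 C error cited for the 3.7-micron region.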
Li, Zexiao; Liu, Xianlei; Fang, Fengzhou; Zhang, Xiaodong; Zeng, Zhen; Zhu, Linlin; Yan, Ning
2018-03-19
Multi-reflective imaging systems find wide applications in optical imaging and space detection. However, it is difficult to adjust their freeform mirrors with the high accuracy needed to guarantee the optical function. Motivated by this, an alignment-free manufacturing approach is proposed to machine the optical system. A direct, optical-performance-guided manufacturing route is established without measuring the form error of the freeform optics. An analytical model is established to investigate the effects of machine errors, serving error identification and compensation in machining. Based on the integrally manufactured system, an ingenious self-designed testing configuration is constructed to evaluate the optical performance by directly measuring the wavefront aberration. Experiments are carried out to manufacture a three-mirror anastigmat; the surface topographical details and optical performance show agreement with the design expectation. The final system works as an off-axis infrared imaging system. The results validate the feasibility of the proposed method for achieving excellent optical performance.
Developing Performance Estimates for High Precision Astrometry with TMT
NASA Astrophysics Data System (ADS)
Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana
2013-12-01
Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example is high-precision astrometry, which has requirements in the range of 10 to 50 micro-arcseconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, as well as on the calibration procedures, observing sequences, and data analysis techniques. This paper summarizes our efforts to develop a top-down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into five categories: reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors, and focal-plane measurement errors. Results are developed in parametric form whenever possible. However, almost every term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, and what the time scales of interest are. Thus, it is not possible to develop a single error budget that applies to all science cases, and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements of tens of micro-arcseconds for differential astrometry in certain science cases. While no showstoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometric precision even further.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that the root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. The RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating the RMS error in satellite rainfall estimates is suggested, based on quantities that can be computed directly from the satellite data.
Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.
2014-01-01
Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. 
Results: The phantom simulation studies demonstrated that combined landmark- and surface-based registration improved upon landmark-only registration, provided the noise in the surface points is not excessively high. Increased variability in the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve the landmark registration even at high noise levels on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability in the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used both to predict registration error and to assess which inputs have the largest effect on registration accuracy. PMID:24506630
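The landmark step of such a combined registration is typically a closed-form rigid Procrustes (Kabsch) fit; the sketch below shows that step and a target registration error (TRE) metric, with the surface-based refinement stage (e.g. ICP) omitted. Names are illustrative, not the study's actual implementation:

```python
import numpy as np

def landmark_register(src, dst):
    # Rigid (rotation + translation) least-squares fit of src onto dst,
    # via the Kabsch/SVD algorithm on centered point sets.
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

def target_registration_error(R, t, targets_src, targets_dst):
    # RMS distance between mapped source targets and their true positions
    mapped = targets_src @ R.T + t
    return np.sqrt(np.mean(np.sum((mapped - targets_dst) ** 2, axis=1)))
```

Simulating noise on the fiducials, targets, and surface points, as in the phantom studies above, amounts to perturbing `src`, `dst`, and the refinement inputs and observing how the TRE responds.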
Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...
NASA Astrophysics Data System (ADS)
Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng
2016-10-01
The magnetorheological finishing (MRF) process, based on the dwell-time method with constant normal spacing for flexible polishing, introduces normal contour errors when fine-polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics in MRF. Based on continuously scanning the normal spacing between the workpiece and a laser range finder, a novel method is put forward to measure the normal contour errors along the machining track while polishing a complex surface. The normal contour errors were measured dynamically, from which the workpiece's clamping precision, the multi-axis machining NC program, and the dynamic performance of the MRF machine were verified and checked for the MRF process. A unit for on-machine measurement of the normal contour errors of complex surfaces was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was kept below 10 µm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology described in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used in national large-scale optics engineering projects for processing ultra-precision optical parts.
Implement a Sub-grid Turbulent Orographic Form Drag in WRF and its application to Tibetan Plateau
NASA Astrophysics Data System (ADS)
Zhou, X.; Yang, K.; Wang, Y.; Huang, B.
2017-12-01
Sub-grid-scale orographic variation exerts turbulent form drag on atmospheric flows. The Weather Research and Forecasting model (WRF) includes a turbulent orographic form drag (TOFD) scheme that adds the stress to the surface layer only. In this study, another TOFD scheme has been incorporated into WRF3.7, which exerts an exponentially decaying drag on each model layer. To investigate the effect of the new scheme, WRF with the old and the new scheme was used to simulate the climate over the complex terrain of the Tibetan Plateau. The two schemes were evaluated in terms of the direct impact (on wind) and the indirect impact (on air temperature, surface pressure, and precipitation). In both winter and summer, the new TOFD scheme reduces the mean bias in the surface wind and clearly reduces the root-mean-square errors (RMSEs) in comparison with station measurements (Figure 1). In winter, the 2-m air temperature and surface pressure are also improved (Figure 2), owing to greater northward transport of warm air across the southern boundary of the Tibetan Plateau. The 2-m air temperature is hardly improved in summer, but the precipitation improvement is more obvious, with reduced mean bias and RMSEs. This is due to the weakening, under the new scheme, of the low-level water vapor flux crossing the Himalayan Mountains from South Asia.
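The layer-distributed drag idea can be sketched as follows: a surface form-drag stress is split across model layers with exponentially decaying weights, normalized so the column-integrated stress is conserved. The e-folding height and all names are assumptions for illustration, not the scheme's actual constants:

```python
import numpy as np

def distribute_tofd(tau_surface, z_layers, z_scale=1500.0):
    # tau_surface: total orographic form-drag stress (N m^-2)
    # z_layers: heights of model layer midpoints above the surface (m)
    # z_scale: assumed e-folding height of the drag (m)
    w = np.exp(-np.asarray(z_layers, dtype=float) / z_scale)
    return tau_surface * w / w.sum()  # per-layer stress, sums to tau_surface
```

Compared with depositing all of `tau_surface` in the lowest layer (the old scheme's behavior as described above), this spreads the deceleration through the boundary layer.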
Synthesis of Arbitrary Quantum Circuits to Topological Assembly: Systematic, Online and Compact.
Paler, Alexandru; Fowler, Austin G; Wille, Robert
2017-09-05
It is challenging to transform an arbitrary quantum circuit into a form protected by surface code quantum error correcting codes (a variant of topological quantum error correction), especially if the goal is to minimise overhead. One of the issues is the efficient placement of magic-state distillation sub-circuits, so-called distillation boxes, in the space-time volume that abstracts the computation's required resources. This work presents a general, systematic, online method for the synthesis of such circuits. Distillation box placement is controlled by so-called schedulers. The work introduces a greedy scheduler generating compact box placements. The implemented software, whose source code is available at www.github.com/alexandrupaler/tqec, is used to illustrate and discuss synthesis examples. Synthesis and optimisation improvements are proposed.
NASA Astrophysics Data System (ADS)
Huang, Wei-Ren; Huang, Shih-Pu; Tsai, Tsung-Yueh; Lin, Yi-Jyun; Yu, Zong-Ru; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Young, Hong-Tsu
2017-09-01
Spherical lenses introduce spherical aberration and thus reduced optical performance. Consequently, in practice an optical system must apply a combination of spherical lenses for aberration correction, which increases the volume of the optical system. In modern optical systems, aspherical lenses have been widely used because of their high optical performance with fewer optical components. However, aspherical surfaces cannot be fabricated by the traditional full-aperture polishing process because of their varying curvature. Sub-aperture computer numerical control (CNC) polishing has been adopted for aspherical surface fabrication in recent years. However, mid-spatial-frequency (MSF) errors typically arise during CNC polishing, and MSF surface texture decreases the optical performance of high-precision optical systems, especially for short-wavelength applications. Using a bonnet-polishing CNC machine, this study focuses on the relationship between MSF surface texture and CNC polishing parameters, including feed rate, head speed, track spacing, and path direction. Power spectral density (PSD) analysis is used to judge the MSF level caused by those polishing parameters. The test results show that controlling the removal depth of a single polishing path through the feed rate, and avoiding same-direction polishing paths when a higher total removal depth is required, can efficiently reduce the MSF error. To verify the optimized polishing parameters, we divided a correction polishing process into several polishing runs with different path directions. Compared to a one-shot polishing run, the multi-direction path polishing plan produced better surface quality on the optics.
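PSD analysis of this kind reduces, in one dimension, to a normalized Fourier transform of a surface profile; the sketch below (illustrative names, one-sided normalization chosen so the PSD integrates to the profile variance) shows how an MSF band would be inspected:

```python
import numpy as np

def profile_psd(z, dx):
    # One-sided power spectral density of a 1-D surface profile.
    # z: profile heights; dx: sample spacing. Units: [z]^2 * [dx] per cycle/[dx].
    n = len(z)
    Z = np.fft.rfft(z - z.mean())
    psd = (np.abs(Z) ** 2) * dx / n
    psd[1:-1] *= 2                      # fold in negative frequencies
    freq = np.fft.rfftfreq(n, dx)       # spatial frequencies, cycles per unit length
    return freq, psd
```

A periodic polishing-track signature shows up as a peak at the track's spatial frequency; integrating the PSD over the MSF band gives the RMS contribution of that band.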
Deterministic ion beam material adding technology for high-precision optical surfaces.
Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin
2013-02-20
Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.
Simulating a transmon implementation of the surface code, Part I
NASA Astrophysics Data System (ADS)
Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo
Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
Gordon, H R; Wang, M
1992-07-20
In the algorithm for the atmospheric correction of coastal zone color scanner (CZCS) imagery, it is assumed that the sea surface is flat. Simulations are carried out to assess the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct Sun glitter (either a large solar zenith angle or the sensor tilted away from the specular image of the Sun), the following conclusions appear justified: (1) the error induced by ignoring the surface roughness is ≲1 CZCS digital count for wind speeds up to approximately 17 m/s, and can therefore be ignored for this sensor; (2) the roughness-induced error depends much more strongly on the wind speed than on the wave shadowing, suggesting that surface effects can be adequately dealt with without precise knowledge of the shadowing; and (3) the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness, suggesting that in refining algorithms for future sensors more effort should be placed on the Rayleigh-aerosol interaction than on the roughness of the sea surface.
Path planning and parameter optimization of uniform removal in active feed polishing
NASA Astrophysics Data System (ADS)
Liu, Jian; Wang, Shaozhi; Zhang, Chunlei; Zhang, Linghua; Chen, Huanan
2015-06-01
A high-quality ultrasmooth surface is demanded in short-wave optical systems. However, existing polishing methods have difficulty meeting this requirement on spherical or aspheric surfaces. As a new kind of small-tool polishing method, active feed polishing (AFP) can attain a surface roughness of less than 0.3 nm (RMS) on spherical elements, although AFP may magnify the residual figure error or mid-frequency error. The purpose of this work is to propose an effective algorithm to realize uniform removal of the surface during processing. First, the principle of AFP and the mechanism of the polishing machine are introduced. To maintain the processed figure error, a variable-pitch spiral path-planning algorithm and a dwell-time-solving model are proposed. To suppress possible mid-frequency error, the uniformity of the synthesized tool path, generated by an arbitrary point on the polishing tool bottom, is analyzed and evaluated, and the angular velocity ratio of the tool's spinning motion to its revolution motion is optimized. Finally, an experiment is conducted on a convex spherical surface and an ultrasmooth surface is acquired. In conclusion, a high-quality ultrasmooth surface can be obtained with little degradation of the figure and mid-frequency errors by the proposed algorithm.
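A spiral tool path with approximately uniform point spacing, one ingredient of such uniform-removal planning, can be sketched as follows. This generates a plain Archimedean spiral with constant arc-length steps, not the paper's variable-pitch algorithm; all names and parameters are illustrative:

```python
import numpy as np

def spiral_path(r_max, pitch, ds):
    # Archimedean spiral r = (pitch / 2*pi) * theta, stepped so that
    # consecutive dwell points are approximately ds apart in arc length.
    k = pitch / (2 * np.pi)
    pts, theta = [], 0.0
    while k * theta <= r_max:
        r = k * theta
        pts.append((r * np.cos(theta), r * np.sin(theta)))
        # ds ≈ sqrt(r^2 + k^2) * dtheta (spiral arc-length element)
        theta += ds / np.sqrt(r * r + k * k)
    return np.array(pts)
```

Uniform spacing of dwell points keeps the per-point removal footprint consistent, which is the precondition for solving dwell times that preserve the figure.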
Backward-gazing method for measuring solar concentrators shape errors.
Coquand, Mathieu; Henault, François; Caliot, Cyril
2017-03-01
This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver and simultaneously recording images of the sun reflected by the optical surfaces. Simple data processing then allows reconstructing the slope and shape errors of the surfaces. The originality of the method lies in the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high-incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to provide better control for real-time sun tracking.
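The quad-cell principle underlying the measurement can be illustrated with the classical four-quadrant formulas; this is a generic textbook sketch, not the paper's generalized version:

```python
def quad_cell_centroid(i_a, i_b, i_c, i_d):
    """Classical quad-cell estimate of spot displacement from four quadrant
    intensities (a: top-left, b: top-right, c: bottom-left, d: bottom-right),
    in fractions of the cell half-width."""
    total = i_a + i_b + i_c + i_d
    x = ((i_b + i_d) - (i_a + i_c)) / total  # right minus left
    y = ((i_a + i_b) - (i_c + i_d)) / total  # top minus bottom
    return x, y
```

A generalized version would replace the linear normalization with calibrated response relations, as the abstract suggests.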
NASA Technical Reports Server (NTRS)
Antonille, Scott
2004-01-01
For potential use on the SHARPI mission, Eastman Kodak has delivered a 50.8 cm CA f/1.25 ultra-lightweight UV parabolic mirror with a surface figure error requirement of 6 nm RMS. We address the challenges involved in verifying and mapping the surface error of this large lightweight mirror to ±3 nm using a diffractive CGH null lens. Of main concern is removal of large systematic errors resulting from surface deflections of the mirror due to gravity as well as smaller contributions from system misalignment and reference optic errors. We present our efforts to characterize these errors and remove their wavefront error contribution in post-processing as well as minimizing the uncertainty these calculations introduce. Data from Kodak and preliminary measurements from NASA Goddard will be included.
A new unified approach to determine geocentre motion using space geodetic and GRACE gravity data
NASA Astrophysics Data System (ADS)
Wu, Xiaoping; Kusche, Jürgen; Landerer, Felix W.
2017-06-01
Geocentre motion between the centre-of-mass of the Earth system and the centre-of-figure of the solid Earth surface is a critical signature of the degree-1 components of global surface mass transport processes, which include sea level rise, ice mass imbalance and continental-scale hydrological change. To complement GRACE data for complete-spectrum mass transport monitoring, geocentre motion needs to be measured accurately. However, current methods, namely the geodetic translational approach and global inversions of various combinations of geodetic deformation, simulated ocean bottom pressure and GRACE data, contain substantial biases and systematic errors. Here, we demonstrate a new and more reliable unified approach to geocentre motion determination using a recently formed satellite laser ranging based geocentric displacement time-series of an expanded geodetic network of all four space geodetic techniques and GRACE gravity data. The unified approach exploits both translational and deformational signatures of the displacement data, while the addition of GRACE's near global coverage significantly reduces biases found in the translational approach and spectral aliasing errors in the inversion.
Building the Traffic, Navigation, and Situation Awareness System (T-NASA) for Surface Operations
NASA Technical Reports Server (NTRS)
McCann, Robert S.
1996-01-01
We report the results of a part-task simulation evaluating the separate and combined effects of an electronic moving map display and newly developed HUD symbology on ground taxi performance, under moderate- and low-visibility conditions. Twenty-four commercial airline pilots carried out a series of 28 gate-to-runway taxi trials at Chicago O'Hare. Half of the trials were conducted under moderate visibility (RVR 1400 ft), and half under low visibility (RVR 700 ft). In the baseline condition, where navigation support was limited to surface features and a Jeppesen paper map, navigation errors were committed on almost half of the trials. These errors were virtually abolished when the electronic moving map or the HUD symbology was available; in addition, compared with the baseline condition, both forms of navigation aid yielded an increase in forward taxi speed. The speed increase was greater for the HUD than for the electronic moving map, and greater under low visibility than under moderate visibility. These results suggest that the combination of electronic moving map and HUD symbology has the potential to greatly increase the efficiency of ground operations, particularly under low-visibility conditions.
Development of low cost and accurate homemade sensor system based on Surface Plasmon Resonance (SPR)
NASA Astrophysics Data System (ADS)
Laksono, F. D.; Supardianningsih; Arifin, M.; Abraha, K.
2018-04-01
In this paper, we developed a homemade, computerized sensor system based on Surface Plasmon Resonance (SPR). The developed system consists of a mechanical instrument system, a laser power sensor, and a user interface. The mechanical design, which uses anti-backlash gears, enhanced the angular resolution of the laser's angle of incidence to 0.01°. The laser detector acquisition system and stepper motor controller use an Arduino Uno, which is easy to program, flexible, and low cost. Furthermore, we employed a LabVIEW user interface as the virtual instrument to facilitate sample measurement and to record data directly in digital form. Test results using a gold-deposited half-cylinder prism showed a Total Internal Reflection (TIR) angle of 41.34° ± 0.01° and an SPR angle of 44.20° ± 0.01°. The results demonstrate that the developed system reduces measurement duration and the data-recording errors caused by human error, and that its measurements are repeatable and accurate.
NASA Technical Reports Server (NTRS)
Larson, Kristine M.; Ray, Richard D.; Williams, Simon D. P.
2017-01-01
A standard geodetic GPS receiver and a conventional Aquatrak tide gauge, collocated at Friday Harbor, Washington, are used to assess the quality of 10 years of water levels estimated from GPS sea surface reflections. The GPS results are improved by accounting for (tidal) motion of the reflecting sea surface and for signal propagation delay by the troposphere. The RMS error of individual GPS water level estimates is about 12 cm. Lower water levels are measured slightly more accurately than higher water levels. Forming daily mean sea levels reduces the RMS difference with the tide gauge data to approximately 2 cm. For monthly means, the RMS difference is 1.3 cm. The GPS elevations, of course, can be automatically placed into a well-defined terrestrial reference frame. Ocean tide coefficients, determined from both the GPS and tide gauge data, are in good agreement, with absolute differences below 1 cm for all constituents save K1 and S1. The latter constituent is especially anomalous, probably owing to daily temperature-induced errors in the Aquatrak tide gauge.
Pierson, T.C.
2007-01-01
Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where timing of such geomorphic changes can be critical to emergency planning. Earliest colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes - Mount Rainier, Mount St. Helens and Mount Hood - in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless the total of early colonizers was less. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach would have a potential error of up to 20 years. 
Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction would have an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years).
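As a worked example of applying these corrections, the sketch below (function name and interface are invented for illustration) subtracts the appropriate GLT or CTG correction from a ring-count age to estimate the year of surface stabilization:

```python
def estimated_formation_year(sample_year, mean_ring_count,
                             breast_height=True, five_tree_mean=True):
    """Estimate the calendar year a landform surface stabilized from a
    Douglas-fir ring count, using the corrections in the abstract:
    single largest tree -> GLT 4 yr (ground level) or CTG 10 yr (breast
    height); mean of five largest trees -> GLT 5 yr or CTG 11 yr."""
    if five_tree_mean:
        correction = 11 if breast_height else 5
    else:
        correction = 10 if breast_height else 4
    return sample_year - mean_ring_count - correction
```

For example, a mean breast-height ring count of 86 in five trees sampled in 2000 would place surface stabilization around 1903.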
Consequences of land-cover misclassification in models of impervious surface
McMahon, G.
2007-01-01
Model estimates of impervious area as a function of land-cover area may be biased and imprecise because of errors in the land-cover classification. This investigation of the effects of land-cover misclassification on impervious surface models that use National Land Cover Data (NLCD) evaluates the consequences of adjusting land-cover within a watershed to reflect uncertainty assessment information. Model validation results indicate that using error-matrix information to adjust land-cover values used in impervious surface models does not substantially improve impervious surface predictions. Validation results indicate that the resolution of the land-cover data (Level I and Level II) is more important in predicting impervious surface accurately than whether the land-cover data have been adjusted using information in the error matrix. Level I NLCD, adjusted for land-cover misclassification, is preferable to the other land-cover options for use in models of impervious surface. This result is tied to the lower classification error rates for the Level I NLCD. © 2007 American Society for Photogrammetry and Remote Sensing.
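A minimal sketch of the error-matrix adjustment idea, assuming a row-oriented confusion matrix of mapped-versus-reference pixel counts (the paper's NLCD-specific procedure may differ):

```python
import numpy as np

def adjust_landcover_areas(mapped_counts, confusion):
    """Redistribute mapped class totals using an error (confusion) matrix,
    where confusion[i, j] is the count of pixels mapped as class i whose
    reference class is j.  Rows are normalized to conditional probabilities
    and the mapped totals are redistributed accordingly."""
    probs = confusion / confusion.sum(axis=1, keepdims=True)
    return mapped_counts @ probs

# Example: two classes with modest cross-confusion
adjusted = adjust_landcover_areas(np.array([100.0, 50.0]),
                                  np.array([[90.0, 10.0], [5.0, 45.0]]))
```

Total mapped area is preserved; only its allocation among classes changes.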
Taylor, John S.; Folta, James A.; Montcalm, Claude
2005-01-18
Figure errors are corrected on optical or other precision surfaces by changing the local density of material in a zone at or near the surface. Optical surface height is correlated with the localized density of the material within the same region. A change in the height of the optical surface can then be caused by a change in the localized density of the material at or near the surface.
Influence of OPD in wavelength-shifting interferometry
NASA Astrophysics Data System (ADS)
Wang, Hongjun; Tian, Ailing; Liu, Bingcai; Dang, Juanjuan
2009-12-01
Phase-shifting interferometry is a powerful tool for high-accuracy optical measurement. It operates by changing the optical path length in the reference or test arm, usually by mechanically moving an optical element, which becomes problematic when that element is large and heavy. Wavelength-shifting interferometry was put forward to solve this problem: the phase shift is produced by tuning the wavelength of the source. The phase-shift angle is determined by the wavelength change and by the OPD (Optical Path Difference) between the test and reference wavefronts, so the OPD is an important factor in the measured results. Because positional and profile errors of the optical element under test exist, the phase-shift angle differs from point to point during a wavelength scan; this introduces phase-shift errors and hence errors in the measured optical surface. To analyze this influence, the relation between surface error and OPD was studied, and the relation between phase-shift error and OPD was established by simulation. Based on this analysis, an error compensation method is put forward; after compensation, the measurement results are improved to a great extent.
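The dependence of the phase-shift angle on the local OPD can be made concrete with a short calculation; this is the generic two-wavelength relation, not code from the paper:

```python
import math

def phase_shift(opd, lam, dlam):
    """Phase change (radians) at a point with optical path difference `opd`
    when the source is tuned from wavelength `lam` to `lam + dlam`.  For
    small dlam this approaches 2*pi*opd*dlam/lam**2, which is why the
    shift, and hence the shifting error, varies with the local OPD."""
    return 2 * math.pi * opd * (1 / lam - 1 / (lam + dlam))
```

A point with zero OPD sees no phase shift at all, so a surface with varying OPD necessarily sees a spatially varying shift during a wavelength scan.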
Indirect measurements of hydrogen: The deficit method for a many-component system
NASA Astrophysics Data System (ADS)
Levine, Timothy E.; Yu, Ning; Kodali, Padma; Walter, Kevin C.; Nastasi, Michael; Tesmer, Joseph R.; Maggiore, Carl J.; Mayer, James W.
We have developed a simple technique for determining hydrogen atomic fraction from the ion backscattering spectrometry (IBS) signals of the remaining species. This technique uses the surface heights of various IBS signals in the form of a linear matrix equation. We apply this technique to in situ analysis of ion-beam-induced densification of sol-gel zirconia thin films, where hydrogen is the most volatile species during irradiation. Attendant errors are discussed with an emphasis on stopping powers and Bragg's rule.
Bilić, Ante; Reimers, Jeffrey R; Hush, Noel S
2005-03-01
The adsorption of phenylthiol on the Au(111) surface is modeled using Perdew and Wang density-functional calculations. Both direct molecular physisorption and dissociative chemisorption via S-H bond cleavage are considered as well as dimerization to form disulfides. For the major observed product, the chemisorbed thiol, an extensive potential-energy surface is produced as a function of both the azimuthal orientation of the adsorbate and the linear translation of the adsorbate through the key fcc, hcp, bridge, and top binding sites. Key structures are characterized, the lowest-energy one being a broad minimum of tilted orientation ranging from the bridge structure halfway towards the fcc one. The vertically oriented threefold binding sites, often assumed to dominate molecular electronics measurements, are identified as transition states at low coverage but become favored in dense monolayers. A similar surface is also produced for chemisorption of phenylthiol on Ag(111); this displays significant qualitative differences, consistent with the qualitatively different observed structures for thiol chemisorption on Ag and Au. Full contours of the minimum potential energy as a function of sulfur translation over the crystal face are described, from which the barrier to diffusion is deduced to be 5.8 kcal mol(-1), indicating that the potential-energy surface has low corrugation. The calculated bond lengths, adsorbate charge and spin density, and the density of electronic states all indicate that, at all sulfur locations, the adsorbate can be regarded as a thiyl species that forms a net single covalent bond to the surface of strength 31 kcal mol(-1). No detectable thiolate character is predicted, however, contrary to experimental results for alkyl thiols that indicate up to 20%-30% thiolate involvement. 
This effect is attributed to the asymptotic-potential error of all modern density functionals that becomes manifest through a 3-4 eV error in the lineup of the adsorbate and substrate bands. Significant implications are described for density-functional calculations of through-molecule electron transport in molecular electronics.
Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods
NASA Astrophysics Data System (ADS)
Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David
2015-11-01
Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break-up, free surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than relying on a fixed mesh. Most methods move particles according to the local fluid velocity, which allows the convection terms in the Navier-Stokes equations to be easily accounted for. However, this is a trade-off against numerical accuracy, as the flow can often move particles into configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentration. In this work, we consider the other side of the trade-off: moving particles based on reducing numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
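The idea of moving particles to reduce a quadrature-error surrogate can be sketched in 1D; the spacing-uniformity surrogate below is an invented stand-in for the molecular-dynamics-derived potentials described in the abstract:

```python
import numpy as np

def relax_particles(x, iters=2000, step=0.2):
    """Move interior quadrature points to reduce a simple error surrogate:
    non-uniformity of neighbor spacing.  Each interior point drifts toward
    the midpoint of its neighbors (a Laplacian smoothing step); the
    endpoints stay fixed."""
    x = np.array(x, dtype=float)
    for _ in range(iters):
        left = x[1:-1] - x[:-2]
        right = x[2:] - x[1:-1]
        x[1:-1] += step * (right - left) / 2  # drift toward equal spacing
        x.sort()
    return x
```

Starting from a badly clustered configuration, the points relax toward equal spacing, where a trapezoid-type quadrature error is smallest for smooth integrands.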
NASA Astrophysics Data System (ADS)
Zhang, Xinmu; Hao, Qun; Hu, Yao; Wang, Shaopu; Ning, Yan; Li, Tengfei; Chen, Shufen
2017-10-01
Since it does not need to compensate the whole aberration introduced by the aspheric surface, the non-null test has an applicability advantage over the null test. However, retrace error, caused by the path difference between rays reflected from the surface under test (SUT) and the incident rays, is introduced into the measurement and contributes to the residual wavefront aberrations (RWAs), along with surface figure error (SFE), misalignment error, and other influences. Being difficult to separate from the RWAs, the misalignment error may remain after measurement, and it is hard to identify whether it has been removed. Studying the removal of misalignment error is therefore a primary task. A brief demonstration of the digital Moiré interferometric technique is presented, and a calibration method for misalignment error based on a reverse iteration optimization (RIO) algorithm in the non-null test is addressed. The proposed method operates mostly in the virtual system and requires no accurate adjustment of the real interferometer, a significant advantage that reduces the errors introduced by repeated, complicated manual adjustment and thereby improves the accuracy of the aspheric surface test. Simulation verification is presented in this paper. The calibration accuracy of the position and attitude reaches magnitudes of at least 10^-5 mm and 0.0056×10^-6 rad, respectively. The simulation demonstrates that the influence of misalignment error can be precisely calculated and removed after calibration.
Linear shoaling of free-surface waves in multi-layer non-hydrostatic models
NASA Astrophysics Data System (ADS)
Bai, Yefei; Cheung, Kwok Fai
2018-01-01
The capability to describe shoaling over a sloping bottom is fundamental to the modeling of coastal wave transformation. The linear shoaling gradient provides a metric to measure this property in non-hydrostatic models with layer-integrated formulations. The governing equations in Boussinesq form facilitate derivation of the linear shoaling gradient, which takes the form of a [2P + 2, 2P] expansion in the water depth parameter kd, with P equal to 1 for a one-layer model and (4N − 4) for an N-layer model. The expansion reproduces the analytical solution from Airy wave theory at the shallow-water limit and maintains a reasonable approximation up to kd = 1.2 and 2 for the one- and two-layer models, respectively. Additional layers provide rapid and monotonic convergence of the shoaling gradient into deep water. Numerical experiments of wave propagation over a plane slope illustrate the manifestation of shoaling errors through the transformation processes from deep to shallow water. Even though they arise outside the zone of active wave transformation, shoaling errors from deep to intermediate water accumulate to produce an appreciable impact on the wave amplitude in shallow water.
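For reference, the Airy-theory quantities underlying the shoaling analysis can be computed directly; this sketch (a damped fixed-point dispersion solver plus the energy-flux shoaling coefficient) reflects standard linear wave theory, not the multi-layer model itself:

```python
import math

def wavenumber(omega, depth, g=9.81):
    """Solve the Airy dispersion relation omega^2 = g*k*tanh(k*d) by a
    damped fixed-point iteration, starting from the deep-water guess."""
    k = omega**2 / g
    for _ in range(200):
        k = 0.5 * (k + omega**2 / (g * math.tanh(k * depth)))
    return k

def shoaling_coefficient(omega, d_from, d_to, g=9.81):
    """Amplitude ratio a/a0 between two depths from energy-flux
    conservation: a/a0 = sqrt(Cg0/Cg)."""
    def cg(d):
        k = wavenumber(omega, d, g)
        return 0.5 * (omega / k) * (1 + 2 * k * d / math.sinh(2 * k * d))
    return math.sqrt(cg(d_from) / cg(d_to))
```

A 10 s swell shoaling from deep water to 5 m depth, for instance, gains roughly 10% in amplitude under this linear theory.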
Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials
NASA Astrophysics Data System (ADS)
Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong
2018-04-01
This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relational database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector, fed by an axial-mode helical antenna, is further conducted to verify the correctness of the proposed method. Finally, the influence of surface error distribution on electromagnetic performance is summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new approach to reflector shape adjustment in the manufacturing process.
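A minimal sketch of describing a deformed surface with a few low-order Zernike terms (the mode ordering and normalization here are simplified assumptions, not the paper's):

```python
import numpy as np

def zernike_surface(coeffs, n=128):
    """Surface-error map on the unit disk from a few low-order Zernike-like
    terms: piston, x-tilt, y-tilt, defocus (2r^2-1), and astigmatism
    (r^2*cos(2t)).  Points outside the aperture are masked with NaN."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    modes = [np.ones_like(r), r * np.cos(t), r * np.sin(t),
             2 * r**2 - 1, r**2 * np.cos(2 * t)]
    z = sum(c * m for c, m in zip(coeffs, modes))
    z[r > 1] = np.nan  # mask outside the aperture
    return z
```

Feeding such a map into a physical-optics or geometrical-optics far-field solver is what links each term to its effect on gain and pointing.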
Accurate fluid force measurement based on control surface integration
NASA Astrophysics Data System (ADS)
Lentink, David
2018-01-01
Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques address this: 3D particle image velocimetry (PIV) and a new technique, the aerodynamic force platform (AFP). Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically, based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor, provided that the body has a sufficiently homogeneous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force, 1 − ρ_f/(ρ_b + ρ_f), depends only on the fluid density, ρ_f, and the body density, ρ_b. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that, by accounting for the unsteady body force, fluid force can be determined non-intrusively and accurately in most applications.
Surface proteins and the formation of biofilms by Staphylococcus aureus.
Kim, Sung Joon; Chang, James; Rimal, Binayak; Yang, Hao; Schaefer, Jacob
2018-03-01
Staphylococcus aureus biofilms pose a serious clinical threat as reservoirs for persistent infections. Despite this clinical significance, the composition and mechanism of formation of S. aureus biofilms are unknown. To address these problems, we used solid-state NMR to examine S. aureus (SA113), a strong biofilm-forming strain. We labeled whole cells and cell walls of planktonic cells, young biofilms formed for 12-24 h after stationary phase, and more mature biofilms formed for up to 60 h after stationary phase. All samples were labeled either by (i) [15N]glycine and L-[1-13C]threonine, or in separate experiments, by (ii) L-[2-13C,15N]leucine. We then measured 13C-15N direct bonds by C{N} rotational-echo double resonance (REDOR). The increase in peptidoglycan stems that have bridges connected to a surface protein was determined directly by a cell-wall double difference (biofilm REDOR difference minus planktonic REDOR difference). This procedure eliminates errors arising from differences in 15N isotopic enrichments and from the routing of 13C label from threonine degradation to glycine. For both planktonic cells and the mature biofilm, 20% of pentaglycyl bridges are not cross-linked and are potential surface-protein attachment sites. None of these sites has a surface protein attached in the planktonic cells, but one-fourth have a surface protein attached in the mature biofilm. Moreover, the leucine label shows that the concentration of β-strands in leucine-rich regions doubles in the mature biofilm. Thus, a primary event in establishing an S. aureus biofilm is extensive decoration of the cell surface with surface proteins that are linked covalently to the cell wall and promote cell-cell adhesion. Copyright © 2017 Elsevier B.V. All rights reserved.
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures: one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
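The orthogonal-function least-squares idea can be sketched with a QR factorization, which orthogonalizes the candidate modeling functions so each term's contribution is independent of the others (a simplified stand-in for the paper's multivariate orthogonal functions and automatic prediction-error-based term selection):

```python
import numpy as np

def orthogonal_fit(X, y):
    """Least-squares fit after orthogonalizing the candidate modeling
    functions (columns of X) with a QR factorization; each orthogonal
    component's coefficient can then be judged independently."""
    Q, R = np.linalg.qr(X)
    c = Q.T @ y                    # coefficients in the orthogonal basis
    beta = np.linalg.solve(R, c)   # map back to the original terms
    return beta, Q @ c             # model coefficients and fitted values
```

Because the orthogonal coefficients are decoupled, a term-selection metric can rank and drop terms without refitting the rest of the model.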
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja
2013-02-01
Purpose: To investigate the applicability of 3-dimensional (3D) surface imaging for image guidance in deep-inspiration breath-hold radiation therapy (DIBH-RT) for patients with left-sided breast cancer. For this purpose, setup data based on captured 3D surfaces was compared with setup data based on cone beam computed tomography (CBCT). Methods and Materials: Twenty patients treated with DIBH-RT after breast-conserving surgery (BCS) were included. Before the start of treatment, each patient underwent a breath-hold CT scan for planning purposes. During treatment, dose delivery was preceded by setup verification using CBCT of the left breast. 3D surfaces were captured by a surface imaging system concurrently with the CBCT scan. Retrospectively, surface registrations were performed for CBCT to CT and for a captured 3D surface to CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic error, random error, and 95% limits of agreement were calculated. Furthermore, receiver operating characteristic (ROC) analysis was performed. Results: Good correlation between setup errors was found: R² = 0.70, 0.90, and 0.82 in the left-right, craniocaudal, and anterior-posterior directions, respectively. Systematic errors were ≤0.17 cm in all directions. Random errors were ≤0.15 cm. The limits of agreement were −0.34 to 0.48 cm, −0.42 to 0.39 cm, and −0.52 to 0.23 cm in the left-right, craniocaudal, and anterior-posterior directions, respectively. ROC analysis showed that a threshold between 0.4 and 0.8 cm corresponds to promising true positive rates (0.78-0.95) and false positive rates (0.12-0.28). Conclusions: The results support the application of 3D surface imaging for image guidance in DIBH-RT after BCS.
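The agreement statistics compared in the study can be computed as follows; this is a generic Bland-Altman-style sketch (in the radiotherapy literature, systematic and random errors are usually aggregated over per-patient means, which is simplified here):

```python
import numpy as np

def agreement_stats(a, b):
    """Mean difference (systematic error), SD of differences (random
    error), and 95% limits of agreement between two setup-error series."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    mean_d = d.mean()
    sd_d = d.std(ddof=1)
    return mean_d, sd_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)
```

Narrow limits of agreement, as reported in the abstract, indicate the two modalities can be used interchangeably for setup verification.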
Ik Han, Seong; Lee, Jangmyung
2016-11-01
This paper presents finite-time sliding mode control (FSMC) with predefined constraints for the tracking error and sliding surface in order to obtain robust positioning of a robot manipulator with input nonlinearity due to an unknown deadzone and external disturbance. An assumed model feedforward FSMC was designed to avoid tedious identification procedures for the manipulator parameters and to obtain a fast response time. Two constraint switching control functions based on the tracking error and finite-time sliding surface were added to the FSMC to guarantee the predefined tracking performance despite the presence of an unknown deadzone and disturbance. The tracking error due to the deadzone and disturbance can be suppressed within the predefined error boundary simply by tuning the gain value of the constraint switching function and without the addition of an extra compensator. Therefore, the designed constraint controller has a simpler structure than conventional transformed error constraint methods and the sliding surface constraint scheme can also indirectly guarantee the tracking error constraint while being more stable than the tracking error constraint control. A simulation and experiment were performed on an articulated robot manipulator to validate the proposed control schemes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Analysing surface deformation in Surabaya from sentinel-1A data using DInSAR method
NASA Astrophysics Data System (ADS)
Anjasmara, Ira Mutiara; Yusfania, Meiriska; Kurniawan, Akbar; Resmi, Awalina L. C.; Kurniawan, Roni
2017-07-01
The rapid population growth and expanding industrial areas in urban Surabaya have caused excessive groundwater use and infrastructure loading. This condition triggers surface deformation, especially vertical deformation (subsidence or uplift), in Surabaya and its surroundings. Dynamic Earth processes and the geological setting of the Surabaya area can also accelerate the rate of surface deformation. In this research, the Differential Interferometric Synthetic Aperture Radar (DInSAR) method is chosen to infer surface deformation over the Surabaya area. The DInSAR processing utilized Sentinel-1A satellite images from May 2015 to September 2016 with the two-pass interferometric method, which uses two SAR images and a Digital Elevation Model (DEM). The results from four pairs of DInSAR processing indicate the occurrence of surface deformation, in the form of land subsidence and uplift, based on the Line of Sight (LOS) displacement in Surabaya. The average rate of surface deformation from May 2015 to September 2016 varies from -3.52 mm/4 months to +2.35 mm/4 months. The subsidence mostly occurs along the coastal area. However, the result still contains errors from the displacement processing, due to the coherence between the images, noise, geometric distortion of the radar signal, and large baselines in the image pairs.
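The core conversion in DInSAR is from differential interferometric phase to line-of-sight displacement. A minimal sketch for the Sentinel-1 C-band wavelength (sign conventions differ between processors, so the sign here is an assumption):

```python
import math

WAVELENGTH = 0.0555  # Sentinel-1 C-band radar wavelength (m)

def los_displacement(dphi):
    """Convert differential interferometric phase (radians) to line-of-sight
    displacement (m). Here a positive phase change is taken as motion toward
    the satellite; the opposite convention is equally common."""
    return WAVELENGTH * dphi / (4 * math.pi)

# One full fringe (2*pi of differential phase) maps to lambda/2 of LOS motion
d = los_displacement(2 * math.pi)   # 0.02775 m, i.e. about 28 mm
```

This is why mm-level deformation rates, as reported in the abstract, are resolvable from C-band interferograms: a single fringe already corresponds to under 3 cm of motion.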
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
The wavefront error of large telescopes must be measured to check system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary, and so on. This is usually realized with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and the technological difficulty of producing the large ACF. A subaperture test with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF, and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and its astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope during the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore the lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which runs counter to conventional practice.
Finally, measurement noise cannot be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.
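The noise-suppression-by-averaging step mentioned above follows the usual 1/sqrt(N) statistics for uncorrelated noise. A small synthetic check (the noise level and map size are arbitrary illustrations):

```python
import numpy as np

# Averaging N repeat scans reduces uncorrelated noise rms by roughly sqrt(N):
# a sketch of why repeat subaperture measurements are averaged before stitching.
rng = np.random.default_rng(1)
n_repeats, n_points = 100, 2000
sigma = 5.0                                   # per-scan noise rms (illustrative)
scans = rng.normal(0.0, sigma, (n_repeats, n_points))  # zero true surface
rms_single = scans[0].std()
rms_averaged = scans.mean(axis=0).std()       # roughly sigma / sqrt(n_repeats)
```

With 100 repeats the residual noise is about a tenth of a single scan, which is the kind of gain the abstract relies on; systematic ACF error, by contrast, does not average down and needs the self-calibrated stitching.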
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduces the cutting error. We measured the alignment of the tibial cutting guides, the first-cut surfaces, and the second-cut surfaces using a navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cut surface. The mean absolute first-cut error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting error in the coronal plane significantly (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
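The study's error metric, the mean absolute angular difference between guide and cut surface, is simple to compute. The alignments below are invented values for five knees, not the study's data:

```python
def mean_absolute_cutting_error(guide, surface):
    """Cutting error = angular difference between cutting-guide and cut-surface
    alignment, as read off a navigation system; angles in degrees."""
    return sum(abs(s - g) for g, s in zip(guide, surface)) / len(guide)

# Hypothetical coronal-plane alignments for five knees (degrees, varus positive)
guide      = [0.0, 0.5, -0.3, 0.2, 0.1]
first_cut  = [1.8, 2.6,  1.2, 2.3, 1.5]   # after the first horizontal cut
second_cut = [0.9, 1.4,  0.4, 1.2, 0.7]   # after recutting with the same guide

err_first = mean_absolute_cutting_error(guide, first_cut)
err_second = mean_absolute_cutting_error(guide, second_cut)
```

In this toy data the second cut roughly halves the mean absolute error, mirroring the direction of the study's coronal-plane result (1.9° to 1.1°).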
NASA Astrophysics Data System (ADS)
Hao, Qun; Li, Tengfei; Hu, Yao
2018-01-01
Surface parameters describe the shape characteristics of an aspheric surface; they mainly include the vertex radius of curvature (VROC) and the conic constant (CC). The VROC affects basic properties such as the focal length of an aspheric surface, while the CC is the basis for classifying aspheric surfaces. The deviations of these two parameters are defined as surface parameter error (SPE). Precisely measuring SPE is critical for manufacturing and aligning aspheric surfaces. Generally, the SPE of an aspheric surface is measured directly by curvature fitting on absolute profile measurement data from contact or non-contact testing, and most interferometry-based methods adopt null compensators or null computer-generated holograms to measure SPE. To our knowledge, there is no effective way to measure the SPE of a high-order aspheric surface with non-null interferometry. In this paper, based on the theory of slope asphericity and the best compensation distance (BCD) established in our previous work, we propose an SPE measurement method for high-order aspheric surfaces in a partial compensation interferometry (PCI) system. In the procedure, we first establish a system of two equations by utilizing the SPE-caused BCD change and surface shape change. We can then obtain the VROC error and CC error simultaneously in the PCI system by solving the equations. Simulations verify the method, and the results show a high relative accuracy.
ERIC Educational Resources Information Center
Clayman, Deborah P. Goldweber
The ability of 100 second-grade boys and girls to self-correct oral reading errors was studied in relationship to visual-form perception, phonic skills, response speed, and reading level. Each child was tested individually with the Bender-Error Test, the Gray Oral Paragraphs, and the Roswell-Chall Diagnostic Reading Test and placed into a group of…
Dwell time method based on Richardson-Lucy algorithm
NASA Astrophysics Data System (ADS)
Jiang, Bo; Ma, Zhen
2017-10-01
When the noise in the surface error data given by the interferometer has no effect on the iterative convergence of the Richardson-Lucy (RL) algorithm, the RL deconvolution algorithm from image restoration can be applied to the CCOS model to solve for the dwell time. Extending the initial error function at the edges and denoising the interferometer data make the result more reliable. The simulation results show a final residual error of 10.7912 nm PV and 0.4305 nm RMS, starting from an initial surface error of 107.2414 nm PV and 15.1331 nm RMS. The convergence rates of the PV and RMS values reach 89.9% and 96.0%, respectively. The algorithm satisfies the requirements of fabrication very well.
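The underlying model is that the surface error equals the tool's removal function convolved with the dwell time, so RL iteration can invert it while keeping the dwell time non-negative. A 1D sketch with an invented Gaussian removal function and synthetic error profile (the paper's 2D CCOS data are not reproduced here):

```python
import numpy as np

def richardson_lucy_1d(error, removal, n_iter=300):
    """Solve error = removal (*) dwell for a non-negative dwell-time map by
    Richardson-Lucy iteration; multiplicative updates preserve positivity."""
    dwell = np.full_like(error, error.mean() / removal.sum())
    mirrored = removal[::-1]                      # adjoint step uses correlation
    for _ in range(n_iter):
        model = np.convolve(dwell, removal, mode="same")
        ratio = error / np.maximum(model, 1e-12)
        dwell = dwell * np.convolve(ratio, mirrored, mode="same")
    return dwell

# Synthetic 1D test: Gaussian removal function, smooth target error profile
x = np.linspace(-1.0, 1.0, 200)
removal = np.exp(-x**2 / 0.01)
removal /= removal.sum()
true_dwell = 1.0 + 0.5 * np.cos(2 * np.pi * x)
error = np.convolve(true_dwell, removal, mode="same")

dwell = richardson_lucy_1d(error, removal)
residual = error - np.convolve(dwell, removal, mode="same")
```

The multiplicative update is what makes RL attractive for dwell time: a machine cannot dwell for negative time, and RL never produces negative values from a positive start.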
In-situ Calibration Methods for Phased Array High Frequency Radars
NASA Astrophysics Data System (ADS)
Flament, P. J.; Flament, M.; Chavanne, C.; Flores-vidal, X.; Rodriguez, I.; Marié, L.; Hilmer, T.
2016-12-01
HF radars measure currents through the Doppler-shift of electromagnetic waves Bragg-scattered by surface gravity waves. While modern clocks and digital synthesizers yield range errors negligible compared to the bandwidth-limited range resolution, azimuth calibration issues arise for beam-forming phased arrays. Sources of errors in the phases of the received waves can be internal to the radar system (phase errors of filters, cable lengths, antenna tuning) and geophysical (standing waves, propagation and refraction anomalies). They result in azimuthal biases (which can be range-dependent) and beam-forming side-lobes (which induce Doppler ambiguities). We analyze the experimental calibrations of 17 deployments of WERA HF radars, performed between 2003 and 2012 in Hawaii, the Adriatic, France, Mexico and the Philippines. Several strategies were attempted: (i) passive reception of continuous multi-frequency transmitters on GPS-tracked boats, cars, and drones; (ii) bi-static calibrations of radars in mutual view; (iii) active echoes from vessels of opportunity of unknown positions or tracked through AIS; (iv) interference of unknown remote transmitters with the chirped local oscillator. We found that: (a) for antennas deployed on the sea shore, a single-azimuth calibration is sufficient to correct phases within a typical beam-forming azimuth range; (b) after applying this azimuth-independent correction, residual pointing errors are 1-2 deg. rms; (c) for antennas deployed on irregular cliffs or hills, back from shore, systematic biases appear for some azimuths at large incidence angles, suggesting that some of the ground-wave electromagnetic energy propagates in a terrain-following mode between the sea shore and the antennas; (d) for some sites, fluctuations of 10-25 deg. in radio phase at 20-40 deg. 
azimuthal period, not significantly correlated among antennas, are omnipresent in calibrations along a constant-range circle, suggesting standing waves or multiple paths in the presence of reflecting structures (buildings, fences), or possibly fractal nature of the wavefronts; (e) amplitudes lack stability in time and azimuth to be usable as a-priori calibrations, confirming the accepted method of re-normalizing amplitudes by the signal of nearby cells prior to beam-forming.
Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5
NASA Technical Reports Server (NTRS)
Schott, John R.; Volchok, William J.; Biegel, Joseph D.
1986-01-01
The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that by carefully accounting for various sensor calibration and atmospheric propagation effects, the expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits, to within this study's ability to measure error.
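Recovering a temperature from a thermal-band radiance ultimately comes down to inverting Planck's law at the band's effective wavelength. A minimal monochromatic sketch (the 11.45 µm wavelength is an approximation for TM band 6, and a real retrieval would also apply the sensor's spectral response and atmospheric correction):

```python
import numpy as np

C1 = 1.191042e-16   # 2*h*c^2  [W m^2 sr^-1]
C2 = 1.438777e-2    # h*c/k    [m K]

def planck_radiance(wavelength, temp):
    """Spectral radiance of a blackbody [W m^-3 sr^-1] at the given wavelength."""
    return C1 / (wavelength**5 * (np.exp(C2 / (wavelength * temp)) - 1.0))

def brightness_temperature(wavelength, radiance):
    """Invert Planck's law for the temperature that reproduces the radiance."""
    return C2 / (wavelength * np.log(1.0 + C1 / (wavelength**5 * radiance)))

lam = 11.45e-6                       # approximate TM band 6 effective wavelength
L = planck_radiance(lam, 300.0)      # at-surface radiance for a 300 K scene
T = brightness_temperature(lam, L)   # recovers 300 K
```

The 0.9 K error budget quoted in the abstract is the combined effect of sensor calibration and atmospheric terms layered on top of this basic inversion.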
NASA Astrophysics Data System (ADS)
Hodge, R.; Brasington, J.; Richards, K.
2009-04-01
The ability to collect 3D elevation data at mm resolution from in-situ natural surfaces, such as fluvial and coastal sediments, rock surfaces, soils and dunes, is beneficial for a range of geomorphological and geological research. From these data the properties of the surface can be measured, and Digital Terrain Models (DTMs) can be constructed. Terrestrial Laser Scanning (TLS) can quickly collect such 3D data with mm precision and mm spacing. This paper presents a methodology for the collection and processing of such TLS data, and considers how the errors in these TLS data can be quantified. TLS has been used to collect elevation data from fluvial gravel surfaces. Data were collected from areas of approximately 1 m², with median grain sizes ranging from 18 to 63 mm. Errors are inherent in such data as a result of the precision of the TLS, and the interaction of factors including laser footprint, surface topography, surface reflectivity and scanning geometry. The methodology for the collection and processing of TLS data from complex surfaces like these fluvial sediments aims to minimise the occurrence of, and remove, such errors. The methodology incorporates taking scans from multiple scanner locations, averaging repeat scans, and applying a series of filters to remove erroneous points. Analysis of 2.5D DTMs interpolated from the processed data has identified geomorphic properties of the gravel surfaces, including the distribution of surface elevations, preferential grain orientation and grain imbrication. However, validation of the data and interpolated DTMs is limited by the availability of techniques capable of collecting independent elevation data of comparable quality. Instead, two alternative approaches to data validation are presented. The first consists of careful internal validation to optimise filter parameter values during data processing combined with a series of laboratory experiments.
In the experiments, TLS data were collected from a sphere and planes with different reflectivities to measure the accuracy and precision of TLS data of these geometrically simple objects. Whilst this first approach allows the maximum precision of TLS data from complex surfaces to be estimated, it cannot quantify the distribution of errors within the TLS data and across the interpolated DTMs. The second approach enables this by simulating the collection of TLS data from complex surfaces of a known geometry. This simulated scanning has been verified through systematic comparison with laboratory TLS data. Two types of surface geometry have been investigated: simulated regular arrays of uniform spheres used to analyse the effect of sphere size; and irregular beds of spheres with the same grain size distribution as the fluvial gravels, which provide a comparable complex geometry to the field sediment surfaces. A series of simulated scans of these surfaces has enabled the magnitude and spatial distribution of errors in the interpolated DTMs to be quantified, as well as demonstrating the utility of the different processing stages in removing errors from TLS data. As well as demonstrating the application of simulated scanning as a technique to quantify errors, these results can be used to estimate errors in comparable TLS data.
Interferometry On Grazing Incidence Optics
NASA Astrophysics Data System (ADS)
Geary, Joseph; Maeda, Riki
1988-08-01
A preliminary interferometric procedure is described showing potential for obtaining surface figure error maps of grazing incidence optics at normal incidence. The latter are found in some laser resonator configurations, and in Wolter type X-ray optics. The procedure makes use of cylindrical wavefronts and error subtraction techniques over subapertures. The surface error maps obtained will provide critical information to opticians in the fabrication process.
Interferometry on grazing incidence optics
NASA Astrophysics Data System (ADS)
Geary, Joseph M.; Maeda, Riki
1987-12-01
An interferometric procedure is described that shows potential for obtaining surface figure error maps of grazing incidence optics at normal incidence. Such optics are found in some laser resonator configurations and in Wolter-type X-ray optics. The procedure makes use of cylindrical wavefronts and error subtraction techniques over subapertures. The surface error maps obtained will provide critical information to opticians for the fabrication process.
NASA Astrophysics Data System (ADS)
Chu, Jiyoung; Cho, Sungwhi; Joo, Won Don; Jang, Sangdon
2017-08-01
One of the most popular methods for high precision lens assembly of an optical system is using an autocollimator and a rotation stage. Some companies provide software for calculating the state of the lens along with their lens assembly systems, but the calculation algorithms used by the software are unknown. In this paper, we suggest a calculation method for lens alignment errors using ray transfer matrices. Alignment errors resulting from tilting and decentering of a lens element can be calculated from the tilts of the front and back surfaces of the lens. The tilt of each surface can be obtained from the position of the reticle image on the CCD camera of the autocollimator. Rays from a reticle of the autocollimator are reflected from the target surface of the lens, which rotates with the rotation stage, and are imaged on the CCD camera. To obtain a clear image, the distance between the autocollimator and the first lens surface should be adjusted according to the focusing lens of the autocollimator and the lens surfaces from the first to the target surface. Ray propagations for the autocollimator and the tilted lens surfaces can be expressed effectively by using ray transfer matrices and lens alignment errors can be derived from them. This method was compared with Zemax simulation for various lenses with spherical or flat surfaces and the error was less than a few percent.
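The ray transfer (ABCD) matrix bookkeeping described above can be sketched for the simplest case: a flat target surface tilted by a small angle returns the autocollimator beam deviated by twice that angle, and the focusing lens converts the angle into a reticle-image shift on the CCD. The focal length and tilt below are hypothetical:

```python
import numpy as np

def free_space(d):
    # ray transfer matrix for propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    # ray transfer matrix for a thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A flat surface tilted by alpha deviates the reflected collimated beam by
# 2*alpha; the focusing lens then maps that angle to an image shift of
# 2*f*alpha at the CCD in its focal plane.
alpha = np.radians(0.01)              # surface tilt: 0.01 degrees (hypothetical)
f = 0.3                               # autocollimator focal length in m (hypothetical)
ray = np.array([0.0, 2.0 * alpha])    # [height, angle] of the returning ray
spot = (free_space(f) @ thin_lens(f) @ ray)[0]   # image height at the CCD
```

Chaining further free-space and refraction matrices through the lens surfaces ahead of the target surface is what lets tilt and decenter of an individual element be inferred from the observed reticle-image motion, which is the idea of the paper.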
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
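The contrast between a statistically constrained MMSE inverse and a plain least squares inverse can be illustrated on a generic linear measurement model. The matrices and covariances below are random stand-ins, not the paper's thorax model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_y = 4, 12
A = rng.normal(size=(n_y, n_x))        # linearized measurement sensitivity
Cx = np.eye(n_x)                       # a-priori parameter covariance
Cn = 2.0 * np.eye(n_y)                 # instrumentation-noise covariance

# Statistically constrained MMSE inverse vs. unconstrained least squares
B_mmse = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
B_ls = np.linalg.pinv(A)

sq_mmse, sq_ls = 0.0, 0.0
for _ in range(1000):
    x = rng.multivariate_normal(np.zeros(n_x), Cx)      # true parameters
    y = A @ x + rng.multivariate_normal(np.zeros(n_y), Cn)
    sq_mmse += np.sum((B_mmse @ y - x) ** 2)
    sq_ls += np.sum((B_ls @ y - x) ** 2)
```

Because the MMSE inverse folds in the prior covariance, it trades a small bias for a large variance reduction, which is the mechanism behind the paper's robustness to noise and geometrical uncertainty.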
Methods for comparing 3D surface attributes
NASA Astrophysics Data System (ADS)
Pang, Alex; Freeman, Adam
1996-03-01
A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, this burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometry is assumed to be identical and only the surface attributes (color, texture, etc.) are variable. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.
NASA Astrophysics Data System (ADS)
Carnes, Michael R.; Mitchell, Jim L.; de Witt, P. Webb
1990-10-01
Synthetic temperature profiles are computed from altimeter-derived sea surface heights in the Gulf Stream region. The required relationships between surface height (dynamic height at the surface relative to 1000 dbar) and subsurface temperature are provided from regression relationships between dynamic height and amplitudes of empirical orthogonal functions (EOFs) of the vertical structure of temperature derived by de Witt (1987). Relationships were derived for each month of the year from historical temperature and salinity profiles from the region surrounding the Gulf Stream northeast of Cape Hatteras. Sea surface heights are derived using two different geoid estimates, the feature-modeled geoid and the air-dropped expendable bathythermograph (AXBT) geoid, both described by Carnes et al. (1990). The accuracy of the synthetic profiles is assessed by comparison to 21 AXBT profile sections which were taken during three surveys along 12 Geosat ERM ground tracks nearly contemporaneously with Geosat overflights. The primary error statistic considered is the root-mean-square (rms) difference between AXBT and synthetic isotherm depths. The two sources of error are the EOF relationship and the altimeter-derived surface heights. EOF-related and surface height-related errors in synthetic temperature isotherm depth are of comparable magnitude; each translates into about a 60-m rms isotherm depth error, or a combined 80 m to 90 m error for isotherms in the permanent thermocline. EOF-related errors are responsible for the absence of the near-surface warm core of the Gulf Stream and for the reduced volume of Eighteen Degree Water in the upper few hundred meters of (apparently older) cold-core rings in the synthetic profiles. 
The overall rms difference between surface heights derived from the altimeter and those computed from AXBT profiles is 0.15 dyn m when the feature-modeled geoid is used and 0.19 dyn m when the AXBT geoid is used; the portion attributable to altimeter-derived surface height errors alone is 0.03 dyn m less for each. In most cases, the deeper structure of the Gulf Stream and eddies is reproduced well by vertical sections of synthetic temperature, with largest errors typically in regions of high horizontal gradient such as across rings and the Gulf Stream front.
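The EOF-regression construction of synthetic profiles can be sketched end to end: decompose vertical temperature anomalies into EOFs, regress the leading amplitudes on surface dynamic height, then synthesize a profile from a height value alone. The vertical mode, noise level, and regression below are synthetic stand-ins, not the Gulf Stream climatology:

```python
import numpy as np

rng = np.random.default_rng(3)
n_prof, n_depth = 300, 50
mode = np.exp(-np.linspace(0.0, 3.0, n_depth))   # thermocline-like vertical mode
height = rng.normal(size=n_prof)                 # surface dynamic height anomaly
# Synthetic "historical" profiles: temperature anomaly tied to dynamic height
profiles = np.outer(2.0 * height, mode) + 0.1 * rng.normal(size=(n_prof, n_depth))

mean_prof = profiles.mean(axis=0)
anom = profiles - mean_prof
eofs = np.linalg.svd(anom, full_matrices=False)[2][:2]   # leading vertical EOFs
amps = anom @ eofs.T                                     # EOF amplitudes

# Regress EOF amplitudes on dynamic height, then synthesize a profile
design = np.c_[height, np.ones(n_prof)]
coef = np.linalg.lstsq(design, amps, rcond=None)[0]

def synthetic_profile(h):
    return mean_prof + (np.array([h, 1.0]) @ coef) @ eofs

rmse = np.sqrt(np.mean((synthetic_profile(1.0) - 2.0 * mode) ** 2))
```

In this construction any vertical structure not captured by the leading EOFs, such as the near-surface warm core noted in the abstract, is necessarily absent from the synthetic profiles.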
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Handschuh, R. F.; Zhang, J.
1988-01-01
A method for generation of crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc second for the numerical examples). Tooth Contact Analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and the bearing contact.
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the empirical state error covariance matrix.
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
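The core move, scaling the theoretical covariance by the average weighted residual variance so that actual residuals feed back into the uncertainty estimate, can be sketched on a small mismodeled weighted least squares problem. The matrices and noise levels are illustrative assumptions, not the paper's triangulation example:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 40, 3
H = rng.normal(size=(m, n))               # observation partial derivatives
sigma_assumed, sigma_true = 1.0, 2.0      # noise is mismodeled on purpose
W = np.eye(m) / sigma_assumed**2          # weights built from *assumed* errors

x_true = np.array([1.0, -2.0, 0.5])
y = H @ x_true + rng.normal(scale=sigma_true, size=m)

N = H.T @ W @ H                           # normal equations
x_hat = np.linalg.solve(N, H.T @ W @ y)
resid = y - H @ x_hat

P_theory = np.linalg.inv(N)               # maps only the assumed noise to states
s2 = (resid @ W @ resid) / (m - n)        # average weighted residual variance
P_emp = s2 * P_theory                     # residual-scaled empirical covariance
```

Because the true noise is twice the assumed level, the residual-based factor s2 comes out near 4, inflating the optimistic theoretical covariance toward the actual uncertainty, which is the behavior the abstract's mismodeled case is designed to expose.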
First measurements of error fields on W7-X using flux surface mapping
Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...
2016-08-03
Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field ɩ = 1/2 magnetic configuration (ɩ = ι/2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small ~0.04 m intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.
NASA Astrophysics Data System (ADS)
Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter
2018-03-01
An ensemble-based approach to specifying the observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from the statistical properties of a generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images. The proposed observational error covariance is used in an Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has a higher RMSE than the run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but a smaller or comparable RMSE than the run assuming observational errors equal to 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between observed and model-predicted fractions of diatoms to total phytoplankton is reduced by a factor of two in comparison to the non-assimilative run.
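The Optimal Interpolation analysis step that consumes the observational error covariance R can be sketched in a two-variable toy problem, where only the surface value is observed but the forecast covariance spreads the correction to the subsurface. All numbers are illustrative, not the model's EOF-subspace covariances:

```python
import numpy as np

def oi_update(xf, Pf, y, H, R):
    """Optimal Interpolation analysis: blend forecast xf (error covariance Pf)
    with observations y (error covariance R) through the gain K."""
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    xa = xf + K @ (y - H @ xf)               # analysis state
    Pa = (np.eye(len(xf)) - K @ H) @ Pf      # analysis error covariance
    return xa, Pa

xf = np.array([1.0, 2.0])                # forecast surface/subsurface Chl
Pf = np.array([[0.5, 0.3], [0.3, 0.5]])  # forecast error covariance
H = np.array([[1.0, 0.0]])               # only the surface value is observed
R = np.array([[0.1]])                    # observational error covariance
y = np.array([1.6])                      # satellite surface observation
xa, Pa = oi_update(xf, Pf, y, H, R)
```

Inflating R pulls the analysis back toward the forecast, which is exactly why the choice between the 10% and 35% observational error assumptions discussed in the abstract changes the resulting RMSE.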
Numerical simulation of KdV equation by finite difference method
NASA Astrophysics Data System (ADS)
Yokus, A.; Bulut, H.
2018-05-01
In this study, numerical solutions to the KdV equation with dual power nonlinearity are obtained using the finite difference method. The discretized equation is presented in terms of finite difference operators. The numerical solutions are validated against the analytical solution to the KdV equation with dual power nonlinearity available in the literature. Through the Fourier-von Neumann technique, we show that the FDM is linearly stable. The accuracy of the method is analyzed via the L2 and L∞ norm errors. The numerical and exact approximations and the absolute error are presented in tables. We compare the numerical solutions with the exact solutions, and this comparison is supported with graphic plots. For suitable choices of the parameter values, 2D and 3D surfaces of the analytical solution are plotted.
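A finite difference treatment of KdV can be sketched with the classical Zabusky-Kruskal leapfrog scheme; the paper's dual power nonlinearity is replaced here by the standard quadratic case purely for illustration, and the soliton initial data provides an exact solution to check against:

```python
import numpy as np

# Zabusky-Kruskal leapfrog scheme for the classical KdV equation
#   u_t + 6*u*u_x + u_xxx = 0
dx, dt, c = 0.1, 1e-4, 1.0
x = np.arange(-10.0, 10.0, dx)
u0 = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2   # exact soliton at t = 0
mass0 = u0.sum()

def rhs(u):
    # periodic central differences (np.roll wraps the boundary)
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    up2, um2 = np.roll(u, -2), np.roll(u, 2)
    nonlinear = (up1 + u + um1) * (up1 - um1) / dx           # ~ 6*u*u_x
    dispersive = (up2 - 2*up1 + 2*um1 - um2) / (2 * dx**3)   # ~ u_xxx
    return nonlinear + dispersive

u_old = u0
u = u0 - dt * rhs(u0)              # Euler start-up step
for _ in range(4999):              # leapfrog to t = 0.5
    u, u_old = u_old - 2 * dt * rhs(u), u

mass_drift = abs(u.sum() - mass0)
```

The averaged nonlinear term is chosen so that the discrete mass (the sum of u) telescopes to an exact invariant, a discrete analogue of the conservation laws used when assessing such schemes.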
Prevalence of refractive errors in children in Equatorial Guinea.
Soler, Margarita; Anera, Rosario G; Castro, José J; Jiménez, Raimundo; Jiménez, José R
2015-01-01
The aim of this work is to evaluate the epidemiological aspects of refractive errors in school-aged children in Malabo (Island of Bioko), Equatorial Guinea (western-central Africa). A total of 425 schoolchildren (209 male subjects and 216 female subjects, aged between 6 and 16 years) were examined to evaluate their refractive errors. The examination included autorefraction with cycloplegia, measurement of visual acuity (VA) for far vision, and the curvature radii of the main meridians of the anterior surface of the cornea. A low prevalence of myopia (≤-0.50 diopters [D] spherical equivalent) was found, with unilateral and bilateral myopia being 10.4 and 5.2%, respectively. The prevalence of unilateral and bilateral hypermetropia (≥2.0 D spherical equivalent) was 3.1 and 1.6%, respectively. Astigmatism (≤-0.75 D) was found in unilateral form in 32.5% of these children, whereas bilateral astigmatism was found in 11.8%. After excluding children having any ocular pathology, the low prevalence of high refractive errors signified good VA in these children. Significant differences were found in the distribution of the refractive errors by age and type of schooling (public or private) but not by sex. In general, the curvature radii of the anterior surface of the cornea did not vary significantly with age. The mean refractive errors found were low and therefore VA was high in these children. There was a low prevalence of myopia, with significantly higher values in those who attended private schools (educationally and socioeconomically more demanding). Astigmatism was the most frequent refractive error.
Optimized method for manufacturing large aspheric surfaces
NASA Astrophysics Data System (ADS)
Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui
2007-12-01
Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view, and extend the effective range, while reducing the weight and volume of the system. With the development of optical technology, there are increasingly pressing requirements for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has drawn considerable attention from researchers. Aiming at the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors, and the full band of frequency errors are all controlled by this method. A smaller SSD depth can be obtained by using a low-hardness tool and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended model of the material removal function. For control across the full frequency band, low-frequency errors can be corrected with the optimized material removal function, while medium-high frequency errors are suppressed using the uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror reached 0.055 waves rms (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.
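As a quick check on the quoted figure, an rms error expressed in waves converts to nanometers by multiplying by the test wavelength; a one-line sketch:

```python
WAVELENGTH_NM = 632.8  # HeNe test wavelength stated in the abstract

def waves_to_nm(rms_waves, wavelength_nm=WAVELENGTH_NM):
    """Convert an rms figure error in waves to nanometers."""
    return rms_waves * wavelength_nm

# The reported 0.055 waves rms corresponds to roughly 35 nm rms.
```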
Estimation of open water evaporation using land-based meteorological data
NASA Astrophysics Data System (ADS)
Li, Fawen; Zhao, Yong
2017-10-01
Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
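A generic Dalton-type model and the average-relative-error metric used to grade the simulations can be sketched as follows. The wind-function coefficients are illustrative placeholders, not the paper's calibrated values, and the proposed relative-humidity modification is not reproduced here.

```python
import numpy as np

def dalton_evaporation(u_wind, e_s, e_a, a=0.13, b=0.094):
    """Dalton-type evaporation estimate: a wind function (a + b*u) times
    the vapor-pressure deficit (e_s - e_a).  Coefficients a and b are
    illustrative placeholders that would be calibrated to station data."""
    return (a + b * u_wind) * (e_s - e_a)

def mean_relative_error(simulated, observed):
    """Average relative error (%) between simulated and observed series,
    the statistic used to grade the model runs."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * np.mean(np.abs(simulated - observed) / observed)
```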
High-Accuracy Surface Figure Measurement of Silicon Mirrors at 80 K
NASA Technical Reports Server (NTRS)
Blake, Peter; Mink, Ronald G.; Chambers, John; Davila, Pamela; Robinson, F. David
2004-01-01
This report describes the equipment, experimental methods, and first results at a new facility for interferometric measurement of cryogenically-cooled spherical mirrors at the Goddard Space Flight Center Optics Branch. The procedure, using standard phase-shifting interferometry, has a combined standard uncertainty of 3.6 nm rms in its representation of the two-dimensional surface figure error at 80 K, and an uncertainty of plus or minus 1 nm in the rms statistic itself. The first mirror tested was a concave spherical silicon foam-core mirror with a clear aperture of 120 mm. The optic surface was measured at room temperature using standard absolute techniques, and then the change in surface figure error from room temperature to 80 K was measured. The mirror was cooled within a cryostat, and its surface figure error was measured through a fused-silica window. The facility and techniques will be used to measure the surface figure error at 20 K of prototype lightweight silicon carbide and Cesic mirrors developed by Galileo Avionica (Italy) for the European Space Agency (ESA).
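The rms surface-figure statistic reported above is computed from a measured 2-D height map; a minimal sketch (the NaN masking of pixels outside the clear aperture is an assumption for illustration, not a detail from the report):

```python
import numpy as np

def rms_figure_error(surface_map):
    """RMS surface figure error of a 2-D height map, in the same units
    as the map.  Non-finite (NaN) pixels outside the clear aperture are
    ignored, and piston (the mean) is removed before taking the rms."""
    data = surface_map[np.isfinite(surface_map)]
    data = data - data.mean()
    return np.sqrt(np.mean(data ** 2))
```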
Error analysis of 3D-PTV through unsteady interfaces
NASA Astrophysics Data System (ADS)
Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier
2018-03-01
The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned is distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). 
The stronger the disturbances on the interface are (high amplitude, short wave length), the smaller is the distance from the interface at which the measurements can be performed.
Attention to Form or Meaning? Error Treatment in the Bangalore Project.
ERIC Educational Resources Information Center
Beretta, Alan
1989-01-01
Reports on an evaluation of the Bangalore/Madras Communicational Teaching Project (CTP), a content-based approach to language learning. Analysis of 21 lesson transcripts revealed a greater incidence of error treatment of content than linguistic error, consonant with the CTP focus on meaning rather than form. (26 references) (Author/CB)
Khwaileh, Tariq; Body, Richard; Herbert, Ruth
2015-01-01
Within the domain of inflectional morpho-syntax, differential processing of regular and irregular forms has been found in healthy speakers and in aphasia. One view assumes that irregular forms are retrieved as full entities, while regular forms are compiled on-line. An alternative view holds that a single mechanism oversees regular and irregular forms. Arabic offers an opportunity to study this phenomenon, as Arabic nouns contain a consonantal root, delivering lexical meaning, and a vocalic pattern, delivering syntactic information, such as gender and number. The aim of this study is to investigate morpho-syntactic processing of regular (sound) and irregular (broken) Arabic plurals in patients with morpho-syntactic impairment. Three participants with acquired agrammatic aphasia produced plural forms in a picture-naming task. We measured overall response accuracy, then analysed lexical errors and morpho-syntactic errors separately. Error analysis revealed different patterns of morpho-syntactic errors depending on the type of pluralization (sound vs broken). Omissions formed the vast majority of errors in sound plurals, while substitution was the only error mechanism that occurred in broken plurals. The dissociation was statistically significant for retrieval of morpho-syntactic information (vocalic pattern) but not for lexical meaning (consonantal root), suggesting that the participants' selective impairment was an effect of the morpho-syntax of plurals. These results suggest that irregular plural forms are stored, while regular forms are derived. The current findings support the findings from other languages and provide a new analysis technique for data from languages with non-concatenative morpho-syntax.
NASA Technical Reports Server (NTRS)
Wilson, Matthew D.; Durand, Michael; Alsdorf, Douglas; Chul-Jung, Hahn; Andreadis, Konstantinos M.; Lee, Hyongki
2012-01-01
The Surface Water and Ocean Topography (SWOT) satellite mission, scheduled for launch in 2020 with development commencing in 2015, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first routine two-dimensional measurements of water surface elevations, which will allow for the estimation of river and floodplain flows via the water surface slope. In this paper, we characterize the measurements which may be obtained from SWOT and illustrate how they may be used to derive estimates of river discharge. In particular, we show (i) the spatio-temporal sampling scheme of SWOT, (ii) the errors which may be expected in swath altimetry measurements of terrestrial surface water, and (iii) the impacts such errors may have on estimates of water surface slope and river discharge. We illustrate this through a "virtual mission" study for an approximately 300 km reach of the central Amazon river, using a hydraulic model to provide water surface elevations according to the SWOT spatio-temporal sampling scheme (orbit with 78 degree inclination, 22 day repeat and 140 km swath width), to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. Water surface elevation measurements for the Amazon mainstem, as may be observed by SWOT, were thereby obtained. Using these measurements, estimates of river slope and discharge were derived and compared to those which may be obtained without error, and those obtained directly from the hydraulic model. It was found that discharge can be reproduced highly accurately from the water height, without knowledge of the detailed channel bathymetry, using a modified Manning's equation, if friction, depth, width and slope are known. Increasing reach length was found to be an effective method to reduce systematic height error in SWOT measurements.
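The discharge estimate rests on Manning's equation; below is a sketch of the standard (unmodified) form for a wide rectangular channel, where the hydraulic radius is approximated by the depth. The paper's specific modification is not reproduced here.

```python
import math

def manning_discharge(n, width, depth, slope):
    """Discharge (m^3/s) from the textbook Manning equation,
    Q = (1/n) * A * R^(2/3) * S^(1/2), for a wide rectangular channel
    where the hydraulic radius R is approximated by the flow depth.
    This is the standard form; the paper uses a modified variant."""
    area = width * depth                 # flow cross-section A (m^2)
    hydraulic_radius = depth             # wide-channel approximation
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

# e.g. n = 0.03, 100 m wide, 5 m deep, slope 1e-4 gives ~487 m^3/s
```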
NASA Astrophysics Data System (ADS)
Nie, Xuqing; Li, Shengyi; Song, Ci; Hu, Hao
2014-08-01
Because its curvature varies everywhere across the part, an aspheric surface is difficult to finish to high accuracy with the traditional polishing process; controlling mid-spatial-frequency errors (MSFR), in particular, is almost unapproachable. In this paper, a combined fabrication process based on smoothing polishing (SP) and magnetorheological finishing (MRF) is proposed. The pressure distributions of the rigid polishing lap and the semi-flexible polishing lap are calculated, and their shape-preserving capacity and smoothing effect are compared. The feasibility of smoothing an aspheric surface with the semi-flexible polishing lap is verified, and the key technologies in the SP process are discussed. Then a K4 parabolic surface with a diameter of 500 mm is fabricated using the combined process. A Φ150 mm semi-flexible lap is used in the SP process to control the MSFR, and the deterministic MRF process is applied to figure the surface error. The root mean square (RMS) error of the aspheric surface converges from 0.083λ (λ=632.8 nm) to 0.008λ. The power spectral density (PSD) result shows that the MSFR are well restrained while the surface error converges greatly.
XCO2 retrieval error over deserts near critical surface albedo
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Shia, Run-Lie; Sander, Stanley P.; Yung, Yuk L.
2016-02-01
Large retrieval errors in column-weighted CO2 mixing ratio (XCO2) over deserts are evident in the Orbiting Carbon Observatory 2 version 7 L2 products. We argue that these errors are caused by the surface albedo being close to a critical surface albedo (αc). Over a surface with albedo close to αc, increasing the aerosol optical depth (AOD) does not change the continuum radiance. The spectral signature caused by changing the AOD is identical to that caused by changing the absorbing gas column. The degeneracy in the retrievals of AOD and XCO2 results in a loss of degrees of freedom and information content. We employ a two-stream-exact single scattering radiative transfer model to study the physical mechanism of XCO2 retrieval error over a surface with albedo close to αc. Based on retrieval tests over surfaces with different albedos, we conclude that over a surface with albedo close to αc, the XCO2 retrieval suffers from a significant loss of accuracy. We recommend a bias correction approach that has significantly improved the XCO2 retrieval from the California Laboratory for Atmospheric Remote Sensing data in the presence of aerosol loading.
Form control in atmospheric pressure plasma processing of ground fused silica
NASA Astrophysics Data System (ADS)
Li, Duo; Wang, Bo; Xin, Qiang; Jin, Huiliang; Wang, Jun; Dong, Wenxia
2014-08-01
Atmospheric Pressure Plasma Processing (APPP) using inductively coupled plasma has demonstrated a removal rate on fused silica optical surfaces comparable to other techniques at atmospheric pressure, and its non-contact, chemical etching mechanism has the advantage of inducing no sub-surface damage. APPP is also cost-effective compared with traditional mechanical polishing, magnetorheological finishing, and ion beam figuring. Due to these advantages, the technology is being tested for fabricating large-aperture fused silica optics to help shorten the polishing time in the optics fabrication chain. Our group now proposes to use inductively coupled plasma processing to fabricate ground fused silica surfaces directly after the grinding stage. In this paper, the form control method and several processing parameters are investigated to evaluate the removal efficiency and the surface quality, including the robustness of the removal function, the velocity control mode, and the tool path strategy. However, because of the high heat flux of inductively coupled plasma, the removal depth can vary non-linearly with time, affecting the evolution of the ground surface. A heat-polishing phenomenon was found: the surface roughness is reduced greatly, which is very helpful in reducing the time of follow-up mechanical polishing. Finally, conformal and deterministic polishing experiments are analyzed and discussed. The form error is less than 3% before and after APPP when a uniform removal depth of 10 μm is achieved on a 60×60 mm ground fused silica sample. A basin feature is also fabricated to demonstrate the figuring capability and stability. Thus, APPP is a promising technology for processing large-aperture optics.
Cohesive Relations for Surface Atoms in the Iron-Technetium Binary System
Taylor, Christopher D.
2011-01-01
Iron-technetium alloys are of relevance to the development of waste forms for disposition of radioactive technetium-99 obtained from spent nuclear fuel. Corrosion of candidate waste forms is a function of the local cohesive energy of surface atoms. A theoretical model for calculating this quantity is developed. Density functional theory was used to construct a modified embedded atom (MEAM) potential for iron-technetium. Materials properties determined for the iron-technetium system were in good agreement with the literature. To explore the relationship between local structure and corrosion, MEAM simulations were performed on representative iron-technetium alloys and intermetallics. Technetium-rich phases have lower local cohesive energies, suggesting that these phases will be more noble than iron-rich ones. Quantitative estimates based on numbers of nearest neighbors alone can lead to errors up to 0.5 eV. Consequently, atomistic corrosion simulations for alloy systems should utilize physics-based models that consider not only neighbor counts, but also local compositions and atomic arrangements.
NASA Astrophysics Data System (ADS)
Maier, Matthias; Margetis, Dionisios; Luskin, Mitchell
2017-06-01
We formulate and validate a finite element approach to the propagation of a slowly decaying electromagnetic wave, called surface plasmon-polariton, excited along a conducting sheet, e.g., a single-layer graphene sheet, by an electric Hertzian dipole. By using a suitably rescaled form of time-harmonic Maxwell's equations, we derive a variational formulation that enables a direct numerical treatment of the associated class of boundary value problems by appropriate curl-conforming finite elements. The conducting sheet is modeled as an idealized hypersurface with an effective electric conductivity. The requisite weak discontinuity for the tangential magnetic field across the hypersurface can be incorporated naturally into the variational formulation. We carry out numerical simulations for an infinite sheet with constant isotropic conductivity embedded in two spatial dimensions; and validate our numerics against the closed-form exact solution obtained by the Fourier transform in the tangential coordinate. Numerical aspects of our treatment such as an absorbing perfectly matched layer, as well as local refinement and a posteriori error control are discussed.
Simulating a transmon implementation of the surface code, Part II
NASA Astrophysics Data System (ADS)
O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo
The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimum-weight perfect matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Lee, Hong-Tao
1989-01-01
A new approach to the determination of machine-tool settings for spiral bevel gears is proposed. The proposed settings provide a predesigned parabolic function of transmission errors and the desired location and orientation of the bearing contact. The predesigned parabolic function of transmission errors is able to absorb the piecewise linear functions of transmission errors caused by gear misalignment, thereby reducing gear noise. The gears are face-milled by head cutters with conical surfaces or surfaces of revolution. A computer program for simulation of meshing, bearing contact, and determination of transmission errors for misaligned gears has been developed.
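The absorption of linear misalignment errors by a predesigned parabolic transmission-error function can be demonstrated numerically: adding a linear term to a parabola yields another parabola with the same quadratic coefficient, merely shifted. A sketch with hypothetical coefficients (not from the paper):

```python
import numpy as np

def total_transmission_error(phi, a, k):
    """Predesigned parabolic transmission error -a*phi^2 plus a linear
    error k*phi caused by misalignment.  The sum is again a parabola
    with the same quadratic coefficient, only shifted, which is why the
    parabolic function 'absorbs' linear misalignment errors."""
    return -a * phi ** 2 + k * phi

phi = np.linspace(-1.0, 1.0, 2001)   # gear rotation angle (arbitrary units)
a, k = 1.0, 0.4                      # hypothetical coefficients
err = total_transmission_error(phi, a, k)
fit = np.polyfit(phi, err, 2)        # recover the quadratic coefficient
# The vertex shifts to phi = k / (2a) while the curvature -2a is unchanged.
```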
Role of color memory in successive color constancy.
Ling, Yazhu; Hurlbert, Anya
2008-06-01
We investigate color constancy for real 2D paper samples using a successive matching paradigm in which the observer memorizes a reference surface color under neutral illumination and after a temporal interval selects a matching test surface under the same or different illumination. We find significant effects of the illumination, reference surface, and their interaction on the matching error. We characterize the matching error in the absence of illumination change as the "pure color memory shift" and introduce a new index for successive color constancy that compares this shift against the matching error under changing illumination. The index also incorporates the vector direction of the matching errors in chromaticity space, unlike the traditional constancy index. With this index, we find that color constancy is nearly perfect.
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
A Method of Calculating Motion Error in a Linear Motion Bearing Stage
Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok
2015-01-01
We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
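When the residual wavefront error is minimized in the least-squares sense, the control gain matrix reduces to the negative pseudoinverse of the sensitivity matrix; a hedged numerical sketch (the patent's exact minimization may include weighting not shown here, and the matrices below are hypothetical stand-ins for a ray-traced sensitivity model):

```python
import numpy as np

def control_gain(sensitivity):
    """Least-squares control gain matrix G = -pinv(A): applying the
    commands u = G @ e drives the wavefront response A @ u to cancel
    the error vector e as closely as possible."""
    return -np.linalg.pinv(sensitivity)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))    # 20 wavefront samples, 5 actuators (hypothetical)
e = A @ rng.standard_normal(5)      # an error state reachable by the actuators
G = control_gain(A)
u = G @ e                           # actuator commands
residual = e + A @ u                # wavefront error after correction
```

Because the example error state lies in the range of the sensitivity matrix, the correction here is essentially exact; in practice the residual is only the least-squares minimum.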
Evaluate error correction ability of magnetorheological finishing by smoothing spectral function
NASA Astrophysics Data System (ADS)
Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin
2014-08-01
Power Spectral Density (PSD) is entrenched in optics design and manufacturing as a characterization of mid-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably leaves MHSF errors. SSF is employed here to study the ability of the MRF process to correct errors at different spatial frequencies. The surface figures and PSD curves of work-pieces machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for errors at different spatial frequencies is expressed as a normalized numerical value.
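The abstract does not reproduce the exact definition of SSF; one plausible PSD-based form is the ratio of post-process to pre-process PSD at each spatial frequency, sketched below for 1-D profiles. Treat this as an assumed definition, not the paper's.

```python
import numpy as np

def psd_1d(profile, dx):
    """One-sided power spectral density of a 1-D surface profile
    sampled with spacing dx (mean removed)."""
    n = len(profile)
    spectrum = np.fft.rfft(profile - np.mean(profile))
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = (np.abs(spectrum) ** 2) * dx / n
    return freqs, psd

def smoothing_spectral_function(before, after, dx):
    """Ratio of post-process to pre-process PSD at each spatial
    frequency; values below 1 indicate error reduction at that
    frequency.  (An assumed form of SSF, not the paper's exact one.)"""
    f, psd_before = psd_1d(before, dx)
    _, psd_after = psd_1d(after, dx)
    return f, psd_after / np.maximum(psd_before, 1e-30)
```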
Di Pietro, M; Schnider, A; Ptak, R
2011-10-01
Patients with peripheral dysgraphia due to impairment at the allographic level produce writing errors that affect the letter-form and are characterized by case confusions or the failure to write in a specific case or style (e.g., cursive). We studied the writing errors of a patient with pure peripheral dysgraphia who had entirely intact oral spelling, but produced many well-formed letter errors in written spelling. The comparison of uppercase print and lowercase cursive spelling revealed an uncommon pattern: while most uppercase errors were case substitutions (e.g., A - a), almost all lowercase errors were letter substitutions (e.g., n - r). Analyses of the relationship between target letters and substitution errors showed that errors were neither influenced by consonant-vowel status nor by letter frequency, though word length affected error frequency in lowercase writing. Moreover, while graphomotor similarity did not predict either the occurrence of uppercase or lowercase errors, visuospatial similarity was a significant predictor of lowercase errors. These results suggest that lowercase representations of cursive letter-forms are based on a description of entire letters (visuospatial features) and are not - as previously found for uppercase letters - specified in terms of strokes (graphomotor features). Copyright © 2010 Elsevier Srl. All rights reserved.
A Flexible Alignment Fixture for the Fabrication of Replication Mandrels
NASA Technical Reports Server (NTRS)
Cuttino, James F.; Todd, Michael W.
1996-01-01
NASA uses precision diamond turning technology to fabricate replication mandrels for its X-ray Calibration Facility (XRCF) optics. The XRCF optics are tubular, and the internal surface contains a parabolic profile over the first section and a hyperbolic profile over the last. The optic is fabricated by depositing layers of gold and nickel onto the replication mandrel and then separating it from the mandrel. Since the mandrel serves as a replication form, it must contain the inverse image of the surface. The difficulty in aligning the mandrel comes from the fabrication steps which it undergoes. The mandrel is rough machined and heat treated prior to diamond turning. After diamond turning, silicone rubber separators which are undercut in radius by 3 mm (0.12 in.) are inserted between the two end caps of the mandrel to allow the plating to wrap around the ends (to prevent flaking). The mandrel is then plated with a nickel-phosphor alloy using an electroless nickel process. At this point, the separators are removed and the mandrel is reassembled for the final cut on the DTM. The mandrel is measured for profile and finish, and polished to achieve an acceptable surface finish. Wrapping the plating around the edges helps to prevent flaking, but it also destroys the alignment surfaces between the parts of the mandrel that ensure that the axes of the parts are coincident. Several mandrels have been realigned by trial-and-error methods, consuming significant amounts of setup time. When the mandrel studied in this paper was reassembled, multiple efforts resulted in a minimum radial error motion of 100 microns. Since 50 microns of nickel plating was to be removed, and a minimum plating thickness of 25 microns was to remain on the part, the radial error motion had to be reduced to less than 25 microns. The mandrel was therefore not usable in its current state.
NASA Astrophysics Data System (ADS)
Chen, Mingjun; Li, Ziang; Yu, Bo; Peng, Hui; Fang, Zhen
2013-09-01
In the grinding of high-quality fused silica parts with complex surfaces or structures using small-diameter ball-headed metal-bonded diamond wheels, existing dressing methods are not suitable for dressing the ball-headed wheel precisely: they are either on-line, in-process methods that risk collisions, or they neglect the effects of tool setting error and electrode wear. An on-machine precision preparation and dressing method based on electrical discharge machining is proposed for the ball-headed diamond wheel. With this method, a small-diameter cylindrical diamond wheel is machined into hemispherical-headed form. The resulting ball-headed wheel is dressed after several grinding passes to recover the geometrical accuracy and sharpness lost to wheel wear. A tool setting method based on a high-precision optical system is presented to reduce the wheel-center setting error and dimension error. The effect of electrode tool wear is investigated through electrical dressing experiments, and an electrode wear compensation model is established from the results, which show that the wear ratio coefficient K' tends toward a constant value with increasing electrode feed length, with a mean value of 0.156. Grinding experiments on fused silica are carried out on a test bench to evaluate the performance of the preparation and dressing method. The finished workpiece has a surface roughness of 0.03 μm. The effects of the grinding parameters and dressing frequency on surface roughness are investigated from the roughness measurements. This research provides an on-machine preparation and dressing method for ball-headed metal-bonded diamond wheels used in grinding fused silica, addressing both the tool setting method and the effect of electrode tool wear.
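A simple linear wear-compensation scheme consistent with a constant wear ratio coefficient K' = 0.156 can be sketched as follows; the paper's actual compensation model may differ in form, so this is only an illustrative assumption.

```python
K_PRIME = 0.156  # mean wear ratio coefficient reported in the abstract

def electrode_wear(feed_length, k=K_PRIME):
    """Electrode length consumed over a given feed length, assuming a
    constant wear ratio (worn length per unit feed)."""
    return k * feed_length

def compensated_feed(target_depth, k=K_PRIME):
    """Commanded electrode feed needed to achieve a target effective
    depth when a fraction k of the feed is lost to electrode wear:
    effective depth = feed * (1 - k)."""
    return target_depth / (1.0 - k)
```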
Gas turbine engine control system
NASA Technical Reports Server (NTRS)
Idelchik, Michael S. (Inventor)
1991-01-01
A control system and method of controlling a gas turbine engine. The control system receives an error signal and processes the error signal to form a primary fuel control signal. The control system also receives at least one anticipatory demand signal and processes the signal to form an anticipatory fuel control signal. The control system adjusts the value of the anticipatory fuel control signal based on the value of the error signal to form an adjusted anticipatory signal and then the adjusted anticipatory fuel control signal and the primary fuel control signal are combined to form a fuel command signal.
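The described combination of signals can be sketched abstractly; the gains and the error-dependent adjustment law below are illustrative assumptions, not taken from the patent.

```python
def fuel_command(error, anticipatory, gain_primary=1.0, gain_antic=1.0):
    """Sketch of the control law described above: the anticipatory
    demand signal is scaled by a factor that depends on the current
    error value, then summed with the primary (error-driven) fuel
    control signal.  The specific attenuation law is an assumption."""
    primary = gain_primary * error
    adjustment = 1.0 / (1.0 + abs(error))   # attenuate anticipation when error is large
    adjusted_antic = gain_antic * anticipatory * adjustment
    return primary + adjusted_antic
```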
Reducing errors benefits the field-based learning of a fundamental movement skill in children.
Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W
2013-03-01
Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. ER program reduced errors by incrementally raising the task difficulty, while the ES program had an incremental lowering of task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. Reduced cognitive processing costs (effective dual-task performance) associated with such approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine the in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
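The two estimators named above are standard linear estimators and can be sketched for a linearized forward model v = A r + n. The matrix dimensions, covariance values, and "true" resistivities below are stand-in assumptions, not the paper's thorax model.

```python
import numpy as np

# Assumed linearized forward model: 4 regional resistivities -> 16 surface
# potential measurements (stand-in sensitivity matrix and values).
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 4))
r_true = np.array([3.0, 2.5, 10.0, 10.5])   # muscle, heart, lungs (arbitrary units)
v = A @ r_true + 0.05 * rng.normal(size=16)  # measurements with instrumentation noise

# Least Square Error Estimator (LSEE): unconstrained least-squares solution.
r_lsee, *_ = np.linalg.lstsq(A, v, rcond=None)

# Statistically constrained MiMSEE: uses an a priori signal covariance R_r
# (physiological resistivity range) and a noise covariance R_n.
R_r = np.diag([1.0, 1.0, 4.0, 4.0])   # assumed a priori variances
R_n = 0.05**2 * np.eye(16)
r_mimsee = R_r @ A.T @ np.linalg.solve(A @ R_r @ A.T + R_n, v)
```

The MiMSEE form is the classic linear minimum-mean-square-error estimator; the statistical constraint keeps the solution within the assumed physiological range when the data are noisy.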
Techniques for Down-Sampling a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors but also while eliminating the existing measurement noise and measurement errors. The two new techniques implemented in this software tool can be used in all optical model validation processes involving large space optical surfaces.
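The abstract does not specify the two techniques, but the basic idea of down-sampling without re-sampling error can be illustrated: integer decimation takes existing samples verbatim (no interpolation, hence no re-sampling error), while block averaging additionally suppresses uncorrelated measurement noise. Both functions below are generic sketches, not the NTRS tool.

```python
import numpy as np

def downsample_exact(height_map, factor):
    """Take every `factor`-th sample of a surface height map; no values are
    interpolated, so no re-sampling error is introduced."""
    return height_map[::factor, ::factor]

def downsample_average(height_map, factor):
    """Average factor x factor blocks; reduces uncorrelated measurement
    noise by roughly 1/factor in standard deviation."""
    h, w = height_map.shape
    h, w = h - h % factor, w - w % factor          # trim to a whole number of blocks
    blocks = height_map[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```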
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Rahman, P.; Goldrich, R. N.
1982-01-01
The geometry of spiral bevel gears and their rational design are studied. The nonconjugate tooth surfaces of spiral bevel gears are, in theory, replaced (or approximated) by conjugated tooth surfaces. These surfaces can be generated by two conical surfaces, or by a conical surface and a surface of revolution. Although these conjugated tooth surfaces are simpler than the actual ones, the determination of their principal curvatures and directions is still a complicated problem. Therefore, a new approach to their solution is proposed. Direct relationships between the principal curvatures and directions of the tool surface and those of the generated gear surface are obtained. With the aid of these analytical tools, the Hertzian contact problem for conjugate tooth surfaces can be solved. These results are useful in determining the compressive load capacity and surface fatigue life of spiral bevel gears. A general theory of kinematical errors induced by manufacturing and assembly errors is developed. This theory is used to determine the analytical relationship between gear misalignments and kinematical errors. This is important to the study of noise and vibration in geared systems.
Mean Bias in Seasonal Forecast Model and ENSO Prediction Error.
Kim, Seon Tae; Jeong, Hye-In; Jin, Fei-Fei
2017-07-20
This study uses retrospective forecasts made using an APEC Climate Center seasonal forecast model to investigate the cause of errors in predicting the amplitude of El Niño Southern Oscillation (ENSO)-driven sea surface temperature variability. When utilizing Bjerknes coupled stability (BJ) index analysis, enhanced errors in ENSO amplitude with forecast lead times are found to be well represented by those in the growth rate estimated by the BJ index. ENSO amplitude forecast errors are most strongly associated with the errors in both the thermocline slope response and surface wind response to forcing over the tropical Pacific, leading to errors in thermocline feedback. This study concludes that upper ocean temperature bias in the equatorial Pacific, which becomes more intense with increasing lead times, is a possible cause of forecast errors in the thermocline feedback and thus in ENSO amplitude.
Influence of survey strategy and interpolation model on DEM quality
NASA Astrophysics Data System (ADS)
Heritage, George L.; Milan, David J.; Large, Andrew R. G.; Fuller, Ian C.
2009-11-01
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. DEMs were then produced using five common interpolation algorithms. Each resultant DEM was differenced from a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a specific survey strategy.
Strong relationships between local surface topographic variation (as defined by the standard deviation of vertical elevations in a 0.2-m diameter moving window), and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves are presented that demonstrate these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
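The DEM-error workflow above (interpolate sparse survey points onto a grid, difference against a reference surface, summarise vertical error) can be illustrated with a small stand-in experiment. Here nearest-neighbour and inverse-distance weighting substitute for the paper's TIN and kriging interpolators, and an analytic surface stands in for the TLS reference; all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference surface on a grid (stand-in for the TLS scan of the bar).
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_ref = np.sin(3 * grid[:, 0]) * np.cos(2 * grid[:, 1])

# Sparse survey points (stand-in for one sampling strategy).
pts = rng.uniform(0, 1, size=(300, 2))
z_obs = np.sin(3 * pts[:, 0]) * np.cos(2 * pts[:, 1])

d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)  # grid-to-point distances

# Nearest-neighbour DEM.
z_nn = z_obs[np.argmin(d, axis=1)]

# Inverse-distance-weighted DEM (power 2).
w = 1.0 / np.maximum(d, 1e-9) ** 2
z_idw = (w @ z_obs) / w.sum(axis=1)

# Mean absolute vertical error of each DEM against the reference surface.
mae_nn = np.mean(np.abs(z_nn - z_ref))
mae_idw = np.mean(np.abs(z_idw - z_ref))
```

Swapping in different interpolators and sampling patterns reproduces the kind of strategy-versus-interpolator comparison reported in the study.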
Semiclassical Dynamics with Exponentially Small Error Estimates
NASA Astrophysics Data System (ADS)
Hagedorn, George A.; Joye, Alain
We construct approximate solutions to the time-dependent Schrödinger equation
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
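The Thin Plate Spline step (learn the displacement field between the SSM estimate and the known patient vertices, then apply it to the estimated region) can be sketched with SciPy's thin-plate-spline RBF interpolator. The 2D toy geometry and the linear "residual displacement" below are assumptions for illustration; the paper works on 3D surface meshes.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Known patient vertices (2D toy stand-in) and the residual displacement
# between the SSM estimate and the true surface at those vertices.
known_xy = np.random.default_rng(2).uniform(-1, 1, size=(40, 2))
true_disp = 0.1 * known_xy[:, :1] + 0.05   # assumed smooth residual field

# Train a thin plate spline on the known displacements.
tps = RBFInterpolator(known_xy, true_disp, kernel="thin_plate_spline")

# Apply the learned displacement to vertices in the estimated (unknown)
# region so the extrapolated surface meets the known surface smoothly.
query = np.array([[0.0, 0.0], [0.5, -0.5]])
corrected = tps(query)
```

Because the TPS interpolates the training displacements exactly, the known vertices are left untouched, which is precisely the advantage over feathering reported above.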
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2015-04-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. 
We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-sectional averaging and the use of shorter reach lengths) and higher water-surface slopes (reducing the proportional impact of slope errors on discharge calculation).
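The reach-averaging and discharge step described above can be sketched as follows. The Manning formulation for a rectangular channel, the reach length, and all channel parameters are illustrative assumptions, not the virtual-mission configuration (which assumed known bathymetry and friction).

```python
import numpy as np

def swot_discharge(x_m, h_obs_m, width_m, depth_m, n_manning=0.03, reach_len_m=20000.0):
    """Average (noisy) water-surface elevations over reaches, estimate the
    water-surface slope between reach midpoints, and apply Manning's
    equation for a rectangular channel. All parameters are assumed."""
    edges = np.arange(x_m.min(), x_m.max() + reach_len_m, reach_len_m)
    idx = np.digitize(x_m, edges) - 1
    h_reach = np.array([h_obs_m[idx == i].mean() for i in range(idx.max() + 1)])
    x_reach = np.array([x_m[idx == i].mean() for i in range(idx.max() + 1)])
    slope = -np.gradient(h_reach, x_reach)        # downstream water-surface slope
    area = width_m * depth_m                      # cross-sectional area
    radius = area / (width_m + 2 * depth_m)       # hydraulic radius
    return (1.0 / n_manning) * area * radius ** (2.0 / 3.0) * np.sqrt(np.clip(slope, 0.0, None))
```

Averaging over longer reaches suppresses the correlated height errors at the cost of along-stream resolution, which is the trade-off quantified in the study.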
Constraining storm-scale forecasts of deep convective initiation with surface weather observations
NASA Astrophysics Data System (ADS)
Madaus, Luke
Successfully forecasting when and where individual convective storms will form remains an elusive goal for short-term numerical weather prediction. In this dissertation, the convective initiation (CI) challenge is considered as a problem of insufficiently resolved initial conditions, and dense surface weather observations are explored as a possible solution. To better quantify convective-scale surface variability in numerical simulations of discrete convective initiation, idealized ensemble simulations of a variety of environments where CI occurs in response to boundary-layer processes are examined. Coherent features 1-2 hours prior to CI are found in all surface fields examined. While some features were broadly expected, such as positive temperature anomalies and convergent winds, negative temperature anomalies due to cloud shadowing are the largest surface anomaly seen prior to CI. Based on these simulations, several hypotheses about the required characteristics of a surface observing network to constrain CI forecasts are developed. Principally, these suggest that observation spacings of less than 4-5 km would be required, based on correlation length scales. Furthermore, it is anticipated that 2-m temperature and 10-m wind observations would likely be more relevant for effectively constraining variability than surface pressure or 2-m moisture observations, based on the magnitudes of observed anomalies relative to observation error. These hypotheses are tested with a series of observing system simulation experiments (OSSEs) using a single CI-capable environment. The OSSE results largely confirm the hypotheses, and with 4-km and particularly 1-km surface observation spacing, skillful forecasts of CI are possible, but only within two hours of CI time. Several facets of convective-scale assimilation, including the need for properly calibrated localization and problems from non-Gaussian ensemble estimates of the cloud field, are discussed.
Finally, the characteristics of one candidate dense surface observing network are examined: smartphone pressure observations. Available smartphone pressure observations (and 1-hr pressure tendency observations) are tested by assimilating them into convective-allowing ensemble forecasts for a three-day active convective period in the eastern United States. Although smartphone observations contain noise and internal disagreement, they are effective at reducing short-term forecast errors in surface pressure, wind and precipitation. The results suggest that smartphone pressure observations could become a viable mesoscale observation platform, but more work is needed to enhance their density and reduce error. This work concludes by reviewing and suggesting other novel candidate observation platforms with a potential to improve convective-scale forecasts of CI.
Plant traits determine forest flammability
NASA Astrophysics Data System (ADS)
Zylstra, Philip; Bradstock, Ross
2016-04-01
Carbon and nutrient cycles in forest ecosystems are influenced by their inherent flammability - a property determined by the traits of the component plant species that form the fuel and influence the microclimate of a fire. In the absence of a model capable of explaining the complexity of such a system, however, flammability is frequently represented by simple metrics such as surface fuel load. The implications of modelling fire-flammability feedbacks using surface fuel load were examined and compared to a biophysical, mechanistic model (the Forest Flammability Model, FFM) that incorporates the influence of structural plant traits (e.g. crown shape and spacing) and leaf traits (e.g. thickness, dimensions and moisture). Fuels burn with values of combustibility modelled from leaf traits, transferring convective heat along vectors defined by flame angle and with plume temperatures that decrease with distance from the flame. Flames are re-calculated in one-second time-steps, with new leaves within the plant, neighbouring plants or higher strata ignited when the modelled time to ignition is reached, and other leaves extinguishing when their modelled flame duration is exceeded. The relative influence of surface fuels, vegetation structure and plant leaf traits was examined by comparing flame heights modelled using three treatments that successively added these components within the FFM. Validation was performed across a diverse range of eucalypt forests burnt under widely varying conditions during a forest fire in the Brindabella Ranges west of Canberra (ACT) in 2003. Flame heights ranged from 10 cm to more than 20 m, with an average of 4 m. When modelled from surface fuels alone, flame heights were on average 1.5 m smaller than observed values, and were predicted within the error range 28% of the time. The addition of plant structure produced predicted flame heights that were on average 1.5 m larger than observed, but were correct 53% of the time. 
The over-prediction in this case was the result of a small number of large errors, where higher strata such as the forest canopy were modelled to ignite but did not. The addition of leaf traits largely addressed this error, so that the mean flame height over-prediction was reduced to 0.3 m and the fully parameterised FFM gave correct predictions 62% of the time. When small (<1 m) flames were excluded, the fully parameterised model correctly predicted flame heights 12 times more often than could be predicted using surface fuels alone, and the Mean Absolute Error was 4 times smaller. The inadequate consideration of plant traits within a mechanistic framework introduces significant error to forest fire behaviour modelling. The FFM provides a solution to this, and an avenue by which plant trait information can be used to better inform Global Vegetation Models and decision-making tools used to mitigate the impacts of fire.
HD 140283: A Star in the Solar Neighborhood that Formed Shortly after the Big Bang
NASA Astrophysics Data System (ADS)
Bond, Howard E.; Nelan, Edmund P.; VandenBerg, Don A.; Schaefer, Gail H.; Harmer, Dianne
2013-03-01
HD 140283 is an extremely metal-deficient and high-velocity subgiant in the solar neighborhood, having a location in the Hertzsprung-Russell diagram where absolute magnitude is most sensitive to stellar age. Because it is bright, nearby, unreddened, and has a well-determined chemical composition, this star avoids most of the issues involved in age determinations for globular clusters. Using the Fine Guidance Sensors on the Hubble Space Telescope, we have measured a trigonometric parallax of 17.15 ± 0.14 mas for HD 140283, with an error one-fifth of that determined by the Hipparcos mission. Employing modern theoretical isochrones, which include effects of helium diffusion, revised nuclear reaction rates, and enhanced oxygen abundance, we use the precise distance to infer an age of 14.46 ± 0.31 Gyr. The quoted error includes only the uncertainty in the parallax, and is for adopted surface oxygen and iron abundances of [O/H] = -1.67 and [Fe/H] = -2.40. Uncertainties in the stellar parameters and chemical composition, especially the oxygen content, now contribute more to the error budget for the age of HD 140283 than does its distance, increasing the total uncertainty to about ±0.8 Gyr. Within the errors, the age of HD 140283 does not conflict with the age of the Universe, 13.77 ± 0.06 Gyr, based on the microwave background and Hubble constant, but it must have formed soon after the big bang. Based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained by the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
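The distance implied by the quoted parallax, and the way the parallax uncertainty propagates into the distance modulus (and hence the luminosity-based age), follow from standard first-order error propagation. The numbers below come from the abstract; the propagation formulas are textbook astronomy, not taken from the paper.

```python
import numpy as np

# HST Fine Guidance Sensor parallax of HD 140283 (from the abstract).
parallax_mas = 17.15
sigma_mas = 0.14

# Distance in parsecs and its first-order propagated uncertainty.
d_pc = 1000.0 / parallax_mas
sigma_d = d_pc * sigma_mas / parallax_mas

# Distance modulus mu = 5 log10(d) - 5 and its uncertainty.
mu = 5.0 * np.log10(d_pc) - 5.0
sigma_mu = (5.0 / np.log(10.0)) * sigma_d / d_pc
```

The ~0.8% parallax precision translates to a distance-modulus uncertainty of under 0.02 mag, which is why the distance is no longer the dominant term in the age error budget.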
Simulating the performance of a distance-3 surface code in a linear ion trap
NASA Astrophysics Data System (ADS)
Trout, Colin J.; Li, Muyuan; Gutiérrez, Mauricio; Wu, Yukai; Wang, Sheng-Tao; Duan, Luming; Brown, Kenneth R.
2018-04-01
We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations showed a consistent requirement of a two-qubit gate fidelity of ≥99.9% for the logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to bound the logical error rates to gain insight into which error sources are particularly detrimental to error correction.
Inversion of surface parameters using fast learning neural networks
NASA Technical Reports Server (NTRS)
Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.
1992-01-01
A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.
NASA Technical Reports Server (NTRS)
Taconet, O.; Carlson, T.; Bernard, R.; Vidal-Madjar, D.
1986-01-01
Ground measurements of surface-sensible heat flux and soil moisture for a wheat-growing area of Beauce in France were compared with the values derived by inverting two boundary layer models with a surface/vegetation formulation using surface temperature measurements made from NOAA-AVHRR. The results indicated that the trends in the surface heat fluxes and soil moisture observed during the 5 days of the field experiment were effectively captured by the inversion method using the remotely measured radiative temperatures and either of the two boundary layer methods, both of which contain nearly identical vegetation parameterizations described by Taconet et al. (1986). The sensitivity of the results to errors in the initial sounding values or measured surface temperature was tested by varying the initial sounding temperature, dewpoint, and wind speed and the measured surface temperature by amounts corresponding to typical measurement error. In general, the vegetation component was more sensitive to error than the bare soil model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. 
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)
NASA Astrophysics Data System (ADS)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng
2018-06-01
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. 
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
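A generic mass-conserving fixer of the kind mentioned in both abstracts (clip negative water concentrations, then rescale so the mass-weighted total is unchanged) can be sketched as below. This is the common "global fixer" pattern, not the EAM source code, and it assumes the column total is positive.

```python
import numpy as np

def clip_and_conserve(q, mass_weights):
    """Clip negative water concentrations to zero, then rescale the
    remaining positive values so the mass-weighted total is unchanged.
    Assumes the pre-clipping total is positive (illustrative sketch)."""
    total_before = np.sum(q * mass_weights)
    q_clipped = np.clip(q, 0.0, None)
    total_after = np.sum(q_clipped * mass_weights)
    if total_after <= 0.0:
        return q_clipped            # nothing left to rescale
    return q_clipped * (total_before / total_after)
```

Clipping alone would spuriously create mass; the rescaling step restores conservation, which is why such fixers are described above as remedies rather than root-cause solutions.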
Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric
2009-01-01
The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.
A Noninvasive Body Setup Method for Radiotherapy by Using a Multimodal Image Fusion Technique
Zhang, Jie; Chen, Yunxia; Wang, Chenchen; Chu, Kaiyue; Jin, Jianhua; Huang, Xiaolin; Guan, Yue; Li, Weifeng
2017-01-01
Purpose: To minimize the mismatch error between the patient surface and immobilization system for tumor location by a noninvasive patient setup method. Materials and Methods: The method, based on point set registration, proposes a shift for patient positioning by integrating information from computed tomography scans and optical surface landmarks. The evaluation of the method included 3 areas: (1) a validation on a phantom by estimating 100 known mismatch errors between the patient surface and immobilization system; (2) five patients with pelvic tumors were considered, and the tumor location errors of the method were measured as the difference between the proposed shift of cone-beam computed tomography and that of our method; (3) the setup data collected during the patient evaluation were compared with the published performance data of 2 other similar systems. Results: The phantom verification results showed that the method was capable of estimating the mismatch error between the patient surface and immobilization system with a precision of <0.22 mm. For the pelvic tumors, the method had an average tumor location error of 1.303, 2.602, and 1.684 mm in the left–right, anterior–posterior, and superior–inferior directions, respectively. The performance comparison with the 2 other similar systems suggested that the method had better positioning accuracy for pelvic tumor location. Conclusion: By effectively decreasing an interfraction uncertainty source (mismatch error between the patient surface and immobilization system) in radiotherapy, the method can improve patient positioning precision for pelvic tumors. PMID:29333959
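The abstract does not detail its point set registration step, but the standard least-squares rigid registration between corresponding point sets (the Kabsch/SVD algorithm) is a reasonable stand-in sketch of how a positioning shift is derived from CT and surface landmarks:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (Kabsch/SVD) between corresponding
    point sets: returns rotation R and translation t with dst ~ R @ src + t.
    A generic stand-in, not the paper's specific algorithm."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det R = +1).
    D = np.diag([1.0] * (H.shape[0] - 1) + [np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

The translation t is exactly the kind of setup shift a positioning system would propose once the rotation is accounted for.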
Satellite-based Calibration of Heat Flux at the Ocean Surface
NASA Astrophysics Data System (ADS)
Barron, C. N.; Dastugue, J. M.; May, J. C.; Rowley, C. D.; Smith, S. R.; Spence, P. L.; Gremes-Cordero, S.
2016-02-01
Model forecasts of upper ocean heat content and variability on diurnal to daily scales are highly dependent on estimates of heat flux through the air-sea interface. Satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. Traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle. Subsequent evolution depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. The COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates) endeavors to correct ocean forecast bias through a responsive error partition among surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using Navy operational global or regional atmospheric forcing. COFFEE addresses satellite-calibration of surface fluxes to estimate surface error covariances and links these to the ocean interior. Experiment cases combine different levels of flux calibration with different assimilation alternatives. The cases may use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. 
These approaches within a 3DVAR application are anticipated to be useful for global and larger regional domains where a full 4DVAR methodology may be cost-prohibitive.
Density-matrix simulation of small surface codes under current and projected experimental noise
NASA Astrophysics Data System (ADS)
O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.
2017-09-01
We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.
A method of treating the non-grey error in total emittance measurements
NASA Technical Reports Server (NTRS)
Heaney, J. B.; Henninger, J. H.
1971-01-01
In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
Modelling and analysis of flux surface mapping experiments on W7-X
NASA Astrophysics Data System (ADS)
Lazerson, Samuel; Otte, Matthias; Bozhenkov, Sergey; Sunn Pedersen, Thomas; Bräuer, Torsten; Gates, David; Neilson, Hutch; W7-X Team
2015-11-01
The measurement and compensation of error fields in W7-X will be key to the device achieving high beta steady state operations. Flux surface mapping utilizes the vacuum magnetic flux surfaces, a feature unique to stellarators and heliotrons, to allow direct measurement of magnetic topology, and thereby allows a highly accurate determination of remnant magnetic field errors. As will be reported separately at this meeting, the first measurements confirming the existence of nested flux surfaces in W7-X have been made. In this presentation, a synthetic diagnostic for the flux surface mapping diagnostic is presented. It utilizes Poincaré traces to construct an image of the flux surface consistent with the measured camera geometry, fluorescent rod sweep plane, and emitter beam position. Forward modeling of the high-iota configuration will be presented demonstrating an ability to measure the intrinsic error field using the U.S. supplied trim coil system on W7-X, and a first experimental assessment of error fields in W7-X will be presented. This work has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the US Department of Energy.
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Smith, W. T.
1990-01-01
Surface errors on parabolic reflector antennas degrade the overall performance of the antenna. Space antenna structures are difficult to build, deploy, and control: they must maintain a nearly perfect parabolic shape in a harsh environment while remaining lightweight. Electromagnetic compensation for surface errors in large space reflector antennas can supplement mechanical compensation and has been the topic of several research studies. Most of these studies try to correct the focal plane fields of the reflector near the focal point and, hence, compensate for the distortions over the whole radiation pattern. An alternative approach to electromagnetic compensation is presented. The proposed technique uses pattern synthesis to compensate for the surface errors, with a localized algorithm in which pattern corrections are directed specifically towards portions of the pattern requiring improvement. The pattern synthesis technique does not require knowledge of the reflector surface; it uses radiation pattern data to perform the compensation.
The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions
Karachun, Volodimir; Mel’nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana
2016-01-01
The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, widely used in the aerospace industry. The main problems are extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation, and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions, generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields, is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the “false” angular velocity resulting from the elastic-stress state of the suspension in the acoustic fields are determined.
NASA Astrophysics Data System (ADS)
Lin, Wei-Cheng; Chang, Shenq-Tsong; Yu, Zong-Ru; Lin, Yu-Chuan; Ho, Cheng-Fong; Huang, Ting-Ming; Chen, Cheng-Huan
2014-09-01
A Cassegrain telescope with a 450 mm clear aperture was developed for use in a spaceborne optical remote-sensing instrument. Self-weight deformation and thermal distortion were considered: to this end, Zerodur was used to manufacture the primary mirror. The lightweight scheme adopted a hexagonal cell structure yielding a lightweight ratio of 50%. In general, optical testing on a lightweight mirror is a critical technique during both the manufacturing and assembly processes. To prevent unexpected measurement errors that cause erroneous judgment, this paper proposes a novel and reliable analytical method for optical testing, called the bench test. The proposed algorithm was used to distinguish the manufacturing form error from surface deformation caused by mounting, support, and gravity effects during optical testing. The performance of the proposed bench test was compared with a conventional vertical setup for optical testing during the manufacturing process of the lightweight mirror.
Large Angle Reorientation of a Solar Sail Using Gimballed Mass Control
NASA Astrophysics Data System (ADS)
Sperber, E.; Fu, B.; Eke, F. O.
2016-06-01
This paper proposes a control strategy for the large angle reorientation of a solar sail equipped with a gimballed mass. The algorithm consists of a first stage that manipulates the gimbal angle in order to minimize the attitude error about a single principal axis. Once certain termination conditions are reached, a regulator is employed that selects a single gimbal angle to minimize the residual attitude error concomitantly with the body rate. Because the force due to the specular reflection of radiation is always directed along a reflector's surface normal, this form of thrust vector control cannot generate torques about an axis normal to the plane of the sail. Thus, in order to achieve three-axis control authority a 1-2-1 or 2-1-2 sequence of rotations about principal axes is performed. The control algorithm is implemented directly in-line with the nonlinear equations of motion and key performance characteristics are identified.
Large mirror surface control by corrective coating
NASA Astrophysics Data System (ADS)
Bonnand, Romain; Degallaix, Jerome; Flaminio, Raffaele; Giacobone, Laurent; Lagrange, Bernard; Marion, Fréderique; Michel, Christophe; Mours, Benoit; Mugnier, Pierre; Pacaud, Emmanuel; Pinard, Laurent
2013-08-01
The Advanced Virgo gravitational wave detector aims at a sensitivity ten times better than the initial LIGO and Virgo detectors. This implies very stringent requirements on the optical losses in the interferometer arm cavities. In this paper we focus on the mirrors that form the interferometer arm cavities, whose surface figure error must be well below one nanometre over a diameter of 150 mm. This ‘sub-nanometric flatness’ is not achievable by classical polishing on such a large diameter. Therefore we present the corrective coating technique which has been developed to reach this requirement. Its principle is to add a non-uniform thin film on top of the substrate in order to flatten its surface. In this paper we introduce the Advanced Virgo requirements and present the basic principle of the corrective coating technique. Then we show the results obtained experimentally on an initial Virgo substrate. Finally we provide an evaluation of the round-trip losses in the Fabry-Perot arm cavities once the corrected surface is used.
Equilibrium configurations of the conducting liquid surface in a nonuniform electric field
NASA Astrophysics Data System (ADS)
Zubarev, N. M.; Zubareva, O. V.
2011-01-01
Possible equilibrium configurations of the free surface of a conducting liquid deformed by a nonuniform external electric field are investigated. The liquid rests on an electrode that has the shape of a dihedral angle formed by two intersecting equipotential half-planes (conducting wedge). It is assumed that the problem has plane symmetry: the surface is invariant under shift along the edge of the dihedral angle. A one-parametric family of exact solutions for the shape of the surface is found in which the opening angle of the region above the wedge serves as a parameter. The solutions are valid when the pressure difference between the inside and outside of the liquid is zero. For an arbitrary pressure difference, approximate solutions to the problem are constructed and it is demonstrated that the approximation error is small. It is found that, when the potential difference exceeds a certain threshold value, equilibrium solutions are absent. In this case, the region occupied by the liquid disintegrates, the disintegration scenario depending on the opening angle.
NASA Astrophysics Data System (ADS)
Kamath, Aditya; Vargas-Hernández, Rodrigo A.; Krems, Roman V.; Carrington, Tucker; Manzhos, Sergei
2018-06-01
For molecules with more than three atoms, it is difficult to fit or interpolate a potential energy surface (PES) from a small number of (usually ab initio) energies at points. Many methods have been proposed in recent decades, each claiming a set of advantages. Unfortunately, there are few comparative studies. In this paper, we compare neural networks (NNs) with Gaussian process (GP) regression. We re-fit an accurate PES of formaldehyde and compare PES errors on the entire point set used to solve the vibrational Schrödinger equation, i.e., the only error that matters in quantum dynamics calculations. We also compare the vibrational spectra computed on the underlying reference PES and the NN and GP potential surfaces. The NN and GP surfaces are constructed with exactly the same points, and the corresponding spectra are computed with the same points and the same basis. The GP fitting error is lower, and the GP spectrum is more accurate. The best NN fits to 625/1250/2500 symmetry unique potential energy points have global PES root mean square errors (RMSEs) of 6.53/2.54/0.86 cm-1, whereas the best GP surfaces have RMSE values of 3.87/1.13/0.62 cm-1, respectively. When fitting 625 symmetry unique points, the error in the first 100 vibrational levels is only 0.06 cm-1 with the best GP fit, whereas the spectrum on the best NN PES has an error of 0.22 cm-1, with respect to the spectrum computed on the reference PES. This error is reduced to about 0.01 cm-1 when fitting 2500 points with either the NN or GP. We also find that the GP surface produces a relatively accurate spectrum when obtained based on as few as 313 points.
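The NN-vs-GP comparison above can be conveyed in miniature. The sketch below is only an illustration, not the paper's formaldehyde PES: it fits a toy one-dimensional Morse potential by Gaussian process (kernel) interpolation from a sparse set of sampled energies and reports the global RMSE on a dense grid. The potential parameters, kernel length scale, and point counts are arbitrary assumptions.

```python
import numpy as np

def morse(r, d_e=5.0, a=1.2, r_e=1.5):
    """Toy Morse potential standing in for ab initio energies (arbitrary units)."""
    return d_e * (1.0 - np.exp(-a * (r - r_e))) ** 2

def rbf(x1, x2, ell=0.4):
    """Squared-exponential (RBF) kernel matrix between two 1-D point sets."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
r_train = np.sort(rng.uniform(1.0, 4.0, 40))   # sparse "ab initio" geometries
v_train = morse(r_train)

# GP regression mean: m(x*) = K(x*, X) [K(X, X) + jitter*I]^{-1} y
jitter = 1e-6                                   # regularizes the Gram matrix
alpha = np.linalg.solve(rbf(r_train, r_train) + jitter * np.eye(len(r_train)),
                        v_train)

r_test = np.linspace(1.0, 4.0, 500)
v_pred = rbf(r_test, r_train) @ alpha
rmse = np.sqrt(np.mean((v_pred - morse(r_test)) ** 2))
print(f"GP interpolation RMSE on dense grid: {rmse:.2e}")
```

As in the paper, the fit quality is judged by the RMSE over the dense evaluation set rather than at the training points alone.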
Inferring river properties with SWOT like data
NASA Astrophysics Data System (ADS)
Garambois, Pierre-André; Monnier, Jérôme; Roux, Hélène
2014-05-01
Inverse problems in hydraulics, such as the estimation of river discharge, remain open questions. Remotely sensed measurements of hydrosystems can provide valuable information, but adequate methods are still required to exploit it. The future Surface Water and Ocean Topography (SWOT) mission would provide new cartographic measurements of inland water surfaces. The highlight of SWOT will be its almost global coverage and temporal revisits on the order of 1 to 4 times per 22-day repeat cycle [1]. Many studies have shown the possibility of retrieving discharge given the river bathymetry or roughness and/or in situ time series. The new challenge is to use SWOT-type data to invert the triplet formed by the roughness, the bathymetry, and the discharge. The method presented here is composed of two steps: following an inverse formulation from [2], the first step consists in retrieving an equivalent bathymetry profile of a river given one in situ depth measurement and SWOT-like data of the water surface, that is to say water elevation, free-surface slope, and width. From this equivalent bathymetry, the second step consists in solving the mass and Manning equations in the least-squares sense [3]. Nevertheless, for cases where no in situ measurement of water depth is available, it is still possible to solve a system formed by the mass and Manning equations in the least-squares sense (or with other methods, such as Bayesian ones, see e.g. [4]). We show that a good a priori knowledge of bathymetry and roughness is compulsory for such methods. Depending on this a priori knowledge, the inversion of the triplet (roughness, bathymetry, discharge) in the SWOT context was evaluated on the Garonne River [5, 6]. The results are presented on 80 km of the Garonne River downstream of Toulouse in France [7]. An equivalent bathymetry is retrieved with less than 10% relative error with SWOT-like observations.
After that, encouraging results are obtained with less than 10% relative error on the identified discharge. References [1] E. Rodriguez, SWOT science requirements document, JPL document, JPL, 2012. [2] A. Gessese, K. Wa, and M. Sellier, Bathymetry reconstruction based on the zero-inertia shallow water approximation, Theoretical and Computational Fluid Dynamics, vol. 27, no. 5, pp. 721-732, 2013. [3] P. A. Garambois and J. Monnier, Inference of river properties from remotely sensed observations of water surface, under final redaction for HESS, 2014. [4] M. Durand, Sacramento river airswot discharge estimation scenario. http://swotdawg.wordpress.com/2013/04/18/sacramento-river-airswot-discharge-estimation-scenario/, 2013. [5] P. A. Garambois and H. Roux, Garonne River discharge estimation. http://swotdawg.wordpress.com/2013/07/01/garonne-river-discharge-estimation/, 2013. [6] P. A. Garambois and H. Roux, Sensitivity of discharge uncertainty to measurement errors, case of the Garonne River. http://swotdawg.wordpress.com/2013/07/01/sensitivity-of-discharge-uncertainty-to-measurement-errors-case-of-the-garonne-river/, 2013. [7] H. Roux and P. A. Garambois, Tests of reach averaging and manning equation on the Garonne River. http://swotdawg.wordpress.com/2013/07/01/tests-of-reach-averaging-and-manning-equation-on-the-garonne-river/, 2013.
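The second inversion step described above rests on Manning's equation. As a minimal sketch (synthetic numbers, a wide rectangular reach, and hydraulic radius approximated by depth; this is not the paper's actual algorithm), discharge is linear in the inverse roughness 1/n, so a least-squares roughness estimate from repeated observations has a closed form:

```python
import numpy as np

def manning_q(n, width, depth, slope):
    """Manning discharge Q = (1/n) A R^(2/3) S^(1/2); R ~ depth for a wide channel."""
    area = width * depth
    return (1.0 / n) * area * depth ** (2.0 / 3.0) * np.sqrt(slope)

rng = np.random.default_rng(1)
width, n_true = 150.0, 0.030                       # hypothetical reach geometry
depth = rng.uniform(2.0, 5.0, 20)                  # "SWOT-like" water depths (m)
slope = rng.uniform(1e-4, 3e-4, 20)                # free-surface slopes
q_obs = manning_q(n_true, width, depth, slope) * (1 + 0.05 * rng.standard_normal(20))

# Since Q = g / n with geometry factor g = A R^(2/3) S^(1/2), least squares
# on x = 1/n gives x_hat = sum(g*q)/sum(g*g), hence n_hat = sum(g*g)/sum(g*q).
g = manning_q(1.0, width, depth, slope)
n_hat = np.sum(g * g) / np.sum(g * q_obs)
print(f"estimated roughness n = {n_hat:.4f} (true {n_true})")
```

With 5% multiplicative noise on the discharges, the estimate recovers the true Manning coefficient to within a few percent; the joint (roughness, bathymetry, discharge) inversion of the paper is of course harder than this one-parameter fit.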
Van Weverberg, K.; Morcrette, C. J.; Petch, J.; ...
2018-02-28
Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors between excessive occurrence of deep cloud but largely underestimating their radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.
Ma, H. -Y.; Klein, S. A.; Xie, S.; ...
2018-02-27
Many weather forecast and climate models simulate warm surface air temperature (T 2m) biases over midlatitude continents during the summertime, especially over the Great Plains. We present here one of a series of papers from a multimodel intercomparison project (CAUSES: Cloud Above the United States and Errors at the Surface), which aims to evaluate the role of cloud, radiation, and precipitation biases in contributing to the T 2m bias using a short-term hindcast approach during the spring and summer of 2011. Observations are mainly from the Atmospheric Radiation Measurement Southern Great Plains sites. The present study examines the contributions of surface energy budget errors. All participating models simulate too much net shortwave and longwave fluxes at the surface but with no consistent mean bias sign in turbulent fluxes over the Central United States and Southern Great Plains. Nevertheless, biases in the net shortwave and downward longwave fluxes as well as surface evaporative fraction (EF) are contributors to T 2m bias. Radiation biases are largely affected by cloud simulations, while EF bias is largely affected by soil moisture modulated by seasonal accumulated precipitation and evaporation. An approximate equation based upon the surface energy budget is derived to further quantify the magnitudes of radiation and EF contributions to T 2m bias. Our analysis ascribes that a large EF underestimate is the dominant source of error in all models with a large positive temperature bias, whereas an EF overestimate compensates for an excess of absorbed shortwave radiation in nearly all the models with the smallest temperature bias.
NASA Astrophysics Data System (ADS)
Zhang, F. H.; Wang, S. F.; An, C. H.; Wang, J.; Xu, Q.
2017-06-01
Large-aperture potassium dihydrogen phosphate (KDP) crystals are widely used in the laser path of inertial confinement fusion (ICF) systems. The most common method of manufacturing half-meter KDP crystals is ultra-precision fly cutting. When processing KDP crystals by ultra-precision fly cutting, the dynamic characteristics of the fly cutting machine and fluctuations in the fly cutting environment are translated into surface errors at different spatial frequency bands. These machining errors should be suppressed effectively to guarantee that KDP crystals meet the full-band machining accuracy specified in the evaluation index. In this study, the anisotropic machinability of KDP crystals and the causes of typical surface errors in ultra-precision fly cutting of the material are investigated. The structures of the fly cutting machine and existing processing parameters are optimized to improve the machined surface quality. The findings are theoretically and practically important in the development of high-energy laser systems in China.
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration.
Measurement-free implementations of small-scale surface codes for quantum-dot qubits
NASA Astrophysics Data System (ADS)
Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.
2018-01-01
The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that only corrects bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
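The flavor of the one-dimensional, bit-flip-only architecture can be conveyed with a classical toy model. The Monte Carlo sketch below samples a 3-qubit repetition code with majority-vote recovery; it is a stand-in only: the actual proposal replaces syndrome measurement with Toffoli gates and reset, and the thresholds quoted above come from full simulations, not from this toy.

```python
import random

def logical_error_rate(p, trials=200_000, seed=2):
    """Fraction of trials in which 2+ of 3 copies flip, i.e. majority vote fails."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))  # independent bit flips
        fails += flips >= 2                               # majority corrupted
    return fails / trials

p = 0.01
rate = logical_error_rate(p)
print(f"physical flip rate {p} -> logical rate {rate:.2e}")
# Analytic value is 3 p^2 (1 - p) + p^3, about 3e-4 here, well below p.
```

Below a physical error rate of 1/2 the encoded bit outperforms the bare one, which is the repetition-code analogue of operating below the surface-code threshold.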
NASA Astrophysics Data System (ADS)
Zhao, Lei; Lee, Xuhui; Liu, Shoudong
2013-09-01
Solar radiation at the Earth's surface is an important driver of meteorological and ecological processes. The objective of this study is to evaluate the accuracy of the reanalysis solar radiation produced by NARR (North American Regional Reanalysis) and MERRA (Modern-Era Retrospective Analysis for Research and Applications) against the FLUXNET measurements in North America. We found that both assimilation systems systematically overestimated the surface solar radiation flux on monthly and annual scales, with an average bias error of +37.2 W m^-2 for NARR and of +20.2 W m^-2 for MERRA. The bias errors were larger under cloudy skies than under clear skies. A post-reanalysis algorithm consisting of empirical relationships between model bias, a clearness index, and site elevation was proposed to correct the model errors. Results show that the algorithm can remove the systematic bias errors for both FLUXNET calibration sites (sites used to establish the algorithm) and independent validation sites. After correction, the average annual mean bias errors were reduced to +1.3 W m^-2 for NARR and +2.7 W m^-2 for MERRA. Applying the correction algorithm to the global domain of MERRA brought the global mean surface incoming shortwave radiation down by 17.3 W m^-2 to 175.5 W m^-2. Under the constraint of the energy balance, other radiation and energy balance terms at the Earth's surface, estimated from independent global data products, also support the need for a downward adjustment of the MERRA surface solar radiation.
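The correction scheme is an empirical regression of model bias against predictors such as a clearness index. A minimal sketch of the idea on synthetic data (the coefficients 55 and -45 below are illustrative, not the paper's): fit bias as a linear function of the clearness index, then subtract the predicted bias. For ordinary least squares with an intercept, the corrected residuals average to zero by construction:

```python
import random

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

rng = random.Random(0)
# Synthetic monthly data: clearness index and a bias that shrinks as skies
# clear, mimicking larger reanalysis overestimates under cloudy conditions.
ci = [rng.uniform(0.2, 0.8) for _ in range(200)]
bias = [55 - 45 * c + rng.gauss(0, 3) for c in ci]    # W m^-2

a, b = fit_line(ci, bias)
corrected = [bi - (a + b * c) for bi, c in zip(bias, ci)]
mean_before = sum(bias) / len(bias)
mean_after = sum(corrected) / len(corrected)
```

The real algorithm also includes site elevation as a predictor and is validated on sites held out of the fit; this sketch shows only the core bias-removal step.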
Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut
2014-05-01
Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Workflow Enhancement (WE) Improves Safety in Radiation Oncology: Putting the WE and Team Together
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Samuel T., E-mail: chaos@ccf.org; Rose Ella Burkhardt Brain Tumor and Neuro-oncology Center, Cleveland Clinic, Cleveland, Ohio; Meier, Tim
Purpose: To review the impact of a workflow enhancement (WE) team in reducing treatment errors that reach patients within radiation oncology. Methods and Materials: It was determined that flaws in our workflow and processes resulted in errors reaching the patient. The process improvement team (PIT) was developed in 2010 to reduce errors and was later modified in 2012 into the current WE team. Workflow issues and solutions were discussed in PIT and WE team meetings. Due to tensions within PIT that resulted in employee dissatisfaction, there was a 6-month hiatus between the end of PIT and initiation of the renamed/redesigned WE team. In addition to the PIT/WE team forms, the department had separate incident forms to document treatment errors reaching the patient. These incident forms are rapidly reviewed and monitored by our departmental and institutional quality and safety groups, reflecting how seriously these forms are treated. The number of these incident forms was compared before and after instituting the WE team. Results: When PIT was disbanded, a number of errors seemed to occur in succession, requiring reinstitution and redesign of this team, rebranded the WE team. Interestingly, the number of incident forms per patient visit did not change when comparing 6 months during the PIT, 6 months during the hiatus, and the first 6 months after instituting the WE team (P=.85). However, 6 to 12 months after instituting the WE team, the number of incident forms per patient visit decreased (P=.028). After the WE team, employee satisfaction and commitment to quality increased as demonstrated by Gallup surveys, suggesting a correlation with the WE team. Conclusions: A team focused on addressing workflow and improving processes can reduce the number of errors reaching the patient. Time is necessary before a reduction in errors reaching patients will be seen.
37 CFR 2.125 - Filing and service of testimony.
Code of Federal Regulations, 2012 CFR
2012-07-01
... having all typographical errors in the transcript and all errors of arrangement, indexing and form of the...(g) with respect to arrangement, indexing and form. (e) Upon motion by any party, for good cause, the...
Derivation of error sources for experimentally derived heliostat shapes
NASA Astrophysics Data System (ADS)
Cumpston, Jeff; Coventry, Joe
2017-06-01
Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL for short) and a reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray tracing software, the parameters of each optical element are optimized, making the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: errors due to the surface parameters of the PNL in the system model, and the remainder from the non-null interferometer, separated by an error-storage subtraction approach. Experimental results show that, after the systematic error is removed from the testing result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and the consideration of systematic error greatly increases the test accuracy of the non-null aspheric testing system.
Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data
NASA Technical Reports Server (NTRS)
Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen
2006-01-01
This study presents results of comparisons between instrumental radiation data in the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are under-estimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau-average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau-average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the sites and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake in the elevation effect, but the errors in SRB LW were mainly due to significant errors in input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Alley, R. E.; Schieldge, J. P.
1984-01-01
The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.
New analysis strategies for micro aspheric lens metrology
NASA Astrophysics Data System (ADS)
Gugsa, Solomon Abebe
Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, where the convex geometry is a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out both the surface geometry that departs from an exact conic and any additional surface irregularities. We have developed a comprehensive approach to estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely, scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient.
Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and compare the individual contributions to the overall uncertainty.
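The best-fit-conic-plus-Monte-Carlo procedure can be sketched in a few lines. This is a simplified 1D stand-in for the real analysis: the vertex radius R is held fixed (the paper treats it as an uncertainty contributor), the fit is a grid search rather than a full least-squares solver, and the values (R = 0.2 mm, k = -0.6, 10 nm noise) are illustrative assumptions:

```python
import math, random

def sag(r, R, k):
    """Sag of a conic surface with vertex radius R and conic constant k."""
    return r * r / (R * (1 + math.sqrt(1 - (1 + k) * r * r / (R * R))))

def best_fit_k(rs, zs, R):
    """Grid-search least-squares estimate of the conic constant (R held fixed)."""
    best_k, best_sse = -0.8, float("inf")
    for i in range(201):
        k = -0.8 + 0.002 * i
        sse = sum((sag(r, R, k) - z) ** 2 for r, z in zip(rs, zs))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

R_true, k_true = 0.2, -0.6                # mm; illustrative micro lens values
rs = [i * 0.002 for i in range(1, 51)]    # radial samples out to 0.1 mm
rng = random.Random(2)
sigma = 1e-5                              # 10 nm measurement noise, in mm

# Monte Carlo: refit k under repeated noise realizations; the spread of the
# refitted values estimates the uncertainty of the conic constant.
ks = []
for _ in range(100):
    zs = [sag(r, R_true, k_true) + rng.gauss(0, sigma) for r in rs]
    ks.append(best_fit_k(rs, zs, R_true))
mean_k = sum(ks) / len(ks)
std_k = math.sqrt(sum((k - mean_k) ** 2 for k in ks) / len(ks))
```

In the full analysis, R, aperture, sag, and centering would each be perturbed in the Monte Carlo loop as well, so the reported uncertainty combines all contributors.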
2008-09-30
propagation effects by splitting apart the longer period surface waves from the shorter period, depth-sensitive Pnl waves. Problematic, or high-error, stations and paths were further analyzed to identify systematic errors with unknown sensor responses and ... frequency Pnl components and slower, longer period surface waves. All cut windows are fit simultaneously, allowing equal weighting of phases that may be
Speeding up Coarse Point Cloud Registration by Threshold-Independent BaySAC Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method -- threshold-independent BaySAC (BAYes SAmpling Consensus) -- and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components: random measurement error, and systematic error resulting from a remaining error in the found rigid body transformation. Thus we employ the measure of the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
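The LMedS cost that makes the verification threshold-free is easy to illustrate on a toy problem. A hedged sketch (2D line fitting, plain random sampling rather than BaySAC's conditional sampling; all data values are invented): each hypothesis is scored by the median of squared residuals, so no inlier threshold is ever chosen:

```python
import random

def lmeds_line(points, iters=300, seed=0):
    """Pick the line (through 2 sampled points) minimizing the median of
    squared residuals -- a threshold-free cost, unlike classic RANSAC."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                        # vertical pair: skip hypothesis
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        res = sorted((y - (m * x + c)) ** 2 for x, y in points)
        cost = res[len(res) // 2]           # median squared residual
        if cost < best_cost:
            best_cost, best = cost, (m, c)
    return best

rng = random.Random(1)
# 70% inliers on y = 2x + 1 with small noise, 30% gross outliers.
pts = [(x, 2 * x + 1 + rng.gauss(0, 0.05)) for x in [i / 10 for i in range(70)]]
pts += [(rng.uniform(0, 7), rng.uniform(-10, 10)) for _ in range(30)]
m, c = lmeds_line(pts)
```

LMedS tolerates up to 50% outliers because the median residual is dominated by inliers; BaySAC additionally updates the sampling probabilities of the hypothesis set, which is what reduces the iteration count reported in the paper.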
Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G
2018-01-01
The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared an ideal crown preparation, made by a faculty member on a dentoform, against modified preparations. Two types of preparation errors were created by the addition of flowable composite to the occlusal surface of identical dies of the preparations to represent underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged together. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.
Measurement of movements in the ionosphere using radio reflections
NASA Astrophysics Data System (ADS)
Whitehead, J. D.; From, W. R.; Jones, K. L.; Monro, P. E.
1983-05-01
Movements of the ionosphere may be measured using radio reflections either by observing the movement of the diffraction, or interference, pattern along the ground; or by using the Doppler shifts of the echo as a radar beam is scanned across the sky. The two methods may use the same experimental arrangement and even the same data. The error in the drift velocity measured for scattered echoes is inversely proportional to the square of the array size for both methods. More detail of the random motion may be observed with the Doppler method. When the radio reflections are from an undulating surface in the ionosphere which changes its form as it moves, the Doppler method combined with further analysis is required to measure the movement and change of the undulating surface.
Surface Curvatures Computation from Equidistance Contours
NASA Astrophysics Data System (ADS)
Tanaka, Hiromi T.; Kling, Olivier; Lee, Daniel T. L.
1990-03-01
The subject of our research is the 3D shape representation problem for a special class of range image, one where the natural mode of the acquired range data is in the form of equidistance contours, as exemplified by a moire interferometry range system. In this paper we present a novel surface curvature computation scheme that directly computes the surface curvatures (the principal curvatures, Gaussian curvature and mean curvature) from the equidistance contours without any explicit computations or implicit estimates of partial derivatives. We show how the special nature of the equidistance contours, specifically, the dense information of the surface curves in the 2D contour plane, turns into an advantage for the computation of the surface curvatures. The approach is based on using simple geometric construction to obtain the normal sections and the normal curvatures. This method is general and can be extended to any dense range image data. We show in detail how this computation is formulated and give an analysis of the error bounds of the computation steps, showing that the method is stable. Computation results on real equidistance range contours are also shown.
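The "simple geometric construction" idea can be illustrated in miniature: the normal curvature along a sampled curve can be obtained from three nearby points via the circle through them, with no derivative estimation. A hedged sketch (this three-point, or Menger, curvature is a generic construction, not necessarily the paper's exact one):

```python
import math

def three_point_curvature(p, q, r):
    """Curvature of the circle through three points (Menger curvature):
    k = 4 * triangle_area / (|pq| * |qr| * |pr|)."""
    a = math.dist(p, q)
    b = math.dist(q, r)
    c = math.dist(p, r)
    area = abs((q[0] - p[0]) * (r[1] - p[1]) -
               (r[0] - p[0]) * (q[1] - p[1])) / 2
    return 4 * area / (a * b * c)

# Three samples on a circle of radius 2 -> curvature 1/2, recovered exactly.
pts = [(2 * math.cos(t), 2 * math.sin(t)) for t in (0.0, 0.3, 0.7)]
k = three_point_curvature(*pts)
```

Applied along the dense contour curves, such constructions yield normal curvatures in several directions, from which principal, Gaussian, and mean curvatures follow by Euler's relation.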
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Weverberg, K.; Morcrette, C. J.; Petch, J.
Many numerical weather prediction (NWP) and climate models exhibit too warm lower tropospheres near the mid-latitude continents. This warm bias has been extensively studied before, but evidence about its origin remains inconclusive. Some studies point to deficiencies in the deep convective or low clouds. Other studies found an important contribution from errors in the land surface properties. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. Documenting these radiation errors is hence an important step towards understanding and alleviating the warm bias. This paper presents an attribution study to quantify the net radiation biases in 9 model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, integrated water vapor (IWV) and aerosols are quantified, using an array of radiation measurement stations near the ARM SGP site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface SW radiation is overestimated (LW underestimated) in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation in all but one model, which has a dominant albedo issue. Using a cloud regime analysis, it was shown that missing deep cloud events and/or simulating deep clouds with too weak cloud-radiative effects account for most of these cloud-related radiation errors. Some models show compensating errors, simulating deep cloud too frequently while largely underestimating its radiative effect, whereas other models miss deep cloud events altogether. Surprisingly however, even the latter models tend to produce too much and too frequent afternoon surface precipitation.
This suggests that rather than issues with the triggering of deep convection, the deep cloud problem in many models could be related to too weak convective cloud detrainment and too large precipitation efficiencies. This does not rule out that previously documented issues with the evaporative fraction contribute to the warm bias as well, since the majority of the models underestimate the surface rain rates overall, as they miss the observed large nocturnal precipitation peak.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qingcheng, E-mail: qiy9@pitt.edu; To, Albert C., E-mail: albertto@pitt.edu
Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang and To, 2015) is applied to capture surface effects for nanosized structures by designing a surface summation rule SR^S within the framework of MMM. Combined with the previously proposed bulk summation rule SR^B, the MMM summation rule SR^MMM is completed. SR^S and SR^B are consistently formed within SR^MMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SR^MMM lies in the fact that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface region is different from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonding. As such, SR^S and SR^B are employed for surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SR^MMM accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SR^MMM with respect to high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced with SR^MMM, which is analogous to the numerical integration error of a quadrature rule in FEM, is very small.
- Highlights: • Surface effect captured by Multiresolution Molecular Mechanics (MMM) is presented. • A novel surface summation rule within the framework of MMM is proposed. • Surface, corner and edge effects are accurately captured in two and three dimensions. • MMM with less than 0.3% of the atomistic degrees of freedom reproduces atomistic results.
A quantitative comparison of soil moisture inversion algorithms
NASA Technical Reports Server (NTRS)
Zyl, J. J. van; Kim, Y.
2001-01-01
This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.
Generation of gear tooth surfaces by application of CNC machines
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Chen, N. X.
1994-01-01
This study will demonstrate the importance of application of computer numerically controlled (CNC) machines in generation of gear tooth surfaces with new topology. This topology decreases gear vibration and will extend the gear capacity and service life. A preliminary investigation by a tooth contact analysis (TCA) program has shown that gear tooth surfaces in line contact (for instance, involute helical gears with parallel axes, worm gear drives with cylindrical worms, etc.) are very sensitive to angular errors of misalignment that cause edge contact and an unfavorable shape of transmission errors and vibration. The new topology of gear tooth surfaces is based on the localization of bearing contact, and the synthesis of a predesigned parabolic function of transmission errors that is able to absorb a piecewise linear function of transmission errors caused by gear misalignment. The report covers the following topics: the kinematics of CNC machines with six degrees of freedom that can be applied for generation of gear tooth surfaces with new topology; and a new method for grinding of gear tooth surfaces by a cone surface or surface of revolution based on application of CNC machines. This method provides an optimal approximation of the ground surface to the given one. This method is especially beneficial when undeveloped ruled surfaces are to be ground. Execution of motions of the CNC machine is also described. The solution to this problem can be applied as well for the transfer of machine tool settings from a conventional generator to the CNC machine. The developed theory required the derivation of a modified equation of meshing based on application of the concept of space curves, space curves represented on surfaces, geodesic curvature, surface torsion, etc. Condensed information on these topics of differential geometry is provided as well.
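The absorption property of the predesigned parabolic transmission-error function can be checked numerically. A hedged sketch (coefficients a, b and the angle variable phi are illustrative, not taken from the report): adding a linear misalignment term b*phi to the parabola -a*phi^2 gives -a*(phi - b/(2a))^2 + b^2/(4a), i.e. another parabola with the same quadratic coefficient, only shifted, so the misalignment is absorbed rather than amplified:

```python
# Predesigned parabolic transmission error -a*phi^2 plus a linear
# misalignment term b*phi: the sum is still a parabola with the same
# curvature, with its vertex shifted to phi = b/(2a).
a, b = 0.5, 0.02                       # illustrative coefficients
h = 0.01                               # sample spacing in phi
phi = [i * h for i in range(-50, 51)]
total = [-a * p * p + b * p for p in phi]

# A constant second difference equal to -2*a*h^2 confirms the sum is
# parabolic with an unchanged quadratic coefficient.
second_diff = [total[i - 1] - 2 * total[i] + total[i + 1]
               for i in range(1, len(total) - 1)]

# The vertex (peak) of the combined curve sits at phi = b/(2a).
peak_phi = phi[max(range(len(total)), key=lambda i: total[i])]
```

This is why a parabolic function can absorb the (locally linear) transmission-error contribution of misalignment: the sum stays parabolic with the same magnitude of curvature, avoiding the discontinuous piecewise-linear error shape that excites vibration.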
Fontaine, Patricia; Mendenhall, Tai J; Peterson, Kevin; Speedie, Stuart M
2007-01-01
The electronic Primary Care Research Network (ePCRN) enrolled PBRN researchers in a feasibility trial to test the functionality of the network's electronic architecture and investigate error rates associated with two data entry strategies used in clinical trials. PBRN physicians and research assistants who registered with the ePCRN were eligible to participate. After online consent and randomization, participants viewed simulated patient records, presented as either abstracted data (short form) or progress notes (long form). Participants transcribed 50 data elements onto electronic case report forms (CRFs) without integrated field restrictions. Data errors were analyzed. Ten geographically dispersed PBRNs enrolled 100 members and completed the study in less than 7 weeks. The estimated overall error rate if field restrictions had been applied was 2.3%. Participants entering data from the short form had a higher rate of correctly entered data fields (94.5% vs 90.8%, P = .004) and significantly more error-free records (P = .003). Feasibility outcomes integral to completion of an Internet-based, multisite study were successfully achieved. Further development of programmable electronic safeguards is indicated. The error analysis conducted in this study will aid design of specific field restrictions for electronic CRFs, an important component of clinical trial management systems.
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite predicted area - "reference area") and relative error [satellite (predicted area - "reference area")/ "reference area"] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
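The per-region error metrics are straightforward arithmetic. A minimal sketch with invented ISA values (each region underestimated by 5%, mirroring the paper's average finding):

```python
def error_metrics(predicted, reference):
    """Absolute error (pred - ref) and relative error ((pred - ref) / ref)
    per sample region, plus their means."""
    abs_err = [p - r for p, r in zip(predicted, reference)]
    rel_err = [(p - r) / r for p, r in zip(predicted, reference)]
    n = len(abs_err)
    return abs_err, rel_err, sum(abs_err) / n, sum(rel_err) / n

# Hypothetical ISA values per sample region, in hectares.
pred = [95.0, 190.0, 47.5]
ref = [100.0, 200.0, 50.0]
abs_e, rel_e, mean_abs, mean_rel = error_metrics(pred, ref)
# Each region is underestimated by 5%: mean relative error = -0.05.
```

Relative error is the more useful diagnostic across a development gradient, since a fixed absolute error means very different things in lightly and heavily developed regions.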
Chen, Shaoshan; He, Deyu; Wu, Yi; Chen, Huangfei; Zhang, Zaijing; Chen, Yunlei
2016-10-01
A new non-aqueous and abrasive-free magnetorheological finishing (MRF) method is adopted for processing potassium dihydrogen phosphate (KDP) crystal due to its low hardness, high brittleness, temperature sensitivity, and water solubility. This paper investigates the convergence behavior of the surface error of an initial single-point diamond turning (SPDT)-finished KDP crystal after MRF polishing. Currently, the SPDT process comprises spiral cutting and fly cutting. The main difference between these two processes lies in the morphology of intermediate-frequency turning marks on the surface, which affects the convergence behavior. The turning marks after spiral cutting are a series of concentric circles, while the turning marks after fly cutting are a series of parallel large arcs. Polishing results indicate that MRF polishing can only improve the low-frequency errors (L>10 mm) of a spiral-cut KDP crystal. MRF polishing can improve the full-range surface errors (L>0.01 mm) of a fly-cut KDP crystal, provided the polishing process is performed no more than twice on a single surface. We conclude that a fly-cut KDP crystal achieves better optical performance after MRF figuring than a spiral-cut KDP crystal with similar initial surface performance.
Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements
NASA Technical Reports Server (NTRS)
Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.
2012-01-01
We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
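The benefit of "log after averaging" over "averaging before log" follows from Jensen's inequality and is easy to demonstrate. A hedged sketch with invented numbers (uniform multiplicative noise standing in for speckle/detection noise; tau and the noise width are illustrative):

```python
import math, random

rng = random.Random(3)
tau_true = 0.3                      # differential absorption optical depth
T = math.exp(-2 * tau_true)         # round-trip transmission ratio

# Simulated pulse-energy ratios: true transmission times multiplicative
# noise with unit mean.
x = [T * rng.uniform(0.5, 1.5) for _ in range(20000)]

# "Log after averaging": average the ratios first, then take the log.
tau_log_after = -0.5 * math.log(sum(x) / len(x))

# "Averaging before log": take the log shot by shot, then average.
tau_avg_before = -0.5 * sum(math.log(v) for v in x) / len(x)

# Because E[ln e] < ln E[e] = 0 for unit-mean noise e (Jensen's inequality),
# averaging before the log biases the retrieved optical depth high, while
# log-after-averaging converges to the true value.
```

For this uniform noise the averaging-before-log bias is -0.5*E[ln e] ≈ +0.023 in optical depth, independent of the number of shots, whereas the log-after-averaging bias shrinks with averaging; this is the effect the paper exploits.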
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation without introducing re-sampling errors, while also eliminating existing measurement noise and measurement errors. Because the re-sampling of a surface map is based on analytical expressions of Zernike polynomials and a power spectral density (PSD) model, it introduces none of the aliasing and interpolation errors that arise with conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. The new method also automatically eliminates measurement noise and other measurement errors such as artificial discontinuities. The development cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specifications for the optical quality of individual optics, before they are fabricated, through optical modeling and simulation, and (2) validating the optical model against the measured surface height maps after all optics are fabricated. A number of computational issues arise in model validation, one of which is the "pre-conditioning," or pre-processing, of the measured surface maps before they are used in a model validation tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating surface measurement noise and measurement errors so that the resulting surface height map is continuous and smoothly varying. The method most commonly used for re-sampling a surface map has been two-dimensional interpolation. Its main problem is that the same pixel can take different values depending on the interpolation method chosen, such as the "nearest," "linear," "cubic," and "spline" options in Matlab.
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
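The core idea of re-sampling from an analytic fit can be sketched in a few lines. This is a minimal illustration under stated assumptions: a low-order 2D polynomial basis stands in for a proper Zernike basis, and the PSD model for the mid- and high-spatial-frequency content is omitted.

```python
import numpy as np

def poly_basis(x, y):
    # Low-order 2D polynomial terms standing in for Zernike polynomials
    # (illustrative; a real tool would use an actual Zernike basis).
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# "Measured" map on a coarse 32x32 grid, with additive measurement noise.
rng = np.random.default_rng(1)
xc, yc = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
true_surface = lambda x, y: 0.3 * x + 0.1 * y**2 - 0.2 * x * y
measured = true_surface(xc, yc) + rng.normal(0, 0.01, xc.shape)

# Least-squares fit of the analytic basis to the measured samples.
A = poly_basis(xc.ravel(), yc.ravel())
coef, *_ = np.linalg.lstsq(A, measured.ravel(), rcond=None)

# Evaluate the analytic model on a finer 128x128 grid: because the new map
# comes from a closed-form expression, there is no interpolation aliasing,
# and the random measurement noise is largely filtered out by the fit.
xf, yf = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
resampled = poly_basis(xf.ravel(), yf.ravel()) @ coef
err = resampled - true_surface(xf.ravel(), yf.ravel())
print("max abs error on up-sampled grid:", np.abs(err).max())
```

Because every pixel of the new map is evaluated from the same closed-form fit, changing the output grid cannot change a pixel's value, unlike the interpolation-method dependence noted above.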
New Methods for Improved Double Circular-Arc Helical Gears
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Lu, Jian
1997-01-01
The authors have extended the application of double circular-arc helical gears to internal gear drives. The geometry of the pinion and gear tooth surfaces has been determined. The influence of alignment errors on transmission errors and on the shift of the bearing contact has been investigated. Application of a predesigned parabolic function for the reduction of transmission errors was proposed, as were methods for grinding the pinion and gear tooth surfaces with a disk-shaped tool and with a grinding worm.
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understanding how they interact with their environments. ProtoFit is a tool for the analysis of acid-base titration data and the optimization of surface protonation models. The program offers a number of useful features: (1) it enables visualization of adsorbent buffering behavior; (2) it uses an optimization approach independent of starting titration conditions or initial surface charge; (3) it does not require an initial surface charge to be defined or treated as an optimizable parameter; (4) it includes error analysis intrinsically as part of the computational methods; and (5) it generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data are reduced to a form in which the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e., simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, produced intrinsically as part of the Qads* calculation, can be used to weight the sum of squares between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
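The Qads* slope estimate and the variance-weighted sum of squares can be sketched as follows. This is a hedged illustration, not ProtoFit's actual numerics: the one-site data, the pKa of 6.5, and the seven-point sliding linear fit are all assumptions made for the example.

```python
import numpy as np

# Hypothetical reduced titration data: protons exchanged with the surface
# (per gram of adsorbent) as a function of pH, with measurement scatter.
rng = np.random.default_rng(3)
pH = np.linspace(3.0, 10.0, 71)
q_true = 1.0 / (1.0 + 10**(pH - 6.5))          # one-site protonation, pKa 6.5
q_meas = q_true + rng.normal(0, 0.005, pH.size)

# Buffering intensity Qads*: the (negative) instantaneous slope of the
# reduced curve, estimated here by a sliding linear fit over 7 points.
half = 3
q_star = np.full(pH.size, np.nan)
q_var = np.full(pH.size, np.nan)
for i in range(half, pH.size - half):
    xs, ys = pH[i - half:i + half + 1], q_meas[i - half:i + half + 1]
    slope, intercept = np.polyfit(xs, ys, 1)
    resid = ys - (slope * xs + intercept)
    # variance of the slope estimate, from the local fit residuals
    q_var[i] = resid.var(ddof=2) / np.sum((xs - xs.mean())**2)
    q_star[i] = -slope

# Weighted sum of squares against a simulated model curve: the slope-variance
# weights down-weight noisy regions, as described in the abstract.
u = 10**(pH - 6.5)
model = np.log(10.0) * u / (1 + u)**2           # -dq/dpH for the one-site model
mask = ~np.isnan(q_star)
wss = np.sum((q_star[mask] - model[mask])**2 / q_var[mask])
print("weighted sum of squares:", wss)
```

In a real optimization the model curve would come from a surface complexation simulation, and the protonation constants would be adjusted to minimize this weighted sum of squares.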
Sensitivity to prediction error in reach adaptation
Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza
2012-01-01
It has been proposed that the brain predicts the sensory consequences of a movement and compares them to the actual sensory feedback. When the two differ, an error signal is formed that drives adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors than from small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller than, the same size as, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may represent the sensitivity to error, and not the error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
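The sensitivity measure used above is simply the trial-to-trial change in motor output divided by the error that evoked it. A minimal numerical sketch, using a hypothetical saturating learning curve rather than the paper's data:

```python
import numpy as np

# Hypothetical single-trial adaptation data: perturbation (error) sizes and
# the trial-to-trial change in motor output they evoke. The saturating form
# below is an illustrative assumption, chosen so that adaptation grows more
# slowly than the error itself.
error = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])      # e.g., degrees
adaptation = 0.4 * error / (1.0 + 0.2 * error)         # saturating response

# Sensitivity to error = trial-to-trial change divided by error size.
sensitivity = adaptation / error
print(sensitivity)
```

Because adaptation saturates while the error keeps growing, the ratio falls monotonically: proportionally less is learned from large errors than from small ones, which is the signature reported in the abstract.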
Gurtowski, Luke A; Griggs, Chris S; Gude, Veera G; Shukla, Manoj K
2018-02-01
This manuscript reports results of a combined computational chemistry and batch adsorption investigation of the insensitive munition compounds 2,4-dinitroanisole (DNAN), triaminotrinitrobenzene (TATB), 1,1-diamino-2,2-dinitroethene (FOX-7), and nitroguanidine (NQ), and the traditional munition compound 2,4,6-trinitrotoluene (TNT), on the surfaces of cellulose, cellulose triacetate, chitin, and chitosan biopolymers. Cellulose, cellulose triacetate, chitin, and chitosan were modeled as the trimeric form of the linear chain of the 4C1 chair conformation of β-D-glucopyranose, its triacetate form, β-N-acetylglucosamine, and D-glucosamine, respectively, in the 1→4 linkage. Geometries were optimized with the M06-2X functional of density functional theory (DFT) using the 6-31G(d,p) basis set, in the gas phase and in bulk water solution using the conductor-like polarizable continuum model (CPCM) approach. The nature of the potential energy surfaces of the optimized geometries was ascertained through harmonic vibrational frequency analysis. Basis set superposition error (BSSE)-corrected interaction energies were obtained using the 6-311G(d,p) basis set at the same theoretical level. The computed BSSE in the gas phase was used to correct the interaction energy in bulk water solution. Computed and experimental results regarding the ability of the considered surfaces to adsorb the insensitive munition compounds are discussed. Copyright © 2017. Published by Elsevier B.V.
Advancements in non-contact metrology of asphere and diffractive optics
NASA Astrophysics Data System (ADS)
DeFisher, Scott
2017-11-01
Advancements in optical manufacturing technology allow optical designers to implement steep aspheric or high-departure surfaces in their systems. Measuring these surfaces with profilometers or CMMs can be difficult due to large surface slopes or sharp steps in the surface. OptiPro has developed UltraSurf to qualify the form and figure of steep aspheric and diffractive optics. UltraSurf is a computer-controlled, non-contact coordinate measuring machine. It incorporates five air-bearing axes, linear motors, high-resolution feedback, and a non-contact probe. The measuring probe is scanned over the optical surface while maintaining perpendicularity and a constant focal offset. Multiple probe technologies are available on UltraSurf. Each probe has strengths and weaknesses relative to the material properties, surface finish, and figure error of an optical component. The measuring probes use absolute distance to resolve step heights and diffractive surface patterns, and the non-contact scanning method avoids common pitfalls of stylus contact instruments. Advancements in measuring speed and precision have enabled fast and accurate non-contact metrology of diffractive and steep aspheric surfaces. The benefits of data sampling with two-dimensional profiles and three-dimensional topography maps will be presented. In addition, accuracy, repeatability, and machine qualification will be discussed with regard to aspheres and diffractive surfaces.
Lee, Kyung-Min; Song, Jin-Myoung; Cho, Jin-Hyoung; Hwang, Hyeon-Shik
2016-01-01
The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scanning. Fifteen dry skulls were mounted in a motion controller that simulated four types of head motion during a CBCT scan: two horizontal rotations (to the right/to the left) and two vertical rotations (upward/downward). Each movement was triggered by remote control to occur for 1 second at the start of the scan. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Rendered surface models with head motion were similar to the control model in appearance; however, landmark identification errors were larger in the models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). A statistically significant difference in the errors between the right- and left-side landmarks was present for left rotation, which is opposite to the direction of scanner rotation (P < .05). Patient movement during a CBCT scan may cause landmark identification errors on the 3D surface model that depend on the direction of scanner rotation. Clinicians should take this into consideration and prevent patient movement during CBCT scanning, particularly horizontal movement.
NASA Astrophysics Data System (ADS)
Nanda, Swadhin; de Graaf, Martin; Sneep, Maarten; de Haan, Johan F.; Stammes, Piet; Sanders, Abram F. J.; Tuinder, Olaf; Pepijn Veefkind, J.; Levelt, Pieternel F.
2018-01-01
Retrieving aerosol optical thickness and aerosol layer height over a bright surface from the measured top-of-atmosphere reflectance spectrum in the oxygen A band is known to be challenging, often resulting in large errors. In certain atmospheric conditions and viewing geometries, a loss of sensitivity to aerosol optical thickness has been reported in the literature. This loss of sensitivity has been attributed to a phenomenon known as the critical surface albedo regime: a range of surface albedos for which the top-of-atmosphere reflectance has minimal sensitivity to aerosol optical thickness. This paper extends the concept of critical surface albedo to aerosol layer height retrievals in the oxygen A band and discusses its implications. The underlying physics are introduced by analysing the top-of-atmosphere reflectance spectrum as the sum of an atmospheric path contribution and a surface contribution, obtained using a radiative transfer model. Furthermore, an error analysis of an aerosol layer height retrieval algorithm is conducted over dark and bright surfaces to show the dependence on surface reflectance. The analysis shows that the derivative with respect to aerosol layer height of the atmospheric path contribution to the top-of-atmosphere reflectance is opposite in sign to that of the surface contribution: an increase in surface brightness results in a decrease in information content. In the case of aerosol optical thickness, these derivatives are anti-correlated, leading to large retrieval errors in high-surface-albedo regimes. The consequence of this anti-correlation is demonstrated with measured oxygen A band spectra from the GOME-2 instrument on board the Metop-A satellite during the 2010 Russian wildfires.
Crowdsourcing for error detection in cortical surface delineations.
Ganz, Melanie; Kondermann, Daniel; Andrulis, Jonas; Knudsen, Gitte Moos; Maier-Hein, Lena
2017-01-01
With the recent trend toward big data analysis, neuroimaging datasets have grown substantially in the past years. While larger datasets potentially offer important insights for medical research, one major bottleneck is the requirement for resources of medical experts needed to validate automatic processing results. To address this issue, the goal of this paper was to assess whether anonymous nonexperts from an online community can perform quality control of MR-based cortical surface delineations derived by an automatic algorithm. So-called knowledge workers from an online crowdsourcing platform were asked to annotate errors in automatic cortical surface delineations on 100 central, coronal slices of MR images. On average, annotations for 100 images were obtained in less than an hour. When using expert annotations as reference, the crowd on average achieves a sensitivity of 82 % and a precision of 42 %. Merging multiple annotations per image significantly improves the sensitivity of the crowd (up to 95 %), but leads to a decrease in precision (as low as 22 %). Our experiments show that the detection of errors in automatic cortical surface delineations generated by anonymous untrained workers is feasible. Future work will focus on increasing the sensitivity of our method further, such that the error detection tasks can be handled exclusively by the crowd and expert resources can be focused on error correction.
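The trade-off reported above, where merging annotations raises sensitivity while lowering precision, can be reproduced with a toy calculation. All numbers here are illustrative assumptions, not the study's data: five simulated workers, twelve regions, and assumed per-worker detection and false-alarm rates.

```python
import numpy as np

# Toy crowd annotations: 5 workers label 12 image regions as erroneous (True)
# or correct (False); the first 4 regions truly contain delineation errors.
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
rng = np.random.default_rng(7)
# Assumed worker behavior: detects a true error with probability 0.8,
# false-alarms on a correct region with probability 0.3.
workers = rng.random((5, truth.size)) < np.where(truth == 1, 0.8, 0.3)

def scores(pred, truth):
    tp = np.sum(pred & (truth == 1))
    fp = np.sum(pred & (truth == 0))
    fn = np.sum(~pred & (truth == 1))
    sens = tp / (tp + fn)
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    return sens, prec

# Merging by logical OR ("any worker flagged it") can only add detections,
# so sensitivity rises, but false positives accumulate and precision drops.
single = scores(workers[0], truth)
merged = scores(workers.any(axis=0), truth)
print("single worker (sens, prec):", single)
print("merged crowd  (sens, prec):", merged)
```

The OR-merge makes the merged prediction a superset of any single worker's, which is why its sensitivity can never be lower; the precision cost mirrors the drop from 42% toward 22% reported in the abstract.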
Gas turbine engine control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idelchik, M.S.
1991-02-19
This paper describes a method for controlling a gas turbine engine. It includes receiving an error signal and processing it to form a primary control signal, and receiving at least one anticipatory demand signal and processing it to form an anticipatory fuel control signal.
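The combination described, a primary command from the error signal plus an anticipatory command from a demand signal, is a feedback-plus-feedforward structure. A minimal sketch follows; the class name, gains, and PI form of the primary path are illustrative assumptions, not the patented method.

```python
# Minimal feedback + feedforward fuel-control sketch: a primary command
# derived from the speed error (PI feedback) plus an anticipatory term
# derived from the rate of change of a demand signal (feedforward).
class FuelController:
    def __init__(self, kp, ki, kf, dt):
        self.kp, self.ki, self.kf, self.dt = kp, ki, kf, dt
        self.integral = 0.0

    def step(self, error, demand_rate):
        # primary control: proportional-integral action on the error signal
        self.integral += error * self.dt
        primary = self.kp * error + self.ki * self.integral
        # anticipatory control: respond to the demand change before the
        # resulting speed error has time to develop
        anticipatory = self.kf * demand_rate
        return primary + anticipatory

ctrl = FuelController(kp=0.8, ki=0.2, kf=0.5, dt=0.01)
fuel = ctrl.step(error=5.0, demand_rate=10.0)
print(fuel)  # 0.8*5 + 0.2*(5*0.01) + 0.5*10 = 9.01
```

The anticipatory path lets the fuel command react to a load change immediately, rather than waiting for the shaft-speed error to build up and propagate through the feedback loop.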
NASA Astrophysics Data System (ADS)
Muñoz-Potosi, A. F.; Granados-Agustín, F.; Campos-García, M.; Valdivieso-González, L. G.; Percino-Zacarias, M. E.
2017-11-01
Among the various techniques that can be used to assess the quality of optical surfaces, deflectometry evaluates the reflection experienced by rays impinging on a surface whose topography is under study. We propose the use of a screen spatial filter to select rays from a light source. The screen must be placed at a distance shorter than the radius of curvature of the surface under study. The location of the screen depends on the exit pupil of the system and the caustic area. The reflected rays are measured using an observation plane/screen/CCD located beyond the point of convergence of the rays. To implement an experimental design of the proposed technique and determine the topography of the surface under study, it is necessary to measure tilt, decentering and focus errors caused by mechanical misalignment, which could influence the results of this technique but are not related to the quality of the surface. The aim of this study is to analyze an ideal spherical surface with known radius of curvature to identify the variations introduced by such misalignment errors.
Global Surface Temperature Change and Uncertainties Since 1861
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)
2002-01-01
The objective of this talk is to analyze the warming trend, and its uncertainties, of the global and hemispheric surface temperatures. Using a statistical optimal averaging scheme, land surface air temperature and sea surface temperature observations are used to compute the spatially averaged annual mean surface air temperature. The optimal averaging method is derived by minimizing the mean square error between the true and estimated averages and uses empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to annual global surface temperature gives an increase of 0.61 +/- 0.16 C between 1861 and 2000. The lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.
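The two computational steps, an error-minimizing spatial average followed by a linear trend fit, can be sketched with synthetic data. This is a hedged illustration: the regional series, their error variances, and the reduction of the optimal average to inverse-variance weighting (exact only when the contributing series are uncorrelated) are all assumptions of the example, not the talk's dataset.

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1861, 2001)

# Synthetic regional anomaly series sharing one trend but with different
# measurement-error variances (all numbers are stand-ins).
true_trend = 0.61 / (years[-1] - years[0])          # degrees C per year
signal = true_trend * (years - years[0])
sigmas = np.array([0.1, 0.2, 0.4])                  # per-region error (C)
regions = signal + rng.normal(0, sigmas[:, None], (3, years.size))

# (1) Inverse-variance weights minimize the mean square error of the
# average when the regional errors are uncorrelated.
w = 1 / sigmas**2
global_mean = (w[:, None] * regions).sum(axis=0) / w.sum()

# (2) Least-squares linear trend and its standard error from the residuals.
slope, intercept = np.polyfit(years, global_mean, 1)
resid = global_mean - (slope * years + intercept)
se = np.sqrt(resid.var(ddof=2) / np.sum((years - years.mean())**2))
rise = slope * (years[-1] - years[0])
print(f"fitted rise 1861-2000: {rise:.2f} C (trend s.e. {se:.5f} C/yr)")
```

The recovered rise lands near the 0.61 C built into the synthetic signal; in the real analysis the weights come from empirical orthogonal functions and the quoted +/- 0.16 C also folds in the systematic uncertainty factors listed above.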
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
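The row-parity detection step can be sketched directly: arrange the bits as a matrix, and any row whose parity check fails marks an erasure location for the column decoder. This is only a fragment of the construction above, under simplifying assumptions: even parity on every row (the full code allows odd or even), and the extended Hamming column code and Reed-Solomon erasure correction are omitted.

```python
import numpy as np

# Sketch: 9x8 bit matrix whose rows are constrained to even parity. A single
# flipped bit makes its row's parity check fail, and the failing rows can be
# handed to a Reed-Solomon decoder as erasure locations.
rng = np.random.default_rng(11)
data = rng.integers(0, 2, (9, 7))
codeword = np.hstack([data, data.sum(axis=1, keepdims=True) % 2])  # even rows

def failing_rows(word):
    # rows whose bits sum to odd parity, i.e., rows containing an error
    return np.flatnonzero(word.sum(axis=1) % 2)

assert failing_rows(codeword).size == 0        # clean word: all rows check

received = codeword.copy()
received[4, 2] ^= 1                            # flip one bit in row 4
print("rows flagged as erasures:", failing_rows(received))  # [4]
```

Converting detected errors into erasures is what makes the construction powerful: a Reed-Solomon code can correct twice as many erasures as errors, which is how the matrix form reaches seven- and many eight-error patterns.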
A system analysis of the 13.3 GHz scatterometer. [antenna patterns and signal transmission
NASA Technical Reports Server (NTRS)
Wang, J. R.
1977-01-01
The performance of the 13.3 GHz airborne scatterometer system, which is used as a microwave remote sensor to detect the moisture content of soil, is analyzed with respect to its antenna pattern, the signal flow in the receiver data channels, and the errors in the signal outputs. The operational principle and the sensitivity of the system, as well as data handling, are also described. The dielectric property of the terrain surface, as far as the scatterometer is concerned, is contained in the assumed forms of the functional dependence of the backscattering coefficient on the incident angle.
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods, and without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented as the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can produce large errors in the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors in actual scaled values of color image prints as measured by the method of paired comparison.
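A simulation of this kind can be sketched as follows. Everything specific here is an assumption of the example rather than the paper's setup: four hypothetical stimuli, 20 judgments per pair, Thurstone Case V scaling from the observed choice proportions, and clipping of extreme proportions before the probit transform.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(9)
true_scale = np.array([0.0, 0.5, 1.0, 1.8])    # hypothetical stimulus values
n_judges, n_trials = 20, 500
phi, probit = NormalDist().cdf, NormalDist().inv_cdf
n_stim = len(true_scale)

spread = np.empty(n_trials)
for t in range(n_trials):
    z = np.zeros(n_stim)
    for i in range(n_stim):
        for j in range(n_stim):
            if i == j:
                continue                        # probit(0.5) = 0 for p_ii
            # true choice probability, then binomial sampling of judgments
            # introduces the proportion-of-choice errors being studied
            k = rng.binomial(n_judges, phi(true_scale[i] - true_scale[j]))
            p_obs = np.clip(k / n_judges, 0.5 / n_judges, 1 - 0.5 / n_judges)
            z[i] += probit(p_obs)
    z /= n_stim                                 # Case V: mean z vs. all others
    est = z - z.mean()
    spread[t] = np.std(est - (true_scale - true_scale.mean()))

print("average standard deviation of scaled values:", round(spread.mean(), 3))
```

Repeating the simulation over many trials gives the average standard deviation of the scaled values as a function of the number of stimuli and the sampling size, which is the quantity the paper fits with an empirical equation.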
High-throughput methods for characterizing the mechanical properties of coatings
NASA Astrophysics Data System (ADS)
Siripirom, Chavanin
The characterization of mechanical properties in a combinatorial and high-throughput workflow has been a bottleneck that slows the materials development process. High-throughput characterization of mechanical properties was applied in this research to reduce sample handling and accelerate output. A puncture tester was designed and built to evaluate the toughness of materials using an innovative template design coupled with automation. The test takes the form of a circular free-film indentation; a single template contains 12 samples, which are tested in a rapid serial approach. Next, the operating principles of a novel parallel dynamic mechanical-thermal analysis instrument were analyzed in detail for potential sources of error. The test uses a model of a circular bilayer fixed-edge plate deformation. A total of 96 samples can be analyzed simultaneously, a tremendous increase in efficiency compared with a conventional dynamic test. The modulus values determined by the system showed considerable variation; the error sources were identified and improvements to the system were made. A finite element analysis was used to assess the accuracy of the closed-form solution with respect to testing geometries, such as sample thickness. Good control of sample thickness proved crucial to the accuracy and precision of the output. An attempt was then made to correlate the high-throughput experiments with conventional coating testing methods. Automated nanoindentation in dynamic mode was found to provide information on the near-surface modulus and could potentially correlate with the pendulum hardness test through the loss tangent component. Lastly, surface characterization of stratified siloxane-polyurethane coatings was carried out with X-ray photoelectron spectroscopy, Rutherford backscattering spectroscopy, transmission electron microscopy, and nanoindentation.
The siloxane component segregates to the surface during curing. The distribution of siloxane as a function of thickness into the sample showed differences depending on the formulation parameters. The coatings which had higher siloxane content near the surface were those coatings found to perform well in field tests.
Statistical inference of seabed sound-speed structure in the Gulf of Oman Basin.
Sagers, Jason D; Knobles, David P
2014-06-01
This study addresses the statistical inference of the sound-speed depth profile of a thick, soft seabed from broadband sound propagation data recorded in the Gulf of Oman Basin in 1977. The acoustic data are time series signals recorded on a sparse vertical line array and generated by explosive sources deployed along a 280 km track. The data offer a unique opportunity to study a deep-water, bottom-limited, thickly sedimented environment because of the large number of time series measurements, the very low seabed attenuation, and the auxiliary measurements. A maximum entropy method is employed to obtain a conditional posterior probability distribution (PPD) for the sound-speed ratio and the near-surface sound-speed gradient. The multiple data samples allow a determination of the average error constraint value required to uniquely specify the PPD for each data sample. Two complicating features of the statistical inference study are addressed: (1) the need to develop an error function that can both utilize the measured multipath arrival structure and mitigate the effects of data errors, and (2) the effect of small bathymetric slopes on the structure of the bottom-interacting arrivals.
The role of the basic state in the ENSO-monsoon relationship and implications for predictability
NASA Astrophysics Data System (ADS)
Turner, A. G.; Inness, P. M.; Slingo, J. M.
2005-04-01
The impact of systematic model errors on a coupled simulation of the Asian summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the general-circulation model. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the general-circulation model, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
Influence of Joint Angle on EMG-Torque Model During Constant-Posture, Torque-Varying Contractions.
Liu, Pu; Liu, Lukai; Clancy, Edward A
2015-11-01
Relating the electromyogram (EMG) to joint torque is useful in various application areas, including prosthesis control, ergonomics, and clinical biomechanics. Few studies have related EMG to torque across varied joint angles, particularly when subjects performed force-varying contractions or when optimized modeling methods were utilized. We related the biceps-triceps surface EMG of 22 subjects to elbow torque at six joint angles (spanning 60° to 135°) during constant-posture, torque-varying contractions. Three nonlinear EMGσ-torque models, advanced EMG amplitude (EMGσ) estimation processors (i.e., whitened, multiple-channel), and the duration of data used to train the models were investigated. When EMG-torque models were formed separately for each of the six distinct joint angles, a minimum "gold standard" error of 4.01±1.2% MVC(F90) resulted (i.e., error relative to maximum voluntary contraction at 90° flexion). This model structure, however, does not directly facilitate interpolation across angles. The best model that does so achieved a statistically equivalent error of 4.06±1.2% MVC(F90). Results demonstrated that advanced EMGσ processors lead to improved joint torque estimation, as do longer model training durations.
NASA Technical Reports Server (NTRS)
Tsaoussi, Lucia S.; Koblinsky, Chester J.
1994-01-01
In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, a methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned on a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and from the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics.
Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
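For independent error sources, formal covariance propagation of an error budget reduces to adding the components in quadrature. The sketch below illustrates that arithmetic; only the 11 cm total and the 3.5 cm time-dependent figure come from the abstract, while the split among the individual terms is entirely hypothetical.

```python
import numpy as np

# Hypothetical component budget (cm RMS); the geoid dominates, as reported.
geoid = 10.4        # static geoid contribution (hypothetical value)
orbit = 2.6         # radial orbit error (hypothetical)
tides = 2.1         # ocean tide model error (hypothetical)
range_corr = 1.0    # range-correction errors (hypothetical)

# Independent errors combine in quadrature.
total = np.sqrt(geoid**2 + orbit**2 + tides**2 + range_corr**2)
# Removing the (static) geoid term leaves the time-dependent error.
time_dependent = np.sqrt(orbit**2 + tides**2 + range_corr**2)
print(f"total: {total:.1f} cm, geoid removed: {time_dependent:.1f} cm")
```

Because the geoid term enters quadratically, it dominates the total even though the remaining terms are individually non-negligible, which is why removing it drops the budget from about 11 cm to about 3.5 cm.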
Gordon, H R; Wang, M
1992-07-20
The first step in the coastal zone color scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering contribution, Lr(r), to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm Lr(r), is computed by assuming that the ocean surface is flat. Computations of the radiance leaving a Rayleigh-scattering atmosphere overlying a rough Fresnel-reflecting ocean are presented to assess the radiance error caused by the flat-ocean assumption. The surface-roughness model is described in detail for both scalar and vector (including polarization) radiative transfer theory. The computations utilizing the vector theory show that the magnitude of the error significantly depends on the assumptions made in regard to the shadowing of one wave by another. In the case of the coastal zone color scanner bands, we show that for moderate solar zenith angles the error is generally below the 1 digital count level, except near the edge of the scan for high wind speeds. For larger solar zenith angles, the error is generally larger and can exceed 1 digital count at some wavelengths over the entire scan, even for light winds. The error in Lr(r) caused by ignoring surface roughness is shown to be the same order of magnitude as that caused by uncertainties of +/- 15 mb in the surface atmospheric pressure or of +/- 50 Dobson units in the ozone concentration. For future sensors, which will have greater radiometric sensitivity, the error caused by the flat-ocean assumption in the computation of Lr(r) could be as much as an order of magnitude larger than the noise-equivalent spectral radiance in certain situations.
Linguistic pattern analysis of misspellings of typically developing writers in grades 1-9.
Bahr, Ruth Huntley; Sillian, Elaine R; Berninger, Virginia W; Dow, Michael
2012-12-01
A mixed-methods approach, evaluating triple word-form theory, was used to describe linguistic patterns of misspellings. Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in Grades 1-9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade-level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between Grades 4 and 5. Similar error types were noted across age groups, but the nature of linguistic feature error changed with age. Triple word-form theory was supported. By Grade 1, orthographic errors predominated, and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects nonlinear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling.
NASA Astrophysics Data System (ADS)
Zapata, N.; Martínez-Cob, A.
2001-12-01
This paper reports a study undertaken to evaluate the feasibility of the surface renewal method to accurately estimate long-term evaporation from the playa and margins of an endorheic salty lagoon (Gallocanta lagoon, Spain) under semiarid conditions. High-frequency temperature readings were taken for two time lags (r) and three measurement heights (z) in order to obtain surface renewal sensible heat flux (HSR) values. These values were compared against eddy covariance sensible heat flux (HEC) values for a calibration period (25-30 July 2000). Error analysis statistics (index of agreement, IA; root mean square error, RMSE; and systematic mean square error, MSEs) showed that the agreement between HSR and HEC improved as measurement height decreased and time lag increased. Calibration factors α were obtained for all analyzed cases. The best results were obtained for the z=0.9 m (r=0.75 s) case, for which α=1.0 was observed. In this case, uncertainty was about 10% in terms of relative error (RE). Latent heat flux values were obtained by solving the energy balance equation for both the surface renewal (LESR) and the eddy covariance (LEEC) methods, using HSR and HEC, respectively, and measurements of net radiation and soil heat flux. For the calibration period, error analysis statistics for LESR were quite similar to those for HSR, although errors were mostly random. LESR uncertainty was less than 9%. Calibration factors were applied to a validation data subset (30 July-4 August 2000) for which meteorological conditions were somewhat different (higher temperatures and wind speed, and lower solar and net radiation). Error analysis statistics for both HSR and LESR were quite good for all cases, showing the goodness of the calibration factors. Nevertheless, the results obtained for the z=0.9 m (r=0.75 s) case were still the best.
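The error statistics named in the abstract (IA, RMSE, and a systematic mean square error derived from the least-squares line of predictions on observations) can be sketched as follows; the function name and the regression-based definition of MSEs are assumptions, not taken from the paper:

```python
import math

def error_stats(obs, pred):
    """Willmott's index of agreement (IA), RMSE, and systematic mean
    square error (MSEs) from the least-squares line of pred on obs."""
    n = len(obs)
    mean_o = sum(obs) / n
    mean_p = sum(pred) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / n)
    # Index of agreement (Willmott, 1982): 1 - SSE / potential error
    pot = sum((abs(p - mean_o) + abs(o - mean_o)) ** 2 for o, p in zip(obs, pred))
    ia = 1.0 - sum((p - o) ** 2 for o, p in zip(obs, pred)) / pot
    # Regression pred ~ a + b*obs; MSEs measures the systematic part
    b = (sum(o * p for o, p in zip(obs, pred)) - n * mean_o * mean_p) / \
        (sum(o * o for o in obs) - n * mean_o ** 2)
    a = mean_p - b * mean_o
    mses = sum((a + b * o - o) ** 2 for o in obs) / n
    return ia, rmse, mses
```

A perfect prediction gives IA = 1 and RMSE = MSEs = 0; a constant offset shows up entirely in the systematic component.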
Ultrahigh Error Threshold for Surface Codes with Biased Noise
NASA Astrophysics Data System (ADS)
Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.
2018-02-01
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
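The hashing bound mentioned above is the zero-rate point of 1 - H(p) for the Pauli error distribution. A sketch, assuming the common parameterization of bias as eta = pZ/(pX + pY) with pX = pY; the function name and the bisection approach are mine, not the paper's:

```python
import math

def hashing_threshold(eta):
    """Zero-rate hashing bound error rate p for biased Pauli noise with
    eta = pZ/(pX + pY), pX = pY: solves H(1-p, pX, pY, pZ) = 1 bit."""
    def H(p):
        pz = eta * p / (eta + 1.0)
        px = py = p / (2.0 * (eta + 1.0))
        return -sum(q * math.log2(q) for q in (1.0 - p, px, py, pz) if q > 0)
    lo, hi = 1e-9, 0.5 - 1e-9   # H is increasing in p on this interval
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if H(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Under this parameterization, eta = 0.5 reproduces depolarizing noise (threshold near 18.9%), and the bound approaches 50% as the bias grows, consistent with the pure-dephasing limit discussed in the abstract.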
Nkenke, Emeka; Lehner, Bernhard; Kramer, Manuel; Haeusler, Gerd; Benz, Stefanie; Schuster, Maria; Neukam, Friedrich W; Vairaktaris, Eleftherios G; Wurm, Jochen
2006-03-01
To assess measurement errors of a novel technique for the three-dimensional determination of the degree of facial symmetry in patients suffering from unilateral cleft lip and palate malformations. Technical report, reliability study. Cleft Lip and Palate Center of the University of Erlangen-Nuremberg, Erlangen, Germany. The three-dimensional facial surface data of five 10-year-old unilateral cleft lip and palate patients were subjected to the analysis. Distances, angles, surface areas, and volumes were assessed twice. Calculations were made for method error, intraclass correlation coefficient, and repeatability of the measurements of distances, angles, surface areas, and volumes. The method errors were less than 1 mm for distances and less than 1.5 degrees for angles. The intraclass correlation coefficients showed values greater than .90 for all parameters. The repeatability values were comparable for cleft and noncleft sides. The small method errors, high intraclass correlation coefficients, and comparable repeatability values for cleft and noncleft sides reveal that the new technique is appropriate for clinical use.
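Method error and intraclass correlation for paired repeated measurements can be sketched as below; Dahlberg's formula and the one-way random-effects ICC are standard choices for this kind of reliability study, though the paper does not state which variants it used:

```python
def dahlberg(first, second):
    """Dahlberg's method error for paired repeats: sqrt(sum(d^2)/(2n))."""
    n = len(first)
    return (sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n)) ** 0.5

def icc_1_1(first, second):
    """One-way random-effects intraclass correlation for two repeated
    measurements per subject: (MSB - MSW) / (MSB + MSW) when k = 2."""
    n = len(first)
    subj_means = [(a + b) / 2 for a, b in zip(first, second)]
    grand = sum(subj_means) / n
    msb = 2 * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for a, b, m in zip(first, second, subj_means)) / n
    return (msb - msw) / (msb + msw)
```

Identical repeats give a method error of 0 and an ICC of 1, matching the abstract's criterion of ICC > .90 for clinically acceptable repeatability.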
Procedural Error and Task Interruption
2016-09-30
…red for research on errors and individual differences. Results indicate predictive validity for fluid intelligence and specific forms of work… The task generates rich data on several kinds of errors, including procedural errors in which steps are skipped or repeated. Subject terms: procedural error, task interruption, individual differences, fluid intelligence, sleep deprivation.
The Sources of Error in Spanish Writing.
ERIC Educational Resources Information Center
Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.
1999-01-01
Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish and that substitution is the most frequent type of error. (RS)
Sulcal set optimization for cortical surface registration.
Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M
2010-04-15
Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for intersubject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
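The conditioning step can be illustrated for the simplest case of a single constraint curve: the conditional variance of each unconstrained curve given the constraint follows from Gaussian conditioning, var(i|c) = S_ii - S_ic^2 / S_cc. A pure-Python sketch over a toy covariance matrix (the function name and brute-force search are assumptions, not the paper's algorithm):

```python
def best_constraint(cov):
    """Pick the single curve whose use as a registration constraint
    minimizes the summed conditional error variance of the remaining
    curves, assuming jointly Gaussian sulcal errors (N_C = 1 case)."""
    n = len(cov)
    best = None
    for c in range(n):
        # Gaussian conditioning on curve c: var(i|c) = S_ii - S_ic^2/S_cc
        total = sum(cov[i][i] - cov[i][c] ** 2 / cov[c][c]
                    for i in range(n) if i != c)
        if best is None or total < best[1]:
            best = (c, total)
    return best
```

A curve strongly correlated with the rest is the most informative constraint, which is the intuition behind the paper's joint N_C-subset optimization.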
Segmented Mirror Image Degradation Due to Surface Dust, Alignment and Figure
NASA Technical Reports Server (NTRS)
Schreur, Julian J.
1999-01-01
In 1996 an algorithm was developed to include the effects of surface roughness in the calculation of the point spread function of a telescope mirror. This algorithm has been extended to include the effects of alignment errors and figure errors for the individual elements, and an overall contamination by surface dust. The final algorithm builds an array for a guard-banded pupil function of a mirror that may or may not have a central hole, a central reflecting segment, or an outer ring of segments. The central hole, central reflecting segment, and outer ring may be circular or polygonal, and the outer segments may have trimmed corners. The modeled point spread functions show that x-tilt and y-tilt, or the corresponding R-tilt and theta-tilt for a segment in an outer ring, are readily apparent for maximum wavefront errors of 0.1 lambda. A similarly sized piston error is also apparent, but integral wavelength piston errors are not. Severe piston error introduces a focus error of the opposite sign, so piston could be adjusted to compensate for segments with varying focal lengths. Dust affects the image principally by decreasing the Strehl ratio, or peak intensity of the image. For an eight-meter telescope a 25% coverage by dust produced a scattered light intensity of 10(exp -9) of the peak intensity, a level well below detectability.
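The segment-piston effect described above, with integer-wavelength piston being invisible in the point spread function, can be reproduced with a 1-D toy aperture and a direct discrete Fourier sum; this is a schematic stand-in for the paper's guard-banded 2-D pupil-function algorithm, and all names here are illustrative:

```python
import cmath
import math

def far_field_intensity(phase, n_angles=64):
    """|DFT|^2 of a 1-D aperture with per-sample phase in waves;
    a crude 1-D analog of the pupil-function PSF calculation."""
    n = len(phase)
    out = []
    for k in range(n_angles):
        u = (k - n_angles // 2) / n_angles
        field = sum(cmath.exp(2j * math.pi * (phase[x] - u * x))
                    for x in range(n))
        out.append(abs(field) ** 2)
    return out

def psf_with_piston(p, seg=16):
    """Two-segment aperture with the second segment pistoned by p waves."""
    return far_field_intensity([0.0] * seg + [p] * seg)
```

A piston of exactly one wave leaves the intensity pattern unchanged (the phase factor is a full cycle), while a 0.1-wave piston visibly redistributes energy, mirroring the abstract's observation.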
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, there is some lack of confidence in the ability of the state error covariance matrices provided by these techniques to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated.
It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2, and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
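One common reading of the "average performance index" idea is to scale the theoretical weighted-least-squares covariance by the average weighted residual variance, so that unmodeled errors visible in the residuals inflate the covariance. The sketch below does this for a hypothetical 2-parameter batch estimate with a diagonal weight matrix; it is a schematic illustration of the concept, not the paper's exact algebra:

```python
def empirical_covariance(A, W, residuals):
    """Scale the theoretical WLS covariance (A^T W A)^-1 by the average
    weighted residual variance J_bar = (1/m) sum w_i e_i^2.
    A: list of 2-element rows, W: diagonal weights, residuals: e_i."""
    m = len(residuals)
    # Normal matrix N = A^T W A for the 2-parameter case
    n00 = sum(w * a[0] * a[0] for a, w in zip(A, W))
    n01 = sum(w * a[0] * a[1] for a, w in zip(A, W))
    n11 = sum(w * a[1] * a[1] for a, w in zip(A, W))
    det = n00 * n11 - n01 * n01
    p_theory = [[n11 / det, -n01 / det], [-n01 / det, n00 / det]]
    j_bar = sum(w * e * e for w, e in zip(W, residuals)) / m
    return [[j_bar * x for x in row] for row in p_theory]
```

When the residuals are consistent with the assumed unit weights, the empirical and theoretical matrices agree; larger residuals inflate the empirical matrix proportionally.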
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Dongwoo; Lee, Eonseok; Kim, Hyunchang
2014-06-21
Offset printing processes are promising candidates for producing printed electronics due to their capacity for fine patterning and suitability for mass production. To print high-resolution patterns with good overlay using offset printing, the velocities of the two contact surfaces between which ink is transferred should be synchronized perfectly. However, the exact velocity of the contact surfaces is unknown due to several imperfections, including tolerances, blanket swelling, and velocity ripple, which prevents the system from being operated in the synchronized condition. In this paper, a novel measurement method based on the sticking model of friction force was proposed to determine the best synchronized condition, i.e., the condition in which the rate of synchronization error is minimized. It was verified by experiment that the friction force can accurately represent the rate of synchronization error. Based on the measurement results of the synchronization error, the allowable margin of synchronization error when printing high-resolution patterns was investigated experimentally using reverse offset printing. There is a region where the patterning performance is unchanged even though the synchronization error is varied, and this may be viewed as indirect evidence that printability is secured when there is no slip at the contact interface. To understand what happens at the contact surfaces during ink transfer, a deformation model of the blanket's surface was developed. The model estimates how much deformation of the blanket's surface can be borne by the synchronization error when there is no slip at the contact interface. In addition, the model shows that the synchronization error results in scale variation in the machine direction (MD), which means that the printing registration in the MD can be adjusted actively by controlling the synchronization if there is a sufficient margin of synchronization error to guarantee printability.
The effect of synchronization on the printing registration was verified experimentally using gravure offset printing. The variations in synchronization result in differences in the MD scale, and the measured MD scale matches exactly with the modeled MD scale.
Medication Errors in Patients with Enteral Feeding Tubes in the Intensive Care Unit.
Sohrevardi, Seyed Mojtaba; Jarahzadeh, Mohammad Hossein; Mirzaei, Ehsan; Mirjalili, Mahtabalsadat; Tafti, Arefeh Dehghani; Heydari, Behrooz
2017-01-01
Most patients admitted to Intensive Care Units (ICU) have problems using oral medication or ingesting solid forms of drugs. Selecting the most suitable dosage form for such patients is a challenge. The current study was conducted to assess the frequency and types of errors in oral medication administration in patients with enteral feeding tubes or swallowing problems. A cross-sectional study was performed in the ICU of Shahid Sadoughi Hospital, Yazd, Iran. Patients were assessed for the incidence and types of medication errors occurring in the process of preparation and administration of oral medicines. Ninety-four patients were involved in this study and 10,250 administrations were observed. In total, 4,753 errors occurred among the studied patients. The most commonly used drugs were pantoprazole tablet, piracetam syrup, and losartan tablet. A total of 128 different types of drugs and nine different oral pharmaceutical preparations were prescribed for the patients. Forty-one (35.34%) of the 116 different solid drugs (excluding effervescent tablets and powders) could be substituted by liquid or injectable forms. The most common error was administration at the wrong time. Errors of wrong dose preparation and wrong administration accounted for 24.04% and 25.31% of all errors, respectively. In this study, at least three-fourths of the patients experienced medication errors. The occurrence of these errors can greatly impair the quality of the patients' pharmacotherapy, and more attention should be paid to this issue.
Registration of organs with sliding interfaces and changing topologies
NASA Astrophysics Data System (ADS)
Berendsen, Floris F.; Kotte, Alexis N. T. J.; Viergever, Max A.; Pluim, Josien P. W.
2014-03-01
Smoothness and continuity assumptions on the deformation field in deformable image registration do not hold for applications where the imaged objects have sliding interfaces. Recent extensions to deformable image registration that accommodate sliding motion of organs are limited to sliding along approximately planar surfaces or cannot model sliding that changes the topological configuration in the case of multiple organs. We propose a new extension to free-form image registration that is not limited in this way. Our method uses a transformation model that consists of uniform B-spline transformations for each organ region separately, based on a segmentation of one image. Since this model can create overlapping regions or gaps between regions, we introduce a penalty term that minimizes this undesired effect. The penalty term acts on the surfaces of the organ regions and is optimized simultaneously with the image similarity. To evaluate our method, registrations were performed on publicly available inhale-exhale CT scans for which the performance of other methods is known. Target registration errors are computed on dense landmark sets that are available with these datasets. On these data our method outperforms the other methods in terms of target registration error and, where applicable, also in terms of overlap and gap volumes. The other methods' approximation of sliding motion along planar surfaces is reasonably well suited to the motion present in the lung data. The ability of our method to handle sliding along curved boundaries and changing region topology configurations was demonstrated on synthetic images.
Highly compact fiber Fabry-Perot interferometer: A new instrument design
NASA Astrophysics Data System (ADS)
Nowakowski, B. K.; Smith, D. T.; Smith, S. T.
2016-11-01
This paper presents the design, construction, and characterization of a new optical-fiber-based, low-finesse Fabry-Perot interferometer with a simple cavity formed by two reflecting surfaces (the end of a cleaved optical fiber and a plane, reflecting counter-surface), for the continuous measurement of displacements of several nanometers to several tens of millimeters. No beam collimation or focusing optics are required, resulting in a displacement sensor that is extremely compact (optical fiber diameter 125 μm), is surprisingly tolerant of misalignment (more than 5°), and can be used over a very wide range of temperatures and environmental conditions, including ultra-high-vacuum. The displacement measurement is derived from interferometric phase measurements using an infrared laser source whose wavelength is modulated sinusoidally at a frequency f. The phase signal is in turn derived from changes in the amplitudes of demodulated signals, at both the modulation frequency, f, and its harmonic at 2f, coming from a photodetector that is monitoring light intensity reflected back from the cavity as the cavity length changes. Simple quadrature detection results in phase errors corresponding to displacement errors of up to 25 nm, but by using compensation algorithms discussed in this paper, these inherent non-linearities can be reduced to below 3 nm. In addition, wavelength sweep capability enables measurement of the absolute surface separation. This experimental design creates a unique set of displacement measuring capabilities not previously combined in a single interferometer.
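The quadrature-detection nonlinearity and its compensation can be illustrated schematically: a gain imbalance between the two quadrature channels produces a periodic phase error that rescaling removes. This one-parameter toy is a deliberate simplification of the compensation algorithms in the paper, which must also handle offsets and imperfect quadrature angles; the 10% imbalance is an assumed value:

```python
import math

def corrected_phase(i_sig, q_sig, gain):
    """Rescale the quadrature channel before the arctangent; a schematic
    one-parameter version of quadrature nonlinearity compensation."""
    return math.atan2(q_sig / gain, i_sig)

gain = 1.1  # assumed 10% gain imbalance on the quadrature channel
worst_naive = worst_corrected = 0.0
for k in range(1000):
    phi = 2 * math.pi * k / 1000 - math.pi
    i_sig, q_sig = math.cos(phi), gain * math.sin(phi)
    naive = math.atan2(q_sig, i_sig)          # uncompensated phase
    worst_naive = max(worst_naive,
                      abs(math.remainder(naive - phi, 2 * math.pi)))
    fixed = corrected_phase(i_sig, q_sig, gain)
    worst_corrected = max(worst_corrected,
                          abs(math.remainder(fixed - phi, 2 * math.pi)))
```

For a 10% imbalance the uncompensated phase error peaks near 48 mrad (tens of nanometers at infrared wavelengths), while the rescaled arctangent recovers the phase to machine precision, mirroring the 25 nm-to-3 nm improvement quoted in the abstract.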
NASA Technical Reports Server (NTRS)
Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.
1994-01-01
Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical justification, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed, depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximal at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (but not first-order) light-scattering paths involving right scattering angles and right-angle rotations of the scattering plane.
Influence of sample pool on interference pattern in defocused interferometric particle imaging.
Zhang, Hongxia; Zhou, Ye; Liu, Jing; Jia, Dagong; Liu, Tiegen
2017-04-01
Particles widely exist in various fields. In practical experiments, it is sometimes necessary to dissolve particles in water in a sample pool. This article proposes two typical layouts of the sample pool in defocused interferometric particle imaging (IPI). In layout I the sample pool surface is perpendicular to the incident light, and in layout II the sample pool surface is perpendicular to the scattered light. For layout I, the scattered light of the particles does not remain symmetric in the meridional and sagittal planes after being refracted by the sample pool surface, and elliptical interference patterns are formed at the defocused IPI image plane. For layout II, however, the scattered light remains symmetric after being refracted, and circular interference patterns are formed. For the two sample pool layouts, the ray-tracing software ZEMAX was used to simulate the spot shape of particles at different defocus distances. Furthermore, the effect of the tilt angle of the sample pool on the ellipticity of the interference pattern is analyzed. The relative error of the axis ratio for layout I does not exceed 9.2% at different defocus distances. The experimental results are in good agreement with the theoretical analyses, indicating that layout II is more reasonable for the IPI system.
[Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].
Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie
2013-11-01
In order to improve the accuracy of AES quantitative analysis, we combined XPS with AES and studied a method to reduce the error of AES quantitative analysis. We selected Pt-Co, Cu-Au, and Cu-Ag binary alloy thin films as samples and used XPS to correct the AES quantitative analysis results by adjusting the Auger sensitivity factors so that the two sets of quantitative results agreed more closely. We then verified the accuracy of AES quantitative analysis with the revised sensitivity factors on other samples with different composition ratios; the results showed that the corrected relative sensitivity factors can reduce the error of AES quantitative analysis to less than 10%. Peak definition is difficult in the integral-spectrum form of AES analysis, since choosing the starting and ending points when determining the characteristic Auger peak intensity area involves great uncertainty. To make analysis easier, we also processed the data in differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and verified the accuracy of quantitative analysis on the other samples with different composition ratios. The results showed that the analytical error of AES quantitative analysis was reduced to less than 9%. This shows that the accuracy of AES quantitative analysis can be greatly improved by combining XPS with AES to correct the Auger sensitivity factors, since matrix effects are taken into account. Good consistency was obtained, proving the feasibility of this method.
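The correction idea, rescaling Auger relative sensitivity factors so that AES reproduces an XPS-derived reference composition, can be sketched as follows; the function names and the convention of anchoring one factor at 1 are assumptions:

```python
def aes_composition(intensities, sens):
    """Atomic fractions from Auger peak intensities and relative
    sensitivity factors: C_i = (I_i/S_i) / sum_j (I_j/S_j)."""
    scaled = [i / s for i, s in zip(intensities, sens)]
    total = sum(scaled)
    return [x / total for x in scaled]

def corrected_sens(intensities, xps_fractions, anchor=0):
    """Rescale sensitivity factors so AES reproduces the XPS-derived
    composition of a reference alloy, keeping S[anchor] fixed at 1."""
    base = intensities[anchor] / xps_fractions[anchor]
    return [(i / c) / base for i, c in zip(intensities, xps_fractions)]
```

Applying the corrected factors to the reference sample reproduces the XPS composition exactly; the paper's verification step then checks how well the same factors transfer to alloys of different composition.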
Interferometric detection of freeze-thaw displacements of Alaskan permafrost using ERS-1 data
NASA Technical Reports Server (NTRS)
Werner, Charles L.; Gabriel, Andrew K.
1993-01-01
The possibility of making large scale (50 km) measurements of motions of the earth's surface with high resolution (10 m) and very high accuracy (1 cm) from multipass SAR interferometry was established in 1989. Other experiments have confirmed the viability and usefulness of the method. Work is underway in various groups to measure displacements from volcanic activity, seismic events, glacier motion, and in the present study, freeze-thaw cycles in Alaskan permafrost. The ground is known to move significantly in these cycles, and provided that freezing does not cause image decorrelation, it should be possible to measure both ground swelling and subsidence. The authors have obtained data from multiple passes of ERS-1 over the Toolik Lake region of northern Alaska of suitable quality for interferometry. The data are processed into images, and single interferograms are formed in the usual manner. Phase unwrapping is performed, and the multipass baselines are estimated from the images using both orbit ephemerides and scene tie points. The phases are scaled by the baseline ratio, and a double-difference interferogram (DDI) is formed. It is found that there is a residual 'saddle-shape' phase error across the image, which is postulated to be caused by a small divergence (10(exp -2) deg.) in the orbits. A simulation of a DDI from divergent orbits confirms the shape and magnitude of the error. A two-dimensional least squares fit to the error is performed, which is used to correct the DDI. The final, corrected DDI shows significant phase (altitude) changes over the period of the observation.
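The two-dimensional least-squares fit to a saddle-shaped phase error can be sketched as fitting the coefficient of the x*y monomial; on a grid symmetric about zero this term decouples from piston and tilt, so a single projection suffices. This is a simplification of the general 2-D fit described above, with hypothetical function names:

```python
def fit_saddle(phase, xs, ys):
    """Least-squares coefficient of the x*y 'saddle' term over a grid;
    on a grid symmetric about zero it decouples from piston and tilt."""
    num = den = 0.0
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            num += phase[j][i] * x * y
            den += (x * y) ** 2
    return num / den

def remove_saddle(phase, xs, ys):
    """Subtract the fitted saddle, as done to correct the DDI."""
    c = fit_saddle(phase, xs, ys)
    corrected = [[phase[j][i] - c * x * y for i, x in enumerate(xs)]
                 for j, y in enumerate(ys)]
    return corrected, c
```

Because sums of x*y, x^2*y, and x*y^2 vanish on a symmetric grid, the fit recovers the saddle coefficient even in the presence of piston and tilt terms.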
Round-off error in long-term orbital integrations using multistep methods
NASA Technical Reports Server (NTRS)
Quinlan, Gerald D.
1994-01-01
Techniques for reducing round-off error are compared by testing them on high-order Stormer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
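The extended-precision additions mentioned above can be emulated in software with compensated (Kahan) summation, which carries each addition's rounding error in a correction term. This illustrates the round-off mechanism in long accumulations; it is not the paper's backward-difference implementation:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: recovers the low-order bits lost
    in each floating-point addition via a running correction term."""
    total = 0.0
    c = 0.0  # compensation for lost low-order bits
    for v in values:
        y = v - c            # apply the correction to the next term
        t = total + y        # low-order bits of y may be lost here
        c = (t - total) - y  # algebraically 0; in fp it is the lost part
        total = t
    return total
```

Summing one million copies of 0.1 naively accumulates an error several orders of magnitude larger than the compensated result, the same effect that degrades long orbital integrations step by step.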
ERIC Educational Resources Information Center
Schretlen, David; And Others
1994-01-01
Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…
NASA Astrophysics Data System (ADS)
Dondurur, Derman; Sarı, Coşkun
2004-07-01
A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. A damped least-squares inversion algorithm is used to estimate the anomalous body parameters, e.g., the depth, dip, and surface projection point of the target. Iteration progress is controlled by a maximum relative error value, and iteration continues until a tolerance value is satisfied, while the modification of Marquardt's parameter is controlled by the sum of squared errors. To form the Jacobian matrix, the partial derivatives of the theoretical anomaly expression with respect to the parameters being optimized are calculated by numerical differentiation using first-order forward finite differences. A theoretical anomaly and two field anomalies are used to test the accuracy and applicability of the inversion program. Inversion of the field data indicated that the depth and surface projection point of the conductor are estimated correctly; however, considerable discrepancies appeared in the estimated dip angles. It is therefore concluded that the most important factor in the misfit between observed and calculated data is that the theory used for computing Slingram anomalies is valid only for thin conductors, and this assumption may have caused incorrect dip estimates in the case of wide conductors.
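A damped least-squares (Marquardt) iteration with a forward-difference Jacobian, as described above, can be sketched for a generic 2-parameter model; the update rule and the raise-on-failure control of Marquardt's parameter follow the standard algorithm, not the FORTRAN 77 code itself:

```python
import math

def marquardt_fit(f, xs, ys, p0, n_iter=50):
    """Damped least-squares for a 2-parameter model f(x, p), with a
    first-order forward-difference Jacobian. Marquardt's parameter is
    raised when the sum of squared errors grows, lowered when it shrinks."""
    p = list(p0)
    lam, h = 1e-3, 1e-6

    def sse(params):
        return sum((y - f(x, params)) ** 2 for x, y in zip(xs, ys))

    for _ in range(n_iter):
        r = [y - f(x, p) for x, y in zip(xs, ys)]
        # Forward-difference Jacobian, one column per parameter
        J = []
        for x in xs:
            row = []
            for k in range(2):
                pk = list(p)
                pk[k] += h
                row.append((f(x, pk) - f(x, p)) / h)
            J.append(row)
        # Normal equations with Marquardt damping on the diagonal
        a00 = sum(j[0] * j[0] for j in J)
        a11 = sum(j[1] * j[1] for j in J)
        a01 = sum(j[0] * j[1] for j in J)
        g0 = sum(j[0] * ri for j, ri in zip(J, r))
        g1 = sum(j[1] * ri for j, ri in zip(J, r))
        m00, m11 = a00 * (1 + lam), a11 * (1 + lam)
        det = m00 * m11 - a01 * a01
        dp = [(m11 * g0 - a01 * g1) / det, (m00 * g1 - a01 * g0) / det]
        trial = [p[0] + dp[0], p[1] + dp[1]]
        if sse(trial) < sse(p):
            p, lam = trial, lam * 0.5
        else:
            lam *= 10.0
    return p
```

Fitting a clean two-parameter exponential recovers the generating parameters, which is the behavior the paper relies on for depth and surface-projection estimates.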
Integrating bathymetric and topographic data
NASA Astrophysics Data System (ADS)
Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat
2017-11-01
The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high-resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online, and seamless integration of high-resolution bathymetric and topographic data is desirable. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to identify the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and root mean square error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
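Of the four methods compared, Inverse Distance to Power is the simplest to state. A minimal sketch on synthetic scattered "soundings" (the surface, sample counts, and power exponent are illustrative assumptions, not the study's data):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance to Power: each query point is a weighted
    average of scattered samples, weighted by 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_known

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Synthetic "depth" surface sampled at scattered points, queried on a transect.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))
depth = np.sin(2 * np.pi * pts[:, 0]) + pts[:, 1]          # known soundings
query = np.column_stack([np.linspace(0, 1, 50), np.full(50, 0.5)])
truth = np.sin(2 * np.pi * query[:, 0]) + query[:, 1]
err = rmse(idw(pts, depth, query, power=3.0), truth)
```

The same RMSE comparison, applied per method over a held-out grid, is how candidate interpolators can be ranked.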
Modeling radiation forces acting on TOPEX/Poseidon for precision orbit determination
NASA Technical Reports Server (NTRS)
Marshall, J. A.; Luthcke, S. B.; Antreasian, P. G.; Rosborough, G. W.
1992-01-01
Geodetic satellites such as GEOSAT, SPOT, ERS-1, and TOPEX/Poseidon require accurate orbital computations to support the scientific data they collect. Until recently, gravity field mismodeling was the major source of error in precise orbit definition. For TOPEX/Poseidon, however, albedo and infrared re-radiation and spacecraft thermal imbalances must, in combination, produce no more than a 6-cm radial root-mean-square (RMS) error over a 10-day period. This requires the development of nonconservative force models that take the satellite's complex geometry, attitude, and surface properties into account. For TOPEX/Poseidon, a 'box-wing' satellite form was investigated that models the satellite as a combination of flat plates arranged in a box shape with a connected solar array. The nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations that are summed to give the total effect on the satellite center of mass. To test the validity of this concept, 'micro-models' based on finite element analysis of TOPEX/Poseidon were used to generate acceleration histories in a wide variety of orbit orientations. These profiles were then compared to the box-wing model. The results of these simulations and their implications for the ability to precisely model the TOPEX/Poseidon orbit are discussed.
Article Errors in the English Writing of Saudi EFL Preparatory Year Students
ERIC Educational Resources Information Center
Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.
2017-01-01
This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year program in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…
Antunes, Sofia; Esposito, Antonio; Palmisano, Anna; Colantoni, Caterina; Cerutti, Sergio; Rizzo, Giovanna
2016-05-01
Extraction of the cardiac surfaces of interest from multi-detector computed tomographic (MDCT) data is a prerequisite step for cardiac analysis, as well as for image-guidance procedures. Most existing methods need manual corrections, which are time-consuming. We present a fully automatic segmentation technique for the extraction of the right ventricle, left ventricular endocardium, and epicardium from MDCT images. The method consists of a 3D level set surface evolution approach coupled to a new stopping function based on a multiscale directional second-derivative Gaussian filter, which is able to stop propagation precisely on the real boundary of the structures of interest. We validated the segmentation method on 18 MDCT volumes from healthy and pathologic subjects, using manual segmentation performed by a team of expert radiologists as the gold standard. Segmentation errors were assessed for each structure, resulting in a surface-to-surface mean error below 0.5 mm and a percentage of surface distances with errors less than 1 mm above 80%. Moreover, in comparison to other segmentation approaches proposed in previous work, our method presented improved accuracy (the percentage of surface distances with errors less than 1 mm increased by 8-20% for all structures). The obtained results suggest that our approach is accurate and effective for the segmentation of the ventricular cavities and myocardium from MDCT images.
Projections onto the Pareto surface in multicriteria radiation therapy optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokrantz, Rasmus, E-mail: bokrantz@kth.se, E-mail: rasmus.bokrantz@raysearchlabs.com; Miettinen, Kaisa
2015-10-15
Purpose: To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested in which dose-volume histogram constraints are used to prevent the projection from causing a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window, and for spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose-volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.
The 129Xe nuclear shielding surfaces for Xe interacting with linear molecules CO2, N2, and CO
NASA Astrophysics Data System (ADS)
de Dios, Angel C.; Jameson, Cynthia J.
1997-09-01
We have calculated the intermolecular nuclear magnetic shielding surfaces for 129Xe in the systems Xe-CO2, Xe-N2, and Xe-CO using a gauge-invariant ab initio method at the coupled Hartree-Fock level with gauge-including atomic orbitals (GIAO). Implementation of a large basis set (240 basis functions) on the Xe gives very small counterpoise corrections which indicates that the basis set superposition errors in the calculated shielding values are negligible. These are the first intermolecular shielding surfaces for Xe-molecule systems. The surfaces are highly anisotropic and can be described adequately by a sum of inverse even powers of the distance with explicit angle dependence in the coefficients expressed by Legendre polynomials P2n(cos θ), n=0-3, for Xe-CO2 and Xe-N2. The Xe-CO shielding surface is well described by a similar functional form, except that Pn(cos θ), n=0-4 were used. When averaged over the anisotropic potential function these shielding surfaces provide the second virial coefficient of the nuclear magnetic resonance (NMR) chemical shift observed in gas mixtures. The energies from the self-consistent field (SCF) calculations were used to construct potential surfaces, using a damped dispersion form. These potential functions are compared with existing potentials in their predictions of the second virial coefficients of NMR shielding, the pressure virial coefficients, the density coefficient of the mean-square torque from infrared absorption, and the rotational constants and other average properties of the van der Waals complexes. Average properties of the van der Waals complexes were obtained by quantum diffusion Monte Carlo solutions of the vibrational motion using the various potentials and compared with experiment.
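The functional form described above (a sum of inverse even powers of the distance, with angle dependence carried by even Legendre polynomials in the coefficients) can be evaluated generically. The leading power and all coefficient values below are illustrative placeholders, not the paper's fitted shielding surface:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def shielding(r, cos_theta, coeffs, leading_power=6):
    """Evaluate sigma(r, theta) = sum_k C_k(theta) / r**(leading_power + 2k),
    where each C_k(theta) = sum_n c_{k,n} P_{2n}(cos theta).
    Powers and coefficients here are placeholders, not fitted values."""
    total = np.zeros_like(np.asarray(cos_theta, dtype=float))
    for k, row in enumerate(coeffs):
        # spread the P_{2n} coefficients onto even Legendre indices
        leg = np.zeros(2 * len(row) - 1)
        leg[0::2] = row
        total += legval(cos_theta, leg) / r ** (leading_power + 2 * k)
    return total

# Two inverse-power terms, each with P0 and P2 angular dependence (made up):
coeffs = [[-1.0, 0.3], [5.0, -1.2]]
theta = np.linspace(0.0, np.pi, 7)
sigma = shielding(4.0, np.cos(theta), coeffs)
```

Because only even Legendre polynomials appear, the surface is symmetric under theta -> pi - theta, as required for Xe interacting with the symmetric molecules CO2 and N2.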
Altitude errors arising from antenna/satellite attitude errors - Recognition and reduction
NASA Technical Reports Server (NTRS)
Godbey, T. W.; Lambert, R.; Milano, G.
1972-01-01
A review is presented of the three basic types of pulsed radar altimeter designs, as well as the source and form of altitude bias errors arising from antenna/satellite attitude errors in each design type. A quantitative comparison of the three systems was also made.
High-frequency fluctuations of surface temperatures in an urban environment
NASA Astrophysics Data System (ADS)
Christen, Andreas; Meier, Fred; Scherer, Dieter
2012-04-01
This study presents an attempt to resolve fluctuations in surface temperatures at scales of a few seconds to several minutes using time-sequential thermography (TST) from a ground-based platform. A scheme is presented to decompose a TST dataset into fluctuating, high-frequency, and long-term mean parts. To demonstrate the scheme's application, a set of four TST runs (day/night, leaves-on/leaves-off) recorded from a 125-m-high platform above a complex urban environment in Berlin, Germany, is used. Fluctuations in surface temperatures of different urban facets are measured and related to surface properties (material and form) and possible error sources. A number of relationships were found: (1) Surfaces with surface temperatures that were significantly different from air temperature experienced the highest fluctuations. (2) With increasing surface temperature above (below) air temperature, surface temperature fluctuations experienced a stronger negative (positive) skewness. (3) Surface materials with lower thermal admittance (lawns, leaves) showed higher fluctuations than surfaces with high thermal admittance (walls, roads). (4) Surface temperatures of emerged leaves fluctuate more compared to trees in a leaves-off situation. (5) In many cases, observed fluctuations were coherent across several neighboring pixels. The evidence from (1) to (5) suggests that atmospheric turbulence is a significant contributor to fluctuations. The study underlines the potential of using high-frequency thermal remote sensing in energy balance and turbulence studies at complex land-atmosphere interfaces.
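The decomposition into a long-term mean, a slowly varying part, and high-frequency fluctuations can be sketched for a single pixel's time series. The moving-average window, sampling rate, and synthetic signal below are illustrative assumptions, not the paper's scheme parameters:

```python
import numpy as np

def decompose(series, window):
    """Split a per-pixel temperature time series into a long-term mean,
    a slowly varying part (boxcar moving average of the anomaly), and
    high-frequency fluctuations (the residual)."""
    mean = series.mean()
    kernel = np.ones(window) / window
    slow = np.convolve(series - mean, kernel, mode="same")
    fluct = series - mean - slow
    return mean, slow, fluct

# 10 minutes at 2 Hz: a slow drift plus a fast fluctuation (synthetic).
t = np.linspace(0.0, 600.0, 1200)
series = 30 + 0.5 * np.sin(2 * np.pi * t / 300) \
            + 0.1 * np.sin(2 * np.pi * t / 5)
mean, slow, fluct = decompose(series, window=121)
recon = mean + slow + fluct
```

By construction the three parts sum back to the original series, so no variance is lost by the split.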
Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.
McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim
2016-09-01
We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism, which substantially negates the errors due to patient variability in biomechanics. A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented, optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP-referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometrically referenced pressure in cadaveric eyes demonstrates substantial equivalence to Goldmann applanation tonometry (GAT) in nominal eyes with the CATS prism, as predicted by modeling theory. A CATS-modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing Goldmann measurement technique or interpretation. Clinical validation is needed, but the analysis indicates a reduction in central corneal thickness (CCT) error alone to less than ±2 mm Hg using the CATS prism in 100% of a standard population, compared with only 54% of the population having less than ±2 mm Hg error with the present Goldmann prism. This article presents an easily adopted novel approach and critical design parameters to improve the accuracy of a Goldmann applanating tonometer.
NASA Astrophysics Data System (ADS)
Spiga, D.; Della Monica Ferreira, D.; Shortt, B.; Bavdaz, M.; Bergback Knudsen, E.; Bianucci, G.; Christensen, F.; Civitani, M.; Collon, M.; Conconi, P.; Fransen, S.; Marioni, F.; Massahi, S.; Pareschi, G.; Salmaso, B.; Jegers, A. S.; Tayabaly, K.; Valsecchi, G.; Westergaard, N.; Wille, E.
2017-09-01
The ATHENA X-ray observatory is a large-class ESA approved mission, with launch scheduled in 2028. The technology of silicon pore optics (SPO) was selected as baseline to assemble ATHENA's optic with hundreds of mirror modules, obtained by stacking wedged and ribbed silicon wafer plates onto silicon mandrels to form the Wolter-I configuration. In the current configuration, the optical assembly has a 3 m diameter and a 2 m2 effective area at 1 keV, with a required angular resolution of 5 arcsec. The angular resolution that can be achieved is chiefly the combination of 1) the focal spot size determined by the pore diffraction, 2) the focus degradation caused by surface and profile errors, 3) the aberrations introduced by the misalignments between primary and secondary segments, 4) imperfections in the co-focality of the mirror modules in the optical assembly. A detailed simulation of these aspects is required in order to assess the fabrication and alignment tolerances; moreover, the achievable effective area and angular resolution depend on the mirror module design. Therefore, guaranteeing these optical performances requires: a fast design tool to find the most performing solution in terms of mirror module geometry and population, and an accurate point spread function simulation from local metrology and positioning information. 
In this paper, we present the results of simulations in the framework of ESA-financed projects (SIMPOSiuM, ASPHEA, SPIRIT), in preparation of the ATHENA X-ray telescope, analyzing the mentioned points: 1) we deal with a detailed description of diffractive effects in an SPO mirror module, 2) we show ray-tracing results including surface and profile defects of the reflective surfaces, 3) we assess the effective area and angular resolution degradation caused by alignment errors between SPO mirror module's segments, and 4) we simulate the effects of co-focality errors in X-rays and in the UV optical bench used to study the mirror module alignment and integration.
Constraining the Surface Energy Balance of Snow in Complex Terrain
NASA Astrophysics Data System (ADS)
Lapo, Karl E.
Physically-based snow models form the basis of our understanding of current and future water and energy cycles, especially in mountainous terrain. These models are poorly constrained and widely diverge from each other, demonstrating a poor understanding of the surface energy balance. This research aims to improve our understanding of the surface energy balance in regions of complex terrain by improving our confidence in existing observations and our knowledge of remotely sensed irradiances (Chapter 1), critically analyzing the representation of boundary layer physics within land models (Chapter 2), and utilizing relatively novel observations in the diagnosis of model performance (Chapter 3). This research has improved the understanding of the literal and metaphorical boundary between the atmosphere and land surface. Solar irradiances are difficult to observe in regions of complex terrain, as observations are subject to harsh conditions not found in other environments. Quality control methods were developed to handle these unique conditions and facilitated an analysis of estimated solar irradiances over mountainous environments. Errors in the estimated solar irradiance are caused by misrepresenting the effect of clouds over regions of topography and regularly exceed the range of observational uncertainty (up to 80 W m-2) in all regions examined. Uncertainty in the solar irradiance estimates was especially pronounced when averaging over high-elevation basins, with monthly differences between estimates up to 80 W m-2. These findings can inform the selection of a method for estimating the solar irradiance and suggest several avenues of future research for improving existing methods. Further research probed the relationship between the land surface and atmosphere as it pertains to the stable boundary layers that commonly form over snow-covered surfaces.
Stable conditions are difficult to represent, especially at low wind speeds, and coupled land-atmosphere models have difficulty representing these processes. We developed a new method for analyzing turbulent fluxes at the land surface that relies on the observed surface temperature, which we call the offline turbulence method. We used this method to test a number of stability schemes as they are implemented within land models. Stability schemes can cause small biases in the simulated sensible heat flux, but these are the result of compensating errors, as no single method was able to accurately reproduce the observed distribution of the sensible heat flux. We described how these turbulence schemes perform within different turbulence regimes, particularly noting the difficulty of representing turbulence during conditions with faster wind speeds and the transition between weak and strong wind turbulence regimes. Heterogeneity in the horizontal distribution of surface temperature associated with different land surface types likely explains some of the missing physics within land models and is manifested as counter-gradient fluxes in observations. The coupling of land and atmospheric models needs further attention, as we highlight processes that are missing. Expanding on the utility of surface temperature, Ts, in model evaluations, we demonstrated its value in snow model evaluations. Ts is the diagnostic variable of the modeled surface energy balance within physically-based models and is an ideal supplement to traditional evaluation techniques. We demonstrated how modeling decisions affect Ts, specifically testing the impact of vertical layer structure, thermal conductivity, and stability corrections, in addition to the effect of uncertainty in forcing data on simulated Ts. The internal modeling decisions had minimal impacts relative to uncertainty in the forcing data.
Uncertainty in downwelling longwave irradiance was found to have the largest impact on simulated Ts. Using Ts, we demonstrated how various errors in the forcing data can be identified, noting that uncertainties in downwelling longwave irradiance and wind are the easiest to identify due to their effect on the nighttime minimum Ts.
Phase Retrieval System for Assessing Diamond Turning and Optical Surface Defects
NASA Technical Reports Server (NTRS)
Dean, Bruce; Maldonado, Alex; Bolcar, Matthew
2011-01-01
An optical design is presented for a measurement system used to assess the impact of surface errors originating from diamond turning artifacts. Diamond turning artifacts are common by-products of optical surface shaping using the diamond turning process (a diamond-tipped cutting tool used in a lathe configuration). Assessing and evaluating the errors imparted by diamond turning (including other surface errors attributed to optical manufacturing techniques) can be problematic and generally requires the use of an optical interferometer. Commercial interferometers can be expensive when compared to the simple optical setup developed here, which is used in combination with an image-based sensing technique (phase retrieval). Phase retrieval is a general term used in optics to describe the estimation of optical imperfections or aberrations. This turnkey system uses only image-based data and has minimal hardware requirements. The system is straightforward to set up, easy to align, and can provide nanometer accuracy on the measurement of optical surface defects.
NASA Technical Reports Server (NTRS)
Liu, W. T.
1983-01-01
Ocean-surface momentum flux and latent heat flux are determined from Seasat-A data from 1978 and compared with ship observations. Momentum flux was measured using the Seasat-A scatterometer system (SASS), and latent heat flux with the scanning multichannel microwave radiometer (SMMR). Ship measurements were quality-selected and averaged to increase their reliability, and the fluxes were computed using a bulk parameterization technique. It is found that although SASS effectively measures momentum flux, variations in atmospheric stability and sea-surface temperature cause deviations which are not accounted for by the present data-processing algorithm. The SMMR latent-heat-flux algorithm, while needing refinement, is shown to give estimates to within 35 W/sq m in its present form once systematic error is removed and an empirically determined transfer coefficient is used.
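Bulk parameterization of the kind mentioned above relates fluxes to mean quantities through transfer coefficients. A minimal sketch with textbook-style neutral coefficients; the coefficient values are typical assumed numbers, not those of the SASS or SMMR algorithms:

```python
def momentum_flux(u10, rho_air=1.225, c_d=1.3e-3):
    """Bulk wind stress: tau = rho * C_D * U10**2 (N/m^2).
    C_D is a typical neutral 10-m drag coefficient (assumed)."""
    return rho_air * c_d * u10 ** 2

def latent_heat_flux(u10, q_sea, q_air, rho_air=1.225, c_e=1.2e-3, l_v=2.5e6):
    """Bulk latent heat flux: LE = rho * L_v * C_E * U10 * (q_sea - q_air)
    in W/m^2, with q the specific humidity (kg/kg)."""
    return rho_air * l_v * c_e * u10 * (q_sea - q_air)

tau = momentum_flux(10.0)                   # ~0.16 N/m^2 at 10 m/s
le = latent_heat_flux(10.0, 0.018, 0.012)   # ~220 W/m^2, moist-tropics example
```

Stability corrections to the transfer coefficients are exactly the piece the abstract notes is missing from the satellite processing algorithm.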
Aliased tidal errors in TOPEX/POSEIDON sea surface height data
NASA Technical Reports Server (NTRS)
Schlax, Michael G.; Chelton, Dudley B.
1994-01-01
Alias periods and wavelengths for the M(sub 2), S(sub 2), N(sub 2), K(sub 1), O(sub 1), and P(sub 1) tidal constituents are calculated for TOPEX/POSEIDON. Alias wavelengths calculated in previous studies are shown to be in error, and a correct method is presented. With the exception of the K(sub 1) constituent, all of these tidal aliases for TOPEX/POSEIDON have periods shorter than 90 days and are likely to be confounded with long-period sea surface height signals associated with real ocean processes. In particular, the correspondence between the periods and wavelengths of the M(sub 2) alias and annual baroclinic Rossby waves that plagued Geosat sea surface height data is avoided. The potential for aliasing residual tidal errors in smoothed estimates of sea surface height is calculated for the six tidal constituents. The potential for aliasing the lunar tidal constituents M(sub 2), N(sub 2), and O(sub 1) fluctuates with latitude and is different for estimates made at the crossovers of ascending and descending ground tracks than for estimates at points midway between crossovers. The potential for aliasing the solar tidal constituents S(sub 2), K(sub 1), and P(sub 1) varies smoothly with latitude. S(sub 2) is strongly aliased for latitudes within 50 degrees of the equator, while K(sub 1) and P(sub 1) are only weakly aliased in that range. A weighted least squares method for estimating and removing residual tidal errors from TOPEX/POSEIDON sea surface height data is presented. A clear understanding of the nature of aliased tidal error in TOPEX/POSEIDON data aids the unambiguous identification of real propagating sea surface height signals. Unequivocal evidence of annual-period, westward propagating waves in the North Atlantic is presented.
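The alias period of a constituent follows from folding its frequency into the band resolvable at the satellite repeat interval. A minimal sketch (the M2 period and the TOPEX/POSEIDON exact-repeat interval are standard published values; this is an illustration, not the paper's full wavelength calculation):

```python
def alias_period(constituent_period_days, sampling_interval_days):
    """Fold a tidal frequency into the band resolvable at the sampling
    interval and return the resulting alias period in days."""
    f = 1.0 / constituent_period_days        # tidal frequency, cycles/day
    fs = 1.0 / sampling_interval_days        # sampling frequency
    f_alias = abs(f - round(f / fs) * fs)    # fold into [0, fs/2]
    return float("inf") if f_alias == 0 else 1.0 / f_alias

# M2 period 12.4206 h, TOPEX/POSEIDON exact-repeat interval 9.9156 days:
m2_alias = alias_period(12.4206 / 24.0, 9.9156)
```

This yields an M2 alias of roughly 62 days, consistent with the abstract's statement that most constituent aliases fall below 90 days.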
Evaluation of Preanalytical Quality Indicators by Six Sigma and Pareto's Principle.
Kulkarni, Sweta; Ramesh, R; Srinivasan, A R; Silvia, C R Wilma Delphine
2018-01-01
Preanalytical steps are the major source of error in the clinical laboratory. Analytical errors can be corrected by quality control procedures, but there is a need for stringent quality checks in the preanalytical area as these processes occur outside the laboratory. The sigma value depicts the performance of a laboratory and its quality measures. Hence, in the present study, Six Sigma and the Pareto principle were applied to preanalytical quality indicators to evaluate clinical biochemistry laboratory performance. This observational study was carried out for a period of 1 year, from November 2015 to November 2016. A total of 144,208 samples and 54,265 test requisition forms were screened for preanalytical errors such as missing patient information or sample collection details in forms, and hemolysed, lipemic, inappropriate, or insufficient samples; the total number of errors was calculated and converted into defects per million and the sigma scale. A Pareto chart was drawn using the total number of errors and the cumulative percentage. In 75% of test requisition forms the diagnosis was not mentioned, yielding a sigma value of 0.9; for other errors, namely sample receiving time, stat requests, and type of sample, the sigma values were 2.9, 2.6, and 2.8 respectively. For insufficient sample and improper ratio of blood to anticoagulant, the sigma value was 4.3. The Pareto chart shows that, consistent with the 80/20 rule, the bulk of the errors in requisition forms is contributed by a small set of causes, chiefly missing information such as the diagnosis. The development of quality indicators and the application of Six Sigma and the Pareto principle are quality measures by which not only the preanalytical phase but the total testing process can be improved.
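The conversion from a defect count to the sigma scale can be sketched with the conventional short-term 1.5-sigma shift. Conventions vary between laboratories, so the example is illustrative rather than a reproduction of the study's calculation:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities, shift=1.5):
    """Convert a defect count to defects-per-million-opportunities (DPMO)
    and then to a short-term sigma level via the inverse normal CDF,
    using the conventional 1.5-sigma long-term shift."""
    dpmo = 1e6 * defects / opportunities
    yield_fraction = 1.0 - dpmo / 1e6
    return NormalDist().inv_cdf(yield_fraction) + shift

# A 75% defect rate (e.g. missing diagnosis on requisition forms)
# lands near the bottom of the sigma scale:
low = sigma_level(defects=75, opportunities=100)
```

The result is roughly 0.8, in line with the sigma value of about 0.9 reported above for the missing-diagnosis indicator.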
A Review of Depth and Normal Fusion Algorithms
Štolc, Svorad; Pock, Thomas
2018-01-01
Geometric surface information such as depth maps and surface normals can be acquired by various methods, such as stereo, light fields, shape from shading, and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903
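The least-squares formulation of depth and normal fusion can be sketched on a 1-D profile: a data term keeps the solution near noisy depth samples, while a gradient term keeps its finite differences near the slopes implied by measured normals. The weights and synthetic signal are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def fuse_depth_gradient(z_noisy, slope, w_depth=1.0, w_grad=10.0):
    """Least-squares fusion on a 1-D profile: stack a data term
    (identity against the noisy depths) and a gradient term
    (forward differences against the normal-derived slopes)."""
    n = z_noisy.size
    A_data = np.eye(n) * w_depth
    b_data = z_noisy * w_depth
    D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) * w_grad  # forward differences
    b_grad = slope * w_grad
    A = np.vstack([A_data, D])
    b = np.concatenate([b_data, b_grad])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
z_true = np.sin(2 * np.pi * x)
z_noisy = z_true + rng.normal(0.0, 0.2, x.size)
slope = np.diff(z_true)              # slopes as if recovered from normals
z_fused = fuse_depth_gradient(z_noisy, slope)
```

Because the gradient term constrains the shape while the data term only anchors the offset, the fused profile is much closer to the truth than the noisy depths alone.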
A conceptual design study of point focusing thin-film solar concentrators
NASA Technical Reports Server (NTRS)
1981-01-01
Candidates for reflector panel design concepts, including materials and configurations, were identified. The large list of candidates was screened and reduced to the five most promising ones. Cost and technical factors were used in making the final choices for the panel conceptual design, which was a stiffened steel skin substrate with a bonded, acrylic-overcoated, aluminized polyester film reflective surface. Computer simulations were run for the concentrator optics using the selected panel design and experimentally determined specularity and reflectivity values. Intercept factor curves and energy-to-the-aperture curves were produced. These curves indicate that surface errors of 2 mrad (milliradians) or less would be required to capture the desired energy for a Brayton cycle 816 °C case. Two test panels were fabricated to demonstrate manufacturability and optically tested for surface error. Surface errors in the range of 1.75 mrad to 2.2 mrad were measured.
Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals
NASA Technical Reports Server (NTRS)
Lockard, David P.; Casper, Jay H.
2005-01-01
The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not been obtained between noise predictions using the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.
Ion beam figuring of Φ520mm convex hyperbolic secondary mirror
NASA Astrophysics Data System (ADS)
Meng, Xiaohui; Wang, Yonggang; Li, Ang; Li, Wenqing
2016-10-01
The convex hyperbolic secondary mirror is a Φ520-mm Zerodur lightweight convex hyperbolic mirror. Conventional methods such as CCOS and stressed-lap polishing are typically used to manufacture such a secondary mirror. Nevertheless, the required surface accuracy cannot be achieved with conventional polishing methods because of the unpredictable behavior of the polishing tools, which leads to an unstable removal rate. Ion beam figuring is an optical fabrication method that provides highly controlled correction of previously polished surfaces, using a directed, inert, neutralized ion beam to physically sputter material from the optic surface. Several iterations with different ion beam sizes are selected and optimized to fit different stages of surface figure error and different spatial frequency components. Before ion beam figuring, the surface figure error of the secondary mirror was 2.5λ p-v, 0.23λ rms; it was improved to 0.12λ p-v, 0.014λ rms over several process iterations. The demonstration clearly shows that ion beam figuring can be used not only for the final correction of aspheric surfaces but also for figuring the coarse surface of large, complex mirrors.
Shukla, Manoj K; Poda, Aimee
2016-06-01
This manuscript reports the results of an integrated theoretical and experimental investigation of the adsorption of two emerging contaminants (DNAN and FOX-7) and the legacy compound TNT on a cellulose surface. Cellulose was modeled as a trimeric form of the linear chain of 1→4-linked β-D-glucopyranose units in the (4)C1 chair conformation. Geometries of the modeled cellulose, the munitions compounds, and their complexes were optimized with the M06-2X functional of Density Functional Theory using the 6-31G(d,p) basis set in the gas phase and in water solution. The effect of water solution was modeled using the CPCM approach. The nature of the potential energy surfaces was ascertained through harmonic vibrational frequency analysis. Interaction energies, computed with the 6-311G(d,p) basis set, were corrected for basis set superposition error. Molecular electrostatic potential mapping was performed to understand the reactivity of the investigated systems. It was predicted that the adsorbates will be more weakly adsorbed on the cellulose surface in water solution than in the gas phase.
NASA Astrophysics Data System (ADS)
Zhang, Chengzhu; Xie, Shaocheng; Klein, Stephen A.; Ma, Hsi-yen; Tang, Shuaiqi; Van Weverberg, Kwinten; Morcrette, Cyril J.; Petch, Jon
2018-03-01
All the weather and climate models participating in the Clouds Above the United States and Errors at the Surface (CAUSES) project show a summertime surface air temperature (T2m) warm bias in the region of the central United States. To understand the warm bias in long-term climate simulations, we assess the Atmospheric Model Intercomparison Project simulations from the Coupled Model Intercomparison Project Phase 5 against long-term observations, mainly from the Atmospheric Radiation Measurement program Southern Great Plains site. Quantities related to the surface energy and water budget and the large-scale circulation are analyzed to identify possible factors and plausible links involved in the warm bias. The systematic warm-season bias is characterized by an overestimation of T2m and underestimation of surface humidity, precipitation, and precipitable water. Accompanying the warm bias is an overestimation of absorbed solar radiation at the surface, which is due to a combination of insufficient cloud reflection, insufficient clear-sky shortwave absorption by water vapor, and an underestimation of surface albedo. The bias in cloud is shown to contribute most to the radiation bias. Surface-layer soil moisture impacts T2m through its control on the evaporative fraction, and the error in evaporative fraction is another important contributor to the T2m bias. Similar sources of error are found in hindcasts from other CAUSES studies. In the Atmospheric Model Intercomparison Project simulations, biases in the meridional wind velocity associated with the low-level jet and in the 500 hPa vertical velocity may also relate to the T2m bias through their control on the surface energy and water budget.
Hierarchical surface code for network quantum computing with modules of arbitrary size
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2016-10-01
The network paradigm for quantum computing involves interconnecting many modules to form a scalable machine. Typically it is assumed that the links between modules are prone to noise while operations within modules have a significantly higher fidelity. To optimize fault tolerance in such architectures we introduce a hierarchical generalization of the surface code: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code. Errors primarily occur in a two-dimensional subspace, i.e., patch perimeters extruded over time, and the resulting noise threshold for intermodule links can exceed ~10% even in the absence of purification. Increasing the number of qubits within each module decreases the number of qubits necessary for encoding a logical qubit. But this advantage is relatively modest, and broadly speaking, a "fine-grained" network of small modules containing only about eight qubits is competitive in total qubit count versus a "coarse" network with modules containing many hundreds of qubits.
Form Overrides Meaning When Bilinguals Monitor for Errors
Ivanova, Iva; Ferreira, Victor S.; Gollan, Tamar H.
2016-01-01
Bilinguals rarely produce unintended language switches, which may in part be because switches are detected and corrected by an internal monitor. But are language switches easier or harder to detect than within-language semantic errors? To approximate internal monitoring, bilinguals listened (Experiment 1) or read aloud (Experiment 2) stories, and detected language switches (translation equivalents or semantically unrelated to expected words) and within-language errors (semantically related or unrelated to expected words). Bilinguals detected semantically related within-language errors most slowly and least accurately, language switches more quickly and accurately than within-language errors, and (in Experiment 2), translation equivalents as quickly and accurately as unrelated language switches. These results suggest that internal monitoring of form (which can detect mismatches in language membership) completes earlier than, and is independent of, monitoring of meaning. However, analysis of reading times prior to error detection revealed meaning violations to be more disruptive for processing than language violations. PMID:28649169
A probabilistic approach to remote compositional analysis of planetary surfaces
Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.
2017-01-01
Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.
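The probabilistic unmixing described above can be caricatured with a toy Metropolis sampler. The real study inverts the nonlinear Hapke model; here a linear two-endmember mixing model with invented "spectra" and noise level stands in, so the sampler stays a few lines. This is a crude sketch of the Bayesian idea, not the paper's method:

```python
import math
import random

# Toy Bayesian unmixing: linear mixing of two invented endmember "spectra"
# stands in for the Hapke model; SIGMA is an assumed instrument noise.
E1 = [0.2, 0.4, 0.6, 0.8]   # endmember 1 reflectance at four wavelengths
E2 = [0.7, 0.5, 0.3, 0.1]   # endmember 2 reflectance
SIGMA = 0.02

def forward(f):
    """Linear mixture with abundance f of endmember 1."""
    return [f * a + (1 - f) * b for a, b in zip(E1, E2)]

def log_like(f, data):
    return -sum((d - m) ** 2 for d, m in zip(data, forward(f))) / (2 * SIGMA**2)

random.seed(0)
truth = 0.3
data = [m + random.gauss(0, SIGMA) for m in forward(truth)]

# Random-walk Metropolis over the abundance (proposals clipped to [0, 1],
# acceptable for a sketch; a careful sampler would use reflection).
f, samples = 0.5, []
for _ in range(20000):
    prop = min(1.0, max(0.0, f + random.gauss(0, 0.05)))
    if math.log(random.random()) < log_like(prop, data) - log_like(f, data):
        f = prop
    samples.append(f)

post = samples[5000:]            # discard burn-in
mean = sum(post) / len(post)
print(round(mean, 2))            # posterior mean near the true abundance
```

The posterior spread, not just the mean, is the point of the approach: it quantifies the abundance/grain-size trade-offs that a single best-fit solution hides.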
Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke
NASA Astrophysics Data System (ADS)
Wang, Shaokai; Tan, Jiubin; Cui, Jiwen
2015-02-01
The Reticle Stage (RS) is designed to perform high-speed scan motion with nanometer-scale accuracy over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbances are critical and must not be ignored. In this paper, the RS is first introduced in terms of its mechanical structure, forms of motion, and control method. On that basis, the mechanisms by which disturbances transfer into the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large stroke stage (LS) and the short stroke stage (SS), and movement of the measurement reference. In particular, the different forms of coupling between the SS and LS are discussed in detail. Following this theoretical analysis, the contributions of these disturbances to the final error are simulated numerically. The residual positioning error caused by feedforward error in the acceleration process is about 2 nm after the settling time, that caused by the coupling between the SS and LS is about 2.19 nm, and that caused by movement of the measurement reference is about 0.6 nm.
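If the three quoted contributions (feedforward ~2 nm, LS/SS coupling ~2.19 nm, measurement-reference motion ~0.6 nm) are treated as statistically independent, an overall figure can be sketched by root-sum-square combination. Independence is an assumption made here for illustration, not a claim of the paper:

```python
import math

# Root-sum-square combination of the servo error contributions quoted in
# the abstract, under the assumption that they are uncorrelated.

def rss(*terms):
    return math.sqrt(sum(t * t for t in terms))

total = rss(2.0, 2.19, 0.6)  # nm: feedforward, LS/SS coupling, meas. reference
print(round(total, 2))
```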
Self-Interaction Error in Density Functional Theory: An Appraisal.
Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G
2018-05-03
Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.
Experimental study of an adaptive CFRC reflector for high order wave-front error correction
NASA Astrophysics Data System (ADS)
Lan, Lan; Fang, Houfei; Wu, Ke; Jiang, Shuidong; Zhou, Yang
2018-03-01
Recent radio frequency communication system developments are generating the need for space antennas that are lightweight and highly precise. Carbon fiber reinforced composite (CFRC) materials have been used to manufacture high-precision reflectors, but wave-front errors caused by fabrication and on-orbit distortion are inevitable, and the adaptive CFRC reflector has received much attention as a means of wave-front error correction. Due to the uneven stress distribution introduced by actuation forces and fabrication, high-order wave-front errors such as print-through error are found on the reflector surface. However, an adaptive CFRC reflector with PZT actuators has essentially no control authority over these high-order wave-front errors. A new design architecture with secondary ribs assembled at the weak triangular surfaces is presented in this paper. A virtual experimental study of the new adaptive CFRC reflector has been conducted, and the controllability of the original adaptive CFRC reflector and of the new reflector with secondary ribs is investigated. The virtual experimental investigation shows that the new adaptive CFRC reflector is feasible and efficient for diminishing high-order wave-front errors.
Goldmann tonometer error correcting prism: clinical evaluation.
McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko T; Schwiegerling, Jim; Levine, Jason; Kew, Corin
2017-01-01
Clinically evaluate a modified applanating surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with CATS and Goldmann prisms. The IOP measurement differences between the CATS and Goldmann prisms were correlated to corneal thickness, hysteresis, and curvature. The CATS tonometer prism in correcting for Goldmann central corneal thickness (CCT) error demonstrated a reduction to <±2 mmHg in 97% of a standard CCT population. This compares to only 54% with CCT error <±2 mmHg using the Goldmann prism. Equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors without IOP bias as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change Goldmann measurement technique or interpretation.
An improved methodology for heliostat testing and evaluation at the Plataforma Solar de Almería
NASA Astrophysics Data System (ADS)
Monterreal, Rafael; Enrique, Raúl; Fernández-Reche, Jesús
2017-06-01
The optical quality of a heliostat basically quantifies the difference between the scattering effects of the actual solar radiation reflected by its optical surface and the so-called canonical dispersion, that is, the radiation reflected by an optical surface free of constructional errors (the paradigm). However, apart from the uncertainties of the measuring process itself, the value of the optical quality must be independent of the measuring instrument; so any new measuring technique that provides additional information about the error sources on the heliostat reflecting surface would be welcome. Those error sources are responsible for the final optical quality value, with different degrees of influence. For the constructor of heliostats it is extremely useful to know the magnitude of the classical sources of error and their weight in the overall optical quality of a heliostat, such as facet geometry or focal length, as well as the characteristics of the heliostat as a whole, i.e., its geometry, focal length, facet misalignment, and the possible dependence of these effects on mechanical and/or meteorological factors. The goal of the present paper is to unfold these optical quality error sources by exploring the reflecting surface of the heliostat directly with the help of a laser-scanner device and linking the results with the traditional methods of heliostat evaluation at the Plataforma Solar de Almería.
Linguistic Pattern Analysis of Misspellings of Typically Developing Writers in Grades 1 to 9
Bahr, Ruth Huntley; Silliman, Elaine R.; Berninger, Virginia W.; Dow, Michael
2012-01-01
Purpose A mixed methods approach, evaluating triple word form theory, was used to describe linguistic patterns of misspellings. Method Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in grades 1–9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Results Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between grades 4–5. Similar error types were noted across age groups but the nature of linguistic feature error changed with age. Conclusions Triple word-form theory was supported. By grade 1, orthographic errors predominated and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects non-linear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling. PMID:22473834
NASA Technical Reports Server (NTRS)
Goad, C. C.
1977-01-01
The effects of tropospheric and ionospheric refraction errors are analyzed for the GEOS-C altimeter project in terms of their resultant effects on C-band orbits and the altimeter measurement itself. Operational procedures using surface meteorological measurements at ground stations and monthly means for ocean surface conditions are assumed, with no corrections made for ionospheric effects. Effects on the orbit height due to tropospheric errors are approximately 15 cm for single pass short arcs (such as for calibration) and 10 cm for global orbits of one revolution. Orbit height errors due to neglect of the ionosphere have an amplitude of approximately 40 cm when the orbits are determined from C-band range data with predominantly daylight tracking. Altimeter measurement errors are approximately 10 cm due to residual tropospheric refraction correction errors. Ionospheric effects on the altimeter range measurement are also on the order of 10 cm during the GEOS-C launch and early operation period.
Ravald, L; Fornstedt, T
2001-01-26
The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimating bi-Langmuir isotherm parameters. The ECP calculations were done on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fit to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weighting of the fitted bi-Langmuir function. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
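The two-site model being fitted is the bi-Langmuir isotherm, q(C) = q_s1·b_1·C/(1 + b_1·C) + q_s2·b_2·C/(1 + b_2·C). A minimal sketch with invented parameters, pairing a high-capacity nonselective site with a low-capacity, strongly binding selective site, as is typical for chiral stationary phases:

```python
# Bi-Langmuir isotherm: two independent Langmuir adsorption sites.
# Parameter values below are illustrative, not fitted values from the paper.

def bi_langmuir(c, qs1, b1, qs2, b2):
    """Stationary-phase concentration q at mobile-phase concentration c."""
    return qs1 * b1 * c / (1 + b1 * c) + qs2 * b2 * c / (1 + b2 * c)

# Site 1: nonselective (large capacity, weak binding).
# Site 2: enantioselective (small capacity, strong binding).
q = bi_langmuir(1.0, qs1=100.0, b1=0.01, qs2=5.0, b2=2.0)
print(round(q, 3))
```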
Sliding mode output feedback control based on tracking error observer with disturbance estimator.
Xiao, Lingfei; Zhu, Yue
2014-07-01
For a class of systems that suffer from disturbances, an original output feedback sliding mode control method is presented, based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching-law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, the tracking error observer is constructed. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller becomes implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable, and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
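The reaching-law idea can be sketched for a first-order plant x' = u + d(t) with a bounded unknown disturbance, using the sliding surface s = x − x_ref and the switching law u = −k·sign(s). The paper's observer and disturbance estimator are omitted here, and all numbers are invented:

```python
import math

# Minimal sliding-mode sketch: plant x' = u + d(t), |d| <= 0.5 unknown to
# the controller, switching gain k chosen larger than the disturbance bound.

def simulate(k=5.0, dt=1e-3, steps=4000):
    x, x_ref = 1.0, 0.0
    for i in range(steps):
        d = 0.5 * math.sin(0.01 * i)                    # unknown disturbance
        s = x - x_ref                                   # sliding surface
        u = -k * (1 if s > 0 else -1 if s < 0 else 0)   # switching control
        x += (u + d) * dt                               # Euler integration
    return x

# Because k exceeds the disturbance bound, s is driven to (and held near)
# zero; the residual is the chattering amplitude of order k*dt.
print(abs(simulate()) < 0.05)
```

The chattering visible in the residual is exactly what boundary-layer or observer-based refinements, such as the one in the paper, are designed to soften.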
NASA Astrophysics Data System (ADS)
Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André
2017-10-01
The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (˜5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of compatible sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
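The benefit of reach averaging can be sketched by fitting a line to synthetic node heights carrying meter-level noise: the fitted slope error is far smaller than any single-node height error. Node spacing, reach length, and noise level below are invented round numbers, not SWOT specifications:

```python
import random

# Reach-averaging sketch: least-squares slope over many noisy height nodes.
random.seed(1)
true_slope = -1e-4            # 10 cm/km water-surface slope
n, spacing = 100, 100.0       # 100 nodes, 100 m apart -> 10 km reach
xs = [i * spacing for i in range(n)]
hs = [50.0 + true_slope * x + random.gauss(0, 1.0) for x in xs]  # 1 m noise

# Ordinary least squares for the slope.
mx = sum(xs) / n
mh = sum(hs) / n
slope = sum((x - mx) * (h - mh) for x, h in zip(xs, hs)) / \
        sum((x - mx) ** 2 for x in xs)

# The slope standard error is sigma / sqrt(sum (x - mx)^2), tens of
# microradians here, despite meter-level noise on every node.
print(abs(slope - true_slope) < 2e-4)
```

Longer reaches shrink the slope error further, which is the trend the abstract reports; the hydraulic-control and sinuosity segmentations then decide where the reach boundaries should fall.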
Evaluation of the 3dMDface system as a tool for soft tissue analysis.
Hong, C; Choi, K; Kachroo, Y; Kwon, T; Nguyen, A; McComb, R; Moon, W
2017-06-01
To evaluate the accuracy of three-dimensional stereophotogrammetry by comparing values obtained from direct anthropometry and the 3dMDface system. To achieve a more comprehensive evaluation of the reliability of 3dMD, both linear and surface measurements were examined. UCLA Section of Orthodontics. Mannequin head as model for anthropometric measurements. Image acquisition and analysis were carried out on a mannequin head using 16 anthropometric landmarks and 21 measured parameters for linear and surface distances. 3D images using 3dMDface system were made at 0, 1 and 24 hours; 1, 2, 3 and 4 weeks. Error magnitude statistics used include mean absolute difference, standard deviation of error, relative error magnitude and root mean square error. Intra-observer agreement for all measurements was attained. Overall mean errors were lower than 1.00 mm for both linear and surface parameter measurements, except in 5 of the 21 measurements. The three longest parameter distances showed increased variation compared to shorter distances. No systematic errors were observed for all performed paired t tests (P<.05). Agreement values between two observers ranged from 0.91 to 0.99. Measurements on a mannequin confirmed the accuracy of all landmarks and parameters analysed in this study using the 3dMDface system. Results indicated that 3dMDface system is an accurate tool for linear and surface measurements, with potentially broad-reaching applications in orthodontics, surgical treatment planning and treatment evaluation. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
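The four error-magnitude statistics named above (mean absolute difference, standard deviation of error, relative error magnitude, and root mean square error) can be computed as follows; the measurement values are invented, not data from the study:

```python
import math

# Error-magnitude statistics for repeated measurements of one parameter
# against a reference value. Numbers below are illustrative (mm).

def error_stats(measured, reference):
    diffs = [m - reference for m in measured]
    mad = sum(abs(d) for d in diffs) / len(diffs)      # mean absolute difference
    mean_d = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
    rem = 100.0 * mad / reference                      # relative error magnitude, %
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mad, sd, rem, rmse

mad, sd, rem, rmse = error_stats([30.2, 29.8, 30.5, 30.1, 29.9], reference=30.0)
print(round(mad, 2), round(rmse, 3))
```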
NASA Astrophysics Data System (ADS)
Zhou, X.; Beljaars, A.; Wang, Y.; Huang, B.; Lin, C.; Chen, Y.; Wu, H.
2017-09-01
Weather Research and Forecasting (WRF) simulations with different selections of subgrid orographic drag over the Tibetan Plateau have been evaluated with observation and ERA-Interim reanalysis. Results show that the subgrid orographic drag schemes, especially the turbulent orographic form drag (TOFD) scheme, efficiently reduce the 10 m wind speed bias and RMS error with respect to station measurements. With the combination of gravity wave, flow blocking and TOFD schemes, wind speed is simulated more realistically than with the individual schemes only. Improvements are also seen in the 2 m air temperature and surface pressure. The gravity wave drag, flow blocking drag, and TOFD schemes combined have the smallest station mean bias (-2.05°C in 2 m air temperature and 1.27 hPa in surface pressure) and RMS error (3.59°C in 2 m air temperature and 2.37 hPa in surface pressure). Meanwhile, the TOFD scheme contributes more to the improvements than the gravity wave drag and flow blocking schemes. The improvements are more pronounced at low levels of the atmosphere than at high levels due to the stronger drag enhancement on the low-level flow. The reduced near-surface cold bias and high-pressure bias over the Tibetan Plateau are the result of changes in the low-level wind components associated with the geostrophic balance. The enhanced drag directly leads to weakened westerlies but also enhances the ageostrophic flow, in this case reducing (enhancing) the northerlies (southerlies), which bring more warm air across the Himalaya Mountain ranges from South Asia (bring less cold air from the north) to the interior Tibetan Plateau.
NASA Astrophysics Data System (ADS)
Choe, Gwangson; Kim, Sunjung; Kim, Kap-Sung; No, Jincheol
2015-08-01
As shown by Démoulin and Berger (2003), the magnetic helicity flux through the solar surface into the solar atmosphere can be exactly calculated if we can trace the motion of footpoints with infinite temporal and spatial resolutions. When there is a magnetic flux transport across the solar surface, the horizontal velocity of footpoints becomes infinite at the polarity inversion line, although the surface integral yielding the helicity flux does not diverge. In practical application, a finite temporal and spatial resolution causes an underestimate of the magnetic helicity flux when a magnetic flux emerges from below the surface, because there is an observational blackout area near a polarity inversion line whether it is pre-existing or newly formed. In this paper, we consider emergence of simple magnetic flux ropes and calculate the supremum of the magnitude of the helicity influx that can be estimated from footpoint tracking. The results depend on the ratio of the resolvable length scale and the flux rope diameter. For a Gold-Hoyle flux rope, in which all field lines are uniformly twisted, the observationally estimated helicity influx would be about 90% of the real influx when the flux rope diameter is one hundred times the spatial resolution (for a large flux rope), and about 45% when it is ten times (for a small flux rope). For Lundquist flux ropes, the errors incurred by observational estimation are smaller than the case of the Gold-Hoyle flux rope, but could be as large as 30% of the real influx. Our calculation suggests that the error in the helicity influx estimate is at least half of the real influx or even larger when small scale magnetic structures (less than 10,000 km) emerge into the solar atmosphere.
Fault-tolerant, high-level quantum circuits: form, compilation and description
NASA Astrophysics Data System (ADS)
Paler, Alexandru; Polian, Ilia; Nemoto, Kae; Devitt, Simon J.
2017-06-01
Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher-level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including the dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high-level quantum circuits consisting of commonly used gates into a form employing all the decompositions and ancillary protocols needed for fault-tolerant error correction. We call this the (I)nitialisation, (C)NOT, (M)easurement (ICM) form; it consists of an initialisation layer of qubits into one of four distinct states, a massive, deterministic array of CNOT operations, and a series of time-ordered X- or Z-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description, which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.
NASA Astrophysics Data System (ADS)
Kadaj, Roman
2016-12-01
The adjustment problem of the so-called combined (hybrid, integrated) network, created from GNSS vectors and terrestrial observations, has been the subject of many theoretical and applied works. Network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid, and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. Our analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach with respect to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example to the vector of geodesic parameters. The problem is theoretically developed and numerically tested.
An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS observations.
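The preferred functional model, observational equations written directly in the GNSS vector components as coordinate differences between stations, can be sketched with a toy one-dimensional least-squares adjustment (X coordinates only, one station held fixed, invented baseline values):

```python
# Toy network adjustment: each "GNSS vector" observation is the coordinate
# difference dX = X_to - X_from. Station A is fixed; B and C are adjusted.
# Baseline values are invented and slightly inconsistent, as real data are.

obs = [("A", "B", 100.02), ("B", "C", 49.97), ("A", "C", 150.04)]
fixed = {"A": 0.0}
unknowns = ["B", "C"]

# Build normal equations N x = u from the linear observation equations.
N = [[0.0, 0.0], [0.0, 0.0]]
u = [0.0, 0.0]
for frm, to, dx in obs:
    row = [0.0, 0.0]
    const = 0.0
    for i, s in enumerate(unknowns):
        if s == to:
            row[i] += 1.0
        if s == frm:
            row[i] -= 1.0
    # Move fixed-station coordinates to the right-hand side.
    if frm in fixed:
        const += fixed[frm]
    if to in fixed:
        const -= fixed[to]
    rhs = dx + const
    for i in range(2):
        u[i] += row[i] * rhs
        for j in range(2):
            N[i][j] += row[i] * row[j]

# Solve the 2x2 normal system by Cramer's rule.
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
xb = (u[0] * N[1][1] - u[1] * N[0][1]) / det
xc = (N[0][0] * u[1] - N[1][0] * u[0]) / det
print(round(xb, 3), round(xc, 3))
```

The adjusted coordinates reconcile the three inconsistent baselines in a least-squares sense; the full three-dimensional problem in the paper follows the same pattern with linearized equations in the geodetic coordinates.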
Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error
ERIC Educational Resources Information Center
Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju
2009-01-01
Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…
Outdoor surface temperature measurement: ground truth or lie?
NASA Astrophysics Data System (ADS)
Skauli, Torbjorn
2004-08-01
Contact surface temperature measurement in the field is essential in trials of thermal imaging systems and camouflage, as well as for scene modeling studies. The accuracy of such measurements is challenged by environmental factors such as sun and wind, which induce temperature gradients around a surface sensor and lead to incorrect temperature readings. In this work, a simple method is used to test temperature sensors under conditions representative of a surface whose temperature is determined by heat exchange with the environment. The tested sensors are different types of thermocouples and platinum thermistors typically used in field trials, as well as digital temperature sensors. The results illustrate that the actual measurement errors can be much larger than the specified accuracy of the sensors. The measurement error typically scales with the difference between surface temperature and ambient air temperature. Unless proper care is taken, systematic errors can easily reach 10% of this temperature difference, which is often unacceptable. Reasonably accurate readings are obtained using a miniature platinum thermistor. Thermocouples can perform well on bare metal surfaces if the connection to the surface is highly conductive. It is pointed out that digital temperature sensors have many advantages for field trials use.
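The scaling noted above, systematic error up to roughly 10% of the surface-air temperature difference, can be written as a one-line error model; the coefficient is the quoted worst case used as an assumed constant, not a property of any particular sensor:

```python
# Worst-case sensor error model from the abstract's observation that the
# error scales with (surface temperature - ambient air temperature).
# The 10% coefficient is the quoted worst case, assumed constant here.

def reading_error(t_surface, t_air, coeff=0.10):
    """Systematic reading error as a fraction of (Ts - Tair), in the
    same units as the inputs."""
    return coeff * (t_surface - t_air)

# A sunlit target 20 degrees above ambient could read about 2 degrees off.
print(reading_error(t_surface=45.0, t_air=25.0))
```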
Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark
2016-01-01
Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.
Porous plug for reducing orifice induced pressure error in airfoils
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B. (Inventor); Gloss, Blair B. (Inventor); Eves, John W. (Inventor); Stack, John P. (Inventor)
1988-01-01
A porous plug is provided for the reduction or elimination of positive error caused by the orifice during static pressure measurements of airfoils. The porous plug is press fitted into the orifice, thereby preventing the error caused either by fluid flow turning into the exposed orifice or by the fluid flow stagnating at the downstream edge of the orifice. In addition, the porous plug is made flush with the outer surface of the airfoil, by filing and polishing, to provide a smooth surface which alleviates the error caused by imperfections in the orifice. The porous plug is preferably made of sintered metal, which allows air to pass through the pores, so that the static pressure measurements can be made by remote transducers.
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1982-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to motion of the contact ellipse across the tooth surface (geometry I) and the other to motion along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth, it is more desirable to shim the gear axially for geometry I gears and to shim the pinion axially for geometry II gears; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is most crucial for the gear, and less so for the pinion.
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1991-01-01
The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented, together with the papers and theses prepared during the reporting period. A methodology was developed to determine design and operation parameters for error minimization when deconvolution is included in the data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space, and the SNR and parameter values whose projection onto the surface gives the smallest error are the optimum values. Because these values are constrained to lie on the curve, they will not necessarily correspond to the absolute minimum of the error surface.
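The constrained optimization described above can be sketched numerically: evaluate the error surface only along the curve that the instrument's characteristics define, and take the smallest error there. The error model and the instrument curve below are invented for illustration, not the report's actual functions.

```python
import math

def error_surface(snr, width):
    # Hypothetical error model: deconvolution amplifies noise at low SNR,
    # while a wide instrument response limits recoverable resolution.
    return width / snr + 1.0 / width

def instrument_curve(snr):
    # Instrumental characteristics tie the response width to the operating
    # SNR (illustrative relation only).
    return 2.0 + 10.0 / snr

def optimum_operating_point(snr_grid):
    # Minimize the error along the curve, not over the whole surface.
    best = min(snr_grid, key=lambda s: error_surface(s, instrument_curve(s)))
    return best, error_surface(best, instrument_curve(best))

snr_values = [1.0 + 0.5 * k for k in range(100)]
best_snr, best_err = optimum_operating_point(snr_values)
```

Note that the minimum found this way sits on the instrument's curve, which is exactly why it need not coincide with the absolute minimum of the full error surface.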
Zhang, Xiaoying; Liu, Songhuai; Yang, Degang; Du, Liangjie; Wang, Ziyuan
2016-08-01
[Purpose] The purpose of this study was to examine the immediate effects of therapeutic keyboard music playing on the finger function of subjects' hands through measurements of the joint position error test, surface electromyography, probe reaction time, and writing time. [Subjects and Methods] Ten subjects were divided randomly into experimental and control groups. The experimental group used therapeutic keyboard music playing and the control group used grip training. All subjects were assessed and evaluated by the joint position error test, surface electromyography, probe reaction time, and writing time. [Results] After accomplishing therapeutic keyboard music playing and grip training, surface electromyography of the two groups showed no significant change, but joint position error test, probe reaction time, and writing time obviously improved. [Conclusion] These results suggest that therapeutic keyboard music playing is an effective and novel treatment for improving joint position error test scores, probe reaction time, and writing time, and it should be promoted widely in clinics.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows information encoded in quantum error correction codes to be processed reliably. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction; no adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failure can be reduced by a factor that increases with the code distance.
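A minimal sketch of the estimation step, assuming error-rate estimates have already been extracted from past error-correction rounds: a Gaussian-process posterior mean with an RBF kernel predicts the rate at the next round. The kernel, hyperparameters, and synthetic data are illustrative; the paper's actual protocol and estimator may differ.

```python
import math

def rbf(t1, t2, length=5.0, var=1.0):
    # Squared-exponential (RBF) covariance between two time points.
    return var * math.exp(-0.5 * ((t1 - t2) / length) ** 2)

def solve(a, b):
    # Gaussian elimination with partial pivoting, for small dense systems.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def gp_predict(times, rates, t_star, noise=1e-4):
    # GP posterior mean at t_star given noisy error-rate observations.
    n = len(times)
    K = [[rbf(times[i], times[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, rates)
    return sum(rbf(t_star, times[i]) * alpha[i] for i in range(n))

# Synthetic error-rate estimates from past rounds: a slow upward drift.
ts = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
rs = [0.010, 0.011, 0.010, 0.012, 0.013, 0.012, 0.014, 0.015]
predicted = gp_predict(ts, rs, 8.0)   # predicted rate for the next round
```

The decoder could then weight syndrome information with the predicted rates instead of stale calibration values.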
NASA Technical Reports Server (NTRS)
Colombo, O. L.
1984-01-01
The nature of the orbit error and its effect on the sea surface heights calculated from satellite altimetry are explained, and the elementary concepts of celestial mechanics required to follow a general discussion of the problem are included. Errors in the orbits of satellites with precisely repeating ground tracks (SEASAT, TOPEX, ERS-1, and POSEIDON, among past and future altimeter satellites) are considered in detail, and the theoretical conclusions are illustrated with numerical results from computer simulations. The nature of the errors in this type of orbit is such that they can be filtered out by using height differences along repeating (overlapping) passes, which makes such orbits particularly valuable for the study and monitoring of changes in the sea surface, such as tides. Elements of tidal theory are presented, showing how these principles can be combined with those pertinent to the orbit error to map the tides directly using altimetry.
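The cancellation argument can be illustrated with a toy model, assuming the orbit error is identical along two exactly repeating passes (in practice it is only nearly so): differencing heights along the repeat track removes the common orbit error and leaves the sea-surface change. All signal models below are invented for illustration.

```python
import math

def sea_surface_height(x_km, t_hr):
    # Hypothetical truth: a static geoid profile plus a tidal signal that
    # changes between the two repeat passes.
    geoid = 30.0 * math.sin(x_km / 500.0)
    tide = 0.4 * math.sin(2 * math.pi * t_hr / 12.42)   # semidiurnal period
    return geoid + tide

def orbit_error(along_track_s):
    # Orbit error dominated by slowly varying once-per-revolution terms,
    # taken here as exactly identical on the two repeating passes.
    return 1.5 * math.sin(2 * math.pi * along_track_s / 6000.0)

xs = [10.0 * i for i in range(50)]                  # along-track samples, km
pass1 = [sea_surface_height(x, 0.0) + orbit_error(x * 1.4) for x in xs]
pass2 = [sea_surface_height(x, 240.0) + orbit_error(x * 1.4) for x in xs]

# Differencing along the repeat track cancels the common orbit error and
# isolates the sea-surface change (here purely tidal, so it is constant).
diffs = [h2 - h1 for h1, h2 in zip(pass1, pass2)]
```

The residual differences carry only the time-variable ocean signal, which is why repeat-track differencing is well suited to tide mapping.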
NASA Technical Reports Server (NTRS)
Thurman, Sam W.; Estefan, Jeffrey A.
1991-01-01
Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies that might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory: relative to the Martian equator for surface beacons, and relative to the orbital plane for orbiters. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.
Xu, Z N; Wang, S Y
2015-02-01
To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a large number of numerical drop profiles with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After extensive computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can reduce the dynamic contact angle error of the inclined plane method to below a given value, even for different types of liquids.
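Once a conic has been fitted to the drop profile, the contact angle follows from the tangent slope of the implicit quadratic at the contact point. The sketch below shows only that implicit-differentiation step, with a hypothetical circle standing in for a fitted ellipse; the paper's full algorithm (profile generation and critical-volume criteria) is not reproduced.

```python
import math

def tangent_angle_deg(A, B, C, D, E, F, x, y):
    # Tangent slope of the fitted conic A x^2 + B x y + C y^2 + D x + E y + F = 0
    # at the contact point (x, y), by implicit differentiation:
    #   dy/dx = -(2 A x + B y + D) / (B x + 2 C y + E)
    dydx = -(2 * A * x + B * y + D) / (B * x + 2 * C * y + E)
    # Acute angle between the tangent and a horizontal baseline; for
    # hydrophobic drops (contact angle > 90 degrees) the supplement of this
    # value is the contact angle -- sign handling is omitted in this sketch.
    return math.degrees(math.atan(abs(dydx)))

# Hypothetical fitted profile: the unit circle x^2 + y^2 - 1 = 0 resting on
# a baseline y = 0.5, which it meets at x = +/- sqrt(3)/2.
x_c, y_c = math.sqrt(3) / 2, 0.5
theta = tangent_angle_deg(1, 0, 1, 0, 0, -1, x_c, y_c)   # 60 degrees for this cap
```

Evaluating the same expression at the left and right intersections with the baseline gives the separate left and right dynamic contact angles discussed in the abstract.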
Linear and nonlinear response of a rotating tokamak plasma to a resonant error-field
NASA Astrophysics Data System (ADS)
Fitzpatrick, Richard
2014-09-01
An in-depth investigation of the effect of a resonant error-field on a rotating, quasi-cylindrical tokamak plasma is performed within the context of constant-ψ, resistive-magnetohydrodynamical theory. General expressions for the response of the plasma at the rational surface to the error-field are derived in both the linear and nonlinear regimes, and the extents of these regimes mapped out in parameter space. Torque-balance equations are also obtained in both regimes. These equations are used to determine the steady-state plasma rotation at the rational surface in the presence of the error-field. It is found that, provided the intrinsic plasma rotation is sufficiently large, the torque-balance equations possess dynamically stable low-rotation and high-rotation solution branches, separated by a forbidden band of dynamically unstable solutions. Moreover, bifurcations between the two stable solution branches are triggered as the amplitude of the error-field is varied. A low- to high-rotation bifurcation is invariably associated with a significant reduction in the width of the magnetic island chain driven at the rational surface, and vice versa. General expressions for the bifurcation thresholds are derived and their domains of validity mapped out in parameter space.
NASA Astrophysics Data System (ADS)
Couvreur, A.
2009-05-01
The theory of algebraic-geometric codes was developed in the early 1980s, following a paper by V. D. Goppa. Given a smooth projective algebraic curve X over a finite field, there are two different constructions of error-correcting codes. The first, called "functional", uses rational functions on X; the second, called "differential", involves rational 1-forms on this curve. Hundreds of papers are devoted to the study of such codes. In addition, a generalization of the functional construction to algebraic varieties of arbitrary dimension was given by Y. Manin in a 1984 article. A few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. In this thesis, we propose a differential construction of codes on algebraic surfaces. We then study the properties of these codes, particularly their relations with functional codes. A rather surprising fact is that a major difference from the case of curves appears: whereas for curves a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. This last observation motivates the study of codes that are the orthogonal of some functional code on a surface. We prove that, under some condition on the surface, these codes can be realized as sums of differential codes. Moreover, we show that answers to some open problems "à la Bertini" could give very interesting information on the parameters of these codes.
Calculation of the Nucleon Axial Form Factor Using Staggered Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Aaron S.; Hill, Richard J.; Kronfeld, Andreas S.
The nucleon axial form factor is a dominant contribution to errors in neutrino oscillation studies. Lattice QCD calculations can help control theory errors by providing first-principles information on nucleon form factors. In these proceedings, we present preliminary results on a blinded calculation of $g_A$ and the axial form factor using HISQ staggered baryons with 2+1+1 flavors of sea quarks. Calculations are done using physical light quark masses and are absolutely normalized. We discuss fitting form factor data with the model-independent $z$ expansion parametrization.
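The $z$ expansion maps the cut $t$-plane onto the unit disk so the form factor becomes a short, rapidly converging power series. A sketch, assuming the standard conformal variable with $t_{\rm cut} = 9 m_\pi^2$ (the three-pion threshold of the axial channel) and an illustrative choice $t_0 = 0$; the coefficients below are placeholders, not the paper's fitted values.

```python
import math

M_PI = 0.140                 # pion mass in GeV (approximate)
T_CUT = 9 * M_PI ** 2        # three-pion production threshold, axial channel

def z_of_q2(q2, t0=0.0):
    # Conformal map of t = -Q^2 onto the unit disk; |z| < 1 for all
    # spacelike momentum transfers Q^2 > 0.
    t = -q2
    num = math.sqrt(T_CUT - t) - math.sqrt(T_CUT - t0)
    den = math.sqrt(T_CUT - t) + math.sqrt(T_CUT - t0)
    return num / den

def form_factor(q2, coeffs):
    # Truncated z expansion: F_A(Q^2) = sum_k a_k z(Q^2)^k.
    z = z_of_q2(q2)
    return sum(a * z ** k for k, a in enumerate(coeffs))

# Placeholder coefficients: a0 set near g_A ~ 1.27, higher terms invented.
fa0 = form_factor(0.0, [1.27, -2.0, 1.0])
```

Because $z(0) = 0$ for $t_0 = 0$, the leading coefficient is directly the axial charge, which is one reason this parametrization is convenient for blinded analyses.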
Analysis and improvement of gas turbine blade temperature measurement error
NASA Astrophysics Data System (ADS)
Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui
2015-10-01
Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.
Finding Blackbody Temperature and Emissivity on a Sub-Pixel Scale
NASA Astrophysics Data System (ADS)
Bernstein, D. J.; Bausell, J.; Grigsby, S.; Kudela, R. M.
2015-12-01
Surface temperature and emissivity provide important insight into the ecosystem being remotely sensed. Dozier (1981) proposed an algorithm to solve for the percent coverage and temperatures of two different surface types (e.g., sea surface, cloud cover) within a given pixel, with a constant value of emissivity assumed. Here we build on Dozier (1981) by proposing an algorithm that solves for both the temperature and the emissivity of a water body within a satellite pixel, assuming known percent coverage of the surface types within the pixel. Our algorithm generates thermal infrared (TIR) and emissivity end-member spectra for the two surface types, then superposes these end-member spectra on emissivity and TIR spectra emitted from four pixels with varying percent coverage of different surface types. The algorithm was tested preliminarily (48 iterations) using simulated pixels containing more than one surface type, with temperature and emissivity percent errors ranging from 0 to 1.071% and 2.516 to 15.311%, respectively [1]. We then tested the algorithm using a MASTER image collected as part of the NASA Student Airborne Research Program (NASA SARP); here the temperature of water was calculated to be within 0.22 K of in situ data. The algorithm calculated the emissivity of water with an accuracy of 0.13 to 1.53% error for Salton Sea pixels in the MASTER data, also collected as part of NASA SARP. This method could improve retrievals for the HyspIRI sensor. [1] Percent error for emissivity was generated by averaging percent error across all selected band widths.
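The mixing step can be sketched with the Planck function: pixel radiance is an area-weighted sum of the two surface types' emitted radiances, and with the coverage fraction known, the water temperature can be recovered by inverting the mixing equation. The band, emissivities, and temperatures below are invented, and a single band with bisection stands in for the paper's multi-band retrieval.

```python
import math

# Physical constants for Planck spectral radiance (wavelength in metres).
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    # Blackbody spectral radiance B(lambda, T), W sr^-1 m^-3.
    return (2 * H * C ** 2 / lam ** 5) / (math.exp(H * C / (lam * KB * T)) - 1)

def mixed_radiance(lam, f, eps1, T1, eps2, T2):
    # Pixel radiance as the area-weighted sum of the two surface types.
    return f * eps1 * planck(lam, T1) + (1 - f) * eps2 * planck(lam, T2)

def retrieve_t1(lam, L, f, eps1, eps2, T2, lo=200.0, hi=400.0):
    # With coverage fraction f and the second surface's state known,
    # invert the mixing equation for T1 by bisection (B is monotone in T).
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mixed_radiance(lam, f, eps1, mid, eps2, T2) < L:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = 10.5e-6                                   # a TIR band near 10.5 um
L = mixed_radiance(lam, 0.7, 0.98, 290.0, 0.95, 275.0)   # simulated pixel
T_water = retrieve_t1(lam, L, 0.7, 0.98, 0.95, 275.0)
```

Solving the same equation in several bands simultaneously is what allows both temperature and emissivity to be estimated, as the abstract describes.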
Assimilation of Freeze - Thaw Observations into the NASA Catchment Land Surface Model
NASA Technical Reports Server (NTRS)
Farhadi, Leila; Reichle, Rolf H.; DeLannoy, Gabrielle J. M.; Kimball, John S.
2014-01-01
The land surface freeze-thaw (F-T) state plays a key role in the hydrological and carbon cycles and thus affects water and energy exchanges and vegetation productivity at the land surface. In this study, we developed an F-T assimilation algorithm for the NASA Goddard Earth Observing System, version 5 (GEOS-5) modeling and assimilation framework. The algorithm includes a newly developed observation operator that diagnoses the landscape F-T state in the GEOS-5 Catchment land surface model. The F-T analysis is a rule-based approach that adjusts Catchment model state variables in response to binary F-T observations, while also considering forecast and observation errors. A regional observing system simulation experiment was conducted using synthetically generated F-T observations. The assimilation of perfect (error-free) F-T observations reduced the root-mean-square errors (RMSE) of surface temperature and soil temperature by 0.206 °C and 0.061 °C, respectively, when compared to model estimates (equivalent to a relative RMSE reduction of 6.7 percent and 3.1 percent, respectively). For a maximum classification error (CEmax) of 10 percent in the synthetic F-T observations, the F-T assimilation reduced the RMSE of surface temperature and soil temperature by 0.178 °C and 0.036 °C, respectively. For CEmax = 20 percent, the F-T assimilation still reduces the RMSE of model surface temperature estimates by 0.149 °C but yields no improvement over the model soil temperature estimates. The F-T assimilation scheme is being developed to exploit planned operational F-T products from the NASA Soil Moisture Active Passive (SMAP) mission.
Ingram, W Scott; Yang, Jinzhong; Wendt, Richard; Beadle, Beth M; Rao, Arvind; Wang, Xin A; Court, Laurence E
2017-08-01
To assess the influence of non-rigid anatomy and differences in patient positioning between CT acquisition and endoscopic examination on endoscopy-CT image registration in the head and neck. Radiotherapy planning CTs and 31-35 daily treatment-room CTs were acquired for nineteen patients. Diagnostic CTs were acquired for thirteen of the patients. The surfaces of the airways were segmented on all scans and triangular meshes were created to render virtual endoscopic images with a calibrated pinhole model of an endoscope. The virtual images were used to take projective measurements throughout the meshes, with reference measurements defined as those taken on the planning CTs and test measurements defined as those taken on the daily or diagnostic CTs. The influence of non-rigid anatomy was quantified by 3D distance errors between reference and test measurements on the daily CTs, and the influence of patient positioning was quantified by 3D distance errors between reference and test measurements on the diagnostic CTs. The daily CT measurements were also used to investigate the influences of camera-to-surface distance, surface angle, and the interval of time between scans. Average errors in the daily CTs were 0.36 ± 0.61 cm in the nasal cavity, 0.58 ± 0.83 cm in the naso- and oropharynx, and 0.47 ± 0.73 cm in the hypopharynx and larynx. Average errors in the diagnostic CTs in those regions were 0.52 ± 0.69 cm, 0.65 ± 0.84 cm, and 0.69 ± 0.90 cm, respectively. All CTs had errors heavily skewed towards 0, albeit with large outliers. Large camera-to-surface distances were found to increase the errors, but the angle at which the camera viewed the surface had no effect. The errors in the Day 1 and Day 15 CTs were found to be significantly smaller than those in the Day 30 CTs (P < 0.05). Inconsistencies of patient positioning have a larger influence than non-rigid anatomy on projective measurement errors. 
In general, these errors are largest when the camera is in the superior pharynx, where it sees large distances and a lot of muscle motion. The errors are larger when the interval of time between CT acquisitions is longer, which suggests that the interval of time between the CT acquisition and the endoscopic examination should be kept short. The median errors found in this study are comparable to acceptable levels of uncertainty in deformable CT registration. Large errors are possible even when image alignment is very good, indicating that projective measurements must be made carefully to avoid these outliers. © 2017 American Association of Physicists in Medicine.
Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C
2013-12-01
To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
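The segmented-regression idea can be sketched by fitting separate ordinary-least-squares lines to the pre- and post-implementation months and comparing the level and the slope at the intervention point. The monthly rates below are synthetic numbers shaped like the study's (a flat 16.7 per 1000 doses baseline, an immediate drop, then a continuing downward trend); a full interrupted-time-series model would fit one regression with explicit level- and slope-change terms.

```python
def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept for one segment.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic monthly prevented-error rates per 1000 chemotherapy doses.
pre = [(m, 16.7) for m in range(30)]                       # flat baseline
post = [(m, 11.7 - 0.3 * (m - 30)) for m in range(30, 58)]  # drop + trend

b_pre, a_pre = fit_line([m for m, _ in pre], [e for _, e in pre])
b_post, a_post = fit_line([m for m, _ in post], [e for _, e in post])

# Immediate level change at the intervention month, and change in slope.
level_change = (a_post + b_post * 30) - (a_pre + b_pre * 30)
slope_change = b_post - b_pre
```

With these synthetic inputs the fit recovers the built-in effects exactly: a 5-per-1000 immediate drop and a -0.3 per month change in slope, the two quantities a segmented regression reports.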
Holonomic surface codes for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco
2018-02-01
Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
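For the random-error case, a standard smooth-surface result relates RMS roughness to the total integrated scatter (TIS). The sketch below uses that textbook formula rather than the paper's specific Fourier-optics derivations for each binary-optics error source, and the numbers are illustrative.

```python
import math

def total_integrated_scatter(sigma_rms, wavelength, aoi_deg=0.0):
    # Smooth-surface limit: fraction of reflected power scattered out of
    # the specular beam by random roughness of RMS height sigma_rms.
    delta = 4 * math.pi * sigma_rms * math.cos(math.radians(aoi_deg)) / wavelength
    return 1.0 - math.exp(-delta ** 2)

# Illustrative: 5 nm RMS roughness (e.g., from etch-depth errors) probed
# at a 633 nm test wavelength scatters roughly 1% of the reflected power.
tis = total_integrated_scatter(5e-9, 633e-9)
```

Systematic errors such as the staircase approximation to the ideal surface scatter into discrete diffraction orders instead, which is why the paper treats deterministic and random sources separately.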
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-05-25
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of errors caused by the airborne gravimeter sensors and by rough flight conditions, such errors cannot be completely eliminated; the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau, we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect.
The analysis results show that the proposed semi-parametric method combined with regularization is efficient to address such modelling problems.
Medhanyie, Araya Abrha; Spigt, Mark; Yebyo, Henock; Little, Alex; Tadesse, Kidane; Dinant, Geert-Jan; Blanco, Roman
2017-05-01
Mobile phone based applications are considered by many as potentially useful for addressing challenges and improving the quality of data collection in developing countries. Yet very little evidence is available supporting or refuting the potential and widely perceived benefits on the use of electronic forms on smartphones for routine patient data collection by health workers at primary health care facilities. A facility based cross sectional study using a structured paper checklist was prepared to assess the completeness and accuracy of 408 electronic records completed and submitted to a central database server using electronic forms on smartphones by 25 health workers. The 408 electronic records were selected randomly out of a total of 1772 maternal health records submitted by the health workers to the central database over a period of six months. Descriptive frequencies and percentages of data completeness and error rates were calculated. When compared to paper records, the use of electronic forms significantly improved data completeness by 209 (8%) entries. Of a total 2622 entries checked for completeness, 2602 (99.2%) electronic record entries were complete, while 2393 (91.3%) paper record entries were complete. A very small percentage of error rates, which was easily identifiable, occurred in both electronic and paper forms although the error rate in the electronic records was more than double that of paper records (2.8% vs. 1.1%). More than half of entry errors in the electronic records related to entering a text value. With minimal training, supervision, and no incentives, health care workers were able to use electronic forms for patient assessment and routine data collection appropriately and accurately with a very small error rate. Minimising the number of questions requiring text responses in electronic forms would be helpful in minimizing data errors. Copyright © 2017 Elsevier B.V. All rights reserved.
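The error-rate comparison quoted above (2.8% electronic vs. 1.1% paper) can be reproduced from counts with the usual log relative-risk confidence interval. The counts below are back-calculated illustrations consistent with the reported rates and the 2622 checked entries, not the study's raw tallies.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    # Risk ratio of group A vs. group B, with a 95% CI from the standard
    # error of log(RR): se = sqrt(1/a - 1/n_a + 1/b - 1/n_b).
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Illustrative counts: ~2.8% errors in electronic entries vs. ~1.1% in
# paper entries, over 2622 checked entries each (73 and 29 errors).
rr, lo, hi = relative_risk(73, 2622, 29, 2622)
```

A confidence interval that excludes 1 would indicate, as the study concludes, that the error rates of the two recording methods genuinely differ.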
Digital identification of cartographic control points
NASA Technical Reports Server (NTRS)
Gaskell, R. W.
1988-01-01
Techniques have been developed for the sub-pixel location of control points in satellite images returned by the Voyager spacecraft. The procedure uses digital imaging data in the neighborhood of the point to form a multipicture model of a piece of the surface. Comparison of this model with the digital image in each picture determines the control point locations to about a tenth of a pixel. At this level of precision, previously insignificant effects must be considered, including chromatic aberration, high level imaging distortions, and systematic errors due to navigation uncertainties. Use of these methods in the study of Jupiter's satellite Io has proven very fruitful.
A new method of measuring gravitational acceleration in an undergraduate laboratory program
NASA Astrophysics Data System (ADS)
Wang, Qiaochu; Wang, Chang; Xiao, Yunhuan; Schulte, Jurgen; Shi, Qingfan
2018-01-01
This paper presents a high accuracy method to measure gravitational acceleration in an undergraduate laboratory program. The experiment is based on water in a cylindrical vessel rotating about its vertical axis at a constant speed. The water surface forms a paraboloid whose focal length is related to rotational period and gravitational acceleration. This experimental setup avoids classical source errors in determining the local value of gravitational acceleration, so prevalent in the common simple pendulum and inclined plane experiments. The presented method combines multiple physics concepts such as kinematics, classical mechanics and geometric optics, offering the opportunity for lateral as well as project-based learning.
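The relation the experiment relies on can be written down directly: the rotating free surface is the paraboloid z = ω²r²/(2g), so its optical focal length is f = g/(2ω²), giving g = 2fω² = 8π²f/T². A sketch with invented example measurements:

```python
import math

def g_from_paraboloid(focal_length_m, period_s):
    # Rotating free surface: z = (omega^2 / (2 g)) r^2, a paraboloid with
    # focal length f = g / (2 omega^2); invert for g using omega = 2 pi / T.
    omega = 2 * math.pi / period_s
    return 2 * focal_length_m * omega ** 2

# Invented example: focal length 0.28 m measured at one rotation per 1.5 s.
g = g_from_paraboloid(0.28, 1.5)
```

Measuring f optically and T with a timer avoids the length and angle measurements that dominate the error budget of pendulum and inclined-plane methods.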
Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data
NASA Astrophysics Data System (ADS)
Bogdanova, Nina; Koleva, Mihaela
2018-02-01
Our original Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion approximating the curve on a non-equidistant point grid. The corridors of the given data and the criteria define the optimal behavior of the sought curve. The most important subinterval of the spectral data, where the minimum (surface plasmon resonance absorption) is sought, is investigated. This study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.
Miyashita, Theresa L; Diakogeorgiou, Eleni; Marrie, Kaitlyn
Investigation into the effect of cumulative subconcussive head impacts has yielded various results in the literature, with many supporting a link to neurological deficits. Little research has been conducted on men's lacrosse and associated balance deficits from head impacts. Hypotheses: (1) athletes will commit more errors on the postseason Balance Error Scoring System (BESS) test; (2) there will be a positive correlation between the change in BESS scores and head impact exposure data. Study design: prospective longitudinal study. Level of evidence: 3. Thirty-four Division I men's lacrosse players (age, 19.59 ± 1.42 years) wore helmets instrumented with a sensor to collect head impact exposure data over the course of a competitive season. Players completed a BESS test at the start and end of the competitive season. The number of errors from pre- to postseason increased during the double-leg stance on foam (P < 0.001), tandem stance on foam (P = 0.009), total number of errors on a firm surface (P = 0.042), and total number of errors on a foam surface (P = 0.007). There were significant correlations only between the total errors on a foam surface and linear acceleration (P = 0.038, r = 0.36), head injury criteria (P = 0.024, r = 0.39), and Gadd Severity Index scores (P = 0.031, r = 0.37). Changes in the total number of errors on a foam surface may be considered a sensitive measure to detect balance deficits associated with cumulative subconcussive head impacts sustained over the course of 1 lacrosse season, as measured by average linear acceleration, head injury criteria, and Gadd Severity Index scores. If there is microtrauma to the vestibular system due to repetitive subconcussive impacts, only an assessment that highly stresses the vestibular system may be able to detect these changes. Cumulative subconcussive impacts may result in neurocognitive dysfunction, including balance deficits, which are associated with an increased risk for injury.
The development of a strategy to reduce total number of head impacts may curb the associated sequelae. Incorporation of a modified BESS test, firm surface only, may not be recommended as it may not detect changes due to repetitive impacts over the course of a competitive season.
Goldmann tonometry tear film error and partial correction with a shaped applanation surface.
McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M
2018-01-01
The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than that of the GAT prism (4.57±0.18 mmHg, p<0.001). Tear film adhesion error was independent of applanation mire thickness (R²=0.09, p=0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p<0.001). Cadaver eye validation indicated the CATS prism's tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error by ~41%. Fluorescein solution increases the tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.
Analytical and Photogrammetric Characterization of a Planar Tetrahedral Truss
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Adams, Richard R.; Rhodes, Marvin D.
1990-01-01
Future space science missions are likely to require near-optical quality reflectors which are supported by a stiff truss structure. This support truss should conform closely to its intended shape to minimize its contribution to the overall surface error of the reflector. The current investigation was conducted to evaluate the planar surface accuracy of a regular tetrahedral truss structure by comparing the results of predicted and measured node locations. The truss is a 2-ring hexagonal structure composed of 102 equal-length truss members. Each truss member is nominally 2 meters in length between node centers and is comprised of a graphite/epoxy tube with aluminum nodes and joints. The axial stiffness and the length variation of the truss components were determined experimentally and incorporated into a static finite element analysis of the truss. From this analysis, the root mean square (RMS) surface error of the truss was predicted to be 0.11 mm (0.004 in). Photogrammetry tests were performed on the assembled truss to measure the normal displacements of the upper surface nodes and to determine if the truss would maintain its intended shape when subjected to repeated assembly. Considering the variation in the truss component lengths, the measured RMS error of 0.14 mm (0.006 in) in the assembled truss is relatively small. The test results also indicate that a repeatable truss surface is achievable. Several potential sources of error were identified and discussed.
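The quoted RMS surface error is simply the root-mean-square of the measured node deviations normal to the intended planar surface. A minimal sketch with made-up deviations (not the photogrammetry data):

```python
import math

def rms(deviations):
    """Root-mean-square of surface deviations (same units in and out)."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

# Illustrative node deviations in mm (hypothetical, not the measured data):
normal_deviations = [0.10, -0.15, 0.05, 0.20, -0.10, 0.12]
print(round(rms(normal_deviations), 3))  # -> 0.129
```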
NASA Astrophysics Data System (ADS)
Berlanga, Juan M.; Harbaugh, John W.
The Tabasco region contains a number of major oilfields, including some of the emerging "giant" oil fields which have received extensive publicity. Fields in the Tabasco region are associated with large geologic structures which are detected readily by seismic surveys. The structures seem to be associated with deep-seated movement of salt, and they are complexly faulted. Some structures show as much as 1000 milliseconds of relief on seismic lines. That part of the Tabasco region that has been studied was surveyed with a close-spaced rectilinear network of seismic lines. A study interpreting the structure of the area initially used only a fraction of the total seismic data available. The purpose was to compare "predictions" of reflection time based on widely spaced seismic lines with "results" obtained along more closely spaced lines. This process of comparison simulates the sequence of events in which a reconnaissance network of seismic lines is used to guide a succession of progressively more closely spaced lines. A square gridwork was established with lines spaced at 10 km intervals and, using machine-contoured maps, the results were compared with those obtained with seismic grids employing spacings of 5 and 2.5 km, respectively. The comparisons of predictions based on widely spaced lines with observations along closely spaced lines provide information by which an error function can be established. The error at any point can be defined as the difference between the predicted value for that point and the subsequently observed value at that point. Residuals obtained by fitting third-degree polynomial trend surfaces were used for comparison. The root mean square of the error measurement (expressed in seconds or milliseconds of reflection time) was found to increase more or less linearly with distance from the nearest seismic point.
Oil-occurrence probabilities were established on the basis of frequency distributions of trend-surface residuals obtained by fitting and subtracting polynomial trend surfaces from the machine-contoured reflection time maps. We found that there is a strong preferential relationship between the occurrence of petroleum (i.e. its presence versus absence) and particular ranges of trend-surface residual values. An estimate of the probability of oil occurring at any particular geographic point can be calculated on the basis of the estimated trend-surface residual value. This estimate, however, must be tempered by the probable error in the estimate of the residual value provided by the error function. The result, we believe, is a simple but effective procedure for estimating exploration outcome probabilities where seismic data provide the principal form of information in advance of drilling. Implicit in this approach is the comparison between a maturely explored area, for which both seismic and production data are available, and which serves as a statistical "training area", with the "target" area which is undergoing exploration and for which probability forecasts are to be calculated.
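The trend-surface step can be sketched in miniature. The study fits third-degree polynomial surfaces; for brevity the sketch below fits a first-degree (planar) trend t = a + b·x + c·y by least squares and returns the residuals on which the occurrence statistics are based. All data values are hypothetical:

```python
def fit_plane_trend(points):
    """Least-squares first-degree trend surface t = a + b*x + c*y.

    `points` is a list of (x, y, t) observations (e.g. reflection time).
    Solves the 3x3 normal equations by Gaussian elimination."""
    n = float(len(points))
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    st = sum(p[2] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxt = sum(p[0] * p[2] for p in points); syt = sum(p[1] * p[2] for p in points)
    A = [[n, sx, sy, st],
         [sx, sxx, sxy, sxt],
         [sy, sxy, syy, syt]]
    for i in range(3):                     # forward elimination
        for j in range(i + 1, 3):
            m = A[j][i] / A[i][i]
            A[j] = [aj - m * ai for aj, ai in zip(A[j], A[i])]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                    # back substitution
        coef[i] = (A[i][3] - sum(A[i][k] * coef[k] for k in range(i + 1, 3))) / A[i][i]
    return tuple(coef)

def residuals(points, coef):
    """Observed minus trend: the residuals used for the occurrence statistics."""
    a, b, c = coef
    return [t - (a + b * x + c * y) for x, y, t in points]

# Hypothetical reflection-time picks lying exactly on a plane:
pts = [(0, 0, 2.0), (1, 0, 2.5), (0, 1, 1.7), (2, 1, 2.7), (1, 2, 1.9)]
a, b, c = fit_plane_trend(pts)
print(round(a, 3), round(b, 3), round(c, 3))  # -> 2.0 0.5 -0.3
```

A third-degree surface works the same way, only with ten basis terms (1, x, y, x², xy, y², x³, x²y, xy², y³) and a 10x10 normal-equation system.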
NASA Astrophysics Data System (ADS)
Tada, Kohei; Koga, Hiroaki; Okumura, Mitsutaka; Tanaka, Shingo
2018-06-01
Spin contamination error in the total energy of the Au2/MgO system was estimated using the density functional theory/plane-wave scheme and approximate spin projection methods. This is the first investigation in which the errors in chemical phenomena on a periodic surface are estimated. The spin contamination error of the system was 0.06 eV. This value is smaller than that of the dissociation of Au2 in the gas phase (0.10 eV). This is because of the destabilization of the singlet spin state due to the weakening of the Au-Au interaction caused by the Au-MgO interaction.
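Approximate spin projection of the kind referred to above is commonly computed with Yamaguchi's formula from the broken-symmetry (BS) and high-spin (HS) energies and their <S²> expectation values; the sketch below assumes that form, and all numerical values are illustrative rather than taken from the paper:

```python
def ap_singlet_energy(e_bs, e_hs, s2_bs, s2_hs):
    """Approximate-spin-projection (Yamaguchi-type) estimate of the
    spin-pure low-spin energy:

        E_AP = (E_BS*<S^2>_HS - E_HS*<S^2>_BS) / (<S^2>_HS - <S^2>_BS)

    The spin contamination error of the BS solution is then E_AP - E_BS."""
    return (e_bs * s2_hs - e_hs * s2_bs) / (s2_hs - s2_bs)

# Illustrative numbers (not from the paper): ideal <S^2> values for a
# two-electron diradical, with the BS solution contaminated halfway.
e_bs, e_hs = -10.00, -9.80      # energies in eV, hypothetical
s2_bs, s2_hs = 1.0, 2.0
e_ap = ap_singlet_energy(e_bs, e_hs, s2_bs, s2_hs)
print(round(e_ap, 2), round(e_ap - e_bs, 2))  # -> -10.2 -0.2
```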
Li, Ying
2016-09-16
Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for the case where only intensity noise is present, and the other for the case where only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe.
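A least-squares phase estimate of the kind analyzed here can be sketched as follows. For intensity samples I_i = a + b·cos δ_i + c·sin δ_i with N phase shifts equally spaced over 2π, the normal equations decouple and the estimate reduces to synchronous detection, φ = atan2(−c, b). The fringe parameters and noise level below are illustrative, not taken from the paper:

```python
import math
import random

def ls_phase(intensities, deltas):
    """Least-squares phase estimate for I_i = a + b*cos(d_i) + c*sin(d_i),
    where b = B*cos(phi) and c = -B*sin(phi).

    For N shifts equally spaced over 2*pi the normal equations decouple,
    so b and c are proportional to the cosine and sine sums."""
    b = sum(I * math.cos(d) for I, d in zip(intensities, deltas))
    c = sum(I * math.sin(d) for I, d in zip(intensities, deltas))
    return math.atan2(-c, b)

# Simulated fringe with additive intensity noise, as in the error analysis:
random.seed(0)
N, phi_true = 8, 0.7
deltas = [2 * math.pi * i / N for i in range(N)]
noisy = [5.0 + 2.0 * math.cos(phi_true + d) + random.gauss(0, 0.02)
         for d in deltas]
print(abs(ls_phase(noisy, deltas) - phi_true) < 0.05)  # -> True
```

Repeating the simulation many times and taking the standard deviation of the recovered phase is exactly the kind of Monte Carlo check the paper compares against its analytic formulas.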
NASA Astrophysics Data System (ADS)
Zhou, Xu; Yang, Kun; Wang, Yan
2018-04-01
Sub-grid-scale orographic variation (smaller than 5 km) exerts turbulent form drag on atmospheric flows and significantly retards the wind speed. The Weather Research and Forecasting model (WRF) includes a turbulent orographic form drag (TOFD) scheme that adds the drag to the surface layer. In this study, another TOFD scheme has been incorporated in WRF3.7, which exerts an exponentially decaying drag from the surface layer to upper layers. To investigate the effect of the new scheme, WRF with the old scheme and with the new one was used to simulate the climate over the complex terrain of the Tibetan Plateau from May to October 2010. The two schemes were evaluated in terms of the direct impact (on wind fields) and the indirect impact (on air temperature and precipitation). The new TOFD scheme alleviates the mean bias in the surface wind components, and clearly reduces the root mean square error (RMSE) in seasonal mean wind speed (from 1.10 to 0.76 m s-1), when referring to the station observations. Furthermore, the new TOFD scheme also generally improves the simulation of the wind profile, as characterized by smaller biases and RMSEs than the old one when referring to radiosonde data. Meanwhile, the simulated precipitation with the new scheme is improved, with reduced mean bias (from 1.34 to 1.12 mm day-1) and RMSE, which is due to the weakening of the water vapor flux in the low-level atmosphere with the new scheme when crossing the Himalayan Mountains. However, the simulation of 2-m air temperature is little improved.
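The vertical distribution used by the new scheme can be sketched as exponentially decaying layer weights; the decay height and layer heights below are illustrative assumptions, not the values used in WRF:

```python
import math

def tofd_layer_weights(layer_heights, decay_height=1000.0):
    """Fraction of the total orographic form drag applied to each model
    layer, decaying as exp(-z/H) with height z above the surface.

    `decay_height` H (m) is an illustrative value, not the WRF setting.
    Weights are normalised so the column-integrated drag is conserved,
    unlike a surface-layer-only scheme that puts all drag in layer 0."""
    raw = [math.exp(-z / decay_height) for z in layer_heights]
    total = sum(raw)
    return [w / total for w in raw]

# Mid-layer heights (m) of a hypothetical model column:
heights = [50.0, 200.0, 500.0, 1200.0, 2500.0]
weights = tofd_layer_weights(heights)
print(round(sum(weights), 6))    # -> 1.0
print(weights[0] > weights[-1])  # -> True
```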
Tooth form and function: insights into adaptation through the analysis of dental microwear.
Ungar, Peter S
2009-01-01
Mammalian molar form is clearly adapted to fracture foods with specific material properties. Studies of dental functional morphology can therefore offer important clues about the diets of fossil taxa. That said, analyses of tooth form provide insights into ability to fracture resistant foods rather than the food preferences of individuals. Recent work suggests that specialized occlusal morphology can relate to either preferred foods, or to occasionally eaten fallback items critical for survival. This paper reviews dental microwear texture analysis, a new approach that can be used to infer fracture properties of foods eaten in life. High-resolution 3D point clouds of microwear surfaces are collected and analyzed using scale-sensitive fractal analyses. Resulting data are free from operator measurement error, and allow the characterization and comparison of within-species variation in microwear texture attributes. Examples given here include four extant primate species (two folivores and two hard object fallback feeders), and two fossil hominin taxa. All groups show at least some individuals with simple microwear surfaces that suggest a lack of consumption of hard and brittle abrasive foods during the last few meals. On the other hand, some hard object fallback specimens have very complex surfaces consistent with consumption of hard, brittle foods. The latter pattern is also found in one hominin species. These results suggest that dental microwear texture analysis can help us determine whether craniodental specializations in fossil species are adaptations to preferred foods, or to less often but still critical fallback items. Copyright (c) 2009 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Chung Liu, Wai; Wu, Bo; Wöhler, Christian
2018-02-01
Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.
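The core of photometric stereo at a single pixel is a small linear solve. The sketch below assumes a simple Lambertian model with three known light directions (the actual SAfS algorithm and its error model are considerably more elaborate), with purely synthetic values:

```python
def photometric_stereo_pixel(L, I):
    """Recover the albedo-scaled surface normal g = rho * n of one pixel
    from three images under Lambertian shading, I_k = L_k . g.

    L is a list of three unit light-direction vectors, I the three
    observed intensities; the 3x3 system is solved by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(L)
    g = []
    for col in range(3):
        M = [row[:] for row in L]
        for r in range(3):
            M[r][col] = I[r]
        g.append(det3(M) / d)
    norm = sum(x * x for x in g) ** 0.5   # albedo rho
    return [x / norm for x in g], norm

# Synthetic check with a known normal and albedo (illustrative values):
n_true, rho = [0.0, 0.6, 0.8], 0.9
L = [[0.0, 0.0, 1.0], [0.8, 0.0, 0.6], [0.0, 0.8, 0.6]]
I = [rho * sum(l * n for l, n in zip(row, n_true)) for row in L]
n_est, rho_est = photometric_stereo_pixel(L, I)
print(all(abs(a - b) < 1e-9 for a, b in zip(n_est, n_true)),
      round(rho_est, 3))  # -> True 0.9
```

The paper's point is precisely that the conditioning of this solve, and hence the slope error, depends on how the azimuth and zenith angles of the illumination vectors differ between images.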
Xia, Lang; Mao, Kebiao; Ma, Ying; Zhao, Fen; Jiang, Lipeng; Shen, Xinyi; Qin, Zhihao
2014-01-01
A practical algorithm was proposed to retrieve land surface temperature (LST) from Visible Infrared Imager Radiometer Suite (VIIRS) data in mid-latitude regions. The key parameter, transmittance, is generally computed from water vapor content, while a water vapor channel is absent in VIIRS data. In order to overcome this shortcoming, the water vapor content was obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) data in this study. The analyses of the estimation errors of vapor content and emissivity indicate that when the water vapor errors are within the range of ±0.5 g/cm2, the mean retrieval error of the present algorithm is 0.634 K; when the land surface emissivity errors range from −0.005 to +0.005, the mean retrieval error is less than 1.0 K. Validation with the standard atmospheric simulation shows that the average LST retrieval error for the twenty-three land types is 0.734 K, with a standard deviation of 0.575 K. Comparison with ground-station LST data indicates a mean retrieval accuracy of −0.395 K and a standard deviation of 1.490 K in regions with vegetation and water cover. Besides, the retrieval results for the test data have also been compared with the National Oceanic and Atmospheric Administration (NOAA) VIIRS LST products, and the results indicate that 82.63% of the difference values are within the range of −1 to 1 K, and 17.37% are within the range of ±1 to ±2 K. In conclusion, when the advantages of multiple sensors are fully exploited, more accurate results can be achieved in the retrieval of land surface temperature. PMID:25397919
Combined fabrication technique for high-precision aspheric optical windows
NASA Astrophysics Data System (ADS)
Hu, Hao; Song, Ci; Xie, Xuhui
2016-07-01
Specifications for optical components are becoming more and more stringent as the performance of modern optical systems improves. These strict requirements involve not only low-spatial-frequency surface accuracy and mid- and high-spatial-frequency surface errors, but also surface smoothness and so on. This presentation mainly focuses on the fabrication process for a square aspheric window, which combines accurate grinding, magnetorheological finishing (MRF) and smoothing polishing (SP). In order to remove the low-spatial-frequency surface errors and subsurface defects left after accurate grinding, the deterministic polishing method MRF, with its highly convergent and stable material removal rate, is applied. Then the SP technology with a pseudo-random path is adopted to eliminate the mid- and high-spatial-frequency surface ripples and high-slope errors that are a weakness of MRF. Additionally, the coordinate measurement method and interferometry are combined in different phases. An acid-etching method and ion beam figuring (IBF) are also investigated for observing and reducing subsurface defects. Actual fabrication results indicate that the combined fabrication technique can lead to high machining efficiency in manufacturing high-precision, high-quality optical aspheric windows.
Lu, Yongtao; Boudiffa, Maya; Dall'Ara, Enrico; Bellantuono, Ilaria; Viceconti, Marco
2015-11-01
In vivo micro-computed tomography (µCT) scanning is an important tool for longitudinal monitoring of the bone adaptation process in animal models. However, the errors associated with the use of in vivo µCT measurements for the evaluation of bone adaptations remain unclear. The aim of this study was to evaluate the measurement errors using the bone surface distance approach. The right tibiae of eight 14-week-old C57BL/6J female mice were consecutively scanned four times in an in vivo µCT scanner using a nominal isotropic image voxel size (10.4 µm) and the tibiae were repositioned between each scan. The repeated scan image datasets were aligned to the corresponding baseline (first) scan image dataset using rigid registration and a region of interest was selected in the proximal tibia metaphysis for analysis. The bone surface distances between the repeated and the baseline scan datasets were evaluated. It was found that the average (±standard deviation) median and 95th percentile bone surface distances were 3.10 ± 0.76 µm and 9.58 ± 1.70 µm, respectively. This study indicated that there were inevitable errors associated with the in vivo µCT measurements of bone microarchitecture and these errors should be taken into account for a better interpretation of bone adaptations measured with in vivo µCT. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
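The bone-surface-distance metrics used here (median and 95th percentile of nearest-vertex distances after rigid registration) can be sketched with a brute-force nearest-neighbour search; the toy point sets below stand in for the registered µCT surfaces:

```python
import math

def surface_distances(repeat_pts, baseline_pts):
    """Distance from each repeated-scan vertex to the nearest baseline
    vertex (brute force; real µCT meshes would need a spatial index)."""
    return [min(math.dist(p, q) for q in baseline_pts) for p in repeat_pts]

def percentile(values, p):
    """Nearest-rank percentile, p in (0, 100]."""
    s = sorted(values)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# Toy stand-ins for the registered surfaces: a grid of vertices and a
# copy offset 3 units along the surface normal (units are arbitrary).
baseline = [(10.0 * i, 10.0 * j, 0.0) for i in range(5) for j in range(5)]
repeat = [(10.0 * i, 10.0 * j, 3.0) for i in range(5) for j in range(5)]
d = surface_distances(repeat, baseline)
print(percentile(d, 50), percentile(d, 95))  # -> 3.0 3.0
```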
Acoustic sensor for real-time control for the inductive heating process
Kelley, John Bruce; Lu, Wei-Yang; Zutavern, Fred J.
2003-09-30
Disclosed is a system and method for providing closed-loop control of the heating of a workpiece by an induction heating machine, including generating an acoustic wave in the workpiece with a pulsed laser; optically measuring displacements of the surface of the workpiece in response to the acoustic wave; calculating a sub-surface material property by analyzing the measured surface displacements; creating an error signal by comparing an attribute of the calculated sub-surface material properties with a desired attribute; and reducing the error signal below an acceptable limit by adjusting, in real-time, as often as necessary, the operation of the inductive heating machine.
Further evaluation of the constrained least squares electromagnetic compensation method
NASA Technical Reports Server (NTRS)
Smith, William T.
1991-01-01
Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers means for mitigation of surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15 meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.
Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.
2013-01-01
Background: Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has recently been proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods: We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom, and a human cadaver kidney. Results: Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest-point error < 1 mm in the phantom, and an RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions: Tracked conoscopic holography is clinically viable; it enables minimally invasive surface-scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086
Selectivity in analytical chemistry: two interpretations for univariate methods.
Dorkó, Zsanett; Verbić, Tatjana; Horvai, George
2015-01-01
Selectivity is extremely important in analytical chemistry but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall in two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and contradictions between them unveiled. The error based selectivity is very general and very safe but its application to a range of samples (as opposed to a single sample) requires the knowledge of some constraint about the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration independent measure of selectivity. In contrast, the error based selectivity concept allows only yes/no type decision about selectivity. Copyright © 2014 Elsevier B.V. All rights reserved.
Operator Localization of Virtual Objects
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Menges, Brian M.; Null, Cynthia H. (Technical Monitor)
1998-01-01
Errors in the localization of nearby virtual objects presented via see-through, helmet-mounted displays are examined as a function of viewing conditions and scene content. Monocular, biocular, or stereoscopic presentation of the virtual objects, accommodation (required focus), subjects' age, and the position of physical surfaces are examined. Nearby physical surfaces are found to introduce localization errors that differ depending upon the other experimental factors. The apparent physical size and transparency of the virtual objects and physical surfaces, respectively, are also influenced by their relative position when superimposed. Design implications are discussed.
Learning from patients: Identifying design features of medicines that cause medication use problems.
Notenboom, Kim; Leufkens, Hubert Gm; Vromans, Herman; Bouvy, Marcel L
2017-01-30
Usability is a key factor in ensuring safe and efficacious use of medicines. However, several studies showed that people experience a variety of problems using their medicines. The purpose of this study was to identify design features of oral medicines that cause use problems among older patients in daily practice. A qualitative study with semi-structured interviews on the experiences of older people with the use of their medicines was performed (n=59). Information on practical problems, strategies to overcome these problems and the medicines' design features that caused these problems was collected. The practical problems and management strategies were categorized into 'use difficulties' and 'use errors'. A total of 158 use problems were identified, of which 45 were categorized as use difficulties and 113 as use errors. Design features that contributed the most to the occurrence of use difficulties were the dimensions and surface texture of the dosage form (29.6% and 18.5%, respectively). Design features that contributed the most to the occurrence of use errors were the push-through force of blisters (22.1%) and tamper-evident packaging (12.1%). These findings will help developers of medicinal products to proactively address potential usability issues with their medicines. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
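The canonical distribution described above can be sketched numerically: p_i ∝ exp(−βE_i) over candidate parameter sets, with the sensitivity factor β fixed by requiring the expectation of the error function to equal its specified value. All error-function values below are hypothetical:

```python
import math

def canonical_weights(errors, beta):
    """Maximum-entropy (canonical) probabilities p_i ∝ exp(-beta * E_i)
    over candidate parameter sets with error-function values E_i."""
    w = [math.exp(-beta * e) for e in errors]
    z = sum(w)
    return [x / z for x in w]

def solve_beta(errors, target, lo=0.0, hi=100.0, iters=200):
    """Bisect for the beta whose canonical distribution reproduces the
    specified expectation value <E> = target.

    <E> decreases monotonically in beta, from the sample mean at beta=0
    toward min(errors), so bisection applies when target lies between."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        p = canonical_weights(errors, mid)
        mean_e = sum(pi * e for pi, e in zip(p, errors))
        if mean_e > target:
            lo = mid      # need more weight on low-error models
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical error-function values for a grid of seabed sound-speed
# ratios, constrained to an expected error of 0.30:
errors = [0.21, 0.25, 0.30, 0.42, 0.55, 0.70]
beta = solve_beta(errors, target=0.30)
p = canonical_weights(errors, beta)
print(abs(sum(pi * e for pi, e in zip(p, errors)) - 0.30) < 1e-6)  # -> True
```

Marginal distributions for an individual parameter then follow by summing p over the other parameters, as the abstract describes.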
Fabrication of five-level ultraplanar micromirror arrays by flip-chip assembly
NASA Astrophysics Data System (ADS)
Michalicek, M. Adrian; Bright, Victor M.
2001-10-01
This paper reports a detailed study of the fabrication of various piston, torsion, and cantilever style micromirror arrays using a novel, simple, and inexpensive flip-chip assembly technique. Several rectangular and polar arrays were commercially prefabricated in the MUMPs process and then flip-chip bonded to form advanced micromirror arrays where adverse effects typically associated with surface micromachining were removed. These arrays were bonded by directly fusing the MUMPs gold layers with no complex preprocessing. The modules were assembled using a computer-controlled, custom-built flip-chip bonding machine. Topographically opposed bond pads were designed to correct for slight misalignment errors during bonding and typically result in less than 2 micrometers of lateral alignment error. Although flip-chip micromirror performance is briefly discussed, the means used to create these arrays is the focus of the paper. A detailed study of flip-chip process yield is presented which describes the primary failure mechanisms for flip-chip bonding. Studies of alignment tolerance, bonding force, stress concentration, module planarity, bonding machine calibration techniques, prefabrication errors, and release procedures are presented in relation to specific observations in process yield. Ultimately, the standard thermo-compression flip-chip assembly process remains a viable technique to develop highly complex prototypes of advanced micromirror arrays.
Non-null annular subaperture stitching interferometry for aspheric test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A non-null annular subaperture stitching interferometry (NASSI) method, combining the subaperture stitching idea with a non-null test method, is proposed for steep aspheric testing. Compared with standard annular subaperture stitching interferometry (ASSI), a partial null lens (PNL) is employed as an alternative to the transmission sphere, to generate different aspherical wavefronts as the references. The number of subapertures needed for coverage is thus greatly reduced, because the aspherical wavefronts better match the local slope of aspheric surfaces. Instead of various mathematical stitching algorithms, a simultaneous reverse optimizing reconstruction (SROR) method based on system modeling and ray tracing is proposed for full-aperture figure error reconstruction. All the subaperture measurements are simulated simultaneously with a multi-configuration model in a ray-tracing program, including modeling of the interferometric system and of the subaperture misalignments. With the multi-configuration model, the full-aperture figure error is extracted in the form of Zernike polynomials from the subaperture wavefront data by the SROR method. This method concurrently accomplishes subaperture retrace error and misalignment correction, requiring neither complex mathematical algorithms nor subaperture overlaps. A numerical simulation compares the performance of NASSI with that of standard ASSI, demonstrating the high accuracy of NASSI in testing steep aspheric surfaces. Experimental results of NASSI are shown to be in good agreement with those of a Zygo® Verifire™ Asphere interferometer.
Sampling errors in blunt dust samplers arising from external wall loss effects
NASA Astrophysics Data System (ADS)
Vincent, J. H.; Gibson, H.
Evidence is given that, with some forms of blunt dust sampler under conditions relating to those encountered in practical occupational hygiene and environmental monitoring, particles which impact onto the outer surface of the sampler body may not adhere permanently, and may eventually enter the sampling orifice. The effect of such external wall loss is to bring about excess sampling, where errors as high as 100% could arise. The problem is particularly important in the sampling of dry airborne particulates of the type commonly found in practical situations. For a given sampler configuration, the effect becomes more marked as the particle size increases or as the ratio of sampling velocity to ambient wind speed increases. We would expect it be greater for gritty, crystalline material than for smoother, amorphous material. Possible mechanisms controlling external wall losses were examined, and it was concluded that particle 'blow-off' (as opposed to particle 'bounce') is the most plausible. On the basis of simple experiments, it might be possible to make corrections for the sampling errors in question, but caution is recommended in doing so because of the unpredictable effects of environmental factors such as temperature and relative humidity. Of the possible practical solutions to the problem, it is felt that the best approach lies in the correct choice of sampler inlet design.
Plans for a sensitivity analysis of bridge-scour computations
Dunn, David D.; Smith, Peter N.
1993-01-01
Plans for an analysis of the sensitivity of Level 2 bridge-scour computations are described. Cross-section data from 15 bridge sites in Texas are modified to reflect four levels of field effort ranging from no field surveys to complete surveys. Data from United States Geological Survey (USGS) topographic maps will be used to supplement incomplete field surveys. The cross sections are used to compute the water-surface profile through each bridge for several T-year recurrence-interval design discharges. The effect of determining the downstream energy grade-line slope from topographic maps is investigated by systematically varying the starting slope of each profile. The water-surface profile analyses are then used to compute potential scour resulting from each of the design discharges. The planned results will be presented in the form of exceedance-probability versus scour-depth plots with the maximum and minimum scour depths at each T-year discharge presented as error bars.
NASA Astrophysics Data System (ADS)
Plane, John M. C.; Saltzman, Eric S.
1987-10-01
A kinetic study is presented of the reaction between lithium atoms and hydrogen chloride over the temperature range 700-1000 K. Li atoms are produced in an excess of HCl and He bath gas by pulsed photolysis of LiCl vapor. The concentration of the metal atoms is then monitored in real time by the technique of laser-induced fluorescence of Li atoms at λ = 670.7 nm using a pulsed nitrogen-pumped dye laser and box-car integration of the fluorescence signal. Absolute second-order rate constants for this reaction have been measured at T = 700, 750, 800, and 900 K. At T = 1000 K the reverse reaction is sufficiently fast that equilibrium is rapidly established on the time scale of the experiment. A fit of the data between 700 and 900 K to the Arrhenius form, with 2σ errors calculated from the absolute errors in the rate constants, yields k(T) = (3.8±1.1)×10^-10 exp[-(883±218)/T] cm^3 molecule^-1 s^-1. This result is interpreted through a modified form of collision theory which is constrained to take account of the conservation of total angular momentum during the reaction. Thereby we obtain an estimate for the reaction energy threshold, E0 = 8.2±1.4 kJ mol^-1 (where the error arises from uncertainty in the exothermicity of the reaction), in very good agreement with a crossed molecular beam study of the title reaction, and substantially lower than estimates of E0 from both semiempirical and ab initio calculations of the potential energy surface.
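The quoted Arrhenius fit can be illustrated with a short sketch: linearising k(T) = A exp(-B/T) as ln k = ln A - B/T reduces the fit to ordinary least squares. The rate constants below are synthetic values generated from the reported parameters, not the paper's measured data:

```python
import math

# Synthetic rate constants generated from the fitted expression in the
# abstract: k(T) = 3.8e-10 * exp(-883 / T) cm^3 molecule^-1 s^-1.
A_true, B_true = 3.8e-10, 883.0
temps = [700.0, 750.0, 800.0, 900.0]
ks = [A_true * math.exp(-B_true / T) for T in temps]

# Linearise: ln k = ln A - B * (1/T), then closed-form least squares.
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in ks]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
B_fit = -slope                        # activation temperature, in K
A_fit = math.exp(ybar - slope * xbar)  # pre-exponential factor

print(f"A = {A_fit:.2e} cm^3 molecule^-1 s^-1, B = {B_fit:.1f} K")
```

With exact synthetic data the regression recovers the generating parameters; with real measurements the scatter of the points would propagate into the quoted 2σ uncertainties.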
Roth, Dan
2013-01-01
Objective This paper presents a coreference resolution system for clinical narratives. Coreference resolution aims at clustering all mentions in a single document into coherent entities. Materials and methods A knowledge-intensive approach to coreference resolution is employed. The domain knowledge used includes several domain-specific lists, knowledge-intensive mention parsing, and a task-informed discourse model. Mention parsing allows us to abstract over the surface form of a mention and represent each mention using a higher-level representation, which we call the mention's semantic representation (SR). The SR reduces the mention to a standard form and hence provides better support for comparing and matching. Existing coreference resolution systems tend to ignore discourse aspects and rely heavily on lexical and structural cues in the text. The authors break from this tradition and present a discourse model for “person”-type mentions in clinical narratives, which greatly simplifies coreference resolution. Results The system was evaluated on four different datasets made available in the 2011 i2b2/VA coreference challenge. The unweighted average of F1 scores (over B-cubed, MUC, and CEAF) varied from 84.2% to 88.1%. These experiments show that domain knowledge is effective for different mention types across all the datasets. Discussion Error analysis shows that most of the recall errors made by the system can be handled by further addition of domain knowledge. The precision errors, on the other hand, are more subtle and indicate the need to understand the relations in which mentions participate in order to build a robust coreference system. Conclusion This paper presents an approach that makes extensive use of domain knowledge to significantly improve coreference resolution. The authors state that their system and the knowledge sources developed will be made publicly available. PMID:22781192
Optical surface pressure measurements: Accuracy and application field evaluation
NASA Astrophysics Data System (ADS)
Bukov, A.; Mosharov, V.; Orlov, A.; Pesetsky, V.; Radchenko, V.; Phonov, S.; Matyash, S.; Kuzmin, M.; Sadovskii, N.
1994-07-01
Optical pressure measurement (OPM) is a new pressure measurement method under rapid development in several aerodynamic research centers: TsAGI (Russia), Boeing, NASA, and McDonnell Douglas (USA), and DLR (Germany). At its present level of maturity, the OPM method can serve as a standard experimental technique of aerodynamic investigation within certain fields of application. Those fields are determined mainly by its accuracy. The accuracy of the OPM method is governed by three groups of errors: (1) errors of the luminescent pressure sensor (LPS) itself, such as uncompensated temperature influence, photodegradation, temperature and pressure hysteresis, variation of the LPS parameters from point to point on the model surface, etc.; (2) errors of the measurement system, such as photodetector noise, nonlinearity and nonuniformity of the photodetector, time and temperature offsets, etc.; and (3) methodological errors, owing to displacement and deformation of the model in the airflow, contamination of the model surface, scattering of the excitation and luminescent light from the model surface and test-section walls, etc. The OPM method yields a total error in measured pressure on the order of 1 percent. This accuracy is sufficient to visualize the pressure field, to determine total and distributed aerodynamic loads, and to address some problems of local aerodynamic investigation at transonic and supersonic velocities. OPM is less effective at low subsonic velocities (M less than 0.4) and for precise measurements, for example airfoil optimization. Current limitations of the OPM method are discussed using, as an example, surface pressure measurements and calculations of the integral loads on the wings of a canard-aircraft model. The pressure measurement system and data reduction methods used in these tests are also described.
Rausch, R; MacDonald, K
1997-03-01
We used a protocol consisting of a continuous presentation of stimuli with associated response requests during an intracarotid sodium amobarbital procedure (IAP) to study the effects of hemisphere injected (speech dominant vs. nondominant) and seizure focus (left temporal lobe vs. right temporal lobe) on the pattern of behavioral response errors for three types of visual stimuli (pictures of common objects, words, and abstract forms). Injection of the left speech dominant hemisphere compared to the right nondominant hemisphere increased overall errors and affected the pattern of behavioral errors. The presence of a seizure focus in the contralateral hemisphere increased overall errors, particularly for the right temporal lobe seizure patients, but did not affect the pattern of behavioral errors. Left hemisphere injections disrupted both naming and reading responses at a rate similar to that of matching-to-sample performance. Also, a short-term memory deficit was observed with all three stimuli. Long-term memory testing following the left hemisphere injection indicated that only for pictures of common objects were there fewer errors during the early postinjection period than for the later long-term memory testing. Therefore, despite the inability to respond to picture stimuli, picture items, but not words or forms, could be sufficiently encoded for later recall. In contrast, right hemisphere injections resulted in few errors, with a pattern suggesting a mild general cognitive decrease. A selective weakness in learning unfamiliar forms was found. Our findings indicate that different patterns of behavioral deficits occur following the left vs. right hemisphere injections, with selective patterns specific to stimulus type.
NASA Astrophysics Data System (ADS)
Glover, Paul W. J.
2016-07-01
When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
Fault-tolerance thresholds for the surface code with fabrication errors
NASA Astrophysics Data System (ADS)
Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.
2017-10-01
The construction of topological error correction codes requires the ability to fabricate a lattice of physical qubits embedded on a manifold with a nontrivial topology such that the quantum information is encoded in the global degrees of freedom (i.e., the topology) of the manifold. However, the manufacturing of large-scale topological devices will undoubtedly suffer from fabrication errors—permanent faulty components such as missing physical qubits or failed entangling gates—introducing permanent defects into the topology of the lattice and hence significantly reducing the distance of the code and the quality of the encoded logical qubits. In this work we investigate how fabrication errors affect the performance of topological codes, using the surface code as the test bed. A known approach to mitigate defective lattices involves the use of primitive swap gates in a long sequence of syndrome extraction circuits. Instead, we show that in the presence of fabrication errors the syndrome can be determined using the supercheck operator approach and the outcome of the defective gauge stabilizer generators without any additional computational overhead or use of swap gates. We report numerical fault-tolerance thresholds in the presence of both qubit fabrication and gate fabrication errors using a circuit-based noise model and the minimum-weight perfect-matching decoder. Our numerical analysis is most applicable to two-dimensional chip-based technologies, but the techniques presented here can be readily extended to other topological architectures. We find that in the presence of 8% qubit fabrication errors, the surface code can still tolerate a computational error rate of up to 0.1%.
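The supercheck idea can be sketched in a few lines. This is a toy illustration, not the authors' simulation code: when a physical qubit cannot be fabricated, the two plaquette stabilizers that share it are merged into one "supercheck" whose support is their symmetric difference, so the missing qubit drops out of the merged parity check:

```python
# Two adjacent plaquette stabilizers, each represented by the set of
# physical-qubit indices it acts on; qubit 2 is shared between them.
plaquette_a = frozenset({0, 1, 2, 3})
plaquette_b = frozenset({2, 4, 5, 6})
faulty = {2}  # qubit 2 failed during fabrication

def supercheck(p, q):
    """Merge two parity checks: shared qubits cancel out of the product."""
    return p ^ q  # symmetric difference

merged = supercheck(plaquette_a, plaquette_b)
assert faulty.isdisjoint(merged)  # the supercheck avoids the faulty qubit
print(sorted(merged))  # [0, 1, 3, 4, 5, 6]
```

The merged operator still yields a valid (lower-resolution) syndrome bit, which is why no swap-gate circuitry is needed to route around the defect.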
Digital Paper Technologies for Topographical Applications
2011-09-19
The measures examined were training time for each method, time for entry of features, procedural errors, handwriting recognition errors, and user preference. Some form fields (e.g., a checkbox, or text restricted to a specific list of values) provide constraints to the handwriting recognizer when the user fills out the form.
ERIC Educational Resources Information Center
Hodgson, Catherine; Lambon Ralph, Matthew A.
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…
ERIC Educational Resources Information Center
van den Bemt, P. M. L. A.; Robertz, R.; de Jong, A. L.; van Roon, E. N.; Leufkens, H. G. M.
2007-01-01
Background: Medication errors can result in harm, unless barriers to prevent them are present. Drug administration errors are less likely to be prevented, because they occur in the last stage of the drug distribution process. This is especially the case in non-alert patients, as patients often form the final barrier to prevention of errors.…
Hospital prescribing errors: epidemiological assessment of predictors
Fijn, R; Van den Bemt, P M L A; Chow, M; De Blaey, C J; De Jong-Van den Berg, L T W; Brouwers, J R B J
2002-01-01
Aims To demonstrate an epidemiological method to assess predictors of prescribing errors. Methods A retrospective case-control study, comparing prescriptions with and without errors. Results Only prescriber and drug characteristics were associated with errors. Prescriber characteristics were medical specialty (e.g. orthopaedics: OR: 3.4, 95% CI 2.1, 5.4) and prescriber status (e.g. verbal orders transcribed by nursing staff: OR: 2.5, 95% CI 1.8, 3.6). Drug characteristics were dosage form (e.g. inhalation devices: OR: 4.1, 95% CI 2.6, 6.6), therapeutic area (e.g. gastrointestinal tract: OR: 1.7, 95% CI 1.2, 2.4) and continuation of preadmission treatment (Yes: OR: 1.7, 95% CI 1.3, 2.3). Conclusions Other hospitals could use our epidemiological framework to identify their own error predictors. Our findings suggest a focus on specific prescribers, dosage forms and therapeutic areas. We also found that prescriptions originating from general practitioners involved errors and therefore, these should be checked when patients are hospitalized. PMID:11874397
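The quoted odds ratios and 95% confidence intervals follow the standard 2×2 case-control calculation. A minimal sketch, using hypothetical counts (not the study's data) chosen to land near the reported orthopaedics figure:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Woolf 95% CI for a 2x2 case-control table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: 40 orthopaedics prescriptions with errors vs 30
# without, against 360 / 920 for all other specialties.
or_, lo, hi = odds_ratio(40, 30, 360, 920)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval excluding 1.0, as here, is what marks a characteristic as a statistically significant error predictor in this design.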
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a method of correcting volumetric geometric errors for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer serves as the Observer, measuring the machine's error functions. A systematic error map of the machine's workspace is produced from these error-function measurements, and the error map in turn defines the error correction strategy. The article proposes a new method of forming this strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
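The postprocessor idea can be sketched briefly: interpolate the measured error map at each commanded position and subtract the predicted error from the command. The positions and error values below are assumed for illustration, not taken from the article:

```python
# One axis of a hypothetical error map: positions (mm) where the laser
# interferometer measured the positional error, and the error (mm) there.
map_pos = [0.0, 100.0, 200.0, 300.0]
map_err = [0.000, 0.004, 0.007, 0.005]

def error_at(x):
    """Linearly interpolate the error map at commanded position x."""
    for i in range(len(map_pos) - 1):
        x0, x1 = map_pos[i], map_pos[i + 1]
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return map_err[i] + t * (map_err[i + 1] - map_err[i])
    raise ValueError("position outside mapped workspace")

def corrected(x):
    # The postprocessor subtracts the predicted error from the command.
    return x - error_at(x)

print(corrected(150.0))  # 150 - 0.0055 = 149.9945
```

A full volumetric correction would interpolate a three-dimensional map per axis, but the per-axis logic is the same.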
A preliminary estimate of geoid-induced variations in repeat orbit satellite altimeter observations
NASA Technical Reports Server (NTRS)
Brenner, Anita C.; Beckley, B. D.; Koblinsky, C. J.
1990-01-01
Altimeter satellites are often maintained in a repeating orbit to facilitate the separation of sea-height variations from the geoid. However, atmospheric drag and solar radiation pressure cause a satellite orbit to drift. For Geosat this drift causes the ground track to vary by ±1 km about the nominal repeat path. This misalignment leads to an error in the estimates of sea surface height variations because of the local slope in the geoid. This error has been estimated globally for the Geosat Exact Repeat Mission using a mean sea surface constructed from Geos 3 and Seasat altimeter data. Over most of the ocean the geoid gradient is small, and the repeat-track misalignment leads to errors of only 1 to 2 cm. However, in the vicinity of trenches, continental shelves, islands, and seamounts, errors can exceed 20 cm. The estimated error is compared with direct estimates from Geosat altimetry, and a strong correlation is found in the vicinity of the Tonga and Aleutian trenches. This correlation increases as the orbit error is reduced because of the increased signal-to-noise ratio.
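The size of this error follows from a one-line estimate: the apparent sea-height error is roughly the local geoid gradient times the cross-track offset. The gradients below are assumed illustrative values, not numbers from the paper:

```python
def height_error_cm(geoid_slope_m_per_km, offset_km):
    """Apparent sea-surface-height error (cm) from repeat-track
    misalignment: local geoid slope times cross-track offset."""
    return geoid_slope_m_per_km * offset_km * 100.0  # m -> cm

# Open ocean: a gentle geoid slope and the ~1 km Geosat offset give cm-level errors.
print(height_error_cm(0.02, 1.0))   # 2.0 cm
# Near a trench the slope can be an order of magnitude steeper:
print(height_error_cm(0.25, 1.0))   # 25.0 cm
```

This is why the error map tracks bathymetric features: the geoid gradient, not the orbit drift itself, sets the spatial pattern.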
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsumi Marukawa; Kazuki Nakashima; Masashi Koga
1994-12-31
This paper presents a paper-form processing system with an error-correcting function for reading handwritten kanji strings. In form processing, names and addresses are important key data, and this paper takes up an error-correcting method for name and address recognition. The method automatically corrects errors of the kanji OCR (Optical Character Reader) with the help of word dictionaries and other knowledge, and it allows names and addresses to be written in any style. The method consists of word matching and "furigana" verification for name strings, and address approval for address strings. For word matching, kanji name candidates are extracted by automaton-type word matching. In furigana verification, kana candidate characters recognized by the kana OCR are compared with kana retrieved from the name dictionary on the basis of the kanji name candidates given by word matching; the correct name is selected from the combined results of word matching and furigana verification. The address approval step efficiently searches for the right address with a bottom-up procedure that follows hierarchical relations from a lower place name to an upper one, using the positional conditions among the place names. Experiments on 5,032 forms confirmed that the error-correcting method substantially improves the recognition rate and processing speed.
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d = 3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
Comparative performance of solar thermal power generation concepts
NASA Technical Reports Server (NTRS)
Wen, L.; Wu, Y. C.
1976-01-01
A performance comparison is made between the central receiver system (power tower) and a distributed system using either dishes or troughs and lines to transport fluids to the power station. These systems were analyzed at a rated capacity of 30 MW of thermal energy delivered in the form of superheated steam at 538 C (1000 F) and 68 atm (1000 psia), using consistent weather data, collector surface waviness, pointing error, and electric conversion efficiency. The comparisons include technical considerations for component requirements, land utilization, and annual thermal energy collection rates. The relative merits of different representative systems are dependent upon the overall conversion as expressed in the form of performance factors in this paper. These factors are essentially indices of the relative performance effectiveness for different concepts based upon unit collector area. These performance factors enable further economic tradeoff studies of systems to be made by comparing them with projected production costs for these systems.
The Barnes-Evans color-surface brightness relation: A preliminary theoretical interpretation
NASA Technical Reports Server (NTRS)
Shipman, H. L.
1980-01-01
Model atmosphere calculations are used to assess whether an empirically derived relation between V-R and surface brightness is independent of a variety of stellar parameters, including surface gravity. This relationship is used in a variety of applications, including the determination of the distances of Cepheid variables using a method based on the Baade-Wesselink method. It is concluded that the use of a main sequence relation between V-R color and surface brightness in determining radii of giant stars is subject to systematic errors that are smaller than 10% in the determination of a radius or distance for temperatures cooler than 12,000 K. The error in white dwarf radii determined from a main sequence color-surface brightness relation is roughly 10%.
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2017-04-01
How to design a reliable ensemble prediction strategy with considering the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skills of El Niño-Southern Oscillation (ENSO) through using an intermediate coupled model. We first estimate and analyze the model uncertainties from the ensemble Kalman filter analysis results through assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the missed physical processes of the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step by the developed stochastic model-error model during the 12-month forecasting process, and add the zero-mean perturbations into the physical fields to mimic the presence of missing processes and high-frequency stochastic noises. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differentiated by whether they consider the stochastic perturbations. The comparison results show that the stochastic perturbations have a significant effect on improving the ensemble-mean prediction skills during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean from a series of zero-mean perturbations, which reduces the forecasting biases and then corrects the forecast through this nonlinear heating mechanism.
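The key mechanism, a nonlinear term turning zero-mean perturbations into a nonzero ensemble-mean tendency, can be sketched with a toy model (an assumed illustration, not the intermediate coupled model itself):

```python
import random

random.seed(0)
n_members, n_steps, sigma = 200, 12, 0.3

def forecast(perturbed):
    """Step a toy state forward; a quadratic term mimics nonlinear heating.

    Zero-mean noise is added inside the nonlinear term each step, so
    E[(x + noise)^2] = x^2 + sigma^2 exceeds the unperturbed tendency."""
    x = 1.0
    for _ in range(n_steps):
        noise = random.gauss(0.0, sigma) if perturbed else 0.0
        x = x + 0.01 * (x + noise) ** 2
    return x

control = forecast(False)  # single unperturbed run
ensemble_mean = sum(forecast(True) for _ in range(n_members)) / n_members
print(control, ensemble_mean)  # the ensemble mean is systematically shifted
```

Even though every perturbation averages to zero, the quadratic tendency rectifies them into a systematic shift of the ensemble mean relative to the control run, which is the nonlinear correction mechanism the abstract describes.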
Improved Soundings and Error Estimates using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.
Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria
2017-08-01
Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.
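One common definition of surface rugosity, 3D surface area divided by planar (projected) area, can be computed directly from a triangulated mesh such as those produced by structure-from-motion. This is an assumed formulation for illustration (the study may use additional complexity metrics), shown on a hypothetical two-triangle patch:

```python
import math

def tri_area(p, q, r):
    """Area of a 3D triangle via half the cross-product magnitude."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    cx, cy, cz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
    return 0.5 * math.sqrt(cx*cx + cy*cy + cz*cz)

def rugosity(triangles):
    """3D surface area over planar (z=0 projected) area."""
    area3d = sum(tri_area(*t) for t in triangles)
    flat = [tuple((x, y, 0.0) for x, y, _ in t) for t in triangles]
    area2d = sum(tri_area(*t) for t in flat)
    return area3d / area2d

flat_patch = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
bumpy_patch = [((0, 0, 0), (1, 0, 0.5), (0, 1, 0.5))]
print(rugosity(flat_patch))   # 1.0 for a perfectly flat patch
print(rugosity(bumpy_patch))  # > 1.0 once there is vertical relief
```

Because the measured mesh varies with survey coverage and conditions, repeated surveys of the same patch can return different rugosity values, which is precisely the measurement error the study quantifies.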
Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States
NASA Astrophysics Data System (ADS)
Sousan, Sinan Dhia Jameel
This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. 
The observation error covariance matrix included site representation that scaled the observation error by land use (i.e. urban or rural locations). In theory, urban locations should have less effect on surrounding areas than rural sites, which can be controlled using site representation error. The annual evaluations showed substantial improvements in model performance with increases in the correlation coefficient from 0.36 (prior) to 0.76 (posterior), and decreases in the fractional error from 0.43 (prior) to 0.15 (posterior). In addition, the normalized mean error decreased from 0.36 (prior) to 0.13 (posterior), and the RMSE decreased from 5.39 µg m-3 (prior) to 2.32 µg m-3 (posterior). OI decreased model bias for both large spatial areas and point locations, and could be extended to more advanced data assimilation methods. The current work will be applied to a five year (2000-2004) CMAQ simulation aimed at improving aerosol model estimates. The posterior model concentrations will be used to inform exposure studies over the U.S. that relate aerosol exposure to mortality and morbidity rates. Future improvements for the OI techniques used in the current study will include combining both surface and satellite data to improve posterior model estimates. Satellite data have high spatial and temporal resolutions in comparison to surface measurements, which are scarce but more accurate than model estimates. The satellite data are subject to noise affected by location and season of retrieval. The implementation of OI to combine satellite and surface data sets has the potential to improve posterior model estimates for locations that have no direct measurements.
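The OI analysis described in this record can be sketched in a few lines of linear algebra. The grid values, observation operator, and error covariances below are invented for illustration and are not the study's actual settings:

```python
import numpy as np

def oi_update(x_prior, B, H, y, R):
    """Posterior state and gain for a linear optimal-interpolation analysis:
    x_post = x_prior + K (y - H x_prior), K = B H^T (H B H^T + R)^-1."""
    S = H @ B @ H.T + R                  # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)       # Kalman-style gain
    x_post = x_prior + K @ (y - H @ x_prior)
    return x_post, K

# Three grid cells, one surface observation of the middle cell (made-up values).
x_prior = np.array([10.0, 12.0, 11.0])   # model PM2.5, ug/m3
B = 4.0 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))))  # correlated model error
H = np.array([[0.0, 1.0, 0.0]])          # observe cell 1 only
y = np.array([15.0])                     # surface measurement
R = np.array([[1.0]])                    # observation error variance

x_post, K = oi_update(x_prior, B, H, y, R)
```

Because the model error variance (4.0) exceeds the observation error variance (1.0), the posterior at the observed cell moves most of the way toward the measurement, and the correlated background errors spread part of the correction to the neighboring cells.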
Linear and Nonlinear Response of a Rotating Tokamak Plasma to a Resonant Error-Field
NASA Astrophysics Data System (ADS)
Fitzpatrick, Richard
2014-10-01
An in-depth investigation of the effect of a resonant error-field on a rotating, quasi-cylindrical, tokamak plasma is performed within the context of resistive-MHD theory. General expressions for the response of the plasma at the rational surface to the error-field are derived in both the linear and nonlinear regimes, and the extents of these regimes are mapped out in parameter space. Torque-balance equations are also obtained in both regimes. These equations are used to determine the steady-state plasma rotation at the rational surface in the presence of the error-field. It is found that, provided the intrinsic plasma rotation is sufficiently large, the torque-balance equations possess dynamically stable low-rotation and high-rotation solution branches, separated by a forbidden band of dynamically unstable solutions. Moreover, bifurcations between the two stable solution branches are triggered as the amplitude of the error-field is varied. A low- to high-rotation bifurcation is invariably associated with a significant reduction in the width of the magnetic island chain driven at the rational surface, and vice versa. General expressions for the bifurcation thresholds are derived, and their domains of validity mapped out in parameter space. This research was funded by the U.S. Department of Energy under Contract DE-FG02-04ER-54742.
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Xin, Xiaozhou; Peng, Zhiqing; Zhang, Hailong; Li, Li; Shao, Shanshan; Liu, Qinhuo
2017-10-01
Evapotranspiration (ET) plays an important role in surface-atmosphere interactions and can be monitored using remote sensing data. The Visible Infrared Imaging Radiometer Suite (VIIRS) is a new-generation optical satellite sensor that provides daily global coverage at 375- to 750-m spatial resolutions with 22 spectral channels (0.412 to 12.05 μm) and is capable of monitoring ET from regional to global scales. However, few studies have focused on methods of acquiring ET from VIIRS images. The objective of this study is to introduce an algorithm that uses the VIIRS data and meteorological variables to estimate the energy budgets of land surfaces, including the net radiation, soil heat flux, sensible heat flux, and latent heat flux. A single-source model based on the surface energy balance equation is used to obtain surface heat fluxes within the Zhangye oasis in China. The results were validated using observations collected during the HiWATER (Heihe Watershed Allied Telemetry Experimental Research) project. To facilitate comparison, we also use Moderate Resolution Imaging Spectroradiometer (MODIS) data to retrieve the regional surface heat fluxes. The validation results show that it is feasible to estimate the turbulent heat flux based on the VIIRS sensor and that these data have certain advantages (i.e., the mean bias error of sensible heat flux is 15.23 W m-2) compared with MODIS data (i.e., the mean bias error of sensible heat flux is -29.36 W m-2). Error analysis indicates that, in our model, the accuracies of the estimated sensible heat fluxes rely on the errors in the retrieved surface temperatures and the canopy heights.
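In single-source models of this kind, the latent heat flux is typically obtained as the residual of the surface energy balance, Rn = G + H + LE. A minimal sketch of that closure, with invented flux values:

```python
# Residual closure of the surface energy balance used by single-source models:
# latent heat flux LE = Rn - G - H. All values below are illustrative (W m-2),
# not HiWATER measurements.

def latent_heat_flux(rn, g, h):
    """Latent heat flux as the energy-balance residual (W m-2)."""
    return rn - g - h

le = latent_heat_flux(rn=520.0, g=80.0, h=150.0)  # -> 290.0 W m-2
```

Any bias in the retrieved sensible heat flux H therefore maps one-to-one (with opposite sign) into the latent heat flux estimate.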
Error-Based Design Space Windowing
NASA Technical Reports Server (NTRS)
Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman
2002-01-01
Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.
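A one-dimensional sketch of the standard DSW idea: fit a low-order RS to noisy samples of an expensive objective, then keep only the design points whose predicted response meets a requirement. The true function, noise level, and response threshold are all assumptions for illustration, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 41)            # candidate design points
y_true = x**4 - 2.0 * x**2                # "expensive" objective (assumed)
y = y_true + rng.normal(0.0, 0.05, x.size)  # noisy evaluations

coef = np.polyfit(x, y, 2)                # low-order (quadratic) response surface
y_rs = np.polyval(coef, x)                # global RS predictions

mse = np.mean((y_rs - y) ** 2)            # point-to-point error measure
window = x[y_rs <= 0.0]                   # zoom on the low-response region
```

The quadratic RS cannot follow the quartic's shape, so the point-to-point error stays large; an error measure like this is what flags that the windowed region may be misplaced.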
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1983-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I), and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth, it is more desirable to shim the gear axially for geometry I gears and to shim the pinion axially for geometry II gears; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting accuracy and manufacture is most crucial for the gear, and less so for the pinion. Previously announced in STAR as N82-30552.
The Ohio State 1991 geopotential and sea surface topography harmonic coefficient models
NASA Technical Reports Server (NTRS)
Rapp, Richard H.; Wang, Yan Ming; Pavlis, Nikolaos K.
1991-01-01
The computation of a geopotential model to deg 360, a sea surface topography model to deg 10/15, and adjusted Geosat orbits for the first year of the exact repeat mission (ERM) is described. This study started from the GEM-T2 potential coefficient model, its error covariance matrix, and Geosat orbits (for 22 ERMs) computed by Haines et al. using the GEM-T2 model. The first step followed general procedures that use a radial orbit error theory originally developed by Engelis. The Geosat data were processed to find corrections to the a priori geopotential model, corrections to a radial orbit error model for 76 Geosat arcs, and coefficients of a harmonic representation of the sea surface topography. The second stage of the analysis combined the GEM-T2 coefficients with 30 deg gravity data derived from surface gravity data and anomalies obtained from altimeter data. The analysis has shown how a high-degree spherical harmonic model can be determined by combining the best aspects of two different analysis techniques. The error analysis that led to the accuracy estimates for all the coefficients to deg 360 is described. Significant work is still needed to improve the modeling effort.
NASA Astrophysics Data System (ADS)
Ryu, Young-Hee; Hodzic, Alma; Barre, Jerome; Descombes, Gael; Minnis, Patrick
2018-05-01
Clouds play a key role in radiation and hence O3 photochemistry by modulating photolysis rates and light-dependent emissions of biogenic volatile organic compounds (BVOCs). It is not well known, however, how much error in O3 predictions can be directly attributed to error in cloud predictions. This study applies the Weather Research and Forecasting with Chemistry (WRF-Chem) model at 12 km horizontal resolution with the Morrison microphysics and Grell 3-D cumulus parameterization to quantify uncertainties in summertime surface O3 predictions associated with cloudiness over the contiguous United States (CONUS). All model simulations are driven by reanalysis of atmospheric data and reinitialized every 2 days. In sensitivity simulations, cloud fields used for photochemistry are corrected based on satellite cloud retrievals. The results show that WRF-Chem predicts about 55 % of clouds in the right locations and generally underpredicts cloud optical depths. These errors in cloud predictions can lead to up to 60 ppb of overestimation in hourly surface O3 concentrations on some days. The average difference in summertime surface O3 concentrations derived from the modeled clouds and satellite clouds ranges from 1 to 5 ppb for maximum daily 8 h average O3 (MDA8 O3) over the CONUS. This represents up to ˜ 40 % of the total MDA8 O3 bias under cloudy conditions in the tested model version. Surface O3 concentrations are sensitive to cloud errors mainly through the calculation of photolysis rates (for ˜ 80 %), and to a lesser extent to light-dependent BVOC emissions. The sensitivity of surface O3 concentrations to satellite-based cloud corrections is about 2 times larger in VOC-limited than NOx-limited regimes. Our results suggest that the benefits of accurate predictions of cloudiness would be significant in VOC-limited regions, which are typical of urban areas.
A water-vapor radiometer error model [for ionosphere in geodetic microwave techniques]
NASA Technical Reports Server (NTRS)
Beckman, B.
1985-01-01
The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVRs operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.
An audit of request forms submitted in a multidisciplinary diagnostic center in Lagos.
Oyedeji, Olufemi Abiola; Ogbenna, Abiola Ann; Iwuala, Sandra Omozehio
2015-01-01
Request forms are an important means of communication between physicians and diagnostic service providers. Pre-analytical errors account for over two thirds of errors encountered in diagnostic service provision. The importance of adequate completion of request forms is usually underestimated by physicians, which may result in medical errors or delay in instituting appropriate treatment. The aim of this study was to audit the level of completion of request forms presented at a multidisciplinary diagnostic center. A review of all request forms for investigations (radiologic, laboratory, and cardiac) received between July and December 2011 was performed to assess their level of completeness. The data were entered into a spreadsheet and analyzed. Only 1.3% of the 7,841 request forms reviewed were fully completed. The patient's name, the referring physician's name, and gender were the most completed items on the forms evaluated, with 99.0%, 99.0% and 90.3% completion, respectively. The patient's age was provided in 68.0%, the request date in 88.2%, and clinical notes/diagnosis in 65.9% of the requests. The patient's full address was provided in only 5.6% of requests evaluated. This study shows that investigation request forms are inadequately filled in by physicians in our environment. Continuous medical education of physicians on the need for adequate completion of request forms is needed.
A standardization model based on image recognition for performance evaluation of an oral scanner.
Seo, Sang-Wan; Lee, Wan-Sun; Byun, Jae-Young; Lee, Kyu-Bok
2017-12-01
Accurate information is essential in dentistry. The image information of missing teeth is used in optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was developed from cases of image recognition errors identified by linear discriminant analysis (LDA), and a model combining the variables was designed with reference to ISO 12836:2015. The basic model was fabricated by applying four factors to the tooth profile (chamfer, groove, curve, and square) and the bottom surface. Photo-type and video-type scanners were used to analyze 3D images after image capture. The scans were performed several times in the prescribed sequence to distinguish models that could be imaged from those that could not, and to confirm the best-performing design. In the case of the initial basic model, a 3D shape could not be obtained by scanning even when several shots were taken. Subsequently, the recognition rate of the image improved with each variable factor, and the difference depends on the tooth profile and the pattern of the bottom surface. Based on the recognition error of the LDA, the recognition rate decreases when the model has a similar pattern. Therefore, to obtain accurate 3D data, the difference between classes needs to be provided when developing a standardized model.
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the most widely used forming process to produce seamless rings, which are applied in miscellaneous industries such as the energy sector, aerospace technology, and the automotive industry. Due to the simultaneous forming in two opposite rolling gaps and the fact that ring rolling is a mass forming process, different errors can occur during the rolling process. Ring climbing is one of the most frequently occurring process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. Therefore, it is a common strategy to roll a slightly bigger ring, so that randomly occurring process errors can be reduced afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of their ring rolling machine to enable the recognition and measurement of climbing rings and, by this, to reduce the additional material. This paper presents the algorithm that enables the image processing system to detect the error of a climbing ring and ensures comparably reliable results for the measurement of the climbing height of the rings.
NASA Astrophysics Data System (ADS)
Radziszewski, Kacper
2017-10-01
The following paper presents the results of research in the field of machine learning, investigating the scope of application of artificial neural network algorithms as a tool in architectural design. The computational experiment used the backward propagation of errors method to train an artificial neural network on the geometry of the details of the Roman Corinthian order capital. During the experiment, the best results were obtained with an input training set combining five local geometry parameters: Theta, Phi, and Rho in a spherical coordinate system based on the capital volume centroid, followed by the Z value of the Cartesian coordinate system and a distance from vertical planes created based on the capital symmetry. Additionally, an optimal count and structure of the artificial neural network's hidden layers was found, giving errors below 0.2% for the aforementioned input parameters. Once successfully trained, the artificial network was able to mimic the detail composition on any other geometry type given. Despite calculating the transformed geometry locally and separately for each of the thousands of surface points, the system could create visually attractive, diverse, and complex patterns. The designed tool, based on the supervised learning method of machine learning, offers the possibility of generating new architectural forms, free of the bounds of the designer's imagination. Implementing the infinitely broad computational methods of machine learning, or Artificial Intelligence in general, could not only accelerate and simplify the design process, but also give an opportunity to explore unpredictable, never-before-seen forms alongside everyday architectural practice solutions.
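The training setup described above can be sketched with a small feed-forward network trained by backpropagation of errors. The architecture, synthetic training data, and learning rate below are invented for illustration and are not the paper's actual configuration:

```python
import numpy as np

# Tiny tanh MLP trained by backpropagation to map local geometry parameters
# (e.g. an angle, a height, a distance to a symmetry plane - all made up here)
# to a surface displacement. Full-batch gradient descent on mean squared error.

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (256, 3))                   # synthetic geometry inputs
y = 0.5 * np.sin(np.pi * X[:, :1]) + 0.3 * X[:, 1:2]   # made-up target displacement

W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)  # one hidden layer, 16 units
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

mse0 = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))  # pre-training error

lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)               # forward pass, hidden layer
    err = (h @ W2 + b2) - y                # output error
    g_out = 2.0 * err / len(X)             # MSE gradient at the output
    g_W2 = h.T @ g_out; g_b2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)  # backpropagate through tanh
    g_W1 = X.T @ g_h; g_b1 = g_h.sum(0)
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once trained, the same forward pass can be evaluated independently at each surface point, which is what allows a network like this to transfer a learned detail pattern onto arbitrary geometry.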
Analytical approximation of a distorted reflector surface defined by a discrete set of points
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Zaman, Afroz A.
1988-01-01
Reflector antennas on Earth-orbiting spacecraft generally cannot be described analytically. The reflector surface is subjected to large temperature fluctuations and gradients, and is thus warped from its true geometrical shape. Aside from distortion by thermal stresses, reflector surfaces are often purposely shaped to minimize phase aberrations and scanning losses. To analyze distorted reflector antennas defined by discrete surface points, a numerical technique must be applied to compute an interpolatory surface passing through a grid of discrete points. In this paper, the distorted reflector surface points are approximated by two analytical components: an undistorted surface component and a surface error component. The undistorted surface component is a best-fit paraboloid polynomial for the given set of points, and the surface error component is a Fourier series expansion of the deviation of the actual surface points from the best-fit paraboloid. By applying the numerical technique to approximate the surface normals of the distorted reflector surface, the induced surface currents can be obtained using the physical optics technique. These surface currents are integrated to find the far-field radiation pattern.
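The first step of the decomposition, the best-fit paraboloid, is an ordinary linear least-squares problem; the residual is then the surface error component to be expanded in a series. The sample surface below (a paraboloid plus a small sinusoidal distortion) is invented for illustration:

```python
import numpy as np

# Least-squares best-fit paraboloid z = a(x^2 + y^2) + bx + cy + d to a set of
# discrete surface points; the residual is the "surface error component".

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
z = 0.25 * (x**2 + y**2) + 0.001 * np.sin(6.0 * x)   # paraboloid + small distortion

A = np.column_stack([x**2 + y**2, x, y, np.ones_like(x)])  # design matrix
coef, *_ = np.linalg.lstsq(A, z, rcond=None)               # [a, b, c, d]

z_fit = A @ coef
residual = z - z_fit        # error component, to be expanded in a Fourier series
```

The fit recovers the focal term (a ≈ 0.25) while the small distortion is left in the residual, which is exactly the split the abstract describes.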
Acetabular rim and surface segmentation for hip surgery planning and dysplasia evaluation
NASA Astrophysics Data System (ADS)
Tan, Sovira; Yao, Jianhua; Yao, Lawrence; Summers, Ronald M.; Ward, Michael M.
2008-03-01
Knowledge of the acetabular rim and surface can be invaluable for hip surgery planning and dysplasia evaluation. The acetabular rim can also be used as a landmark for registration purposes. At the present time acetabular features are mostly extracted manually at great cost of time and human labor. Using a recent level set algorithm that can evolve on the surface of a 3D object represented by a triangular mesh we automatically extracted rims and surfaces of acetabulae. The level set is guided by curvature features on the mesh. It can segment portions of a surface that are bounded by a line of extremal curvature (ridgeline or crestline). The rim of the acetabulum is such an extremal curvature line. Our material consists of eight hemi-pelvis surfaces. The algorithm is initiated by putting a small circle (level set seed) at the center of the acetabular surface. Because this surface distinctively has the form of a cup we were able to use the Shape Index feature to automatically extract an approximate center. The circle then expands and deforms so as to take the shape of the acetabular rim. The results were visually inspected. Only minor errors were detected. The algorithm also proved to be robust. Seed placement was satisfactory for the eight hemi-pelvis surfaces without changing any parameters. For the level set evolution we were able to use a single set of parameters for seven out of eight surfaces.
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1985-01-01
Comparison of underflight data with satellite estimates of temperature revealed significant gain calibration errors. The source of the LANDSAT 5 band 6 error and its reproducibility is not yet adequately defined. The error can be accounted for using underflight or ground truth data. When underflight data are used to correct the satellite data, the residual error for the scene studied was 1.3K when the predicted temperatures were compared to measured surface temperature.
Moments of inclination error distribution computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
Children's Overtensing Errors: Phonological and Lexical Effects on Syntax
ERIC Educational Resources Information Center
Stemberger, Joseph Paul
2007-01-01
Overtensing (the use of an inflected form in place of a nonfinite form, e.g. *"didn't broke" for target "didn't break") is common in early syntax. In a ChiLDES-based study of 36 children acquiring English, I examine the effects of phonological and lexical factors. For irregulars, errors are more common with verbs of low frequency and when…
Extreme Universe Space Observatory (EUSO) Optics Module
NASA Technical Reports Server (NTRS)
Young, Roy; Christl, Mark
2008-01-01
A demonstration part will be manufactured in Japan on one of the large Toshiba machines with a diameter of 2.5 meters. This will be a flat PMMA disk that is cut between 0.5 and 1.25 meters radius. The cut should demonstrate manufacturing the most difficult parts of the 2.5 meter Fresnel pattern and the blazed grating on the diffractive surface. Optical simulations, validated with the subscale prototype, will be used to determine the limits on manufacturing errors (tolerances) that will result in optics that meet EUSO's requirements. There will be limits on surface roughness (or errors at high spatial frequency), radial and azimuthal slope errors (at lower spatial frequencies), and plunge cut depth errors in the blazed grating. The demonstration part will be measured to determine whether it was made within the allowable tolerances.
NASA Astrophysics Data System (ADS)
Schlegel, N.-J.; Larour, E.; Seroussi, H.; Morlighem, M.; Box, J. E.
2013-06-01
The behavior of the Greenland Ice Sheet, which is considered a major contributor to sea level changes, is best understood on century and longer time scales. However, on decadal time scales, its response is less predictable due to the difficulty of modeling surface climate, as well as incomplete understanding of the dynamic processes responsible for ice flow. Therefore, it is imperative to understand how modeling advancements, such as increased spatial resolution or more comprehensive ice flow equations, might improve projections of ice sheet response to climatic trends. Here we examine how a finely resolved climate forcing influences a high-resolution ice stream model that considers longitudinal stresses. We simulate ice flow using a two-dimensional Shelfy-Stream Approximation implemented within the Ice Sheet System Model (ISSM) and use uncertainty quantification tools embedded within the model to calculate the sensitivity of ice flow within the Northeast Greenland Ice Stream to errors in surface mass balance (SMB) forcing. Our results suggest that the model tends to smooth ice velocities even when forced with extreme errors in SMB. Indeed, errors propagate linearly through the model, resulting in discharge uncertainty of 16% or 1.9 Gt/yr. We find that mass flux is most sensitive to local errors but is also affected by errors hundreds of kilometers away; thus, an accurate SMB map of the entire basin is critical for realistic simulation. Furthermore, sensitivity analyses indicate that SMB forcing needs to be provided at a resolution of at least 40 km.
Pointing error analysis of Risley-prism-based beam steering system.
Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng
2014-09-01
Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and to evaluate the allowances of the error sources for a given pointing accuracy. It is found that the assembly errors of the second prism result in more pronounced pointing errors than those of the first. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. All conclusions can provide a theoretical foundation for practical work.
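The vector form of Snell's law underlying such a ray trace can be written compactly. The incidence geometry and refractive indices below are an arbitrary example, not values from the paper:

```python
import numpy as np

def refract(d, n, eta):
    """Refracted unit direction via the vector form of Snell's law.
    d: unit incident direction; n: unit surface normal facing the incoming ray;
    eta = n1/n2. Returns None on total internal reflection."""
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Example: 30-degree incidence from air (n1 = 1.0) into glass (n2 = 1.5).
d = np.array([np.sin(np.radians(30.0)), 0.0, -np.cos(np.radians(30.0))])
n = np.array([0.0, 0.0, 1.0])
t = refract(d, n, 1.0 / 1.5)
```

Applying this formula at each prism face, with the normals perturbed by the component, orientation, and assembly errors, yields the pointing error of the emerging ray.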
Learning from Errors at Work: A Replication Study in Elder Care Nursing
ERIC Educational Resources Information Center
Leicher, Veronika; Mulder, Regina H.; Bauer, Johannes
2013-01-01
Learning from errors is an important way of learning at work. In this article, we analyse conditions under which elder care nurses use errors as a starting point for the engagement in social learning activities (ESLA) in the form of joint reflection with colleagues on potential causes of errors and ways to prevent them in future. The goal of our…
78 FR 15030 - Introduction of the Revised Employment Eligibility Verification Form
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-08
... several improvements designed to minimize errors in form completion. The key revisions to Form I-9 include... and email addresses. Improving the form's instructions. Revising the layout of the form, expanding the...
Computer Controlled Optical Surfacing With Orbital Tool Motion
NASA Astrophysics Data System (ADS)
Jones, Robert A.
1985-10-01
Asymmetric aspheric optical surfaces are very difficult to fabricate using classical techniques and laps the same size as the workpiece. Opticians can produce such surfaces by grinding and polishing, using small laps with orbital tool motion. However, hand correction is a time consuming process unsuitable for large optical elements. Itek has developed Computer Controlled Optical Surfacing (CCOS) for fabricating such aspheric optics. Automated equipment moves a nonrotating orbiting tool slowly over the workpiece surface. The process corrects low frequency surface errors by figuring. The velocity of the tool assembly over the workpiece surface is purposely varied. Since the amount of material removal is proportional to the polishing or grinding time, accurate control over material removal is achieved. The removal of middle and high frequency surface errors is accomplished by pad smoothing. For a soft pad material, the pad will compress to fit the workpiece surface producing greater pressure and more removal at the surface high areas. A harder pad will ride on only the high regions resulting in removal only for those locations.
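Because removal is proportional to dwell time, the dwell map needed to correct a measured error profile is simply the error divided by the removal rate. The removal rate and error profile below are invented numbers, not CCOS process data:

```python
import numpy as np

# Dwell-time principle behind velocity-controlled figuring: removal is
# proportional to the time the orbiting tool spends over each zone.

removal_rate = 0.8                                     # um removed per minute (assumed)
surface_error = np.array([0.0, 0.4, 1.2, 0.8, 0.2])    # um high relative to target

dwell = surface_error / removal_rate                   # minutes per zone
```

Slowing the tool traverse over the high zones (longer dwell) and speeding it over the low zones realizes this map in practice.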
Royer, Betina; Cardoso, Natali F; Lima, Eder C; Vaghetti, Julio C P; Simon, Nathalia M; Calvete, Tatiana; Veses, Renato Cataluña
2009-05-30
The Brazilian pine-fruit shell (Araucaria angustifolia) is a food residue, which was used in natural and carbonized forms as a low-cost adsorbent for the removal of methylene blue (MB) from aqueous solutions. Chemical treatment of Brazilian pine-fruit shell (PW) with sulfuric acid produced a non-activated carbonaceous material (C-PW). Both PW and C-PW were tested as low-cost adsorbents for the removal of MB from aqueous effluents. It was observed that C-PW led to a remarkable increase in the specific surface area, average pore volume, and average pore diameter of the adsorbent when compared to PW. The effects of shaking time, adsorbent dosage, and pH on adsorption capacity were studied. In the basic pH region (pH 8.5) the adsorption of MB was favorable. The contact time required to reach equilibrium was 6 and 4 h at 25 degrees C, using PW and C-PW as adsorbents, respectively. Based on error function values (F(error)), the kinetic data were better fitted by a fractionary-order kinetic model than by the pseudo-first-order, pseudo-second-order, and chemisorption kinetic models. The equilibrium data were fitted to the Langmuir, Freundlich, Sips, and Redlich-Peterson isotherm models. For the MB dye, the equilibrium data were better fitted by the Sips isotherm model using PW and C-PW as adsorbents.
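Isotherm fitting of the kind described above is often illustrated with the Langmuir model, qe = qmax·KL·Ce / (1 + KL·Ce), whose linearized form Ce/qe = Ce/qmax + 1/(KL·qmax) can be fitted by ordinary least squares. The equilibrium data below are synthetic, generated from assumed parameters (qmax = 200 mg/g, KL = 0.05 L/mg), not the paper's measurements:

```python
import numpy as np

# Linearized Langmuir fit: plot Ce/qe against Ce; slope = 1/qmax,
# intercept = 1/(KL*qmax).

Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # equilibrium conc., mg/L
qe = 200.0 * 0.05 * Ce / (1.0 + 0.05 * Ce)             # adsorbed amount, mg/g

slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax = 1.0 / slope          # maximum adsorption capacity, mg/g
KL = slope / intercept      # Langmuir constant, L/mg
```

The Sips and Redlich-Peterson models named in the abstract are three-parameter generalizations and are usually fitted by nonlinear regression instead.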
Characterization of a Method for Inverse Heat Conduction Using Real and Simulated Thermocouple Data
NASA Technical Reports Server (NTRS)
Pizzo, Michelle E.; Glass, David E.
2017-01-01
It is often impractical to instrument the external surface of high-speed vehicles due to the aerothermodynamic heating. Temperatures can instead be measured internal to the structure using embedded thermocouples, and direct and inverse methods can then be used to estimate temperature and heat flux on the external surface. Two thermocouples embedded at different depths are required to solve direct and inverse problems, and filtering schemes are used to reduce noise in the measured data. Accuracy in the estimated surface temperature and heat flux is dependent on several factors. Factors include the thermocouple location through the thickness of a material, the sensitivity of the surface solution to the error in the specified location of the embedded thermocouples, and the sensitivity to the error in thermocouple data. The effect of these factors on solution accuracy is studied using the methodology discussed in the work of Pizzo et al. [1]. A numerical study is performed to determine whether there is an optimal depth at which to embed one thermocouple through the thickness of a material, assuming that a second thermocouple is installed on the back face. Solution accuracy will be discussed for a range of embedded thermocouple depths. Moreover, the sensitivities of the surface solution to (a) the error in the specified location of the embedded thermocouple and (b) the error in the thermocouple data are quantified using numerical simulation, and the results are discussed.
Impact of Low Level Clouds on radiative and turbulent surface flux in southern West Africa
NASA Astrophysics Data System (ADS)
Lohou, Fabienne; Kalthoff, Norbert; Dione, Cheikh; Lothon, Marie; Adler, Bianca; Babic, Karmen; Pedruzo-Bagazgoitia, Xabier; Vila-Guerau De Arellano, Jordi
2017-04-01
During the monsoon season in West Africa, low-level clouds form almost every night and break up between 0900 and the middle of the afternoon, depending on the day. The break-up of these clouds leads to the formation of boundary-layer cumulus clouds, which can sometimes evolve into deep convection. The low-level clouds have a strong impact on the radiation and energy budget at the surface and consequently on the humidity in the boundary layer and the afternoon convection. During the DACCIWA ground campaign, which took place in June and July 2016, three supersites in Benin, Ghana, and Nigeria were instrumented to document the conditions within the lower troposphere, including the cloud layers. Radiative and turbulent fluxes were measured at different places by several surface stations, jointly with low-level cloud occurrence, over 50 days. These datasets enable the analysis of modifications in the diurnal cycle of the radiative and turbulent surface fluxes induced by the formation and presence of the low-level clouds. The final objective of this study is to estimate the error made in some NWP simulations when the diurnal cycle of low-level clouds is poorly represented or not represented at all.
Detecting higher-order wavefront errors with an astigmatic hybrid wavefront sensor.
Barwick, Shane
2009-06-01
The reconstruction of wavefront errors from measurements over subapertures can be made more accurate if a fully characterized quadratic surface can be fitted to the local wavefront surface. An astigmatic hybrid wavefront sensor with added neural network postprocessing is shown to have this capability, provided that the focal image of each subaperture is sufficiently sampled. Furthermore, complete local curvature information is obtained with a single image without splitting beam power.
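The idea of fitting a fully characterized quadratic surface to the local wavefront over a subaperture can be sketched as an ordinary least-squares fit; the sample grid and coefficients below are arbitrary illustrations:

```python
import numpy as np

# Sample a known quadratic wavefront w = a + b x + c y + d x^2 + e xy + f y^2
# over one subaperture (normalized coordinates, arbitrary units)
x, y = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
x, y = x.ravel(), y.ravel()
coeffs_true = np.array([0.1, 0.02, -0.03, 0.5, 0.2, -0.4])  # a,b,c,d,e,f

A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
w = A @ coeffs_true

# Least-squares fit recovers tilt and the complete local curvature terms
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
curv_xx, curv_xy, curv_yy = 2*coeffs[3], coeffs[4], 2*coeffs[5]
```

In the sensor described above, the per-subaperture measurements feeding such a fit come from the astigmatic focal images (with neural-network postprocessing), rather than from direct wavefront samples as in this sketch.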
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Menges, Brian M.
1998-01-01
Errors in the localization of nearby virtual objects presented via see-through, helmet mounted displays are examined as a function of viewing conditions and scene content in four experiments using a total of 38 subjects. Monocular, biocular or stereoscopic presentation of the virtual objects, accommodation (required focus), subjects' age, and the position of physical surfaces are examined. Nearby physical surfaces are found to introduce localization errors that differ depending upon the other experimental factors. These errors apparently arise from the occlusion of the physical background by the optically superimposed virtual objects. But they are modified by subjects' accommodative competence and specific viewing conditions. The apparent physical size and transparency of the virtual objects and physical surfaces respectively are influenced by their relative position when superimposed. The design implications of the findings are discussed in a concluding section.
Experimental Study on the Axis Line Deflection of Ti6Al4V Titanium Alloy in Gun-Drilling Process
NASA Astrophysics Data System (ADS)
Li, Liang; Xue, Hu; Wu, Peng
2018-01-01
Titanium alloy is widely used in the aerospace industry, but it is also a typical difficult-to-cut material. During deep-hole drilling of the shaft parts of a certain large aircraft, problems of poor surface roughness, chip control, and axis deviation arise, so gun-drilling experiments on Ti6Al4V titanium alloy were carried out to measure the axis line deflection, diameter error, and surface integrity, and the causes of these errors were analyzed. Optimized process parameters were then obtained for gun-drilling of Ti6Al4V titanium alloy with a deep-hole diameter of 17 mm. Finally, a deep hole of 860 mm was drilled with a comprehensive error smaller than 0.2 mm and a surface roughness below 1.6 μm.
Investigation of Optimal Digital Image Correlation Patterns for Deformation Measurement
NASA Technical Reports Server (NTRS)
Bomarito, G. F.; Ruggles, T. J.; Hochhalter, J. D.; Cannon, A. H.
2016-01-01
Digital image correlation (DIC) relies on the surface texture of a specimen to measure deformation. When the specimen itself has little or no texture, a pattern is applied to the surface, which deforms with the specimen and acts as an artificial surface texture. Because the applied pattern has an effect on the accuracy of DIC, an ideal pattern is sought for which the error introduced into DIC measurements is minimal. In this work, a study is performed in which several DIC pattern quality metrics from the literature are correlated to DIC measurement error. The resulting correlations give insight on the optimality of DIC patterns in general. Optimizations are then performed to produce patterns which are well suited for DIC. These patterns are tested to show their relative benefits. Chief among these benefits is a reduction in error of approximately 30% with respect to a randomly generated pattern.
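One widely used pattern quality metric of the kind correlated in such studies is the mean intensity gradient (MIG); the sketch below is a generic illustration, not the paper's specific metric set, and the test images are synthetic:

```python
import numpy as np

def mean_intensity_gradient(img):
    """Mean intensity gradient (MIG), a common DIC speckle-pattern quality
    metric: the average magnitude of the local grey-level gradient."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.hypot(gx, gy))

rng = np.random.default_rng(1)
flat = np.full((64, 64), 128.0)             # textureless surface: MIG = 0
speckle = rng.integers(0, 256, (64, 64))    # random speckle pattern
```

A higher MIG generally indicates a pattern that supports better-conditioned subset matching, which is why such metrics correlate with DIC measurement error.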
A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.
Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng
2016-05-01
In this paper, an extreme learning control (ELC) framework using the single-hidden-layer feedforward network (SLFN) with random hidden nodes for tracking an unmanned surface vehicle suffering from unknown dynamics and external disturbances is proposed. By combining tracking errors with derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator which does not require a priori system knowledge nor tuning input weights. Only output weights of the SLFN need to be updated by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, and thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazin, Alexandre; Monnier, Paul; Beaudoin, Grégoire
Ultrafast switching with low energies is demonstrated using InP photonic crystal nanocavities embedding InGaAs surface quantum wells heterogeneously integrated with silicon-on-insulator waveguide circuitry. Thanks to the engineered enhancement of nonradiative surface recombination of carriers, switching times as fast as 10 ps are obtained. These hybrid nanostructures are shown to be capable of system-level performance by demonstrating error-free wavelength conversion at 10 Gbit/s with 6 mW switching powers.
Tooth-meshing-harmonic static-transmission-error amplitudes of helical gears
NASA Astrophysics Data System (ADS)
Mark, William D.
2018-01-01
The static transmission errors of meshing gear pairs arise from deviations of loaded tooth working surfaces from equispaced perfect involute surfaces. Such deviations consist of tooth-pair elastic deformations and geometric deviations (modifications) of tooth working surfaces. To a very good approximation, the static-transmission-error tooth-meshing-harmonic amplitudes of helical gears are herein expressed by superposition of Fourier transforms of the quantities: (1) the combination of tooth-pair elastic deformations and geometric tooth-pair modifications and (2) fractional mesh-stiffness fluctuations, each quantity (1) and (2) expressed as a function of involute "roll distance." Normalization of the total roll-distance single-tooth contact span to unity allows tooth-meshing-harmonic amplitudes to be computed for different shapes of the above-described quantities (1) and (2). Tooth-meshing harmonics p = 1, 2, … are shown to occur at Fourier-transform harmonic values of Qp, p = 1, 2, …, where Q is the actual (total) contact ratio, thereby verifying its importance in minimizing transmission-error tooth-meshing-harmonic amplitudes. Two individual shapes and two series of shapes of the quantities (1) and (2) are chosen to illustrate a wide variety of shapes. In most cases representative of helical gears, tooth-meshing-harmonic values p = 1, 2, … are shown to occur in Fourier-transform harmonic regions governed by discontinuities arising from tooth-pair-contact initiation and termination, thereby showing the importance of minimizing such discontinuities. Plots and analytical expressions for all such Fourier transforms are presented, thereby illustrating the effects of various types of tooth-working-surface modifications and tooth-pair stiffnesses on transmission-error generation.
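Extracting tooth-meshing-harmonic amplitudes from a periodic static-transmission-error signal can be sketched with a DFT over one mesh cycle. The waveform below is a hypothetical smooth shape with known harmonic content, not one of the paper's tooth-pair deflection or mesh-stiffness series:

```python
import numpy as np

# Illustrative periodic static-transmission-error signal over one mesh
# cycle: a mean value plus known first and second meshing harmonics.
N = 1024
phase = np.linspace(0, 1, N, endpoint=False)   # roll distance / base pitch
ste = 1.0 + 0.10*np.cos(2*np.pi*phase) + 0.03*np.cos(4*np.pi*phase)

# Tooth-meshing-harmonic amplitudes p = 1, 2, ... from the one-sided DFT
spec = np.fft.rfft(ste) / N
amps = 2 * np.abs(spec[1:6])   # amplitudes of harmonics p = 1..5
```

In the paper's formulation the continuous Fourier transform of the tooth-pair shape is evaluated at harmonic values Qp, with Q the total contact ratio; the DFT above is the discrete analogue for an already-periodic signal.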
Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.
Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F
2013-09-01
The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in stereolithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to be an accurate, realistic, and widespread tool, and it is of great benefit to virtual face modeling.
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. 
This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimate while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to input variables, land surface temperature (LST) and reference ET (ETo); and parameters, differential temperature (dT), and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
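Agreement statistics of the kind reported above (R² and RMSE between modeled ET and eddy covariance measurements) can be computed as follows; the arrays in the test are stand-ins for monthly ET series, not AmeriFlux data:

```python
import numpy as np

def r2_rmse(obs, est):
    """Coefficient of determination and root-mean-square error between
    observations (e.g. flux-tower ET) and model estimates."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((obs - est) ** 2))
    return r2, rmse
```

Note that this R² is computed against the 1:1 line rather than a regression fit; published studies differ on this choice, so a comparison should state which convention is used.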
Effects of urban form on the urban heat island effect based on spatial regression model.
Yin, Chaohui; Yuan, Man; Lu, Youpeng; Huang, Yaping; Liu, Yanfang
2018-09-01
The urban heat island (UHI) effect is becoming more of a concern with the accelerated process of urbanization. However, few studies have examined the effect of urban form on land surface temperature (LST), especially from an urban planning perspective. This paper used a spatial regression model to investigate the effects of both land use composition and urban form on LST in Wuhan City, China, based on the regulatory planning management unit. Landsat ETM+ image data were used to estimate LST. Land use composition was calculated from impervious surface area proportion, vegetated area proportion, and water proportion, while urban form indicators included sky view factor (SVF), building density, and floor area ratio (FAR). We first tested for spatial autocorrelation of urban LST, which confirmed that a traditional regression method would be invalid. A spatial error model (SEM) was chosen because its parameters were better than those of a spatial lag model (SLM). The results showed that urban form metrics should be the focus for mitigation efforts of UHI effects. In addition, analysis of the relationship between urban form and the UHI effect based on the regulatory planning management unit was helpful for promoting corresponding UHI effect mitigation rules in practice. Finally, the spatial regression model was recommended as an appropriate method for dealing with problems related to the urban thermal environment. Results suggested that the impact of urbanization on the UHI effect can be mitigated not only by balancing various land use types, but also by optimizing urban form, which is even more effective. This research expands the scientific understanding of the effects of urban form on UHI by explicitly analyzing indicators closely related to urban detailed planning at the level of the regulatory planning management unit. In addition, it may provide important insights and effective regulation measures for urban planners to mitigate future UHI effects.
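The spatial autocorrelation test that typically precedes the choice of a spatial error model is Moran's I. The minimal sketch below uses a toy one-dimensional neighbourhood structure and synthetic values, not the Wuhan LST dataset:

```python
import numpy as np

def morans_i(values, W):
    """Moran's I statistic for spatial autocorrelation, given an
    observation vector and a symmetric spatial weights matrix W."""
    z = values - values.mean()
    n = values.size
    return n * (z @ W @ z) / (W.sum() * (z @ z))

# Toy 'map': 20 units on a line, each neighbouring the adjacent units
n = 20
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

clustered = np.repeat([1.0, 5.0], n // 2)      # spatially clustered values
alternating = np.tile([1.0, 5.0], n // 2)      # checkerboard-like values
```

Clustered values give a strongly positive I and alternating values a strongly negative I; a significant I on LST residuals is what invalidates ordinary least squares and motivates SEM/SLM alternatives.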
NASA Astrophysics Data System (ADS)
Skourup, Henriette; Farrell, Sinéad Louise; Hendricks, Stefan; Ricker, Robert; Armitage, Thomas W. K.; Ridout, Andy; Andersen, Ole Baltazar; Haas, Christian; Baker, Steven
2017-11-01
State-of-the-art Arctic Ocean mean sea surface (MSS) models and global geoid models (GGMs) are used to support sea ice freeboard estimation from satellite altimeters, as well as in oceanographic studies such as mapping sea level anomalies and mean dynamic ocean topography. However, errors in a given model in the high-frequency domain, primarily due to unresolved gravity features, can result in errors in the estimated along-track freeboard. These errors are exacerbated in areas with a sparse lead distribution in consolidated ice pack conditions. Additionally, model errors can impact ocean geostrophic currents, derived from satellite altimeter data, while remaining biases in these models may impact longer-term, multisensor oceanographic time series of sea level change in the Arctic. This study focuses on an assessment of five state-of-the-art Arctic MSS models (UCL13/04 and DTU15/13/10) and a commonly used GGM (EGM2008). We describe errors due to unresolved gravity features, intersatellite biases, and remaining satellite orbit errors, and their impact on the derivation of sea ice freeboard. The latest MSS models, incorporating CryoSat-2 sea surface height measurements, show improved definition of gravity features, such as the Gakkel Ridge. The standard deviation between models ranges from 0.03 to 0.25 m. The impact of remaining MSS/GGM errors on freeboard retrieval can reach several decimeters in parts of the Arctic. While the maximum observed freeboard difference found in the central Arctic was 0.59 m (UCL13 MSS minus EGM2008 GGM), the standard deviation in freeboard differences is 0.03-0.06 m.
Image-based overlay measurement using subsurface ultrasonic resonance force microscopy
NASA Astrophysics Data System (ADS)
Tamer, M. S.; van der Lans, M. J.; Sadeghian, H.
2018-03-01
Image Based Overlay (IBO) measurement is one of the most common techniques used in Integrated Circuit (IC) manufacturing to extract the overlay error values. The overlay error is measured using dedicated overlay targets which are optimized to increase the accuracy and the resolution, but these features are much larger than the IC feature size. IBO measurements are realized on the dedicated targets instead of product features, because the current overlay metrology solutions, mainly based on optics, cannot provide sufficient resolution on product features. However, considering the fact that the overlay error tolerance is approaching 2 nm, the overlay error measurement on product features becomes a need for the industry. For sub-nanometer resolution metrology, Scanning Probe Microscopy (SPM) is widely used, though at the cost of very low throughput. The semiconductor industry is interested in non-destructive imaging of buried structures under one or more layers for the application of overlay and wafer alignment, specifically through optically opaque media. Recently an SPM technique has been developed for imaging subsurface features which can be potentially considered as a solution for overlay metrology. In this paper we present the use of Subsurface Ultrasonic Resonance Force Microscopy (SSURFM) used for IBO measurement. We used SSURFM for imaging the most commonly used overlay targets on a silicon substrate and photoresist. As a proof of concept we have imaged surface and subsurface structures simultaneously. The surface and subsurface features of the overlay targets are fabricated with programmed overlay errors of +/-40 nm, +/-20 nm, and 0 nm. The top layer thickness changes between 30 nm and 80 nm. Using SSURFM the surface and subsurface features were successfully imaged and the overlay errors were extracted, via a rudimentary image processing algorithm. The measurement results are in agreement with the nominal values of the programmed overlay errors.
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors.
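The decision-error-rate estimation described above can be sketched by Monte Carlo simulation of a log-linear stressor-response model. All coefficients, thresholds, and distributions below are illustrative assumptions, not USEPA's fitted values for Florida lakes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical log-linear stressor-response model (illustrative only):
#   log10(chl a) = b0 + b1 * log10(TP) + e,  e ~ N(0, sigma)
b0, b1, sigma = -0.5, 1.0, 0.25
tp = 10 ** rng.normal(1.3, 0.4, 100_000)              # geomean TP, ug/L
chl = 10 ** (b0 + b1 * np.log10(tp) + rng.normal(0, sigma, tp.size))

chl_threshold = 20.0    # designated-use chlorophyll a threshold, ug/L
tp_criterion = 40.0     # hypothetical numeric TP criterion, ug/L

impaired = chl > chl_threshold      # true designated-use status
exceeds = tp > tp_criterion         # criterion-based decision

type_i = np.mean(exceeds & ~impaired)    # flagged, but use is attained
type_ii = np.mean(~exceeds & impaired)   # not flagged, but use is impaired
```

Tightening or relaxing the TP criterion trades one error rate against the other, which is the balancing act the "baseline"/"modified" approach is meant to manage.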
Acoustic evidence for phonologically mismatched speech errors.
Gormley, Andrea
2015-04-01
Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated, or mismatch, errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.
NOAA AVHRR Land Surface Albedo Algorithm Development
NASA Technical Reports Server (NTRS)
Toll, D. L.; Shirey, D.; Kimes, D. S.
1997-01-01
The primary objective of this research is to develop a surface albedo model for the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR). The primary test site is the Konza Prairie, Kansas (U.S.A.), used by the International Satellite Land Surface Climatology Project (ISLSCP) in the First ISLSCP Field Experiment (FIFE). In this research, high spectral resolution field spectrometer data were analyzed to simulate AVHRR wavebands and to derive surface albedos. Development of a surface albedo algorithm was completed by analyzing a combination of satellite, field spectrometer, and ancillary data. Estimated albedos from the field spectrometer data were compared to reference albedos derived using pyranometer data. Variations from surface anisotropy of reflected solar radiation were found to be the most significant albedo-related error. Additional error or sensitivity came from estimation of a shortwave mid-IR reflectance (1.3-4.0 μm) using the AVHRR red and near-IR bands. Errors caused by the use of AVHRR spectral reflectance to estimate both a total visible (0.4-0.7 μm) and near-IR (0.7-1.3 μm) reflectance were small. The solar spectral integration, using the derived ultraviolet, visible, near-IR, and SW mid-IR reflectivities, was not sensitive to many clear-sky changes in atmospheric properties and illumination conditions.
NASA Astrophysics Data System (ADS)
Li, Dong; Feng, Chi; Gao, Shan; Chen, Liwei; Daniel, Ketui
2018-06-01
Accurate measurement of gas turbine blade temperature is of great significance as far as blade health monitoring is concerned. An important method for measuring this temperature is the use of a radiation pyrometer. In this research, the pyrometer error caused by reflected radiation from the surfaces surrounding the target and by the emission angle of the target was analyzed. Important parameters for this analysis were the view factor between interacting surfaces, the spectral directional emissivity, the pyrometer operating wavelength, and the surface temperature distribution on the blades and the vanes. The interacting surfaces of the rotor blade and the vane models used were discretized using triangular surface elements, from which contour integration was used to calculate the view factor between the surface elements. Spectral directional emissivities were obtained from an experimental setup of Ni-based alloy samples. A pyrometer operating wavelength of 1.6 μm was chosen. Computational fluid dynamics software was used to simulate the temperature distribution of the rotor blade and the guide vane based on actual gas turbine input parameters. Results obtained in this analysis show that the temperature error introduced by reflected radiation and emission angle ranges from -23 K to 49 K.
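The reflected-radiation contribution to pyrometer error can be sketched by mixing emitted and reflected Planck radiance and inverting for the indicated temperature. The grey-diffuse surroundings, the single effective surroundings temperature, and all numerical values below are simplifying assumptions, not the paper's view-factor-resolved model:

```python
import numpy as np

C2 = 1.4388e-2   # second radiation constant, m*K

def planck(T, lam):
    """Spectral radiance up to a constant factor (Planck's law)."""
    return 1.0 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

def apparent_temperature(T_blade, T_surround, eps, lam=1.6e-6):
    """Radiance reaching the pyrometer = emitted + reflected component from
    the surroundings (grey, diffuse assumption), then invert Planck's law
    for the temperature a blackbody-calibrated pyrometer would indicate."""
    L = eps * planck(T_blade, lam) + (1.0 - eps) * planck(T_surround, lam)
    return C2 / (lam * np.log(1.0 / (lam**5 * L) + 1.0))
```

With hot vanes surrounding a cooler blade the indicated temperature reads high, and with cooler surroundings it reads low, which is consistent with an error band that spans both signs, as reported above.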
Topology of modified helical gears and Tooth Contact Analysis (TCA) program
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Zhang, Jiao
1989-01-01
The contents of this report cover: (1) development of optimal geometries for crowned helical gears; (2) a method for their generation; (3) tooth contact analysis (TCA) computer programs for the analysis of meshing and bearing contact of the crowned helical gears; and (4) modelling and simulation of gear shaft deflection. The developed method for synthesis was used to determine the optimal geometry for a crowned helical pinion surface and was directed to localize the bearing contact and guarantee a favorable shape and a low level of transmission errors. Two new methods for generation of the crowned helical pinion surface are proposed. One is based on the application of a tool with a surface of revolution that slightly deviates from a regular cone surface. The tool can be used as a grinding wheel or as a shaver. The other is based on crowning the pinion tooth surface with predesigned transmission errors. The pinion tooth surface can be generated by a computer-controlled automatic grinding machine. The TCA program simulates the meshing and bearing contact of the misaligned gears. The transmission errors are also determined. The gear shaft deformation was modelled and investigated. It was found that the deflection of gear shafts has the same effect as gear misalignment.
SMAP Level 4 Surface and Root Zone Soil Moisture
NASA Technical Reports Server (NTRS)
Reichle, R.; De Lannoy, G.; Liu, Q.; Ardizzone, J.; Kimball, J.; Koster, R.
2017-01-01
The SMAP Level 4 soil moisture (L4_SM) product provides global estimates of surface and root zone soil moisture, along with other land surface variables and their error estimates. These estimates are obtained through assimilation of SMAP brightness temperature observations into the Goddard Earth Observing System (GEOS-5) land surface model. The L4_SM product is provided at 9 km spatial and 3-hourly temporal resolution and with about 2.5 day latency. The soil moisture and temperature estimates in the L4_SM product are validated against in situ observations. The L4_SM product meets the required target uncertainty of 0.04 m^3/m^3, measured in terms of unbiased root-mean-square error, for both surface and root zone soil moisture.
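The unbiased root-mean-square error metric behind the 0.04 m^3/m^3 requirement removes the mean bias before computing the RMSE; a minimal sketch (the arrays in the test are placeholder soil moisture series, not SMAP validation data):

```python
import numpy as np

def ubrmse(model, insitu):
    """Unbiased RMSE: root-mean-square error after removing the mean bias
    between a model time series and in situ observations."""
    model, insitu = np.asarray(model, float), np.asarray(insitu, float)
    diff = model - insitu
    return np.sqrt(np.mean((diff - diff.mean()) ** 2))
```

A constant calibration offset therefore does not count against the requirement; only the random (and temporally varying) part of the error does.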
Linear error analysis of slope-area discharge determinations
Kirby, W.H.
1987-01-01
The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill.
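The Taylor-series error propagation described above can be sketched numerically for a Manning-type discharge formula. For simplicity this sketch treats the input errors as independent (the paper's analysis retains the covariances), and all channel values are illustrative:

```python
import numpy as np

def manning_q(n, A, R, S):
    """Manning discharge for a single cross section (SI units):
    Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(S)

def discharge_variance(params, sigmas, h=1e-6):
    """First-order (Taylor-series) error propagation, assuming independent
    input errors: var(Q) = sum_i (dQ/dx_i)^2 * var(x_i). Sensitivities are
    computed by relative finite differences."""
    params = np.asarray(params, float)
    q0 = manning_q(*params)
    var = 0.0
    for i, s in enumerate(sigmas):
        p = params.copy()
        p[i] += h * params[i]
        dq_dx = (manning_q(*p) - q0) / (h * params[i])
        var += (dq_dx * s) ** 2
    return q0, var
```

For example, a 10% standard error in Manning's n alone propagates to roughly a 10% relative error in Q, while a 10% error in slope S propagates to about 5% (the square-root exponent halves it), mirroring the weighted-sum structure described above.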
DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.
2010-01-01
During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844
David W. MacFarlane; Neil R. Ver Planck
2012-01-01
Data from hardwood trees in Michigan were analyzed to investigate how differences in whole-tree form and wood density between trees of different stem diameter relate to residual error in standard-type biomass equations. The results suggested that whole-tree wood density, measured at breast height, explained a significant proportion of residual error in standard-type...
Infinitives or Bare Stems? Are English-Speaking Children Defaulting to the Highest-Frequency Form?
ERIC Educational Resources Information Center
Räsänen, Sanna H. M.; Ambridge, Ben; Pine, Julian M.
2014-01-01
Young English-speaking children often produce utterances with missing 3sg -s (e.g., *He play). Since the mid 1990s, such errors have tended to be treated as Optional Infinitive (OI) errors, in which the verb is a non-finite form (e.g., Wexler, 1998; Legate & Yang, 2007). The present article reports the results of a cross-sectional…
Spectral feature measurements and analyses of the East Lake
NASA Astrophysics Data System (ADS)
Fang, Shenghui; Zhou, Yuan; Zhu, Wu
2005-10-01
Investigating methods to obtain and analyze the spectral features of water bodies is one of the foundations of water color remote sensing. This paper concerns the above-water method for spectral measurements of inland waters. A series of experiments was carried out in areas of the East Lake with the EPP2000 CCD radiometer, and the observation geometry and the method for eliminating noise from the water signals are discussed. The above-water spectral measurement method was studied from the point of view of error sources. On the basis of experiments on water depth and on the observing direction relative to the sun and the surface, it is suggested to remove the radiances of whitecaps, surface-reflected sun glint, and skylight, which do not carry the spectral features of the water, from the lake-surface signal through a specialized observation attitude and data processing. Finally, a suite of methods is proposed for measuring and analyzing the spectral features of the East Lake water body.
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
NASA Astrophysics Data System (ADS)
Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu
2008-10-01
Considering the factors that influence measurement accuracy, namely circular grating dividing error and rolling-wheel eccentricity and surface-shape errors, the paper provides a rolling-wheel-based correction method that builds a composite error model encompassing all of the above factors and then corrects the non-circular angle-measurement error of the rolling wheel. Software simulation and experiments indicate that the composite error correction method can improve the diameter measurement accuracy of the rolling-wheel approach. It has wide application prospects for measurement accuracies better than 5 μm/m.
Tearing-off method based on single carbon nanocoil for liquid surface tension measurement
NASA Astrophysics Data System (ADS)
Wang, Peng; Pan, Lujun; Deng, Chenghao; Li, Chengwei
2016-11-01
A single carbon nanocoil (CNC) is used as a highly sensitive mechanical sensor to measure the surface tension coefficient of deionized water and alcohol in the tearing-off method. The error can be constrained to within 3.8%. Conversely, the elastic spring constant of a CNC can be accurately measured using a liquid, with the error constrained to within 3.2%. Compared with traditional methods, the CNC serves as both ring and sensor, which may simplify the measurement device and reduce error; in addition, all measurements can be performed with a very low liquid dosage owing to the small size of the CNC.
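Since the nanocoil acts as both the ring and the force sensor, the surface tension follows from the coil's spring constant and its deflection at film rupture. The sketch below uses the idealized du Noüy ring relation as an assumed model; the paper's actual calibration, the function names, and the numerical values are not from the source, and real ring measurements additionally require geometry-dependent correction factors.

```python
import math

def surface_tension(k, dx, r):
    """Estimate surface tension gamma (N/m).

    k  -- spring constant of the coil sensor (N/m)
    dx -- deflection at the moment the liquid film tears off (m)
    r  -- radius of the liquid-contacting loop (m)

    Assumes the idealized du Nouy ring relation
    F_max = 4*pi*r*gamma (two liquid surfaces, each
    contributing a line force 2*pi*r*gamma), so that
    gamma = k*dx / (4*pi*r).
    """
    return k * dx / (4.0 * math.pi * r)
```

Run in reverse, the same relation recovers the spring constant from a liquid of known surface tension, which mirrors the calibration direction mentioned in the abstract.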
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems were proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended as the baseline error propagation analysis to which Earth-based and Lunar-based radiometric data are added, in order to compare the different architecture schemes and to quantify the benefits of an integrated approach in handling lunar surface mobility applications near the Lunar South Pole or on the Lunar Farside.
Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.
1985-01-01
Geometry and kinematic errors were studied for Gleason generated spiral bevel gears. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. The kinematic errors are a major source of noise and vibration in spiral bevel gears. The improved bearing contact gives improved conditions for lubrication. A computer program for tooth contact analysis was developed, and thereby the new generation process was confirmed. The new process is governed by the requirement that during the generation process there is directional constancy of the common normal of the contacting surfaces for generator and generated surfaces of pinion and gear.
Emissivity correction for interpreting thermal radiation from a terrestrial surface
NASA Technical Reports Server (NTRS)
Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.
1979-01-01
A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and graphical approach is used rather than numerical integrations which require detailed knowledge of an instrument's spectral characteristics. Also, errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral are examined. In the 8-14 micron wavelength interval, it is shown that errors are at most on the order of 3 C for the extremes of the earth's temperature and emissivity. For more practical limits, however, errors are less than 0.5 C.
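The kind of error this abstract examines can be reproduced numerically: band-limited 8-14 micron radiance does not scale as T to the fourth power, so inverting it with a Stefan-Boltzmann-style relation misreads the temperature. The sketch below is a cruder model than the paper's (it assumes a flat instrument response across the band and a single-point calibration at 300 K, both illustrative assumptions), so its error magnitudes should not be read as the paper's figures.

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(lam, T):
    """Planck spectral radiance at wavelength lam (m), temperature T (K)."""
    return 2.0 * H * C**2 / lam**5 / (math.exp(H * C / (lam * K * T)) - 1.0)

def band_radiance(T, lo=8e-6, hi=14e-6, n=200):
    """Midpoint-rule Planck integral over the 8-14 micron band,
    assuming a flat (unweighted) instrument response."""
    dl = (hi - lo) / n
    return sum(planck(lo + (i + 0.5) * dl, T) for i in range(n)) * dl

T_REF = 300.0
L_REF = band_radiance(T_REF)

def invert_t4(L):
    """Stefan-Boltzmann-style inversion: assume L scales as T**4,
    calibrated at the single reference point T_REF."""
    return T_REF * (L / L_REF) ** 0.25
```

In the 8-14 micron band near 300 K, the effective exponent of the Planck band integral is closer to 5 than to 4, so the T**4 inversion is exact only at the calibration point: it overestimates temperatures above 300 K and underestimates those below, growing with distance from the calibration, which is the behavior the paper bounds over the earth's temperature range.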